DenseFusion
News
We have released the code and arXiv preprint for our new project 6-PACK, which builds on this work and performs category-level 6D pose tracking.
Table of Contents
- Overview
- Requirements
- Code Structure
- Datasets
- Training
- Evaluation
- Results
- Trained Checkpoints
- Tips for your own dataset
- Citations
- License
Overview
This repository contains the implementation code of the paper "DenseFusion: 6D Object Pose Estimation by Iterative Dense Fusion" (arXiv, Project, Video) by Wang et al. at the Stanford Vision and Learning Lab and the Stanford People, AI & Robots Group. The model takes an RGB-D image as input and predicts the 6D pose of each object in the frame. The network is implemented in PyTorch and the rest of the framework is in Python. Since this project focuses on the 6D pose estimation process, we do not restrict the choice of segmentation model: you can use your preferred semantic-segmentation or instance-segmentation method according to your needs. This repo provides the full implementation of the DenseFusion model, the Iterative Refinement model, and the vanilla SegNet semantic-segmentation model used in our real-robot grasping experiment. The ROS code for the real-robot grasping experiment is not included.
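For orientation, the sketch below shows roughly how the released evaluation scripts invoke the pose estimator. The PoseNet interface is taken from lib/network.py; the tensor shapes and the dummy inputs are illustrative assumptions, not values copied from the repo.

```python
# Minimal sketch of a DenseFusion inference call, assuming the PoseNet
# interface in lib/network.py; shapes and dummy tensors are illustrative.
import torch
from lib.network import PoseNet

estimator = PoseNet(num_points=1000, num_obj=21)  # e.g. 21 YCB-Video classes
estimator.cuda().eval()

img = torch.randn(1, 3, 120, 120).cuda()    # cropped RGB patch around the object mask
points = torch.randn(1, 1000, 3).cuda()     # 3D points sampled from the masked depth map
choose = torch.randint(0, 120 * 120, (1, 1, 1000), dtype=torch.long).cuda()  # sampled pixel indices
idx = torch.zeros(1, 1, dtype=torch.long).cuda()  # object class index

# Per-point pose hypotheses (rotation, translation) with confidences
pred_r, pred_t, pred_c, emb = estimator(img, points, choose, idx)
best = torch.argmax(pred_c.view(1, -1), dim=1)  # keep the most confident prediction
```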
Requirements
- Python 2.7/3.5/3.6 (If you want to use Python 2.7 to run this repo, please rebuild lib/knn/ with PyTorch 0.4.1.)
- PyTorch 0.4.1 (a PyTorch 1.0 branch is also available)
- PIL
- scipy
- numpy
- pyyaml
- logging
- matplotlib
- CUDA 7.5/8.0/9.0 (Required. CPU-only training is extremely slow because of the loss calculation for symmetric objects (a pixel-wise nearest-neighbour loss); see the sketch after this list.)
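To make that cost concrete: the symmetric-object loss matches every predicted model point to its nearest ground-truth point, which is quadratic in the number of points. Below is a minimal PyTorch sketch of this idea; it is our illustration, not the repo's lib/loss.py implementation, which uses a custom CUDA KNN op in lib/knn/.

```python
# Sketch of a pixel-wise nearest-neighbour (ADD-S style) loss for symmetric
# objects; the O(N^2) distance matrix is why GPU acceleration matters.
import torch

def symmetric_nn_loss(pred_points, gt_points):
    """pred_points, gt_points: (N, 3) model points under the predicted / GT pose."""
    # All pairwise distances, shape (N, N): this step dominates the cost
    diff = pred_points.unsqueeze(1) - gt_points.unsqueeze(0)
    dists = torch.sqrt((diff ** 2).sum(dim=2))
    # Match each predicted point to its nearest ground-truth point, so any
    # pose that maps a symmetric model onto itself scores the same
    return dists.min(dim=1)[0].mean()
```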