Commit 75bea92c authored by Guillaume Duret

first readme for FruitBin

We have released the code and arXiv preprint for our new project [6-PACK](https://sites.google.com/view/6packtracking) which is based on this work and used for category-level 6D pose tracking.
## Table of Contents
- [Getting started for FruitBin](#getting-started-for-fruitbin)
- [Overview](#overview)
- [Requirements](#requirements)
- [Code Structure](#code-structure)
- [Citations](#citations)
- [License](#license)
## Getting started for FruitBin
To clone the repository and switch to the `jz` branch, run:
```
git clone https://gitlab.liris.cnrs.fr/gduret/DenseFusion
cd DenseFusion
git checkout jz
```
If the dataset is downloaded from https://datasets.liris.cnrs.fr/fruitbin-version1, the splitting of the data is already done, but the folders have to be rearranged following this mapping (see the sketch after the mapping below):
```
RGB_Resised -> rgb
Depth_resized -> depth
Instance_Mask_resized -> mask
apple2 -> 01
apricot -> 02
banana1 -> 03
kiwi1 -> 04
lemon2 -> 05
orange2 -> 06
peach1 -> 07
pear2 -> 08
```
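A minimal shell sketch of this rearrangement is given below. It assumes a hypothetical per-fruit layout in the downloaded data (`<fruit>/RGB_Resised`, `<fruit>/Depth_resized`, `<fruit>/Instance_Mask_resized`), which may differ from the actual archive; `DATASET_ROOT` and `TARGET` are placeholders to adapt to the real paths.
```
# Hypothetical sketch of the folder rearrangement -- adapt DATASET_ROOT and TARGET
# to the real locations; the assumed per-fruit layout may differ from the archive.
DATASET_ROOT=/path/to/fruitbin-version1
TARGET=datasets/linemod/Linemod_preprocessed/data

declare -A FRUITS=(
  [apple2]=01 [apricot]=02 [banana1]=03 [kiwi1]=04
  [lemon2]=05 [orange2]=06 [peach1]=07 [pear2]=08
)

for fruit in "${!FRUITS[@]}"; do
  id=${FRUITS[$fruit]}
  mkdir -p "$TARGET/$id"
  # Rename the resized image folders to the names expected by DenseFusion
  cp -r "$DATASET_ROOT/$fruit/RGB_Resised"           "$TARGET/$id/rgb"
  cp -r "$DATASET_ROOT/$fruit/Depth_resized"         "$TARGET/$id/depth"
  cp -r "$DATASET_ROOT/$fruit/Instance_Mask_resized" "$TARGET/$id/mask"
done
```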
Before training, the data has to be arranged in a specific way, following the steps described at https://gitlab.liris.cnrs.fr/gduret/fruitbin, with the command:
```
python3 compute_label.py --path_dataset=/gpfsscratch/rech/uli/ubn15wo/FruitBin1/FruitBin_low_1_0.7_1.0/ --target_folder=Generated_Cameras --path_DF_data=/gpfsscratch/rech/uli/ubn15wo/DenseFusion01_Cameras/datasets/linemod/Linemod_preprocessed/data --occ_data=""
```
The expected folder structure is:
```
├── datasets
│ ├── linemod
│ │ ├── dataset_config
│ │ ├── Linemod_preprocessed
│ │ │ ├── data
│ │ │ │ ├── 01
│ │ │ │ │ ├── depth
│ │ │ │ │ ├── mask
│ │ │ │ │ └── rgb
│ │ │ │ ├── 02
│ │ │ │ │ ├── depth
│ │ │ │ │ ├── mask
│ │ │ │ │ └── rgb
│ │ │ │ ├── 03
│ │ │ │ │ ├── depth
│ │ │ │ │ ├── mask
│ │ │ │ │ └── rgb
│ │ │ │ ├── 04
│ │ │ │ │ ├── depth
│ │ │ │ │ ├── mask
│ │ │ │ │ └── rgb
│ │ │ │ ├── 05
│ │ │ │ │ ├── depth
│ │ │ │ │ ├── mask
│ │ │ │ │ └── rgb
│ │ │ │ ├── 06
│ │ │ │ │ ├── depth
│ │ │ │ │ ├── mask
│ │ │ │ │ └── rgb
│ │ │ │ ├── 07
│ │ │ │ │ ├── depth
│ │ │ │ │ ├── mask
│ │ │ │ │ └── rgb
│ │ │ │ └── 08
│ │ │ │ ├── depth
│ │ │ │ ├── mask
│ │ │ │ └── rgb
│ │ │ └── models
```
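To quickly verify that the layout matches before launching the training, a simple listing from the repository root can be used, for example:
```
# List the per-object folders and their rgb/depth/mask subfolders
find datasets/linemod/Linemod_preprocessed/data -maxdepth 2 -type d | sort
```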
If the preprocessing has been successful, start the training with:
```
bash ./experiments/scripts/train_linemod.sh
```
To evaluate the model, segmentation results first have to be saved in the `segnet_results` folder.
This can be done with the script `densefusion_gt_to_segnet.sh` from the FruitBin repository: https://gitlab.liris.cnrs.fr/gduret/fruitbin
The command can be run as follows:
```
bash densefusion_gt_to_segnet.sh /gpfsscratch/rech/uli/ubn15wo/FruitBin1/FruitBin_low_1_0.7_1.0/Generated_Cameras_Evaluating /gpfsscratch/rech/uli/ubn15wo/DenseFusion/datasets/linemod/Linemod_preprocessed/segnet_results
```
The expected folder structure is as follows (only the `*_label_gt` folders are used):
```
├── datasets
│ ├── linemod
│ │ ├── dataset_config
│ │ ├── Linemod_preprocessed
│ │ │ └── segnet_results
│ │ │ ├── 01_label_gt
│ │ │ ├── 01_label_yolo
│ │ │ ├── 02_label_gt
│ │ │ ├── 02_label_yolo
│ │ │ ├── 03_label_gt
│ │ │ ├── 03_label_yolo
│ │ │ ├── 04_label_gt
│ │ │ ├── 04_label_yolo
│ │ │ ├── 05_label_gt
│ │ │ ├── 05_label_yolo
│ │ │ ├── 06_label_gt
│ │ │ ├── 06_label_yolo
│ │ │ ├── 07_label_gt
│ │ │ ├── 07_label_yolo
│ │ │ ├── 08_label_gt
│ │ │ └── 08_label_yolo
```
The evaluation command is:
```
bash ./experiments/scripts/eval_linemod.sh
```
`eval_linemod.sh` has to be modified to select the corresponding trained model, as sketched below.
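As a rough illustration (the exact options should be checked against `./tools/eval_linemod.py`; the checkpoint file names below are hypothetical), the line to adapt typically points `--model` and `--refine_model` to the checkpoints produced by the training run:
```
# Inside ./experiments/scripts/eval_linemod.sh -- replace the checkpoint names
# with the ones written by your own training run (hypothetical names shown here)
python3 ./tools/eval_linemod.py --dataset_root ./datasets/linemod/Linemod_preprocessed \
  --model trained_models/linemod/pose_model_current.pth \
  --refine_model trained_models/linemod/pose_refine_model_current.pth
```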
## Overview
This repository is the implementation code of the paper "DenseFusion: 6D Object Pose Estimation by Iterative Dense Fusion"([arXiv](https://arxiv.org/abs/1901.04780), [Project](https://sites.google.com/view/densefusion), [Video](https://www.youtube.com/watch?v=SsE5-FuK5jo)) by Wang et al. at [Stanford Vision and Learning Lab](http://svl.stanford.edu/) and [Stanford People, AI & Robots Group](http://pair.stanford.edu/). The model takes an RGB-D image as input and predicts the 6D pose of each object in the frame. This network is implemented using [PyTorch](https://pytorch.org/) and the rest of the framework is in Python. Since this project focuses on the 6D pose estimation process, we do not specifically limit the choice of the segmentation models. You can choose your preferred semantic-segmentation/instance-segmentation methods according to your needs. In this repo, we provide our full implementation code of the DenseFusion model, Iterative Refinement model and a vanilla SegNet semantic-segmentation model used in our real-robot grasping experiment. The ROS code of the real robot grasping experiment is not included.