pvnet_method
Getting started with FruitBin
To clone the repository and switch to the development branch, run:
git clone https://gitlab.liris.cnrs.fr/maali/pvnet_method
cd pvnet_method
git checkout dev
If the dataset is downloaded from https://datasets.liris.cnrs.fr/fruitbin-version1, the data split is described by txt files; a rearrangement has to be done to create separate folders for training, evaluation and testing.
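The rearrangement from split txt files into per-split folders could be sketched as follows. This is a hypothetical sketch, not the official script: the file names train.txt, eval.txt and test.txt and the assumption that each lists one sample id per line are illustrative, and should be adapted to the actual split files shipped with the dataset.

```python
import shutil
from pathlib import Path

# Hypothetical split files -> target folders; adapt names to the real dataset.
SPLITS = {
    "train.txt": "Generated_Cameras_Training",
    "eval.txt": "Generated_Cameras_Evaluating",
    "test.txt": "Generated_Cameras_Testing",
}

def rearrange(dataset_root: str) -> None:
    """Copy every file whose name starts with a listed sample id
    into the folder of the split that lists it."""
    root = Path(dataset_root)
    for split_file, split_dir in SPLITS.items():
        ids = (root / split_file).read_text().split()
        for sample_id in ids:
            # Materialize matches first and skip files already inside a split folder.
            matches = [
                p for p in root.glob(f"**/{sample_id}*")
                if p.is_file() and not any(part in SPLITS.values() for part in p.parts)
            ]
            for src in matches:
                dst = root / split_dir / src.relative_to(root)
                dst.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dst)
```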
To use this PVNet code with the FruitBin dataset, preprocessing and splitting of the data have to be done following the steps described at https://gitlab.liris.cnrs.fr/gduret/fruitbin .
An example of folder architecture is:
├──Generated_Cameras_Training
│ ├── Fruit_i
│ │ ├── Bbox
│ │ ├── Bbox_3d_Gen
│ │ ├── Depth_Gen
│ │ ├── Depth_resized
│ │ ├── FPS
│ │ ├── FPS_resized
│ │ ├── Instance_Mask
│ │ ├── Instance_Mask_resized
│ │ ├── Labels
│ │ ├── Meta_Gen
│ │ ├── Models
│ │ ├── Pose_transformed
│ │ ├── RGB_Gen
│ │ ├── RGB_resized
│ ├── Fruit_j...
├──Generated_Cameras_Evaluating
│ ├── Fruit_i
│ │ ├── Bbox
│ │ ├── Bbox_3d_Gen
│ │ ├── Depth_Gen
│ │ ├── Depth_resized
│ │ ├── FPS
│ │ ├── FPS_resized
│ │ ├── Instance_Mask
│ │ ├── Instance_Mask_resized
│ │ ├── Labels
│ │ ├── Meta_Gen
│ │ ├── Models
│ │ ├── Pose_transformed
│ │ ├── RGB_Gen
│ │ ├── RGB_resized
│ ├── Fruit_j...
├──Generated_Cameras_Testing
│ ├── Fruit_i
│ │ ├── Bbox
│ │ ├── Bbox_3d_Gen
│ │ ├── Depth_Gen
│ │ ├── Depth_resized
│ │ ├── FPS
│ │ ├── FPS_resized
│ │ ├── Instance_Mask
│ │ ├── Instance_Mask_resized
│ │ ├── Labels
│ │ ├── Meta_Gen
│ │ ├── Models
│ │ ├── Pose_transformed
│ │ ├── RGB_Gen
│ │ ├── RGB_resized
│ ├── Fruit_j...
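Before training, it may help to sanity-check that each fruit folder contains the subfolders shown in the tree above. A minimal sketch, assuming the split folder directly contains one directory per fruit:

```python
from pathlib import Path

# Subfolders expected inside each Fruit_i directory, as in the tree above.
EXPECTED = [
    "Bbox", "Bbox_3d_Gen", "Depth_Gen", "Depth_resized",
    "FPS", "FPS_resized", "Instance_Mask", "Instance_Mask_resized",
    "Labels", "Meta_Gen", "Models", "Pose_transformed",
    "RGB_Gen", "RGB_resized",
]

def missing_subfolders(split_dir: str) -> dict:
    """Return {fruit_name: [missing subfolder names]} for one split folder."""
    report = {}
    for fruit in sorted(Path(split_dir).iterdir()):
        if not fruit.is_dir():
            continue
        missing = [sub for sub in EXPECTED if not (fruit / sub).is_dir()]
        if missing:
            report[fruit.name] = missing
    return report
```

An empty report means the split folder matches the architecture above.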
If the preprocessing has been successful, the training command for each fruit is:
python models.py -cls_name fruit_name --path_base Path_of_the_preprocessed_data_folder --training_folder Name_folder_training --evaluation_folder Name_folder_evaluation
An example is:
python models.py -cls_name kiwi1 --path_base /gpfsscratch/rech/uli/ubn15wo/FruitBin1/FruitBin_low_1_0.7 --training_folder Generated_Cameras_Training --evaluation_folder Generated_Cameras_Evaluating
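Since training is launched per fruit, several fruits can be trained by looping over class names. A sketch, not part of the repository: the FRUITS list is illustrative (only kiwi1 appears in the examples above), and dry_run=True prints the commands instead of launching them.

```python
import subprocess

FRUITS = ["kiwi1"]  # illustrative; extend with the other FruitBin class names

def train_all(path_base, training_folder, evaluation_folder, dry_run=True):
    """Build (and optionally run) one models.py training command per fruit."""
    for fruit in FRUITS:
        cmd = [
            "python", "models.py",
            "-cls_name", fruit,
            "--path_base", path_base,
            "--training_folder", training_folder,
            "--evaluation_folder", evaluation_folder,
        ]
        if dry_run:
            print(" ".join(cmd))  # preview only
        else:
            subprocess.run(cmd, check=True)
```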
To evaluate the model, the segmentation results first have to be saved. The command runs as follows:
python predict_pose.py --path_data=Path_of_the_preprocessed_data_folder --folder_evaluation=Name_folder_evaluation --class_name=fruit_name
For example:
python predict_pose.py --path_data=/gpfsscratch/rech/uli/ubn15wo/FruitBin1/FruitBin_low_1_0.7 --folder_evaluation=Generated_Cameras_Evaluating --class_name=kiwi1
The evaluation command is :
python eval_pose.py --path_data=Path_of_the_preprocessed_data_folder --folder_evaluation=Name_folder_evaluation --class_name=fruit_name --symmetry=is_symetric
For example:
python eval_pose.py --path_data=/gpfsscratch/rech/uli/ubn15wo/FruitBin1/FruitBin_low_1_0.7/ --folder_evaluation=Generated_Cameras_Evaluating/ --class_name=kiwi1 --symmetry=True
For information, in FruitBin all fruits are considered symmetric except the pear and the banana.
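Following the note above, the value passed to --symmetry could be derived from the class name. A sketch under the assumption that class names embed the fruit name with a numeric suffix, as in kiwi1:

```python
# Per the note above, only the pear and the banana are non-symmetric in FruitBin.
ASYMMETRIC = ("pear", "banana")

def is_symmetric(class_name: str) -> bool:
    """True for every FruitBin fruit except the pear and the banana."""
    return not class_name.lower().startswith(ASYMMETRIC)
```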
Authors and acknowledgment
This code is based on PVNet. If you find this code useful for your research, please cite the following BibTeX entry:
@inproceedings{peng2019pvnet,
title={PVNet: Pixel-wise Voting Network for 6DoF Pose Estimation},
author={Peng, Sida and Liu, Yuan and Huang, Qixing and Zhou, Xiaowei and Bao, Hujun},
booktitle={CVPR},
year={2019}
}