# FruitBin
FruitBin is the repository used to process the dataset generated by PickSim (https://gitlab.liris.cnrs.fr/gduret/PickSim).
## Getting started
The expected use is to run the following command:
```
python main.py --World_begin="$id_begin" --Nb_world="$Nb" --dataset_id="$id_dataset" --occlusion_target_min=$occlusion_min --occlusion_target_max=$occlusion_max --rearrange=$rearrange --compute=$compute
```
For example:
```
python main.py --World_begin=1 --Nb_world=10000 --dataset_id=1 --occlusion_target_min=0.7 --occlusion_target_max=1.0 --rearrange=yes --compute=yes
```
The following table presents the different input parameters:
| Parameters | Information |
| :---: | :---: |
| World_begin | The id of the first scene to process |
| Nb_world | The number of scenes to process |
| dataset_id | The id of the dataset to process; it is used to build the path of the data to process |
| rearrange | Expected value "yes" or "no"; decides whether the script rearranges the data generated by PickSim |
| compute | Expected value "yes" or "no"; decides whether the script runs the post-processing for a specific scenario |
| occlusion_target_min | For scenario preprocessing, the lower bound of the visibility rate used for filtering |
| occlusion_target_max | For scenario preprocessing, the upper bound of the visibility rate used for filtering |
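For illustration only, the sketch below shows the filtering idea implied by these two bounds (the function name is hypothetical and not taken from main.py): an instance is kept when its visibility rate falls within the requested range.

```
def keep_instance(visibility_rate, occlusion_target_min=0.7, occlusion_target_max=1.0):
    """Hypothetical helper: keep a fruit instance whose visibility rate lies in the requested range."""
    return occlusion_target_min <= visibility_rate <= occlusion_target_max
```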
Additionally, more parameters can be changed directly in main.py:
| Parameters | Information |
| :---: | :---: |
| Nb_camera | The number of cameras |
| dataset_src | The path to the data generated by PickSim that we want to process |
| dataset_path | The destination path for the rearranged dataset |
| dataset_id | The id of the dataset to process; it is used to build the path of the data to process |
| choice | Expected value "low" or "high"; decides whether the post-processing uses the ground_truth_depth or the ground_truth_rgb annotations |
| list_categories | The list of categories present in the dataset that we want to consider |
| new_size | The new size used to resize features for future training |
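As an illustration only (the variable names and values below are assumptions, not the actual contents of main.py), these settings typically form a small configuration block inside the script:

```
# Hypothetical configuration block; values are examples only.
Nb_camera = 15                                   # number of cameras
dataset_src = "/path/to/PickSim/output"          # raw PickSim data to process
dataset_path = "/path/to/FruitBin/rearranged"    # destination of the rearranged dataset
dataset_id = 1                                   # id used to build the data paths
choice = "low"                                   # "low" -> ground_truth_depth, "high" -> ground_truth_rgb
list_categories = ["apple", "banana", "pear"]    # categories to consider (example values)
new_size = (640, 480)                            # target size when resizing features for training
```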
## Rearrange step
The rearrange step moves the data from the PickSim arrangement:
```
├── [Scene Id]
│   ├── Meta.json
│   ├── [camera_i]
│   │   ├── color
│   │   │   └── image
│   │   ├── depth
│   │   │   ├── depth_map
│   │   │   ├── image
│   │   │   ├── normals_map
│   │   │   ├── pointcloud
│   │   │   └── reflectance_map
│   │   ├── ground_truth_depth
│   │   │   ├── 2d_detection
│   │   │   ├── 2d_detection_loose
│   │   │   ├── 3d_detection
│   │   │   ├── 3d_pose
│   │   │   ├── id_map
│   │   │   ├── instance_map
│   │   │   ├── occlusion
│   │   │   └── semantic_map
│   │   ├── ground_truth_rgb
│   │   │   ├── 2d_detection
│   │   │   ├── 2d_detection_loose
│   │   │   ├── 3d_detection
│   │   │   ├── 3d_pose
│   │   │   ├── id_map
│   │   │   ├── instance_map
│   │   │   ├── occlusion
│   │   │   └── semantic_map
│   │   ├── infra1
│   │   │   └── image
│   │   └── infra2
│   │       └── image
```
to the following organisation:
```
├── Bbox_2d
├── Bbox_2d_loose
├── Bbox_3d
├── Depth
├── Instance_Segmentation
├── Meta
├── Occlusion
├── Pose
├── RGB
├── Semantic_Segmentation
```
The following table presents information about the different folders of the PickSim generation; they correspond to the different sensors simulated within Gazebo to match the sensors of the RealSense D415 camera.
| Folder / File | Information |
| :---: | :---: |
| Meta.json | Scene-oriented meta file; it enumerates all the data recorded during the scene and gives the list of categories simulated in the scene with the corresponding instance ids |
| Scene Id | The scene id; the generated data is scene oriented |
| camera_i | The camera id; the generated data is camera oriented |
| color | The sensor recording RGB images with a resolution of 1920x1080 (matching the RealSense D415 camera) |
| depth | The sensor recording depth data such as depth maps or point clouds with a resolution of 1280x720 (matching the RealSense D415 camera) |
| ground_truth_depth | Features recorded by a new vision plugin attached to the depth sensor, with a resolution of 1280x720 |
| ground_truth_rgb | Features recorded by a new vision plugin attached to the RGB sensor, with a resolution of 1920x1080 |
| infra1 | Black and white infrared channel 1 |
| infra2 | Black and white infrared channel 2 |
The resulting data is organized by feature type. The following table presents the resulting feature folders and the corresponding raw PickSim data:
| Feature folder | Equivalent PickSim data |
| :---: | :---: |
| Meta | [Scene Id]/Meta.json |
| Bbox_2d | [Scene Id]/[camera_i]/ground_truth_depth/2d_detection |
| Bbox_2d_loose | [Scene Id]/[camera_i]/ground_truth_depth/2d_detection_loose |
| Bbox_3d | [Scene Id]/[camera_i]/ground_truth_depth/3d_detection |
| Depth | [Scene Id]/[camera_i]/depth/image |
| Instance_Segmentation | [Scene Id]/[camera_i]/ground_truth_depth/id_map |
| Occlusion | [Scene Id]/[camera_i]/ground_truth_depth/occlusion |
| Pose | [Scene Id]/[camera_i]/ground_truth_depth/3d_pose |
| RGB | [Scene Id]/[camera_i]/color/image |
| Semantic_Segmentation | [Scene Id]/[camera_i]/ground_truth_depth/semantic_map |
The ground-truth annotations only use ground_truth_depth, so that the RGB-D data share the same resolution. However, all features will be added to the dataset for potential use by the community (https://datasets.liris.cnrs.fr/fruitbin-version1).
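For illustration, here is a minimal sketch of how the mapping from the table above could be applied during the rearrange step (the function, path layout, and camera naming are assumptions, not the actual implementation of this repository):

```
import os
import shutil

# Feature folders and their raw PickSim locations, following the table above.
# {scene} and {cam} are placeholders for the scene and camera ids.
FEATURE_SOURCES = {
    "Bbox_2d":               "{scene}/{cam}/ground_truth_depth/2d_detection",
    "Bbox_2d_loose":         "{scene}/{cam}/ground_truth_depth/2d_detection_loose",
    "Bbox_3d":               "{scene}/{cam}/ground_truth_depth/3d_detection",
    "Depth":                 "{scene}/{cam}/depth/image",
    "Instance_Segmentation": "{scene}/{cam}/ground_truth_depth/id_map",
    "Occlusion":             "{scene}/{cam}/ground_truth_depth/occlusion",
    "Pose":                  "{scene}/{cam}/ground_truth_depth/3d_pose",
    "RGB":                   "{scene}/{cam}/color/image",
    "Semantic_Segmentation": "{scene}/{cam}/ground_truth_depth/semantic_map",
}

def rearrange_scene(dataset_src, dataset_path, scene_id, nb_camera=15):
    """Copy one PickSim scene into the feature-oriented layout (sketch only)."""
    # Scene-level metadata is copied once per scene.
    os.makedirs(os.path.join(dataset_path, "Meta"), exist_ok=True)
    shutil.copy(os.path.join(dataset_src, str(scene_id), "Meta.json"),
                os.path.join(dataset_path, "Meta", f"{scene_id}.json"))
    # Per-camera features are copied into <feature>/<scene>/<camera>.
    for feature, template in FEATURE_SOURCES.items():
        for cam in range(1, nb_camera + 1):
            src = os.path.join(dataset_src,
                               template.format(scene=scene_id, cam=f"camera_{cam}"))
            dst = os.path.join(dataset_path, feature, str(scene_id), f"camera_{cam}")
            if os.path.isdir(src):
                shutil.copytree(src, dst, dirs_exist_ok=True)
```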
## Compute step for PVNet and the scenarios
The compute step takes the rearranged data as input and processes it for future training. It creates the data in the format needed by the PVNet 6D pose estimation model. Moreover, for training, all the data is category (fruit) oriented, as presented in the following architecture:
```
├── Fruit_i
│ ├── Bbox
│ ├── Bbox_3d_Gen
│ ├── Depth_Gen
│ ├── Depth_resized
│ ├── FPS
│ ├── FPS_resized
│ ├── Instance_Mask
│ ├── Instance_Mask_resized
│ ├── Labels
│ ├── Meta_Gen
│ ├── Models
│ ├── Pose_transformed
│ ├── RGB_Gen
│ ├── RGB_resized
│ └── Splitting
```
The following table gives information about the different generated features. This processed data takes into account the filtering parameters given to the main.py script, such as the desired occlusion range. For the purpose of FruitBin, the splitting for the scene scenario uses 6000 scenes for training and 2000 scenes each for evaluation and testing, out of a total of 10000. The camera scenario uses 9 cameras for training and 3 each for evaluation and testing, out of a total of 15 (a sketch of this splitting is given at the end of this section).
| Folder | Information |
| :---: | :---: |
| Fruit_i | The fruit considered; the following data are category oriented |
| Meta_Gen | List of fruit-based metadata such as the scene id, the camera id, the list of instance ids related to the fruit, and the associated occlusion rates |
| Bbox | 2D bounding boxes |
| Bbox_3d_Gen | 3D bounding boxes |
| Depth_Gen | Depth map data with a resolution of 1280x720 |
| Depth_resized | Depth map data with a resolution of 640x480 for training |
| FPS | FPS (farthest point sampling) keypoints for the 1280x720 images, used by PVNet (sketched after this table) |
| FPS_resized | FPS data with a resolution of 640x480 for PVNet training |
| Instance_Mask | Instance mask data with a resolution of 1280x720 |
| Instance_Mask_resized | Instance mask data with a resolution of 640x480 for training |
| Labels | Instance masks in the YOLOv8 format (actually generated with the compute_label.py script, as explained below) |
| Models | The meshes of the 8 fruits in a common PLY format |
| Pose_transformed | 6D pose annotation in the format of PVNet |
| RGB_Gen | RGB image data with a resolution of 1280x720 |
| RGB_resized | RGB image data with a resolution of 640x480 for training |
| Splitting | Folder only available when the dataset is downloaded online; it gives a list of .txt splitting files for the different scenarios, describing the train/eval/test split |
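The FPS keypoints can be illustrated with a short sketch of farthest point sampling over the vertices of a fruit mesh (this shows the general technique with NumPy, not the exact script used here; PVNet-style methods typically select 8 keypoints this way):

```
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedy FPS: select k vertices that are mutually far apart.

    points: (N, 3) array of mesh vertices; returns a (k, 3) array of keypoints.
    """
    rng = np.random.default_rng(seed)
    selected = [int(rng.integers(len(points)))]             # start from a random vertex
    dists = np.linalg.norm(points - points[selected[0]], axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(dists))                         # farthest from the current set
        selected.append(idx)
        dists = np.minimum(dists, np.linalg.norm(points - points[idx], axis=1))
    return points[selected]

# Example with stand-in vertices (a real mesh would be loaded from Models/*.ply).
vertices = np.random.rand(5000, 3)
keypoints = farthest_point_sampling(vertices, k=8)
```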
This step fully prepares the data for PVNet training. More information can be found at the following link: TODO
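Regarding the scene and camera splits described above, the following sketch shows how such splitting lists could be produced (file names and layout are assumptions; the released dataset already ships its Splitting folder):

```
import random

def split_scenes(nb_scenes=10000, nb_train=6000, nb_eval=2000, seed=0):
    """Scene scenario: 6000 train / 2000 eval / 2000 test scene ids (sketch only)."""
    ids = list(range(1, nb_scenes + 1))
    random.Random(seed).shuffle(ids)
    return ids[:nb_train], ids[nb_train:nb_train + nb_eval], ids[nb_train + nb_eval:]

def split_cameras(nb_cameras=15, nb_train=9, nb_eval=3):
    """Camera scenario: 9 train / 3 eval / 3 test camera ids (sketch only)."""
    cams = list(range(1, nb_cameras + 1))
    return cams[:nb_train], cams[nb_train:nb_train + nb_eval], cams[nb_train + nb_eval:]

# Write hypothetical .txt splitting files for the scene scenario.
for name, ids in zip(("train", "eval", "test"), split_scenes()):
    with open(f"scene_split_{name}.txt", "w") as f:
        f.write("\n".join(str(i) for i in ids))
```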
## Visuals ## Compute step for Densefusion
Depending on what you are making, it can be a good idea to include screenshots or even a video (you'll frequently see GIFs rather than actual videos). Tools like ttygif can help, but check out Asciinema for a more sophisticated method.
## Installation
Within a particular ecosystem, there may be a common way of installing things, such as using Yarn, NuGet, or Homebrew. However, consider the possibility that whoever is reading your README is a novice and would like more guidance. Listing specific steps helps remove ambiguity and gets people to using your project as quickly as possible. If it only runs in a specific context like a particular programming language version or operating system or has dependencies that have to be installed manually, also add a Requirements subsection.
The last step prepares the data for DenseFusion training. An example command:
```
python3 compute_label.py --path_dataset=/gpfsscratch/rech/uli/ubn15wo/FruitBin1/FruitBin_low_1_0.7_1.0/ --target_folder=Generated_Cameras --path_DF_data=/gpfsscratch/rech/uli/ubn15wo/DenseFusion01_Cameras/datasets/linemod/Linemod_preprocessed/data --occ_data=""
```
The following table presents the different input parameters:
| Parameters | Information |
| :---: | :---: |
| path_dataset | The path of the preprocessed dataset generated during the previous compute step |
| target_folder | The exact scenario to consider for DenseFusion |
| path_DF_data | The path of the DenseFusion folder where the training data will be placed |
| occ_data | Additional parameter that changes the name of the resulting DenseFusion .txt splitting files when multiple scenarios are handled in the same folder |
With this step, the preprocessing for DenseFusion is done. See the following link for more information: https://gitlab.liris.cnrs.fr/gduret/DenseFusion
## Authors and acknowledgment
Show your appreciation to those who have contributed to the project.
## License
For open source projects, say how it is licensed.