diff --git a/README.md b/README.md
index d0b39e6503414f751fa837719d87b864558e81c4..d19039748b3a91765f77566feec8f27dd7aa6572 100644
--- a/README.md
+++ b/README.md
@@ -28,6 +28,11 @@ We provide scripts and guidance to generate your own images along with a link to
 ## Requirements
 #### Blender
 This script has been developed and run using Blender 3.2.0. Blender can be downloaded [here](https://www.blender.org/download/).
+The following add-ons should also be enabled in Edit > Preferences > Add-Ons:
+- MHX2: http://download.tuxfamily.org/makehuman/plugins/mhx-blender-latest.zip
+- Import Images as Planes
+- Lighting: Sun Position
+
 #### MakeHuman
 Human models were generated using [MakeHuman Community](http://www.makehumancommunity.org/content/downloads.html). We used a modified skeleton rig that you can find in the `mh_models/` directory of this repo. Once MakeHuman is installed, copy the two "blender.*" files into `MAKEHUMAN-COMMUNITY-ROOT\makehuman\data\rigs`. We also highly recommend installing the [MassProduce](http://www.makehumancommunity.org/content/plugins.html) plugin to generate several models quickly.
@@ -44,25 +49,39 @@ By default, you should place the mhx2 files in `mh_models/exports` and the assoc
 #### Car models
 You can use any 3D car models you have. Car models should be imported into an empty Blender file, with location and rotation set to 0 and a real-size scale. The origin of the model should be placed on the driver's seat, since it will be used to position the driver. You can set an Empty Arrows object as the parent in Blender to achieve this, as below:
+
 <img src="./car_origin_blender.png" alt= "car_origin_blender" width="200"/>
 Furthermore, you can add empty meshes such as cylinders or cubes to force the hands to be placed inside or outside. To do this, rename them "IN_..." or "OUT_...". We recommend placing an "OUT_" cylinder around the wheel, and an "IN_" box to delimit the inside of the car cockpit.
 Finally, you can place targets for the wrists that will be used for the manual poses. To do this, place Empty Arrows at the desired location and rotation, with the name "Target_R_..." or "Target_L_...", such as:
+
 <img src="./target_examples.png" alt= "target_examples" height="200"/>
-# Valid visibility module
-You can download pretrained weights [here](http://dionysos.univ-lyon2.fr/~ccrispim/VisPred/models).
-For example, run:
-```
-python pose_estimation/valid.py \
-    --cfg experiments/coco/resnet50/256x192_vis_freeze.yaml \
-    --flip-test \
-    --model-file models/pytorch/pose_coco/coco_vis2_raise_soft.pth.tar
-```
+Save each .blend file in a separate folder together with the required textures, and place these folders in a `car_models` folder.
+
+#### Backgrounds
+Background images should be high-resolution images of size 1800x800 px, placed in the `backgrounds` folder. The image name should start with either "day_..." or "night_...", depending on the time of day it depicts.
+
+# Run
+To generate a scene, open the `base.blend` file in Blender. Then go to the Scripting tab and open `global_script.py`. Here are the different parameters you might want to change in this file:
+- line 15: the absolute path to the cloned repository
+- lines 77-84: relative paths to the blend car files, with the object name you will give to the car model in the generated scene
+- line 97: path to the human models
+- line 186: relative path to the output folder
+- line 204: number of different scenes (configurations of person, car, background) to generate
+- line 205: number of poses per scene
+
+Be aware that the last poses will be the targeted poses (if you put targets in your car models). Therefore, if you set the number of poses too low, no random poses will be generated.
+Then run the script in the Blender script editor. Depending on the number of scenes and poses requested, it can take several minutes to hours. During this time, Blender will not be usable.
+When the script stops, you should end up with images in the output folder. You can also manipulate the last generated scene in the Layout tab, where each pose corresponds to a time stamp.
+<img src="./scene_complete.png" alt= "scene_complete"/>
+
+# Generated dataset
+Our dataset with 200k images used in our paper can be downloaded from [here](http://dionysos.univ-lyon2.fr/~ccrispim/SyntheDrivers/synthe_drivers-dataset.zip).
 
 # Citation
 If you use our network or our code, please cite:
@@ -85,14 +104,3 @@ the context of the AutoBehave project.
 <img style="margin-right: 20px" src="assets/logo_liris.png" alt="LIRIS logo" height="75" width="160"/>
 <img style="margin-left: 20px" src="assets/logo_ra.png" alt="RA logo" height="60" width="262"/>
 </div>
-
-
-
-
-# Synthe_DriPE
-
-
-### Addons Blender:
-MHX2 : http://download.tuxfamily.org/makehuman/plugins/mhx-blender-latest.zip
-Import images as planes
-Lighting : Sun position
diff --git a/scene_complete.png b/scene_complete.png
new file mode 100644
index 0000000000000000000000000000000000000000..ff0ea40b637778c142e885ae8cc020e60ab95d10
Binary files /dev/null and b/scene_complete.png differ
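The `# Run` section added by this patch says the last poses of each scene are targeted poses, so setting the poses-per-scene parameter too low leaves no room for random poses. The scheduling this implies can be sketched as the snippet below. This is a minimal illustration only: the function name `plan_poses` and the `n_targets` parameter are hypothetical, not taken from `global_script.py`, and no Blender API is involved.

```python
# Hypothetical sketch of the pose scheduling described in the README's Run
# section: for each scene, random poses come first and targeted poses last.
def plan_poses(n_poses: int, n_targets: int) -> list:
    """Label the poses of one scene; targets always occupy the final slots."""
    n_random = max(n_poses - n_targets, 0)   # too few poses => no random poses
    n_target = min(n_targets, n_poses)       # never schedule more than n_poses
    return ["random"] * n_random + ["target"] * n_target

# Example: 5 poses per scene with 2 wrist targets defined in the car model.
print(plan_poses(5, 2))  # ['random', 'random', 'random', 'target', 'target']
print(plan_poses(1, 2))  # ['target'] -- only a targeted pose remains
```

This mirrors the warning in the Run section: with `n_poses <= n_targets`, every generated pose is a targeted one.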