Commit d98a0376 authored by Adam Krupicka, committed by GitHub

Fix internal repo links in nerf_dataset_tips.md

They seem to need a leading slash to resolve properly; otherwise they link to files inside the `docs` directory.
# Tips for training NeRF models with Instant Neural Graphics Primitives

Our NeRF implementation expects initial camera parameters to be provided in a `transforms.json` file in a format compatible with [the original NeRF codebase](https://www.matthewtancik.com/nerf).
We provide a script as a convenience, [scripts/colmap2nerf.py](/scripts/colmap2nerf.py), that can be used to process a video file or sequence of images, using the open source [COLMAP](https://colmap.github.io/) structure from motion software to extract the necessary camera data.
The training process can be quite picky about the dataset.
For example, it is important for the dataset to have good coverage, to not contain mislabelled camera data, and to not contain blurry frames (motion blur and defocus blur are both problematic).
This is outside the scope of the __instant-ngp__ implementation.
__instant-ngp__'s NeRF implementation by default only marches rays through a unit bounding box that is from `[0, 0, 0]` to `[1, 1, 1]`.
The data loader by default takes the camera transforms in the input JSON file, and scales the positions by `0.33` and offsets by `[0.5, 0.5, 0.5]` in order to map the origin of the input data to the center of this cube.
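As an illustration, that default mapping can be sketched in Python. The constants mirror the defaults stated above; the helper name is hypothetical and not part of the instant-ngp codebase:

```python
# Sketch of the default input-space -> unit-cube mapping described above.
# SCALE and OFFSET mirror the stated defaults; `to_unit_cube` is a
# hypothetical helper, not a function from the instant-ngp codebase.
SCALE = 0.33
OFFSET = (0.5, 0.5, 0.5)

def to_unit_cube(position):
    """Scale a camera position and offset it toward the cube center."""
    return tuple(SCALE * p + o for p, o in zip(position, OFFSET))

# The origin of the input data lands at the center of the [0, 1]^3 cube.
print(to_unit_cube((0.0, 0.0, 0.0)))  # (0.5, 0.5, 0.5)
```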
The scale factor is chosen to suit the synthetic datasets in the original NeRF paper, as well as the output of our [scripts/colmap2nerf.py](/scripts/colmap2nerf.py) script.
It is worth checking the alignment of your cameras to this bounding box by checking both "Visualize cameras" and "Visualize unit cube" in the "Debug visualization" rollup of the UI, as shown below:

<img src="assets/nerfbox.jpg" width="100%"/>

For natural scenes where there is a background visible outside the unit cube, it is necessary to set the parameter `aabb_scale` in the `transforms.json` file to a power of 2 integer up to 16 (that is 1, 2, 4, 8, or 16), at the outermost scope (same nesting as e.g. the existing `camera_angle_x` parameter). See [data/nerf/fox/transforms.json](/data/nerf/fox/transforms.json) for an example.
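A minimal sketch of what that outermost scope looks like (the values below are placeholders for illustration, not taken from the fox dataset):

```python
import json

# Hypothetical minimal transforms.json: `aabb_scale` sits at the outermost
# scope, next to parameters such as `camera_angle_x`. Values are placeholders.
transforms = {
    "camera_angle_x": 0.69,  # placeholder; produced by your own capture
    "aabb_scale": 16,        # must be one of 1, 2, 4, 8, 16
    "frames": [],            # per-image camera entries omitted for brevity
}

assert transforms["aabb_scale"] in (1, 2, 4, 8, 16)
print(json.dumps(transforms, indent=2))
```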
The effect can be seen in the image below:
See [nerf_loader.cu](src/nerf_loader.cu) for implementation details and additional options.
Make sure that you have installed [COLMAP](https://colmap.github.io/) and that it is available in your PATH. If you are using a video file as input, also be sure to install [FFmpeg](https://www.ffmpeg.org/) and make sure that it is available in your PATH.
To check that this is the case, from a terminal window, you should be able to run `colmap` and `ffmpeg -?` and see some help text from each.
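One way to script that check, as a sketch using Python's standard library rather than trying the commands by hand:

```python
import shutil

# Report whether `colmap` and `ffmpeg` resolve on the current PATH,
# mirroring the manual `colmap` / `ffmpeg -?` check described above.
for tool in ("colmap", "ffmpeg"):
    location = shutil.which(tool)
    print(f"{tool}: {location if location else 'NOT found in PATH'}")
```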
If you are training from a video file, run the [scripts/colmap2nerf.py](/scripts/colmap2nerf.py) script from the folder containing the video, with the following recommended parameters:

```sh
data-folder$ python [path-to-instant-ngp]/scripts/colmap2nerf.py --video_in <filename of video> --video_fps 2 --run_colmap --aabb_scale 16
```
The script will run FFmpeg and/or COLMAP as needed, followed by a conversion step to the required `transforms.json` format, which will be written in the current directory.

By default, the script invokes colmap with the "sequential matcher", which is suitable for images taken from a smoothly changing camera path, as in a video. The exhaustive matcher is more appropriate if the images are in no particular order, as shown in the image example above.

For more options, you can run the script with `--help`. For more advanced uses of COLMAP or for challenging scenes, please see the [COLMAP documentation](https://colmap.github.io/cli.html); you may need to modify the [scripts/colmap2nerf.py](/scripts/colmap2nerf.py) script itself.
The `aabb_scale` parameter is the most important `instant-ngp` specific parameter. It specifies the extent of the scene, defaulting to 1; that is, the scene is scaled such that the camera positions are at an average distance of 1 unit from the origin. For small synthetic scenes such as the original NeRF dataset, the default `aabb_scale` of 1 is ideal and leads to fastest training. The NeRF model makes the assumption that the training images can entirely be explained by a scene contained within this bounding box. However, for natural scenes where there is a background that extends beyond this bounding box, the NeRF model will struggle and may hallucinate "floaters" at the boundaries of the box. By setting `aabb_scale` to a larger power of 2 (up to a maximum of 16), the NeRF model will extend rays to a much larger bounding box. Note that this can impact training speed slightly. If in doubt, for natural scenes, start with an `aabb_scale` of 16, and subsequently reduce it if possible. The value can be directly edited in the `transforms.json` output file, without re-running the [scripts/colmap2nerf.py](/scripts/colmap2nerf.py) script.
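That edit can also be done programmatically. A minimal sketch, assuming `transforms.json` already exists and contains valid JSON (the helper name is hypothetical):

```python
import json
from pathlib import Path

def set_aabb_scale(transforms_path, aabb_scale):
    """Rewrite `aabb_scale` in an existing transforms.json in place.

    Hypothetical helper: assumes the file exists and holds valid JSON.
    """
    if aabb_scale not in (1, 2, 4, 8, 16):
        raise ValueError("aabb_scale must be a power of 2 between 1 and 16")
    path = Path(transforms_path)
    data = json.loads(path.read_text())
    data["aabb_scale"] = aabb_scale
    path.write_text(json.dumps(data, indent=2))
```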
Assuming success, you can now train your NeRF model as follows, starting in the `instant-ngp` folder: