If automatic GPU architecture detection fails (as can happen if you have multiple GPUs installed), set the `TCNN_CUDA_ARCHITECTURES` environment variable to the value for the GPU you would like to use. The following table lists the values for common GPUs. If your GPU is not listed, consult [this exhaustive list](https://developer.nvidia.com/cuda-gpus).
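| GPU | TCNN_CUDA_ARCHITECTURES |
| :-------------: | ------------- |
| H100 | 90 |
| 40X0 | 89 |
| 30X0 | 86 |
| A100 | 80 |
| 20X0 | 75 |
| TITAN V / V100 | 70 |
| 10X0 / TITAN Xp | 61 |
| 9X0 | 52 |
| K80 | 37 |

For example, for an RTX 3090 you might configure and build like this (a sketch following the repository's standard CMake flow):

```sh
TCNN_CUDA_ARCHITECTURES=86 cmake . -B build
cmake --build build --config RelWithDebInfo -j
```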
Here are the main keyboard controls for the testbed application.
| Key | Meaning |
| :-------------: | ------------- |
| WASD | Forward / pan left / backward / pan right. |
| Spacebar / C | Move up / down. |
| = or + / - or _ | Increase / decrease camera velocity. |
| E / Shift+E | Increase / decrease exposure. |
| T | Toggle training. After around two minutes training tends to settle down, so it can be toggled off. |
| R | Reload network from file. |
| Shift+R | Reset camera. |
| O | Toggle visualization of the accumulated error map. |
| G | Toggle visualization of the ground truth. |
| M | Toggle multi-view visualization of layers of the neural model. See the paper's video for a little more explanation. |
| , / . | Show the previous / next visualized layer; hit M to escape. |
| 1-8 | Switch among various render modes, with 2 being the standard one. You can see the list of render mode names in the control interface. |
There are many controls in the __instant-ngp__ GUI when the testbed program is run.
First, note that this GUI can be moved and resized, as can the "Camera path" GUI (which first must be expanded to be used).
Some popular user controls in __instant-ngp__ are:
* __Snapshot:__ use "Save" to save the generated NeRF solution and "Load" to reload it later. Saving a snapshot is necessary if you want to make an animation.
* __Rendering -> DLSS:__ toggling this on and setting "DLSS sharpening" below it to 1.0 can often improve rendering quality.
* __Rendering -> Crop size:__ trim back the surrounding environment to focus on the model. "Crop aabb" lets you move the center of the volume of interest and fine tune. See more about this feature in [our NeRF training & dataset tips](https://github.com/NVlabs/instant-ngp/blob/master/docs/nerf_dataset_tips.md).
The "Camera path" GUI lets you set frames along a path. "Add from cam" is the main button you'll want to push, then saving out the camera keyframes using "Save" to create a `base_cam.json` file. There is a bit more information about the GUI [in this post](https://developer.nvidia.com/blog/getting-started-with-nvidia-instant-nerfs/) and [in this (bit dated) video](https://www.youtube.com/watch?v=z3-fjYzd0BA).
## Python bindings
To conduct controlled experiments in an automated fashion, all features from the interactive testbed (and more!) have Python bindings that can be easily instrumented.
For an example of how the `./build/testbed` application can be implemented and extended from within Python, see `./scripts/run.py`, which supports a superset of the command line arguments that `./build/testbed` does.
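If you want to drive the testbed from your own script rather than through `run.py`, a minimal sketch might look like the following. It assumes the compiled `pyngp` module sits in `build/` and uses the `Testbed` methods that `scripts/run.py` itself calls; treat it as an illustration, not a fixed API.

```python
import sys

# scripts/run.py adds the build directory to the path so the compiled
# pyngp module can be imported; we assume the same layout here.
sys.path.append("build")

import pyngp as ngp

testbed = ngp.Testbed(ngp.TestbedMode.Nerf)
testbed.load_training_data("data/nerf/fox")  # any NeRF scene directory
testbed.shall_train = True

# frame() advances training/rendering by one step and returns False on exit.
while testbed.frame():
	if testbed.training_step >= 2000:  # stop once training has settled down
		break

# Save the NeRF solution, as the GUI's Snapshot "Save" button does.
testbed.save_snapshot("base.msgpack", False)
```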
Here is a typical command line using `scripts/run.py` to generate a 5-second flythrough of the fox dataset to the (default) file `movie.mp4`, after using the testbed to save a NeRF solution file and a set of camera key frames:
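Such an invocation could look like the following sketch; the `--video_*` flags match the `scripts/run.py` arguments listed further below, while `--mode`, `--scene`, `--load_snapshot`, `--width`, `--height`, and the file names are illustrative assumptions:

```sh
python scripts/run.py --mode nerf --scene data/nerf/fox \
	--load_snapshot base.msgpack --video_camera_path base_cam.json \
	--video_n_seconds 5 --video_fps 60 --width 1920 --height 1080
```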
If you'd rather build new models from the hash encoding and fast neural networks, consider [__tiny-cuda-nn__'s PyTorch extension](https://github.com/nvlabs/tiny-cuda-nn#pytorch-extension).
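To give a flavor of that API, here is a minimal sketch of an instant-ngp-style model (a hash-grid encoding feeding a fully fused MLP); it assumes the `tinycudann` Python package is installed as described in the tiny-cuda-nn README, with illustrative config values taken from that documentation:

```python
import torch
import tinycudann as tcnn

# Hash-grid encoding followed by a small fully fused MLP, as in instant-ngp.
model = tcnn.NetworkWithInputEncoding(
	n_input_dims=3,   # 3D positions in
	n_output_dims=4,  # e.g. density + color out
	encoding_config={
		"otype": "HashGrid",
		"n_levels": 16,
		"n_features_per_level": 2,
		"log2_hashmap_size": 19,
		"base_resolution": 16,
		"per_level_scale": 2.0,
	},
	network_config={
		"otype": "FullyFusedMLP",
		"activation": "ReLU",
		"output_activation": "None",
		"n_neurons": 64,
		"n_hidden_layers": 2,
	},
)

x = torch.rand(1024, 3, device="cuda")  # a batch of input coordinates
y = model(x)                            # differentiable forward pass on the GPU
```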
parser.add_argument("--screenshot_dir",default="",help="Which directory to output screenshots to.")
parser.add_argument("--screenshot_dir",default="",help="Which directory to output screenshots to.")
parser.add_argument("--screenshot_spp",type=int,default=16,help="Number of samples per pixel in screenshots.")
parser.add_argument("--screenshot_spp",type=int,default=16,help="Number of samples per pixel in screenshots.")
parser.add_argument("--video_camera_path",default="",help="The camera path to render.")
parser.add_argument("--video_camera_path",default="",help="The camera path to render, e.g., base_cam.json.")
parser.add_argument("--video_camera_smoothing",action="store_true",help="Applies additional smoothing to the camera trajectory with the caveat that the endpoint of the camera path may not be reached.")
parser.add_argument("--video_camera_smoothing",action="store_true",help="Applies additional smoothing to the camera trajectory with the caveat that the endpoint of the camera path may not be reached.")
parser.add_argument("--video_fps",type=int,default=60,help="Number of frames per second.")
parser.add_argument("--video_fps",type=int,default=60,help="Number of frames per second.")
parser.add_argument("--video_n_seconds",type=int,default=1,help="Number of seconds the rendered video should be long.")
parser.add_argument("--video_n_seconds",type=int,default=1,help="Number of seconds the rendered video should be long.")
...
@@ -62,7 +62,7 @@ def parse_args():
...
@@ -62,7 +62,7 @@ def parse_args():
parser.add_argument("--n_steps",type=int,default=-1,help="Number of steps to train for before quitting.")
parser.add_argument("--n_steps",type=int,default=-1,help="Number of steps to train for before quitting.")
parser.add_argument("--second_window",action="store_true",help="Open a second window containing a copy of the main output.")
parser.add_argument("--second_window",action="store_true",help="Open a second window containing a copy of the main output.")
parser.add_argument("--sharpen",default=0,help="Set amount of sharpening applied to NeRF training images.")
parser.add_argument("--sharpen",default=0,help="Set amount of sharpening applied to NeRF training images. Range 0.0 to 1.0.")