Real-time training progress on the image task, where the neural network learns the mapping from 2D coordinates to the RGB colors of a high-resolution image. Note that in this video the network is trained from scratch, and converges so quickly that you may miss it if you blink!
</p>
</figure>
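<p>
To make the image task concrete, here is a minimal PyTorch sketch of a coordinate network fitting a single image: a small MLP trained on random pixel batches, with a simple frequency encoding standing in for the hash encoding described further below. All names and hyperparameters are illustrative assumptions, not the paper's fused CUDA implementation.
</p>
<pre><code>import torch
import torch.nn as nn

class ImageField(nn.Module):
    """Tiny coordinate MLP: (x, y) in [0,1]^2 -> RGB in [0,1]^3."""
    def __init__(self, n_freqs: int = 8, hidden: int = 64):
        super().__init__()
        # Frequency (sin/cos) encoding stands in for the hash encoding here.
        self.register_buffer("freqs", (2.0 ** torch.arange(n_freqs)) * torch.pi)
        self.net = nn.Sequential(
            nn.Linear(2 * 2 * n_freqs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, xy: torch.Tensor) -> torch.Tensor:
        angles = xy.unsqueeze(-1) * self.freqs               # (B, 2, n_freqs)
        enc = torch.cat([angles.sin(), angles.cos()], -1).flatten(1)
        return self.net(enc)

def fit_image(image: torch.Tensor, steps: int = 2000, batch: int = 65536):
    """Fit the network to an (H, W, 3) float image from random pixel batches."""
    h, w, _ = image.shape
    model = ImageField()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    scale = torch.tensor([h - 1, w - 1], dtype=torch.float32)
    for _ in range(steps):
        ij = torch.stack([torch.randint(h, (batch,)),
                          torch.randint(w, (batch,))], dim=-1)
        pred = model(ij.float() / scale)        # normalized coordinates
        loss = ((pred - image[ij[:, 0], ij[:, 1]]) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return model
</code></pre>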
<pclass="caption_justify">
Real-time training progress on the image task where the neural network learns the mapping from 2D coordinates to RGB colors of a high-resolution image. Note that in this video, the network is trained from scratch - but converges so quickly you may miss it if you blink! <br/>
Real-time training progress on various SDF datsets. Training data is generated on the fly from the ground-truth mesh using the <ahref="https://developer.nvidia.com/optix">NVIDIA OptiX raytracing framework</a>.
</p>
</figure>
</figure>
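<p>
The OptiX pipeline cannot be reproduced in a few lines, but the idea of generating SDF training data on the fly from a mesh can be sketched on the CPU. This assumed variant uses the trimesh library, jittering half of the samples near the surface (where the SDF carries the most information) and spreading the rest over the bounding box; note that trimesh reports positive distances inside the mesh.
</p>
<pre><code>import numpy as np
import trimesh

def sample_sdf_batch(mesh: trimesh.Trimesh, n: int = 4096, noise: float = 0.01):
    """Draw (position, signed distance) training pairs from a watertight mesh."""
    surface, _ = trimesh.sample.sample_surface(mesh, n // 2)
    near = surface + np.random.normal(scale=noise, size=surface.shape)
    uniform = np.random.uniform(mesh.bounds[0], mesh.bounds[1],
                                size=(n - n // 2, 3))
    points = np.concatenate([near, uniform], axis=0)
    # trimesh's convention is positive inside the mesh; negate to get the
    # usual graphics convention (negative inside, positive outside).
    sdf = -trimesh.proximity.signed_distance(mesh, points)
    return points.astype(np.float32), sdf.astype(np.float32)
</code></pre>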
<pclass="caption_justify">
Real-time training progress on various SDF datsets. Training data is generated on the fly from the ground-truth mesh using the <ahref="https://developer.nvidia.com/optix">NVIDIA OptiX raytracing framework</a>.
Direct visualization of a <em>neural radiance cache</em>, in which the network predicts outgoing radiance at the first non-specular vertex of each pixel's path, and is trained on-line from rays generated by a real-time pathtracer. On the left, we show results using the triangle wave encoding of <ahref="https://research.nvidia.com/publication/2021-06_Real-time-Neural-Radiance">[Müller et al. 2021]</a>; on the right, the new multiresolution hash encoding allows the network to learn much sharper details, for example in the shadow regions.
</p>
</figure>
</figure>
<pclass="caption_justify">
Direct visualization of a <em>neural radiance cache</em>, in which the network predicts outgoing radiance at the first non-specular vertex of each pixel's path, and is trained on-line from rays generated by a real-time pathtracer. On the left, we show results using the triangle wave encoding of <ahref="https://research.nvidia.com/publication/2021-06_Real-time-Neural-Radiance">[Müller et al. 2021]</a>; on the right, the new multiresolution hash encoding allows the network to learn much sharper details, for example in the shadow regions.
</p>
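<p>
The multiresolution hash encoding itself is simple enough to sketch. Below is an illustrative NumPy version of the 2D case following the paper's description: per-level grid resolutions growing geometrically between a coarsest and finest resolution, a spatial hash that XORs coordinates multiplied by large primes, and linear interpolation of the hashed feature vectors, which are then concatenated and fed to a small MLP. Class and parameter names are our own; for brevity we hash every level, whereas the paper indexes coarse levels directly when they fit in the table, and the real implementation is a fused CUDA kernel with trainable tables.
</p>
<pre><code>import numpy as np

PRIMES = np.array([1, 2654435761], dtype=np.uint64)   # per-dimension hash primes

class HashEncoding2D:
    """Multiresolution hash encoding for inputs in [0,1]^2 (illustrative)."""
    def __init__(self, levels=16, table_size=2**14, features=2,
                 n_min=16, n_max=512, seed=0):
        rng = np.random.default_rng(seed)
        # Grid resolutions grow geometrically from n_min to n_max.
        b = np.exp((np.log(n_max) - np.log(n_min)) / (levels - 1))
        self.res = np.floor(n_min * b ** np.arange(levels)).astype(np.int64)
        self.table_size = table_size
        # One feature table per level (trainable parameters in the real system).
        self.tables = rng.uniform(-1e-4, 1e-4, (levels, table_size, features))

    def _hash(self, corners):
        # Spatial hash: XOR the coordinates times large primes, modulo table size.
        h = corners.astype(np.uint64) * PRIMES
        return (h[:, 0] ^ h[:, 1]) % self.table_size

    def __call__(self, xy):
        out = []
        for level, n in enumerate(self.res):
            pos = xy * n                          # scale into this level's grid
            base = np.floor(pos).astype(np.int64)
            frac = pos - base                     # offset within the grid cell
            acc = 0.0
            for dx in (0, 1):                     # bilinear interpolation over
                for dy in (0, 1):                 # the four cell corners
                    idx = self._hash(base + (dx, dy))
                    w = (frac[:, 0] if dx else 1 - frac[:, 0]) * \
                        (frac[:, 1] if dy else 1 - frac[:, 1])
                    acc = acc + w[:, None] * self.tables[level, idx]
            out.append(acc)
        return np.concatenate(out, axis=1)        # (batch, levels * features)
</code></pre>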
</section>
...
<br/>
<em>Girl With a Pearl Earring</em> renovation by Koorosh Orooj <a href="http://profoundism.com/free_licenses.html">(CC BY-SA 4.0 License)</a>
<br/>
<em>Tokyo</em> gigapixel photograph by Trevor Dobson <a href="https://creativecommons.org/licenses/by-nc-nd/2.0/">(CC BY-NC-ND 2.0 License)</a>
<br/>
<em>Lucy</em> model from the <a href="http://graphics.stanford.edu/data/3Dscanrep/">Stanford 3D scan repository</a>
<br/>
<em>Factory robot</em> dataset by Arman Toornias and Saurabh Jain.
<br/>
<em>Disney Cloud</em> model by Walt Disney Animation Studios. (<a href="https://media.disneyanimation.com/uploads/production/data_set_asset/6/asset/License_Cloud.pdf">CC BY-SA 3.0</a>)