diff --git a/docs/index.html b/docs/index.html
index fe313e915f715e6268c76ca208fe4408bae0b9fe..915414e62b1a865fe1102109b3eefe10e14d2805 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -333,10 +333,10 @@ figure {
         <source src="assets/tokyo_online_training_counter.mp4" type="video/mp4">
         Your browser does not support the video tag.
       </video>
-      <p class="caption">
-        Real-time training progress on the image task where the neural network learns the mapping from 2D coordinates to RGB colors of a high-resolution image. Note that in this video, the network is trained from scratch - but converges so quickly you may miss it if you blink! <br/>
-      </p>
     </figure>
+    <p class="caption_justify">
+      Real-time training progress on the image task where the neural network learns the mapping from 2D coordinates to RGB colors of a high-resolution image. Note that in this video, the network is trained from scratch, but converges so quickly you may miss it if you blink! <br/>
+    </p>
 
     <h3>Neural Radiance Fields</h3>
     <hr>
@@ -437,10 +437,10 @@ figure {
         <source src="assets/sdf_grid_lq.mp4" type="video/mp4">
         Your browser does not support the video tag.
       </video>
-      <p class="caption">
-        Real-time training progress on various SDF datsets. Training data is generated on the fly from the ground-truth mesh using the <a href="https://developer.nvidia.com/optix">NVIDIA OptiX raytracing framework</a>.
-      </p>
     </figure>
+    <p class="caption_justify">
+      Real-time training progress on various SDF datasets. Training data is generated on the fly from the ground-truth mesh using the <a href="https://developer.nvidia.com/optix">NVIDIA OptiX raytracing framework</a>.
+    </p>
 
     <h3>Neural Radiance Cache</h3>
     <hr>
@@ -449,10 +449,10 @@ figure {
         <source src="assets/nrc_new_vs_old.mp4" type="video/mp4">
         Your browser does not support the video tag.
       </video>
-      <p class="caption">
-        Direct visualization of a <em>neural radiance cache</em>, in which the network predicts outgoing radiance at the first non-specular vertex of each pixel's path, and is trained on-line from rays generated by a real-time pathtracer. On the left, we show results using the triangle wave encoding of <a href="https://research.nvidia.com/publication/2021-06_Real-time-Neural-Radiance">[Müller et al. 2021]</a>; on the right, the new multiresolution hash encoding allows the network to learn much sharper details, for example in the shadow regions.
-      </p>
     </figure>
+    <p class="caption_justify">
+      Direct visualization of a <em>neural radiance cache</em>, in which the network predicts outgoing radiance at the first non-specular vertex of each pixel's path, and is trained online from rays generated by a real-time path tracer. On the left, we show results using the triangle wave encoding of <a href="https://research.nvidia.com/publication/2021-06_Real-time-Neural-Radiance">[Müller et al. 2021]</a>; on the right, the new multiresolution hash encoding allows the network to learn much sharper details, for example in the shadow regions.
+    </p>
 
   </section>
 
@@ -514,9 +514,11 @@ figure {
       <br/>
       <em>Girl With a Pearl Earing</em> renovation by Koorosh Orooj <a href="http://profoundism.com/free_licenses.html">(CC BY-SA 4.0 License)</a>
       <br/>
+      <em>Tokyo</em> gigapixel photograph by Trevor Dobson <a href="https://creativecommons.org/licenses/by-nc-nd/2.0/">(CC BY-NC-ND 2.0 License)</a>
+      <br/>
       <em>Lucy</em> model from the <a href="http://graphics.stanford.edu/data/3Dscanrep/">Stanford 3D scan repository</a>
       <br/>
-      <em>Factory robot dataset by Arman Toornias and Saurabh Jain.</em>
+      <em>Factory robot</em> dataset by Arman Toornias and Saurabh Jain.
       <br/>
       <em>Disney Cloud</em> model by Walt Disney Animation Studios. (<a href="https://media.disneyanimation.com/uploads/production/data_set_asset/6/asset/License_Cloud.pdf">CC BY-SA 3.0</a>)
       <br/>