diff --git a/docs/index.html b/docs/index.html
index 7548dca1ba452d3ee82031d423c5e62b2440f17e..5b9a02d6f6df03cf59bd20afd80c8c04018bb2da 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -1,11 +1,8 @@
-<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
+<!DOCTYPE html>
 <meta charset="utf-8">
 <html>
 
-<script src="http://www.google.com/jsapi" type="text/javascript"></script>
-<script type="text/javascript">google.load("jquery", "1.3.2");</script>
-
 
 <style type="text/css">
 body {
     font-family: "Titillium Web", "HelveticaNeue-Light", "Helvetica Neue Light", "Helvetica Neue", Helvetica, Arial, "Lucida Grande", sans-serif;
@@ -206,23 +203,21 @@ figure {
 .venue {
     color: #1367a7;
 }
-
 </style>
-<!--<script type="text/javascript" src="../js/hidebib.js"></script>-->
-    <link href='https://fonts.googleapis.com/css?family=Titillium+Web:400,600,400italic,600italic,300,300italic' rel='stylesheet' type='text/css'>
-    <head>
-    <title>Instant Neural Graphics Primitives with a Multiresolution Hash Encoding</title>
-    <meta property="og:description" content="Instant Neural Graphics Primitives with a Multiresolution Hash Encoding"/>
-    <link href="https://fonts.googleapis.com/css2?family=Material+Icons" rel="stylesheet">
-
-    <meta name="twitter:card" content="summary_large_image">
-    <meta name="twitter:creator" content="@mmalex">
-    <meta name="twitter:title" content="Instant Neural Graphics Primitives with a Multiresolution Hash Encoding">
-    <meta name="twitter:description" content="A new paper from NVIDIA Research which presents a method for instant training & rendering of high-quality neural graphics primitives.">
-    <meta name="twitter:image" content="https://nvlabs.github.io/instant-ngp/assets/twitter.jpg">
-    </head>
-
-    <body>
+<link href='https://fonts.googleapis.com/css?family=Titillium+Web:400,600,400italic,600italic,300,300italic' rel='stylesheet' type='text/css'>
+<head>
+    <title>Instant Neural Graphics Primitives with a Multiresolution Hash Encoding</title>
+    <meta property="og:description" content="Instant Neural Graphics Primitives with a Multiresolution Hash Encoding"/>
+    <link href="https://fonts.googleapis.com/css2?family=Material+Icons" rel="stylesheet">
+
+    <meta name="twitter:card" content="summary_large_image">
+    <meta name="twitter:creator" content="@mmalex">
+    <meta name="twitter:title" content="Instant Neural Graphics Primitives with a Multiresolution Hash Encoding">
+    <meta name="twitter:description" content="A new paper from NVIDIA Research which presents a method for instant training & rendering of high-quality neural graphics primitives.">
+    <meta name="twitter:image" content="https://nvlabs.github.io/instant-ngp/assets/twitter.jpg">
+</head>
+
+<body>
 <div class="container">
     <div class="paper-title">
         <h1>Instant Neural Graphics Primitives with a Multiresolution Hash Encoding</h1>
@@ -239,25 +234,23 @@ figure {
     <div class="affil-row">
         <div class="col-1 text-center">NVIDIA</div>
     </div>
-    <!-- <div class="affil-row">
-        <div class="venue text-center"><b>arXiv</b></div>
-    </div> -->
 
     <div style="clear: both">
         <div class="paper-btn-parent">
-                <a class="paper-btn" href="assets/mueller2022instant.pdf">
-                    <span class="material-icons"> description </span>
-                    Paper
-                </a>
-                <a class="paper-btn" href="assets/mueller2022instant.mp4">
-                    <span class="material-icons"> videocam </span>
-                    Video
-                </a>
-                <a class="paper-btn" href="https://github.com/NVlabs/instant-ngp">
-                    <span class="material-icons"> code </span>
-                    Code
-                </a>
-            </div></div>
+            <a class="paper-btn" href="assets/mueller2022instant.pdf">
+                <span class="material-icons"> description </span>
+                Paper
+            </a>
+            <a class="paper-btn" href="assets/mueller2022instant.mp4">
+                <span class="material-icons"> videocam </span>
+                Video
+            </a>
+            <a class="paper-btn" href="https://github.com/NVlabs/instant-ngp">
+                <span class="material-icons"> code </span>
+                Code
+            </a>
+        </div>
+    </div>
 </div>
 
 <section id="teaser-videos">
@@ -291,18 +284,16 @@ figure {
         </video>
     </figure>
-
     <figure style="width: 100%; float: left">
         <p class="caption_justify">
             We demonstrate near-instant training of neural graphics primitives on a single GPU for multiple tasks.
             In <b>gigapixel image</b> we represent an image by a neural network.
             <b>SDF</b> learns a signed distance function in 3D space whose zero level-set represents a 2D surface.
-            <b>NeRF</b> <a href="https://research.nvidia.com/publication/2021-06_Real-time-Neural-Radiance">[Mildenhall et al. 2020]</a> uses 2D images and their camera poses to reconstruct a volumetric radiance-and-density field that is visualized using ray marching.
+            <b>NeRF</b> <a href="https://www.matthewtancik.com/nerf">[Mildenhall et al. 2020]</a> uses 2D images and their camera poses to reconstruct a volumetric radiance-and-density field that is visualized using ray marching.
             Lastly, <b>neural volume</b> learns a denoised radiance and density field directly from a volumetric path tracer.
            In all tasks, our encoding and its efficient implementation provide clear benefits: instant training, high quality, and simplicity. Our encoding is task-agnostic: we use the same implementation and hyperparameters across all tasks and only vary the hash table size which trades off quality and performance.
         </p>
     </figure>
 </section>
-
 <section id="news">
     <h2>News</h2>
     <hr>
@@ -334,7 +325,7 @@ figure {
             </video>
         </figure>
         <p class="caption_justify">
-            Real-time training progress on the image task where the neural network learns the mapping from 2D coordinates to RGB colors of a high-resolution image. Note that in this video, the network is trained from scratch - but converges so quickly you may miss it if you blink! <br/>
+            Real-time training progress on the image task where the neural network learns the mapping from 2D coordinates to RGB colors of a high-resolution image. Note that in this video, the network is trained from scratch—but converges so quickly you may miss it if you blink!<br/>
         </p>
 
         <h3>Neural Radiance Fields</h3>