diff --git a/README.md b/README.md
index 45f28a6d9d6009f9e88d7ddba461f7ff4e90f516..e8ffc1cccf98b9448990376986c77c8f22e7116e 100644
--- a/README.md
+++ b/README.md
@@ -10,9 +10,9 @@ In each case, we train and render a MLP with multiresolution hash input encoding
 > __Instant Neural Graphics Primitives with a Multiresolution Hash Encoding__  
 > [Thomas Müller](https://tom94.net), [Alex Evans](https://research.nvidia.com/person/alex-evans), [Christoph Schied](https://research.nvidia.com/person/christoph-schied), [Alexander Keller](https://research.nvidia.com/person/alex-keller)  
 > _ACM Transactions on Graphics (__SIGGRAPH__), July 2022_  
-> __[Project page](https://nvlabs.github.io/instant-ngp) / [Paper](https://nvlabs.github.io/instant-ngp/assets/mueller2022instant.pdf) / [Video](https://nvlabs.github.io/instant-ngp/assets/mueller2022instant.mp4) / [Presentation](https://tom94.net/data/publications/mueller22instant/mueller22instant-gtc.mp4) / [BibTeX](https://nvlabs.github.io/instant-ngp/assets/mueller2022instant.bib)__
+> __[Project page](https://nvlabs.github.io/instant-ngp) / [Paper](https://nvlabs.github.io/instant-ngp/assets/mueller2022instant.pdf) / [Video](https://nvlabs.github.io/instant-ngp/assets/mueller2022instant.mp4) / [Presentation](https://tom94.net/data/publications/mueller22instant/mueller22instant-gtc.mp4) / [Real-Time Live](https://tom94.net/data/publications/mueller22instant/mueller22instant-rtl.mp4) / [BibTeX](https://nvlabs.github.io/instant-ngp/assets/mueller2022instant.bib)__
 
-To get started with NVIDIA Instant NeRF, check out the [blog post](https://developer.nvidia.com/blog/getting-started-with-nvidia-instant-nerfs/) and [SIGGRAPH tutorial](https://www.nvidia.com/en-us/on-demand/session/siggraph2022-sigg22-s-16/)
+To get started with NVIDIA Instant NeRF, check out the [blog post](https://developer.nvidia.com/blog/getting-started-with-nvidia-instant-nerfs/) and [SIGGRAPH tutorial](https://www.nvidia.com/en-us/on-demand/session/siggraph2022-sigg22-s-16/).
 
 For business inquiries, please submit the [NVIDIA research licensing form](https://www.nvidia.com/en-us/research/inquiries/).
 
diff --git a/docs/index.html b/docs/index.html
index 0b848d9198a51174f8503d50959ea5a464d1fe86..354165423f33951147e2314a4057c54085aa4892 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -174,7 +174,7 @@ figure {
 
   display: inline-block;
   margin: 8px;
-  padding: 8px 8px;
+  padding: 8px 18px;
 
   border-width: 0;
   outline: none;
@@ -183,7 +183,8 @@ figure {
   background-color: #1367a7;
   color: #ecf0f1 !important;
   font-size: 20px;
-  width: 100px;
+  width: auto;
+  height: auto;
   font-weight: 600;
 }
 .paper-btn-parent {
@@ -213,7 +214,7 @@ figure {
 	<meta name="twitter:card" content="summary_large_image">
 	<meta name="twitter:creator" content="@mmalex">
 	<meta name="twitter:title" content="Instant Neural Graphics Primitives with a Multiresolution Hash Encoding">
-	<meta name="twitter:description" content="A new paper from NVIDIA Research which presents a method for instant training & rendering of high-quality neural graphics primitives.">
+	<meta name="twitter:description" content="A paper from NVIDIA Research which presents a method for instant training & rendering of high-quality neural graphics primitives.">
 	<meta name="twitter:image" content="https://nvlabs.github.io/instant-ngp/assets/twitter.jpg">
 </head>
 
@@ -249,6 +250,14 @@ figure {
 					<span class="material-icons"> videocam </span>
 					Video
 				</a>
+				<a class="paper-btn" href="https://tom94.net/data/publications/mueller22instant/mueller22instant-gtc.mp4">
+					<span class="material-icons"> videocam </span>
+					Presentation
+				</a>
+				<a class="paper-btn" href="https://tom94.net/data/publications/mueller22instant/mueller22instant-rtl.mp4">
+					<span class="material-icons"> videocam </span>
+					Real-Time Live
+				</a>
 				<a class="paper-btn" href="https://github.com/NVlabs/instant-ngp">
 					<span class="material-icons"> code </span>
 					Code
@@ -303,6 +312,7 @@ figure {
 		<h2>News</h2>
 		<hr>
 		<div class="row">
+			<div><span class="material-icons"> emoji_events </span> [Nov 10th 2022] Listed in <a href="https://developer.nvidia.com/blog/time-magazine-names-nvidia-instant-nerf-a-best-invention-of-2022/">TIME's Best Inventions of 2022</a>.</div>
 			<div><span class="material-icons"> emoji_events </span> [July 7th 2022] Paper won the <a href="https://blog.siggraph.org/2022/07/siggraph-2022-technical-papers-awards-best-papers-and-honorable-mentions.html/">SIGGRAPH Best Paper Award</a>.</div>
 			<div><span class="material-icons"> description </span> [May 3rd 2022] Paper accepted to <a href="https://s2022.siggraph.org">ACM Transactions on Graphics (SIGGRAPH 2022)</a>.</div>
 			<div><span class="material-icons"> description </span> [Jan 19th 2022] Paper released on <a href="https://arxiv.org/abs/2201.05989">arXiv</a>.</div>
diff --git a/docs/nerf_dataset_tips.md b/docs/nerf_dataset_tips.md
index 2bfc2d52d5ff0f82d474efda3e2c9beec35fb816..acedc7f9bd20c8a0689e55e06c71476fb3c5f1f0 100644
--- a/docs/nerf_dataset_tips.md
+++ b/docs/nerf_dataset_tips.md
@@ -50,11 +50,13 @@ You can set any of the following parameters, where the listed values are the def
 See [nerf_loader.cu](src/nerf_loader.cu) for implementation details and additional options.
 
 ## Preparing new NeRF datasets
+
 To train on self-captured data, one has to process the data into an existing format supported by Instant-NGP. We provide scripts to support two complementary approaches:
 - [COLMAP](#COLMAP)
 - [Record3D](#Record3D) (based on ARKit)
 
 ### COLMAP
+
 Make sure that you have installed [COLMAP](https://colmap.github.io/) and that it is available in your PATH. If you are using a video file as input, also be sure to install [FFmpeg](https://www.ffmpeg.org/) and make sure that it is available in your PATH.
 To check that this is the case, from a terminal window, you should be able to run `colmap` and `ffmpeg -?` and see some help text from each.
 
@@ -86,6 +88,7 @@ instant-ngp$ ./build/testbed --mode nerf --scene [path to training data folder c
 ```
 
 ### Record3D
+
 With an >=iPhone 12 Pro, one can use [Record3D](https://record3d.app/) to collect data and avoid COLMAP. [Record3D](https://record3d.app/) is an iOS app that relies on ARKit to estimate each image's camera pose. It is more robust than COLMAP for scenes that lack textures or contain repetitive patterns. To train Instant-NGPs with Record3D data, follow these steps: 
 
 1. Record a video and export with the "Shareable/Internal format (.r3d)".
@@ -107,4 +110,4 @@ With an >=iPhone 12 Pro, one can use [Record3D](https://record3d.app/) to collec
 The NeRF model trains best with between 50-150 images which exhibit minimal scene movement, motion blur or other blurring artefacts. The quality of reconstruction is predicated on COLMAP being able to extract accurate camera parameters from the images.
 Review the earlier sections for information on how to verify this.
 
-The `colmap2nerf.py` script assumes that the training images are all pointing approximately at a shared point of interest, which it places at the origin. This point is found by taking a weighted average of the closest points of approach between the rays through the central pixel of all pairs of training images. In practice, this means that the script works best when the training images have been captured pointing inwards towards the object of interest, although they do not need to complete a full 360 view of it. Any background visible behind the object of interest will still be reconstructed if `aabb_scale` is set to a number larger than 1, as explained above.
\ No newline at end of file
+The `colmap2nerf.py` script assumes that the training images are all pointing approximately at a shared point of interest, which it places at the origin. This point is found by taking a weighted average of the closest points of approach between the rays through the central pixel of all pairs of training images. In practice, this means that the script works best when the training images have been captured pointing inwards towards the object of interest, although they do not need to complete a full 360 view of it. Any background visible behind the object of interest will still be reconstructed if `aabb_scale` is set to a number larger than 1, as explained above.
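
Note on the last hunk above: the "weighted average of the closest points of approach" that `colmap2nerf.py` is described as computing can be illustrated with a short sketch. The snippet below is a hypothetical NumPy illustration, not the actual `colmap2nerf.py` code; the function names `closest_point_between_rays` and `estimate_point_of_interest` are invented for this example, and weighting each ray pair by how far it is from parallel is one plausible choice.

```python
# Hypothetical sketch of estimating a shared point of interest from camera
# rays; illustrative only, not the actual colmap2nerf.py implementation.
import numpy as np

def closest_point_between_rays(o1, d1, o2, d2):
    """Midpoint of the closest points of approach of two rays, plus a
    weight that tends to zero when the rays are nearly parallel."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    n = np.cross(d1, d2)
    denom = float(np.dot(n, n))  # ~0 for (almost) parallel rays
    if denom < 1e-10:
        return (o1 + o2) / 2, 0.0
    r = o2 - o1
    t1 = np.dot(np.cross(r, d2), n) / denom
    t2 = np.dot(np.cross(r, d1), n) / denom
    p1 = o1 + t1 * d1  # closest point on ray 1
    p2 = o2 + t2 * d2  # closest point on ray 2
    return (p1 + p2) / 2, denom

def estimate_point_of_interest(origins, directions):
    """Weighted average of the pairwise closest points over all camera rays
    (each ray passes through the central pixel of one training image)."""
    total, total_weight = np.zeros(3), 0.0
    for i in range(len(origins)):
        for j in range(i + 1, len(origins)):
            p, w = closest_point_between_rays(origins[i], directions[i],
                                              origins[j], directions[j])
            total += w * p
            total_weight += w
    return total / total_weight if total_weight > 0 else total
```

Per the paragraph above, `colmap2nerf.py` then places this estimated point at the origin, i.e. the camera poses are recentered around it.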