diff --git a/README.md b/README.md
index c3b6e31197c0d60fa3dac50d94dc20b5d15c51f2..56ff448af83e25055919dc39bea4d62fa667e9e8 100644
--- a/README.md
+++ b/README.md
@@ -32,7 +32,7 @@ SBl, MSPN and RSN.
 Furthermore, we provide the code to evaluate HPE networks with [mAPK metric](#evaluation), our keypoint-centered metric.
 
 # Dataset
-DriPE dataset can be found [here](). We provide the 10k images, 
+The DriPE dataset can be found [here](http://dionysos.univ-lyon2.fr/~ccrispim/DriPE/DriPE.zip). We provide the 10k images, 
 along with keypoint annotations, split as:
 * 6.4k for training
 * 1.3k for validation
@@ -51,7 +51,7 @@ We used in our study three architectures:
 * __RSN__: Learning Delicate Local Representations for Multi-Person Pose Estimation (Cai 2020) [GitHub](https://github.com/caiyuanhao1998/RSN)
 
 We used for training and for inference the code provided by the authors in the three linked repositories.
-Weights of the trained model evaluated in our study can be found [here]().
+Weights of the trained models evaluated in our study can be found [here](http://dionysos.univ-lyon2.fr/~ccrispim/DriPE/models).
 More details about the training can be found in our [paper](https://openaccess.thecvf.com/content/ICCV2021W/AVVision/papers/Guesdon_DriPE_A_Dataset_for_Human_Pose_Estimation_in_Real-World_Driving_ICCVW_2021_paper.pdf).
 
 ##### **HPE on the COCO 2017 validation set.**