ICCV: International Conference on Computer Vision 2021
<br>
AVVision Workshop: Autonomous Vehicle Vision
</h3>
</div>
# Table of Contents
- [Overview](#overview)
- [Dataset](#dataset)
- [Networks](#networks)
- [Evaluation](#evaluation)
- [Citation](#citation)
- [Acknowledgements](#acknowledgements)
# Overview
This repository contains the materials presented in the paper
[DriPE: A Dataset for Human Pose Estimation in Real-World Driving Settings](https://openaccess.thecvf.com/content/ICCV2021W/AVVision/papers/Guesdon_DriPE_A_Dataset_for_Human_Pose_Estimation_in_Real-World_Driving_ICCVW_2021_paper.pdf).
We provide the link to download the DriPE [dataset](#dataset),
along with trained weights for the three [networks](#networks) presented in the paper:
SBl, MSPN, and RSN.
Furthermore, we provide the code to evaluate HPE networks with the [mAPK metric](#evaluation), our keypoint-centered metric.
# Dataset
The DriPE dataset can be downloaded [here](http://dionysos.univ-lyon2.fr/~ccrispim/DriPE/DriPE.zip). We provide 10k images,
along with keypoint annotations, split as follows:

* 6.4k for training
* 1.3k for validation
* 1.3k for testing

The annotation files follow the COCO annotation style, with 17 keypoints.
More information can be found [here](https://cocodataset.org/#format-data).
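As a minimal sketch of what this format looks like (image ids and file names below are made up, not taken from the actual DriPE files), each person annotation stores its 17 keypoints as a flat list of `(x, y, visibility)` triplets:

```python
import json

# Tiny COCO-style annotation dictionary illustrating the structure
# used by the DriPE annotation files (contents here are illustrative).
coco = {
    "images": [{"id": 1, "file_name": "driver_0001.jpg",
                "width": 1280, "height": 720}],
    "annotations": [{
        "id": 1,
        "image_id": 1,
        "category_id": 1,
        # 17 keypoints, each stored as an (x, y, v) triplet:
        # v = 0 not labeled, v = 1 labeled but occluded, v = 2 visible.
        "keypoints": [100, 50, 2] + [0, 0, 0] * 16,
        "num_keypoints": 1,
    }],
    "categories": [{"id": 1, "name": "person",
                    "keypoints": ["nose", "left_eye", "right_eye", "left_ear",
                                  "right_ear", "left_shoulder", "right_shoulder",
                                  "left_elbow", "right_elbow", "left_wrist",
                                  "right_wrist", "left_hip", "right_hip",
                                  "left_knee", "right_knee", "left_ankle",
                                  "right_ankle"]}],
}

# Round-trip through JSON, mimicking reading an annotation file from disk.
coco = json.loads(json.dumps(coco))

ann = coco["annotations"][0]
kps = ann["keypoints"]
# Group the flat list into 17 (x, y, v) triplets.
triplets = [tuple(kps[i:i + 3]) for i in range(0, len(kps), 3)]
print(len(triplets))   # 17
print(triplets[0])     # (100, 50, 2)
```

The 17 keypoint names and their order match the standard COCO person category.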
##### **DriPE image samples**

# Networks
In our study, we used three architectures:
* __SBl__: Simple Baselines for Human Pose Estimation and Tracking (Xiao 2018) [GitHub](https://github.com/microsoft/human-pose-estimation.pytorch)
* __MSPN__: Rethinking on Multi-Stage Networks for Human Pose Estimation (Li 2019) [GitHub](https://github.com/megvii-detection/MSPN)
* __RSN__: Learning Delicate Local Representations for Multi-Person Pose Estimation (Cai 2020) [GitHub](https://github.com/caiyuanhao1998/RSN)
For training and inference, we used the code provided by the authors in the three repositories linked above.
Weights of the trained models evaluated in our study can be found [here](http://dionysos.univ-lyon2.fr/~ccrispim/DriPE/models).
More details about the training can be found in our [paper](https://openaccess.thecvf.com/content/ICCV2021W/AVVision/papers/Guesdon_DriPE_A_Dataset_for_Human_Pose_Estimation_in_Real-World_Driving_ICCVW_2021_paper.pdf).
# Evaluation
##### **HPE on the COCO 2017 validation set**

| AP OKS (%) | AP | AP<sup>50</sup> | AP<sup>75</sup> | AP<sup>L</sup> | AR | AR<sup>50</sup> | AR<sup>75</sup> | AR<sup>L</sup> |
|---|---|---|---|---|---|---|---|---|
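The AP/AR figures above are based on Object Keypoint Similarity (OKS); the mAPK metric presented in the paper aggregates results per keypoint rather than per instance. As a minimal sketch (not the authors' evaluation code), the standard COCO OKS between a predicted and a ground-truth pose can be computed as:

```python
import numpy as np

def oks(pred, gt, vis, area, k):
    """Object Keypoint Similarity between one predicted and one
    ground-truth pose.

    pred, gt : (17, 2) arrays of keypoint coordinates
    vis      : (17,) visibility flags from the annotation (v > 0 = labeled)
    area     : object segment area (person scale)
    k        : (17,) per-keypoint constants from the COCO evaluation
    """
    d2 = np.sum((pred - gt) ** 2, axis=1)           # squared distances
    e = d2 / (2 * area * k ** 2 + np.spacing(1))    # normalized errors
    labeled = vis > 0
    return np.exp(-e)[labeled].mean()

# Per-keypoint sigmas from the official COCO keypoint evaluation;
# the evaluation uses kappa_i = 2 * sigma_i.
sigmas = np.array([.026, .025, .025, .035, .035, .079, .079, .072, .072,
                   .062, .062, .107, .107, .087, .087, .089, .089])
k = 2 * sigmas

gt = np.random.rand(17, 2) * 100
vis = np.ones(17)
print(oks(gt, gt, vis, area=2500.0, k=k))  # identical poses -> 1.0
```

Keypoint AP then thresholds OKS (e.g. at 0.50 and 0.75 for AP<sup>50</sup> and AP<sup>75</sup>) and averages precision over matched detections.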
If you use this dataset or our code in your research, please send us an email with the following details, and we will update our webpage with your results:

* Performance (%)
* Experimental setup
* Paper details
# Citation
The DriPE dataset is to be used for scientific purposes only. It must not be republished other than by the original authors. Scientific use includes processing the data and showing it in publications and presentations. If you use it, please cite:
```
@InProceedings{Guesdon_2021_ICCV,
author = {Guesdon, Romain and Crispim-Junior, Carlos and Tougne, Laure},
title = {DriPE: A Dataset for Human Pose Estimation in Real-World Driving Settings},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
month = {October},
year = {2021},
pages = {2865-2874}
}
```
# Acknowledgements
This work was supported by the Pack Ambition Recherche 2019 funding of the French AURA Region in