
DriPE: A Dataset for Human Pose Estimation in Real-World Driving Settings

Romain Guesdon, Carlos Crispim-Junior, Laure Tougne

ICCV 2021: International Conference on Computer Vision
AVVision Workshop: Autonomous Vehicle Vision

Overview

This repository contains materials from the paper DriPE: A Dataset for Human Pose Estimation in Real-World Driving Settings.

We provide the link to download the DriPE dataset, along with trained weights for the three networks presented in the paper: SBl, MSPN, and RSN. Furthermore, we provide the code to evaluate human pose estimation (HPE) networks with mAPK, our keypoint-centered metric.

Dataset

The DriPE dataset can be found here. We provide 10k images, along with keypoint annotations, split as follows:

  • 6.4k for training
  • 1.3k for validation
  • 1.3k for testing

Annotations follow the COCO annotation style, with 17 keypoints. More information can be found here.
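In the COCO annotation style, the 17 keypoints of each person are stored as a flat list of 51 values, i.e. an (x, y, visibility) triplet per keypoint. A minimal sketch of unpacking such an annotation (the annotation dictionary below is a dummy stand-in, not an actual DriPE entry):

```python
# Minimal COCO-style keypoint annotation: 17 keypoints,
# each encoded as an (x, y, visibility) triplet -> 51 values.
annotation = {
    "image_id": 1,
    "category_id": 1,
    "keypoints": [0] * 51,  # dummy values for illustration
    "num_keypoints": 17,
}

def parse_keypoints(ann):
    """Split the flat COCO keypoint list into (x, y, v) triplets."""
    kps = ann["keypoints"]
    return [tuple(kps[i:i + 3]) for i in range(0, len(kps), 3)]

triplets = parse_keypoints(annotation)
print(len(triplets))  # 17
```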

Networks

We used three architectures in our study:

  • SBl: Simple Baselines for Human Pose Estimation and Tracking (Xiao 2018) GitHub
  • MSPN: Rethinking on Multi-Stage Networks for Human Pose Estimation (Li 2019) GitHub
  • RSN: Learning Delicate Local Representations for Multi-Person Pose Estimation (Cai 2020) GitHub

For training and inference, we used the code provided by the authors in the three linked repositories. The weights of the trained models evaluated in our study can be found here. More details about training can be found in our paper.

HPE results on the COCO 2017 validation set (AP OKS, %):

| Model | AP | AP50 | AP75 | APL | AR | AR50 | AR75 | ARL |
|-------|----|------|------|-----|----|------|------|-----|
| SBl   | 72 | 92   | 80   | 77  | 76 | 93   | 82   | 80  |
| MSPN  | 77 | 94   | 85   | 82  | 80 | 95   | 87   | 85  |
| RSN   | 76 | 94   | 84   | 81  | 79 | 94   | 85   | 84  |

HPE results on the DriPE test set (AP OKS, %):

| Model | AP | AP50 | AP75 | APL | AR | AR50 | AR75 | ARL |
|-------|----|------|------|-----|----|------|------|-----|
| SBl   | 75 | 99   | 91   | 75  | 81 | 99   | 94   | 81  |
| MSPN  | 81 | 99   | 97   | 81  | 85 | 99   | 97   | 85  |
| RSN   | 75 | 99   | 93   | 75  | 79 | 99   | 95   | 79  |

Evaluation

Evaluation is performed using two metrics:

  • AP OKS, the original metric from the COCO dataset, already implemented in the cocoapi and in the three network repositories
  • mAPK, our new keypoint-centered metric. We provide a script to evaluate network predictions in this repository.
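Both metrics build on the Object Keypoint Similarity (OKS) to decide whether a predicted keypoint matches the ground truth. As a reference, the standard single-keypoint OKS from the COCO evaluation can be sketched as follows (variable names are illustrative; this is not the mAPK implementation from eval_mapk.py):

```python
import math

def oks(d2, area, k):
    """Standard COCO Object Keypoint Similarity for one keypoint.

    d2   -- squared distance between predicted and ground-truth keypoint
    area -- object scale (segment area of the annotated person)
    k    -- per-keypoint constant controlling the falloff
    """
    return math.exp(-d2 / (2.0 * area * k ** 2))

# A perfect prediction (zero distance) yields a similarity of 1;
# the similarity decays as the prediction drifts from the ground truth.
print(oks(0.0, 100.0, 0.5))
```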

Evaluation with mAPK can be performed by running the eval_mapk.py script.

Usage: python eval_mapk.py [json_prediction_path] [json_annotation_path]
Paths can be absolute, relative to the script, or relative to the respective json/preds or json/gts directory.
    -h, --help    display this help message and exit

We provide one annotation file and one prediction file in this repository. To evaluate these predictions, run:

python eval_mapk.py keypoints_out_SBL_autob_test-repo.json autob_coco_test.json

Expected results are:

F1 score: 0.733

| Metric | Head | Should. | Elbow | Wrist | Hip  | Knee | Ankle | All  | Mean | Std  |
|--------|------|---------|-------|-------|------|------|-------|------|------|------|
| AP     | 0.30 | 0.86    | 0.78  | 0.92  | 0.91 | 0.76 | 0.13  | 0.68 | 0.67 | 0.29 |
| AR     | 0.87 | 0.92    | 0.93  | 0.96  | 0.88 | 0.61 | 0.05  | 0.80 | 0.75 | 0.31 |

Citation

If you use this dataset or code in your research, please cite the paper:

@InProceedings{Guesdon_2021_ICCV,
    author    = {Guesdon, Romain and Crispim-Junior, Carlos and Tougne, Laure},
    title     = {DriPE: A Dataset for Human Pose Estimation in Real-World Driving Settings},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2021},
    pages     = {2865-2874}
}

Acknowledgments

This work was supported by the Pack Ambition Recherche 2019 funding of the French AURA Region in the context of the AutoBehave project.
