From 584efd68810c4d2f818207f2719a6ae886aac825 Mon Sep 17 00:00:00 2001
From: jwangzzz <j96w@qq.com>
Date: Sat, 13 Apr 2019 20:48:25 +0800
Subject: [PATCH] compatible with py2

---
 README.md             | 4 ++--
 tools/eval_linemod.py | 8 ++++----
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/README.md b/README.md
index baa59b0..178624e 100644
--- a/README.md
+++ b/README.md
@@ -21,13 +21,13 @@
 
 ## Overview
 
-This repository is the implementation code of the paper "DenseFusion: 6D Object Pose Estimation by Iterative Dense Fusion"([arXiv](https://arxiv.org/abs/1901.04780), [Project](https://sites.google.com/view/densefusion), [Video](https://www.youtube.com/watch?v=SsE5-FuK5jo)) by Chen et al. at [Stanford Vision and Learning Lab](http://svl.stanford.edu/) and [Stanford People, AI & Robots Group](http://pair.stanford.edu/). The model takes an RGB-D image as input and predicts the 6D pose of the each object in the frame. This network is implemented using [PyTorch](https://pytorch.org/) and the rest of the framework is in Python. Since this project focuses on the 6D pose estimation process, we do not specifically limit the choice of the segmentation models. You can choose your preferred semantic-segmentation/instance-segmentation methods according to your needs. In this repo, we provide our full implementation code of the DenseFusion model, Iterative Refinement model and a vanilla SegNet semantic-segmentation model used in our real-robot grasping experiment. The ROS code of the real robot grasping experiment is not included.
+This repository is the implementation code of the paper "DenseFusion: 6D Object Pose Estimation by Iterative Dense Fusion" ([arXiv](https://arxiv.org/abs/1901.04780), [Project](https://sites.google.com/view/densefusion), [Video](https://www.youtube.com/watch?v=SsE5-FuK5jo)) by Wang et al. at [Stanford Vision and Learning Lab](http://svl.stanford.edu/) and [Stanford People, AI & Robots Group](http://pair.stanford.edu/). The model takes an RGB-D image as input and predicts the 6D pose of each object in the frame. The network is implemented in [PyTorch](https://pytorch.org/) and the rest of the framework is in Python. Since this project focuses on the 6D pose estimation process, we do not limit the choice of segmentation model: you can use your preferred semantic-segmentation/instance-segmentation method. In this repo, we provide the full implementation of the DenseFusion model, the Iterative Refinement model, and a vanilla SegNet semantic-segmentation model used in our real-robot grasping experiment. The ROS code of the real-robot grasping experiment is not included.
 
 ## Requirements
 
 * Python 2.7/3.5/3.6 (If you want to use Python 2.7 to run this repo, please rebuild `lib/knn/` (with PyTorch 0.4.1).)
-* [PyTorch 0.4.1](https://pytorch.org/)([PyTroch 1.0 branch](<https://github.com/j96w/DenseFusion/tree/Pytorch-1.0>))
+* [PyTorch 0.4.1](https://pytorch.org/) ([PyTorch 1.0 branch](<https://github.com/j96w/DenseFusion/tree/Pytorch-1.0>))
 * PIL
 * scipy
 * numpy

diff --git a/tools/eval_linemod.py b/tools/eval_linemod.py
index 26ffdf2..7850d84 100644
--- a/tools/eval_linemod.py
+++ b/tools/eval_linemod.py
@@ -139,8 +139,8 @@ for i, data in enumerate(testdataloader, 0):
     num_count[idx[0].item()] += 1
 
 for i in range(num_objects):
-    print('Object {0} success rate: {1}'.format(objlist[i], success_count[i] / num_count[i]))
-    fw.write('Object {0} success rate: {1}\n'.format(objlist[i], success_count[i] / num_count[i]))
-print('ALL success rate: {0}'.format(sum(success_count) / sum(num_count)))
-fw.write('ALL success rate: {0}\n'.format(sum(success_count) / sum(num_count)))
+    print('Object {0} success rate: {1}'.format(objlist[i], float(success_count[i]) / num_count[i]))
+    fw.write('Object {0} success rate: {1}\n'.format(objlist[i], float(success_count[i]) / num_count[i]))
+print('ALL success rate: {0}'.format(float(sum(success_count)) / sum(num_count)))
+fw.write('ALL success rate: {0}\n'.format(float(sum(success_count)) / sum(num_count)))
 fw.close()
--
GitLab
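
The `float()` casts are the entire py2-compatibility fix here: in Python 2, `/` between two integers performs floor division, so any per-object success rate below 100% would print as `0`, while Python 3's `/` always returns a float. Below is a minimal sketch of the pitfall and the two usual remedies, the explicit cast this patch uses and `from __future__ import division`; the count values are made-up illustrations, not results from the repo.

```python
# Illustration of the Python 2 integer-division pitfall fixed by this patch.
# The tallies below are hypothetical example values, not real evaluation output.

success_count = [997, 950]   # hypothetical per-object success tallies
num_count = [1000, 1000]     # hypothetical per-object test counts

# Python 2: 997 / 1000 == 0 (floor division); Python 3: 997 / 1000 == 0.997.
# Fix used in the patch: cast one operand to float before dividing.
for succ, total in zip(success_count, num_count):
    print('success rate: {0}'.format(float(succ) / total))

# Alternative fix: make `/` behave like Python 3 throughout the module.
# from __future__ import division   # must appear at the top of the file
```

The explicit cast is the less invasive choice for a small script like `tools/eval_linemod.py`, since it changes only the affected expressions instead of the semantics of every division in the file.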