Code repository for the paper: Reconstructing Hands in 3D with Transformers
Georgios Pavlakos, Dandan Shan, Ilija Radosavovic, Angjoo Kanazawa, David Fouhey, Jitendra Malik
- [2024/06] HaMeR received the 2nd place award in the Ego-Pose Hands task of the Ego-Exo4D Challenge! Please check the validation report.
- [2024/05] We have released the evaluation pipeline!
- [2024/05] We have released the HInt dataset annotations! Please check here.
- [2023/12] Original release!
I have pushed a pre-compiled Docker image in case you don't want to go through the hassle of building your own container:
docker pull chaitanya1chawla/hamer_container:hamer_image
docker run -it --gpus all chaitanya1chawla/hamer_container:hamer_image
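If you want results produced inside the container to survive after it exits, one option is to mount a host folder. This is a minimal sketch; the ./hamer_out host folder and the /hamer_out mount point are arbitrary choices, not something the image requires:
# Sketch: bind-mount a host folder into the container so outputs persist (paths are assumptions)
docker run -it --gpus all -v "$(pwd)/hamer_out:/hamer_out" chaitanya1chawla/hamer_container:hamer_image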
Set up a Docker container from the NVIDIA CUDA image:
# Pull image and run container:
docker pull nvcr.io/nvidia/cuda:11.7.0-devel-ubuntu22.04 # use the devel image; the base image does not ship nvcc and the rest of the CUDA toolchain
docker run -it --gpus all nvcr.io/nvidia/cuda:11.7.0-devel-ubuntu22.04
# Inside container -
# Setup environment:
apt-get update && apt-get upgrade
apt install python3
apt install python3-pip
apt-get install ffmpeg libsm6 libxext6
python3 -m pip install numpy matplotlib scikit-learn scikit-image opencv-python opencv-contrib-python
python3 -m pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu117
apt-get install libglfw3-dev libgles2-mesa-dev
# Clone detectron2:
apt install git
python3 -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
pip install "numpy<2"
First you need to clone the repo:
git clone --recursive https://github.com/chaitanya1chawla/hamer.git
cd hamer
pip install -e .[all]
cd third-party
git clone https://github.com/ViTAE-Transformer/ViTPose.git
cd ..
pip install -v -e third-party/ViTPose
You also need to download the trained models:
bash fetch_demo_data.sh
Besides these files, you also need to download the MANO model. Please visit the MANO website and register to get access to the downloads section. We only require the right hand model. You need to put MANO_RIGHT.pkl under the _DATA/data/mano folder.
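As a concrete example, assuming the downloaded MANO archive was extracted to ~/Downloads/mano_v1_2 (this path is an assumption; adjust it to wherever you saved the download), the placement looks like:
# Sketch: copy the right-hand MANO model into the folder HaMeR expects
mkdir -p _DATA/data/mano
cp ~/Downloads/mano_v1_2/models/MANO_RIGHT.pkl _DATA/data/mano/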
First you need to clone the repo:
git clone --recursive https://github.com/geopavlakos/hamer.git
cd hamer
We recommend creating a virtual environment for HaMeR. You can use venv:
python3.10 -m venv .hamer
source .hamer/bin/activate
or alternatively conda:
conda create --name hamer python=3.10
conda activate hamer
Then, you can install the rest of the dependencies. This is for CUDA 11.7, but you can adapt accordingly:
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu117
pip install -e .[all] # install detectron2 separately
pip install -v -e third-party/ViTPose
Install Detectron2:
# Create conda env
conda create --name detectron2 python==3.9 -y
conda activate detectron2
# Install torch
pip install torch torchvision
# Install gcc and g++ with conda
conda install -c conda-forge pybind11
conda install -c conda-forge gxx
conda install -c anaconda gcc_linux-64
conda upgrade -c conda-forge --all
# I had to add a version to the gcc install, and used conda-forge:
conda install -c conda-forge gcc_linux-64=13.2.0
# Install detectron2 (specific version)
pip install 'git+https://github.com/facebookresearch/[email protected]'
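A quick way to confirm the Detectron2 build succeeded is a plain import check (nothing HaMeR-specific is assumed here):
python -c "import detectron2; print(detectron2.__version__)"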
You also need to download the trained models:
bash fetch_demo_data.sh
Besides these files, you also need to download the MANO model. Please visit the MANO website and register to get access to the downloads section. We only require the right hand model. You need to put MANO_RIGHT.pkl under the _DATA/data/mano folder.
If you wish to use HaMeR with Docker, you can use the following command:
docker compose -f ./docker/docker-compose.yml up -d
After the image is built successfully, enter the container and run the steps as above:
docker compose -f ./docker/docker-compose.yml exec hamer-dev /bin/bash
Continue with the installation steps:
bash fetch_demo_data.sh
python demo.py \
--img_folder example_data --out_folder demo_out \
--batch_size=48 --side_view --save_mesh --full_frame
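To run the demo on your own images, point --img_folder at a directory of images. For example (my_images and my_demo_out are placeholder paths; the flags mirror the command above):
python demo.py \
    --img_folder my_images --out_folder my_demo_out \
    --batch_size=16 --side_view --save_mesh --full_frame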
We have released the annotations for the HInt dataset. Please follow the instructions here.
First, download the training data to ./hamer_training_data/ by running:
bash fetch_training_data.sh
Then you can start training using the following command:
python train.py exp_name=hamer data=mix_all experiment=hamer_vit_transformer trainer=gpu launcher=local
Checkpoints and logs will be saved to ./logs/.
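If the run writes its logs in TensorBoard format (an assumption about the logger, not something stated here), you can monitor training progress with:
# Assumes TensorBoard event files are written under ./logs/
tensorboard --logdir ./logs/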
Download the evaluation metadata to ./hamer_evaluation_data/. Additionally, download the FreiHAND, HO-3D, and HInt dataset images and update the corresponding paths in hamer/configs/datasets_eval.yaml.
Run the evaluation on multiple datasets as follows; the results are stored in results/eval_regression.csv.
python eval.py --dataset 'FREIHAND-VAL,HO3D-VAL,NEWDAYS-TEST-ALL,NEWDAYS-TEST-VIS,NEWDAYS-TEST-OCC,EPICK-TEST-ALL,EPICK-TEST-VIS,EPICK-TEST-OCC,EGO4D-TEST-ALL,EGO4D-TEST-VIS,EGO4D-TEST-OCC'
Results for HInt are stored in results/eval_regression.csv. For FreiHAND and HO-3D, the output is a .json file that can be used with their corresponding evaluation pipelines.
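You can also restrict the evaluation to a subset of the datasets by passing fewer names to --dataset; for example, only the FreiHAND validation split (this assumes eval.py accepts a single name the same way it accepts the comma-separated list above):
python eval.py --dataset 'FREIHAND-VAL'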
Parts of the code are taken or adapted from the following repos:
Additionally, we thank StabilityAI for a generous compute grant that enabled this work.
If you find this code useful for your research, please consider citing the following paper:
@inproceedings{pavlakos2024reconstructing,
title={Reconstructing Hands in 3{D} with Transformers},
author={Pavlakos, Georgios and Shan, Dandan and Radosavovic, Ilija and Kanazawa, Angjoo and Fouhey, David and Malik, Jitendra},
booktitle={CVPR},
year={2024}
}