diff --git a/Makefile b/Makefile
new file mode 100644
index 0000000..577fb3a
--- /dev/null
+++ b/Makefile
@@ -0,0 +1,21 @@
+.PHONY: clean fix imports sort
+
+## Delete compiled Python files, Condor logs, and generated run scripts/logs
+clean:
+	find . -type f -name "*.py[co]" -delete
+	find . -type d -name "__pycache__" -delete
+	rm -rf condor_logs/*
+	find run_scripts -mindepth 1 -delete
+	find logs -mindepth 1 -delete
+
+## Format code with black
+fix:
+	black src common scripts_method scripts_data
+
+## Sort and normalize imports with isort
+sort:
+	isort src common scripts_method scripts_data --wrap-length=1 --combine-as --trailing-comma --use-parentheses
+
+## Remove unused imports with autoflake
+imports:
+	autoflake -i -r --remove-all-unused-imports src common scripts_method scripts_data
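For reference, the targets above would typically be invoked from the repository root as follows (a usage sketch; it assumes `black`, `isort`, and `autoflake` are installed in the active environment):

```bash
make clean             # remove bytecode, Condor logs, and generated run scripts
make fix sort imports  # format code, normalize imports, drop unused imports
```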
diff --git a/README.md b/README.md
index 92be543..76c12e2 100644
--- a/README.md
+++ b/README.md
@@ -1,13 +1,11 @@
-## ARCTIC ❄️: A Dataset for Dexterous Bimanual Hand-Object Manipulation
+## ARCTIC 🥶: A Dataset for Dexterous Bimanual Hand-Object Manipulation
-
-
-[ [Project Page](https://arctic.is.tue.mpg.de) ][ [Paper](https://download.is.tue.mpg.de/arctic/arctic_april_24.pdf) ][ [Video](https://www.youtube.com/watch?v=bvMm8gfFbZ8) ]
+[ [Project Page](https://arctic.is.tue.mpg.de) ][ [Paper](https://download.is.tue.mpg.de/arctic/arctic_april_24.pdf) ][ [Video](https://www.youtube.com/watch?v=bvMm8gfFbZ8) ][ [Register ARCTIC Account](https://arctic.is.tue.mpg.de/register.php) ]
@@ -18,19 +16,105 @@
This is a repository for preprocessing, splitting, visualizing, and rendering (RGB, depth, segmentation masks) the ARCTIC dataset.
Further, we provide code to reproduce the baseline models in our CVPR 2023 paper (Vancouver, British Columbia 🇨🇦) and to develop custom models.
-
-> [**ARCTIC: A Dataset for Dexterous Bimanual Hand-Object Manipulation**](https://arctic.is.tue.mpg.de)
-> [Zicong Fan](https://zc-alexfan.github.io),
-> [Omid Taheri](https://ps.is.mpg.de/person/otaheri),
-> [Dimitrios Tzionas](https://ps.is.mpg.de/employees/dtzionas),
-> [Muhammed Kocabas](https://ps.is.tuebingen.mpg.de/person/mkocabas),
-> [Manuel Kaufmann](https://ait.ethz.ch/people/kamanuel/),
-> [Michael J. Black](https://ps.is.tuebingen.mpg.de/person/black),
-> [Otmar Hilliges](https://ait.ethz.ch/people/hilliges)
-> IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023.
-
Our dataset contains highly dexterous motion:
+
+### Why use ARCTIC?
+
+Dataset summary:
+- It contains 2.1M high-resolution images paired with annotated frames, enabling large-scale machine learning.
+- Images are from 8x 3rd-person views and 1x egocentric view (for the mixed-reality setting).
+- It includes 3D ground truth for SMPL-X, MANO, and articulated objects.
+- It is captured in a MoCap setup using 54 high-end Vicon cameras.
+- It features highly dexterous bimanual manipulation motion (beyond quasi-static grasping).
+
+Potential tasks with ARCTIC:
+- Generating [hand grasp](https://korrawe.github.io/HALO/HALO.html) or [motion](https://github.com/cghezhang/ManipNet) with articulated objects
+- Generating [full-body grasp](https://grab.is.tue.mpg.de/) or [motion](https://goal.is.tue.mpg.de/) with articulated objects
+- Benchmarking performance of articulated object pose estimators from [depth images](https://articulated-pose.github.io/) with humans in the scene
+- Studying our [NEW tasks](https://download.is.tue.mpg.de/arctic/arctic_april_24.pdf) of consistent motion reconstruction and interaction field estimation
+- Studying egocentric hand-object reconstruction
+- Reconstructing [full-body with hands and articulated objects](https://3dlg-hcvc.github.io/3dhoi/) from RGB images
+
+
+Check out our [project page](https://arctic.is.tue.mpg.de) for more details.
+
+### News
+
+- 2023.05.04: The ARCTIC dataset, together with code for dataloaders, visualizers, and models, is officially announced (version 1.0)!
+- 2023.03.25: ARCTIC ☃️ dataset (version 0.1) is available! 🎉
+
+### Features
+
+
+- Instructions to download the ARCTIC dataset.
+- Scripts to process our dataset and to build data splits.
+- Rendering scripts to render our 3D data into RGB, depth, and segmentation masks.
+- A viewer to interact with our dataset.
+- Instructions to set up data, code, and environment to train our baselines.
+- A generalized codebase to train, visualize, and evaluate the results of ArcticNet and InterField for the ARCTIC benchmark.
+- A viewer to interact with the predictions.
+
+TODOs:
+
+- [ ] Add more documentation to code
+- [ ] Utils to upload test set results to our evaluation server
+- [ ] Clean code further
+
+
+### Getting started
+
+Get a copy of the code:
+```bash
+git clone https://github.com/zc-alexfan/arctic.git
+```
+
+- Set up the environment: see [`docs/setup.md`](docs/setup.md) (a rough sketch of the typical flow follows this list)
+- Download and visualize the ARCTIC dataset: see [`docs/data/README.md`](docs/data/README.md)
+- Train and evaluate our ARCTIC baselines: see [`docs/model/README.md`](docs/model/README.md)
+- FAQ: see [`docs/faq.md`](docs/faq.md)
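+
+For orientation only, the typical flow looks roughly like the sketch below; the environment name, Python version, and `requirements.txt` are assumptions for illustration, and [`docs/setup.md`](docs/setup.md) remains the authoritative reference:
+
+```bash
+git clone https://github.com/zc-alexfan/arctic.git
+cd arctic
+
+# Hypothetical environment setup; check docs/setup.md for the actual
+# Python version and dependency list.
+conda create -n arctic python=3.10 -y
+conda activate arctic
+pip install -r requirements.txt  # assumed dependency file
+```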
+
+### License
+
+See [LICENSE](LICENSE).
+
+### Citation
+
+```bibtex
+@inproceedings{fan2023arctic,
+ title = {{ARCTIC}: A Dataset for Dexterous Bimanual Hand-Object Manipulation},
+ author = {Fan, Zicong and Taheri, Omid and Tzionas, Dimitrios and Kocabas, Muhammed and Kaufmann, Manuel and Black, Michael J. and Hilliges, Otmar},
+ booktitle = {Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
+ year = {2023}
+}
+```
+
+Our paper benefits greatly from [aitviewer](https://github.com/eth-ait/aitviewer). If you find our viewer useful, consider citing aitviewer in appreciation of their hard work:
+
+```bibtex
+@software{kaufmann_vechev_aitviewer_2022,
+ author = {Kaufmann, Manuel and Vechev, Velko and Mylonopoulos, Dario},
+ doi = {10.5281/zenodo.1234},
+ month = {7},
+ title = {{aitviewer}},
+ url = {https://github.com/eth-ait/aitviewer},
+ year = {2022}
+}
+```
+
+### Acknowledgments
+
+Constructing the ARCTIC dataset is a huge effort. The authors deeply thank: [Tsvetelina Alexiadis (TA)](https://ps.is.mpg.de/person/talexiadis) for trial coordination; [Markus Höschle (MH)](https://ps.is.mpg.de/person/mhoeschle), [Senya Polikovsky](https://is.mpg.de/person/senya), [Matvey Safroshkin](https://is.mpg.de/person/msafroshkin), [Tobias Bauch (TB)](https://www.linkedin.com/in/tobiasbauch/?originalSubdomain=de) for the capture setup; MH, TA and [Galina Henz](https://ps.is.mpg.de/person/ghenz) for data capture; [Priyanka Patel](https://ps.is.mpg.de/person/ppatel) for alignment; [Nima Ghorbani](https://nghorbani.github.io/) for MoSh++; [Leyre Sánchez Vinuela](https://is.mpg.de/person/lsanchez), [Andres Camilo Mendoza Patino](https://ps.is.mpg.de/person/acmendoza), [Mustafa Alperen Ekinci](https://ps.is.mpg.de/person/mekinci) for data cleaning; TB for Vicon support; MH and [Jakob Reinhardt](https://ps.is.mpg.de/person/jreinhardt) for object scanning; [Taylor McConnell](https://ps.is.mpg.de/person/tmcconnell) for Vicon support, and data cleaning coordination; [Benjamin Pellkofer](https://ps.is.mpg.de/person/bpellkofer) for IT/web support; [Neelay Shah](https://ps.is.mpg.de/person/nshah) for the evaluation server. We also thank [Adrian Spurr](https://ait.ethz.ch/people/spurra/) and [Xu Chen](https://ait.ethz.ch/people/xu/) for insightful discussions. OT and DT were supported by the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039B.
+
+
+### Contact
+
+For technical questions, please create an issue. For other questions, please contact `arctic@tue.mpg.de`.
+
+For commercial licensing, please contact `ps-licensing@tue.mpg.de`.
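The `bash/assets/checksum.json` file added below maps each downloadable archive path to its SHA-256 digest. A minimal verification sketch (assuming `jq` and `sha256sum` are available; the `./downloads` root mirroring the JSON keys is a hypothetical local layout):

```bash
#!/usr/bin/env bash
# Verify one downloaded archive against bash/assets/checksum.json.
# Usage: ./verify.sh /data/cropped_images_zips/s01/box_grab_01.zip
set -euo pipefail

key="$1"                  # key exactly as it appears in checksum.json
file="./downloads${key}"  # assumed local layout; adjust to your setup

expected=$(jq -r --arg k "$key" '.[$k]' bash/assets/checksum.json)
if [ "$expected" = "null" ]; then
    echo "unknown key: $key" >&2
    exit 1
fi

actual=$(sha256sum "$file" | awk '{print $1}')
if [ "$expected" = "$actual" ]; then
    echo "OK        $key"
else
    echo "MISMATCH  $key" >&2
    exit 1
fi
```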
diff --git a/bash/assets/checksum.json b/bash/assets/checksum.json
new file mode 100644
index 0000000..54d9fde
--- /dev/null
+++ b/bash/assets/checksum.json
@@ -0,0 +1,686 @@
+{
+ "/data/cropped_images_zips/s01/box_grab_01.zip": "76873f96597b60f31e649db43800d6211ec234fdcd62997fde346c27cdd892dd",
+ "/data/cropped_images_zips/s01/box_use_01.zip": "0bc6b572b63f58aa7eda97a133d8ff645b9dc97141fc9c68a7b109ed09d57439",
+ "/data/cropped_images_zips/s01/box_use_02.zip": "b21f1d98cba3bec09a47a7f1ae1bc3f1f6015457bb7f1c0c5b760b0adb3f98c3",
+ "/data/cropped_images_zips/s01/capsulemachine_grab_01.zip": "295f454519918e26dfdaaf938b8ed293182acd34fe2c510a5c444f588eae7088",
+ "/data/cropped_images_zips/s01/capsulemachine_use_01.zip": "602f5709b889d874c9da1ada4e2a2ee1b3f824322184cad112e4bcd846b520f4",
+ "/data/cropped_images_zips/s01/capsulemachine_use_02.zip": "01309031d7ca54702b1434c054bd634d4530f234d51ced532ae288b6019197a3",
+ "/data/cropped_images_zips/s01/espressomachine_grab_01.zip": "67942d68a7db4f3428c798ce2a5ce6d27cd365ebaa14bed9b0d45484b72eea0d",
+ "/data/cropped_images_zips/s01/espressomachine_use_01.zip": "ec90efabb3c7cdb1a6eeab8ad5c549afb1ad27bdf5a3a1965b376b38293c4d54",
+ "/data/cropped_images_zips/s01/espressomachine_use_02.zip": "8e2814fb9bb8400160b8a1b4f6cfa524cb162dfe5414b45d76910ba3b5467ded",
+ "/data/cropped_images_zips/s01/ketchup_grab_01.zip": "e4173913e435b58b9bbdabd4448b37e22652bda6e360e4a1cb3ddc29274d054a",
+ "/data/cropped_images_zips/s01/ketchup_use_01.zip": "4e531e12dcc711cae1e2ec557537a4b2970744f3b3d08ad4e41ee658cb77aefa",
+ "/data/cropped_images_zips/s01/ketchup_use_02.zip": "b927ec26f1c803661e80540460a0483c745d757946dfa318eff24dab49793053",
+ "/data/cropped_images_zips/s01/laptop_grab_01.zip": "a26ece6a7a310a623e7450e5c723be921028345f7b29aa9ef748c1e138f81dde",
+ "/data/cropped_images_zips/s01/laptop_use_01.zip": "1239d61f27a5d2ca51e4d5caaae48a25ec740d65838116aca6685cba511e0278",
+ "/data/cropped_images_zips/s01/laptop_use_02.zip": "76799899ed4bf5af32f70b151bf7ff78cfd525edabd28bfb3b73ed3153f01ad6",
+ "/data/cropped_images_zips/s01/laptop_use_03.zip": "8953e46303f7cb6eef9362f03683bb09e5cf530873973deee7838471af86f102",
+ "/data/cropped_images_zips/s01/laptop_use_04.zip": "c5f25609edca073278c81ec1f138a4500c117a2a82f59280376af6dcbde49345",
+ "/data/cropped_images_zips/s01/microwave_grab_01.zip": "991bd3c56b59d91b070b818505e8ac42cea56e5b0766a6afadfb720978abe78a",
+ "/data/cropped_images_zips/s01/microwave_use_01.zip": "5c78ed690182a6c67a6fbe7ca7870717e329d04e21126ce076f691b2507916aa",
+ "/data/cropped_images_zips/s01/microwave_use_02.zip": "0f78fd1d0e0af887eafe051645fb3bcdf27439354123037675d939bcbc16141f",
+ "/data/cropped_images_zips/s01/mixer_grab_01.zip": "276a8d41b39c3f616ecb1817b5bd039df14aa88e3092c062aab07443adeefd02",
+ "/data/cropped_images_zips/s01/mixer_use_01.zip": "39c500c289a9f67d22b464be3d8296bd2a89689e52d479a8739d355c3f95e18c",
+ "/data/cropped_images_zips/s01/mixer_use_02.zip": "ceb4f3f350cc70ec63d4558cb6ae7e612f8f3c73463934d8e4624ec4f8503c2f",
+ "/data/cropped_images_zips/s01/notebook_use_01.zip": "d98af30be5267df9a866368e0b87a2d19305d729bcef64618cfacaf2f26911c1",
+ "/data/cropped_images_zips/s01/notebook_use_02.zip": "333d1533631f443a2c93d23ad2633149d1621d841cff4106bad107a65f990699",
+ "/data/cropped_images_zips/s01/phone_grab_01.zip": "3b8e65d98eb48c41143234b69ef94d787f22dcead247ec667be0fc2fae248599",
+ "/data/cropped_images_zips/s01/phone_use_01.zip": "620cbdc1719927ea2d5f6246dbffbc1be3cacfa149080ef7a8cd7000688b8f40",
+ "/data/cropped_images_zips/s01/phone_use_02.zip": "877772134113071f19d182410f1f352315aef43b7ded6f4785a1da02786dd9f0",
+ "/data/cropped_images_zips/s01/scissors_grab_01.zip": "703715d44a269d8efbed8ae02cd8703289d0b472cd97066dcff4741099da17fc",
+ "/data/cropped_images_zips/s01/scissors_use_01.zip": "7b73e5f9785bf860cc480979df21f2b7fddf65eae4b03b052a5a6420168bd5b3",
+ "/data/cropped_images_zips/s01/scissors_use_02.zip": "f21eb85440a6057ebde44258e9029222c47935649ee91ce746569c342f23150e",
+ "/data/cropped_images_zips/s01/waffleiron_grab_01.zip": "7d67700085b41989c08c3b3b49e562a83985977261f88d2aa1feb466acc1ec28",
+ "/data/cropped_images_zips/s01/waffleiron_use_01.zip": "6c7ab3770ed891f0f65cb409b2fd160b699b4daa2f3f9d0cd44428c0fcad490e",
+ "/data/cropped_images_zips/s01/waffleiron_use_02.zip": "36b68c7f34b41b937d41795ee1557460d0b93985096bbdfebb84b5128b941dd6",
+ "/data/cropped_images_zips/s02/box_grab_01.zip": "6a2a3a52f1277ccda46a9eaac8138aae1c8c335832d374f64c6bef28c5bffb96",
+ "/data/cropped_images_zips/s02/box_use_01.zip": "95334a34287ae6689a586ea146e8908e2bac7a0771a03b162a62d9ce6f800dcc",
+ "/data/cropped_images_zips/s02/box_use_02.zip": "aa65c660988bde1a40440c4c177eb5c1f3f892e1b1744197c27da0570248f9ad",
+ "/data/cropped_images_zips/s02/capsulemachine_grab_01.zip": "7270ce31299bba766358e7581b94f6e7d929a45b2d0c6355afe71bf9fcd40cf6",
+ "/data/cropped_images_zips/s02/capsulemachine_use_01.zip": "b7884d7163d0d9e554f615282b097989505f161ed50358842bd7377b754f03b8",
+ "/data/cropped_images_zips/s02/capsulemachine_use_02.zip": "97d85492f5113ca2465bbdc11f6a44cb358e8e11266f40598427f1aeb47d413a",
+ "/data/cropped_images_zips/s02/espressomachine_grab_01.zip": "49c5bbc14a327e253726b426a58fddac6062925cf59f6ad8abd37790d7d5ad3d",
+ "/data/cropped_images_zips/s02/espressomachine_use_01.zip": "773f8d2f476168c0edfb56c27280f86dda8f21aeb89a3b39513bb823134bf522",
+ "/data/cropped_images_zips/s02/espressomachine_use_02.zip": "c88b95466f80fc1922dd68c93704f39c15d23d0f962e4fd37b2b149edea897e9",
+ "/data/cropped_images_zips/s02/ketchup_grab_01.zip": "730f6108290974ff92d4a5bb2bb8418c1d5ab3c830f800e13162f53fff0b39a4",
+ "/data/cropped_images_zips/s02/ketchup_use_01.zip": "b353ca323f1747334521d2bbbf5061bdd729f71e765d8ed82123e926d702e8d7",
+ "/data/cropped_images_zips/s02/ketchup_use_02.zip": "9635fc4bf8e57ea3b2fc52c34cc23cba00ff9b860fd184e8d175bd24bf5282e0",
+ "/data/cropped_images_zips/s02/ketchup_use_03.zip": "2f4f26b9bb59a0e1237ca3b4f9c5ebf8d9a85f32c8acc6b82684a4aa418c05a3",
+ "/data/cropped_images_zips/s02/ketchup_use_04.zip": "7ff8d625076c9456e6d6bcb196312e05126321fa04b3ce68759f3973d2dc4fb5",
+ "/data/cropped_images_zips/s02/laptop_grab_01.zip": "fcd33f111645fb1d569b7c3f817c04cec81f573efbbcb616b3e6eb889e779e24",
+ "/data/cropped_images_zips/s02/laptop_use_01.zip": "e450a7814c6ee034c538f9e91901fd937b56ce79a349a350647b4d7e17b01cc7",
+ "/data/cropped_images_zips/s02/laptop_use_02.zip": "b029c1f4995e55389c87c69237f41b86e2d714f4e62a5328403b9554158d4bef",
+ "/data/cropped_images_zips/s02/microwave_grab_01.zip": "17c5f31dae192d8e6478ef6b43081bb60638533a1455edd314357fb387c7649e",
+ "/data/cropped_images_zips/s02/microwave_use_01.zip": "e4432e872d8e7a9980fe60435c57fb2596f44fd08d8739f54e391bf2883ff002",
+ "/data/cropped_images_zips/s02/microwave_use_01_retake.zip": "36effae7fddc9f4d24f813645bb4ea9c61e4054a1f9a172b8287974425a40106",
+ "/data/cropped_images_zips/s02/microwave_use_02.zip": "327c1ff9fc0ff192d583e2cc883d129062d4fb5eb9423d27b6194609f3d5a143",
+ "/data/cropped_images_zips/s02/mixer_grab_01.zip": "eed618abd497f73f252844d49106a561a1caaa2aa6e1415dc99cbaa9cb3feb7d",
+ "/data/cropped_images_zips/s02/mixer_use_01.zip": "b7813a1bf141f6940d5106a42eaddc523a5ea2f3a18c5454da99d84196df7f78",
+ "/data/cropped_images_zips/s02/mixer_use_02.zip": "de3472eebd1f2b230b2ed9572a01f3a721855ca5cd1dbe6af5609c677f4518db",
+ "/data/cropped_images_zips/s02/mixer_use_03.zip": "46f7a494b1d6b696b8b957957b171dd23b86e4d5be4e058cea1fff34252953ea",
+ "/data/cropped_images_zips/s02/mixer_use_04.zip": "cf1a288e040b6013513e7864a5f65f084c18a88c82fd8a2bda2d7afef8f05983",
+ "/data/cropped_images_zips/s02/notebook_use_01.zip": "35d124bd639cb7f6b62741bf5d82b7c2086f730429bd0eaab5ad748dde5107d2",
+ "/data/cropped_images_zips/s02/notebook_use_02.zip": "e97bf560e7b2fb771cc27bc1fd8f9a7e8af265a8e23aa7e75a95e0df54fd0d2f",
+ "/data/cropped_images_zips/s02/phone_grab_01.zip": "7f70df527284ba581224562dab2db1c272fc97c8dd9ed8f0338719731e7d765a",
+ "/data/cropped_images_zips/s02/phone_use_01.zip": "7eccee90a3089df130ce6bfe1b54a48fb4e8c22591cce0d139409e66efd2a5a0",
+ "/data/cropped_images_zips/s02/phone_use_02.zip": "8f130ba166a283960b3314fd57111ca59abc973085e80dff96dc5042392c3acb",
+ "/data/cropped_images_zips/s02/scissors_grab_01.zip": "29c3024673b0ae8fa4e112238c8a755afa13315d0e66f2655381223397d43f2f",
+ "/data/cropped_images_zips/s02/scissors_use_01.zip": "a8218feeab82e5048840cedbad094447705b1e3506a055fe629382e88f0a1c7e",
+ "/data/cropped_images_zips/s02/scissors_use_02.zip": "09335248a63c24c24ccde015c270e542b74fa664aa00fa2f7533654f1554ceb7",
+ "/data/cropped_images_zips/s02/waffleiron_grab_01.zip": "5e605e5adbee550629d73559f4ec8a35967dd19fe63ef12fc993e63cb03c76ee",
+ "/data/cropped_images_zips/s02/waffleiron_use_01.zip": "04bb7e357a0ba935bec399521b5ab1db002b4b53e1a17c6297070962be68d7b2",
+ "/data/cropped_images_zips/s02/waffleiron_use_02.zip": "fb6dc59c8c83dee2f6bdd148c43be93e13fbf3aa199b671ad6539f5615352998",
+ "/data/cropped_images_zips/s03/box_grab_01.zip": "806c1dd9c839c4a972ac2a3e6443ab045110e50271f68a212cd76af7a43851b9",
+ "/data/cropped_images_zips/s03/box_use_01.zip": "05b4ecacfbafb4c3ccdef6ae7554060c07b93d421775d6fc662c7b366ecf1642",
+ "/data/cropped_images_zips/s03/box_use_01_retake.zip": "2c5b7cb0e71b2b60e0be40c235ca543a23b2760960c2e483c92ef94c82275588",
+ "/data/cropped_images_zips/s03/box_use_02.zip": "5e7d3fd0831b306f9fa80d79402a37aab9e89af2c186a8f14c5be85d56699ad5",
+ "/data/cropped_images_zips/s03/capsulemachine_grab_01.zip": "819339048aa5db43e7524b63fe2f3455e9110ec9b6c9ca4d44a81e66055f60ee",
+ "/data/cropped_images_zips/s03/capsulemachine_use_01.zip": "5e807e2573988881112ba1b709d27764232cbfe890823fb7a96a82f129aca3e6",
+ "/data/cropped_images_zips/s03/capsulemachine_use_02.zip": "d96ef1ac9202e395e5614d69f75696aca1b67f2b32f6965178bf05795d832118",
+ "/data/cropped_images_zips/s03/capsulemachine_use_03.zip": "e118d5e56605ff80b3102f495b19c3a5ab8b876a4cd25681d57bf67201027399",
+ "/data/cropped_images_zips/s03/capsulemachine_use_04.zip": "9c8b0d8bb44ceba2d73077fdcc1bb08bc2373402774d0825061546ebf87c6d2d",
+ "/data/cropped_images_zips/s03/espressomachine_grab_01.zip": "ceade6d90aec875fcb325517717230731a55664da5668938d2131e71d8d6040f",
+ "/data/cropped_images_zips/s03/espressomachine_use_01.zip": "00130c870985c5fdc222f99ebc1dc6651613401f9478c17b7591587b75d7664e",
+ "/data/cropped_images_zips/s03/espressomachine_use_02_retake.zip": "caad872bd496245e8babf02956bc22b9dcad8b53864d019e72ad08709b715da4",
+ "/data/cropped_images_zips/s03/ketchup_grab_01.zip": "6b483df09304fde81baab79d795b66848d84a0d1ace35fe9030e97edcba6930f",
+ "/data/cropped_images_zips/s03/ketchup_use_01.zip": "a66bb36884b08ec7600501c6dd4b79fd41ad34bc28ca06b75f7ebe3144ad14a6",
+ "/data/cropped_images_zips/s03/ketchup_use_02.zip": "5cee7966faa227346e016938d6e43364adc282020b8f19941883fa353f184780",
+ "/data/cropped_images_zips/s03/laptop_grab_01.zip": "4afeccd525f9cddda66129937f9a1645d35e4830f329c2dc53e701f716621c00",
+ "/data/cropped_images_zips/s03/laptop_use_01.zip": "0d563584d6a64aa8815987afcbce2c49ed7cc0d585ce200baccc3aacdbd04b77",
+ "/data/cropped_images_zips/s03/laptop_use_02.zip": "5f25b1e5291b7f5fa63e7e521b236ae3ace8b37e8596c6c7336e5fa1913dd583",
+ "/data/cropped_images_zips/s03/microwave_grab_01.zip": "5f2e464b1539ba8099a43ba920701ac9056b9f00813f00db7c4681c13965bc3e",
+ "/data/cropped_images_zips/s03/microwave_use_01.zip": "32cf8641dbc5b2072d0b95e782f5419ee08bc968cf967ca5b9b103f587e7f862",
+ "/data/cropped_images_zips/s03/microwave_use_02.zip": "9de534d70825bd47052b70afa4013da1a5734ccac9dd8aacf3c22d54dedc7e67",
+ "/data/cropped_images_zips/s03/mixer_grab_01.zip": "263a435bc940f03cfcfe29ba006a4b4f1f23f96bf992713edbd76fda6bf691dc",
+ "/data/cropped_images_zips/s03/mixer_use_01.zip": "2b75bde0d89df2f321a48679e2a2c1e2a78b92247317e062bce00a09629ec709",
+ "/data/cropped_images_zips/s03/mixer_use_02.zip": "43d4856ab7a073ea4844a512e0ef6e4628714b7bf25e8153f034d29d6127ceb0",
+ "/data/cropped_images_zips/s03/notebook_grab_01.zip": "1407c86fb560f2d0a57031d2e26e74c4cb5a61b2c2ebf8cff805d86fc4e34acc",
+ "/data/cropped_images_zips/s03/notebook_use_01.zip": "d15c02de1befd38de3e10264edf26bd6a60517ae3176146f29e53d2d2b33c1d3",
+ "/data/cropped_images_zips/s03/notebook_use_02.zip": "c3a7213179085cd72248cb05f1a63f1fdf3ce8e098f5e80c3251aa5881ed317a",
+ "/data/cropped_images_zips/s03/notebook_use_03.zip": "ab12a8827494e59d45881b0bf9b36837e6c65dbf3ce66a881872a9c08c9c85d7",
+ "/data/cropped_images_zips/s03/notebook_use_04.zip": "0748b99335e256b36cfa17b76502852cc288736be650f557838b146d591584f7",
+ "/data/cropped_images_zips/s03/phone_grab_01.zip": "1103069878ce103791fb72e3c6e0543fd72dbe5520c141c95889f5b0ea8b603b",
+ "/data/cropped_images_zips/s03/phone_use_01_retake.zip": "3d9a9373446ecd66b51faf7b30294772944d7f9ac47aa08a3c4c422aa424bfd3",
+ "/data/cropped_images_zips/s03/phone_use_02.zip": "8a88d80da6b3c94b990cfbba5e33b61637d325b43861f09d7349e580b26e8f45",
+ "/data/cropped_images_zips/s03/scissors_grab_01.zip": "1665d8b83f3e2e253e1e6e5d5cd9ee6a8c60304bf6975ea77d4c7fe13379757b",
+ "/data/cropped_images_zips/s03/scissors_use_01.zip": "bb3d0fc71aa013acfffdfcc0a22d4f4282264941a51e1cc6feec7307df3a96e1",
+ "/data/cropped_images_zips/s03/scissors_use_02.zip": "766054b7147b4f4deb1759f9c5cd4e1f4b49645f72bf52726d58cb28e3ef2030",
+ "/data/cropped_images_zips/s03/waffleiron_grab_01.zip": "6e37256ee65c08cec439ce517f5c27cdf745351b3ff74281985e211e6afe6bff",
+ "/data/cropped_images_zips/s03/waffleiron_use_01.zip": "764b93b995a71df056d4ca09224634fd086179999b669ea90fb0b50a5abde1b8",
+ "/data/cropped_images_zips/s03/waffleiron_use_02.zip": "7e6eae25f50f5303c414b3ce945e9d64cb1fc6f700a1f39b5c54dca1ad212888",
+ "/data/cropped_images_zips/s04/box_grab_01.zip": "8d79285721991d1bbbdc6aac9c209c1e6fa4cd1cb41a801c80ea89e96446e800",
+ "/data/cropped_images_zips/s04/box_use_01.zip": "7d9af49ed1bbcfdebd5d6355a548ed95808783a3d0f1ca8395efcd11f71a481c",
+ "/data/cropped_images_zips/s04/box_use_02.zip": "d3bc60e32d4e880ed608abf52af4527291d8d9447e98e8f77ede532d679d5e87",
+ "/data/cropped_images_zips/s04/capsulemachine_grab_01.zip": "ba018479a36ae2da229e7825e57f11dc0c87061d2dfccff8603d1c8a29ad918f",
+ "/data/cropped_images_zips/s04/capsulemachine_use_01.zip": "e5602166305aec65d78c241551413c36795a6ed97a657ba514bb2205b446ccaf",
+ "/data/cropped_images_zips/s04/capsulemachine_use_02.zip": "3889c4c424a8773b2bb42c30eea272e8b3834f3145fb4345a3855da56da17ed9",
+ "/data/cropped_images_zips/s04/espressomachine_grab_01.zip": "cad997166f42adc4fef42dec229fafb3142cd343a39f18f1965ee948a32f0d58",
+ "/data/cropped_images_zips/s04/espressomachine_use_01_retake.zip": "c199ded309cbd4b1fdbdd314745926597dea490f3d3aa5e2eac834163ffb9e3c",
+ "/data/cropped_images_zips/s04/espressomachine_use_02.zip": "d615dfc110910a3df3d164d4d4b55ea6ca21f0ff2e392c071d9dd55ed69b8139",
+ "/data/cropped_images_zips/s04/ketchup_grab_01.zip": "d9eec3345bf0405e52faa190d3e53d2d68fb80182f8d3d7d87a30a07f523ec36",
+ "/data/cropped_images_zips/s04/ketchup_use_02.zip": "9e8271792143dc676d38e71a06121639706c188ca96e13cbec8469d3610b9fa9",
+ "/data/cropped_images_zips/s04/laptop_grab_01.zip": "30b1a8dab45f21f74284080e69c6397f975e79a5d2540a1e268018dad21b056f",
+ "/data/cropped_images_zips/s04/laptop_use_01.zip": "2f457e65267eb97a68dfe4e13ed034b9b1817cbb32ea8579f3c0ec8ca507e786",
+ "/data/cropped_images_zips/s04/laptop_use_02.zip": "390180ebb2a0646c5e45a7ed59d73bb641c2c0a4f56542c85a41ed8b8bc2d314",
+ "/data/cropped_images_zips/s04/microwave_grab_01.zip": "3404c05feefb746cc692097a6012bad410f70581d24f42d5921b3b0a6c5339d6",
+ "/data/cropped_images_zips/s04/microwave_use_01.zip": "e789563e3cfb919c4366b139c6ab514661991caf41a2b7c5e6f43481bb610f2b",
+ "/data/cropped_images_zips/s04/microwave_use_02.zip": "b284107f0503a830f319e6d71127ff1b419cb298de1c6e2f9f2b05cc37f487cf",
+ "/data/cropped_images_zips/s04/mixer_grab_01.zip": "526ebe83fc7eb61827d69ad0ecad4d97f5c8e9998cc89c7d2b0b17a436ea4711",
+ "/data/cropped_images_zips/s04/mixer_use_01.zip": "8dbe53c00f7b0606d769076c3527204110a40414322dd15cf99b4821ef7fb915",
+ "/data/cropped_images_zips/s04/mixer_use_02.zip": "5add40a6c57bca61f10e72d472c735aa644ce2b1751bc9fcfa12babb77c5384d",
+ "/data/cropped_images_zips/s04/notebook_grab_01.zip": "02b2b9abb925d0cfdebb1e7bb42ae4c0b5d5631333397db20b4e555c2abd0b88",
+ "/data/cropped_images_zips/s04/notebook_use_01.zip": "90f1db36527fefccf97446abd02bd63e78cdf1eef5c175ffe2a7cf90e120b4d8",
+ "/data/cropped_images_zips/s04/notebook_use_02.zip": "b769bd98207c0f92b86101525fe61f9cec00b71741f5b68a09c914fe57b47451",
+ "/data/cropped_images_zips/s04/phone_grab_01.zip": "ec278e54f6c18129d045cc2fd1eca90c34f74db0643cdf0db5f997a21e6568e7",
+ "/data/cropped_images_zips/s04/phone_use_01.zip": "947bfe0d8cfd16155d3a63740e5247c6ca668ff2f282b04a047b7df3c68a6084",
+ "/data/cropped_images_zips/s04/phone_use_02.zip": "1576e48f50e5bd46865243edc496616b3224151f2668eb9f5d101ed8a5e6c2b6",
+ "/data/cropped_images_zips/s04/scissors_grab_01.zip": "b668597badaebe5e9215c62b205beedc72df03031aeba771ac4993cf337c2f59",
+ "/data/cropped_images_zips/s04/scissors_use_01.zip": "4a63b482c57b66d92f9766d5ef94df13bbb3938509d4d9634c2882ca32a716e2",
+ "/data/cropped_images_zips/s04/scissors_use_02.zip": "7ec6a1cbc2caf92f4d75118611eef810f62d9096cc52b152d5d960cbb3c67b90",
+ "/data/cropped_images_zips/s04/waffleiron_use_01.zip": "ee6d650cce011ead52d7c8ef3bd3dbe656ac106c8044f8784eebf893964d4d33",
+ "/data/cropped_images_zips/s04/waffleiron_use_02.zip": "78b489d22e4ae755301d7353845a37a15481a0500b69e5dd01a3cc07258b280f",
+ "/data/cropped_images_zips/s05/box_grab_01.zip": "890cd7044a2eed24811b8e08a17fe9a7efc61aa4a6f425728c840c920f7d4055",
+ "/data/cropped_images_zips/s05/box_use_01.zip": "1c148262b420a9be5f385b3865f04f4c24bf9d5aaa81a3417effb7272f1dacad",
+ "/data/cropped_images_zips/s05/box_use_02.zip": "d03fb888b9388e60dc2feae5c84bd0d8a7b1d369cd7fda519a5136a8f8ed8033",
+ "/data/cropped_images_zips/s05/capsulemachine_grab_01.zip": "4785ee6aa6141293ca08293861a1109464277b6e26000f4a8ea75144cb4a11b9",
+ "/data/cropped_images_zips/s05/capsulemachine_use_01.zip": "67a2a32b331e043e282790d7f3d23610bca2d2d92e094d8c41d061d8fc4fa0c2",
+ "/data/cropped_images_zips/s05/capsulemachine_use_02.zip": "a8ba950f43f2acd6853b82061bc1a5e49da70b4e5fb075f10e27ac9dc9ff6ef0",
+ "/data/cropped_images_zips/s05/espressomachine_grab_01.zip": "f6d3497124dfba6ab9038d71c46a7626ab42d0916b0c7e932d1a89d63d2925e2",
+ "/data/cropped_images_zips/s05/espressomachine_use_01.zip": "189e28790ad801060a23d1f3b1604ead4fc55f085089d30b5cecc7760abbe597",
+ "/data/cropped_images_zips/s05/espressomachine_use_02.zip": "452752b9403bcafe382a8789ea7e0a27dfa1a9b9a51336b3ea8bf83128e93d63",
+ "/data/cropped_images_zips/s05/ketchup_grab_01.zip": "a601b09150060d20208f683c9fcd5486a577b7b8b6a034c6da3c8a00036440a9",
+ "/data/cropped_images_zips/s05/ketchup_use_01.zip": "f241f83eaa46dba923c0231f0ee476cfa83d96eab75bd0262fd15b21a3e8c63d",
+ "/data/cropped_images_zips/s05/laptop_grab_01.zip": "69c2acf23609ed9042e3e4048dc72694a47f8de41ebd5c064e8d798129c7d7f4",
+ "/data/cropped_images_zips/s05/laptop_use_01.zip": "fa89e623066fd7b3c7ed0f50ce1e9691f706c2d467c96c9de63dd3f0003391de",
+ "/data/cropped_images_zips/s05/laptop_use_02.zip": "df877e169b279eaad570a8fbfc45b62d01a56db5c39d7b5f646c020e1661d121",
+ "/data/cropped_images_zips/s05/microwave_grab_01.zip": "1a68a75333fdc6fefcbee7a8e172d5b744d921c92b44c69c09ef848dcf6b191e",
+ "/data/cropped_images_zips/s05/microwave_use_01.zip": "05b3ce0d1e60e6523e53cdfd51d11698a39ad82dad99434f2afda0c1781bc1d7",
+ "/data/cropped_images_zips/s05/microwave_use_02.zip": "897c08b2e983d554b9b3149824afa9530ab2bdaf23f423e0a655d4f43cf9b0d3",
+ "/data/cropped_images_zips/s05/mixer_grab_01.zip": "d62886fd8595bb856388cdc0871b715a3c002a745d05957f2f4997950741cb81",
+ "/data/cropped_images_zips/s05/mixer_use_01.zip": "7c09ab6651ca41fe2923004dd0d1070c304d4c15e26694eb34e7b9d3c37ab128",
+ "/data/cropped_images_zips/s05/mixer_use_02.zip": "27301af7f675d0dcd566a3d9da422ca904bb5848f3b260944a7ffa272eece292",
+ "/data/cropped_images_zips/s05/notebook_grab_01.zip": "05d183e0bb3667c46010b823608b7d5535601e01d7e731da2c6a7030ebc2435a",
+ "/data/cropped_images_zips/s05/notebook_use_01.zip": "30d7e06191f147afd39431d52e0f5fda4f9b2e98f3054c4a66492117a767cb74",
+ "/data/cropped_images_zips/s05/notebook_use_02.zip": "d94a555edb3dc84eaf088b93f1251768ac0920947018d2a98b24579422f4837f",
+ "/data/cropped_images_zips/s05/phone_grab_01.zip": "eb28e4ab5d9a97b8dae939dfa3398d627ca739a4c1cc7133605a0c14f0874cfe",
+ "/data/cropped_images_zips/s05/phone_use_01.zip": "e0b0b08be337269f1d7a20f5c4003715806e5e7d9e7970c3fa2bce62d4aaef31",
+ "/data/cropped_images_zips/s05/phone_use_02.zip": "608283fa33b34a28ab18531318256013eb059a3dfb576d50838c8f35ff6751d8",
+ "/data/cropped_images_zips/s05/phone_use_03.zip": "45509f471b716aa54c3d8e93cacb7f4bb9556837fa9ba5449f60a1d7c7cff401",
+ "/data/cropped_images_zips/s05/phone_use_04.zip": "e042b65ab839272695a4a1e1858bd07582c11f3366f77f095d9f29c4d4b85c9b",
+ "/data/cropped_images_zips/s05/scissors_grab_01.zip": "bd4f3eef74a87eae35fb497c1b291a97968d39e6030fa82fcb55bc455fa67cd0",
+ "/data/cropped_images_zips/s05/scissors_use_01_retake.zip": "78266845c8b83b7c7ddbf745e426c2192c770dea7c97d0b28bd47dcdbb3c3512",
+ "/data/cropped_images_zips/s05/scissors_use_02.zip": "796f223bda4767e1298a0284c3b91a69e02e05d629e4bcd184389ec93df0c91a",
+ "/data/cropped_images_zips/s05/waffleiron_grab_01.zip": "91f550c806113133897e8cc442c9bfe0c2163cd57866067816c9c48d777b53e5",
+ "/data/cropped_images_zips/s05/waffleiron_use_01.zip": "179943f32d1adcfcee79bc5f3d3fa353c5921e560e6a0b3a44a6c44a2bbed5c7",
+ "/data/cropped_images_zips/s05/waffleiron_use_02.zip": "9555177b22fd490ad4317fc68c60d07e031b2aa32fd2cfe11006e7e73cfe7927",
+ "/data/cropped_images_zips/s06/box_use_01.zip": "daef0e198625c749716a03e3bf019be4b26dd51f28240858de9d23e84ef1dfc5",
+ "/data/cropped_images_zips/s06/box_use_02.zip": "76dad732d95321d860e9f707bc668c06023441082fac743b87029ba3e92fefe3",
+ "/data/cropped_images_zips/s06/box_use_03.zip": "95e66bdaf989cda442fb0fbb364f9b7fa3a5b2fec103c2de13511da8a418e980",
+ "/data/cropped_images_zips/s06/box_use_04.zip": "febbf879186938614ef2c84a9b24b5273d3aac7d6bb59dc1493acb540f803fb2",
+ "/data/cropped_images_zips/s06/capsulemachine_grab_01.zip": "1ff5e13b09e8e71323dd6d1d273438d0bb6c59ce0e72aaab2d05642614eaa3fc",
+ "/data/cropped_images_zips/s06/capsulemachine_use_01.zip": "7d19a556b743379d139864dcbcffc9a811ade67d1f809e9ecc0ad335e38f94a8",
+ "/data/cropped_images_zips/s06/capsulemachine_use_02.zip": "69d828d6ad6bd169797d37a4de6612e931963e521070857550b4bbfd679616c2",
+ "/data/cropped_images_zips/s06/espressomachine_grab_01.zip": "40d4a3f31690f453e0ddfa7611d70da0583120ad8e160908c68950442bd5fddd",
+ "/data/cropped_images_zips/s06/espressomachine_use_01.zip": "4c48008d329f006532b5965033495ed455ccd2731a2dc44cd8415842e02de483",
+ "/data/cropped_images_zips/s06/espressomachine_use_02.zip": "fdec03006a941342c83b42c4b6ea3039065e5d8c3b179bbb21445021de95f7a0",
+ "/data/cropped_images_zips/s06/espressomachine_use_03.zip": "650838a94c47b0b781a7590fe6ab252ebeb9d2261adc1e31ec689f5ea9acc582",
+ "/data/cropped_images_zips/s06/espressomachine_use_04.zip": "63b02d494f8184ed70722b026381e24a4880667566261129b4019a434e780c0a",
+ "/data/cropped_images_zips/s06/ketchup_grab_01_retake2.zip": "e6efd8457b0ae86fc57c628342bd1a25ab51a851b3cf271781bf0a269a624a4f",
+ "/data/cropped_images_zips/s06/ketchup_use_01.zip": "222d62f51e3a50a93b3d3ba624686d71a3d60681c6456d4903e659abc45a5e93",
+ "/data/cropped_images_zips/s06/ketchup_use_02.zip": "60e3f023bd58998c9cf7f88a4662ac40287e032a9023e8306ace52a51a7d6601",
+ "/data/cropped_images_zips/s06/laptop_use_01_retake.zip": "31800938d07c925adb22c1572ffa90c710017ef577428841e065b35638cb3c3a",
+ "/data/cropped_images_zips/s06/laptop_use_02_retake.zip": "7ee8c61eebfeeb5a55d5f89d74060af102897afa34bc4222e22c2105372399b6",
+ "/data/cropped_images_zips/s06/microwave_grab_01.zip": "c7467173a5bb3d1424d0dd380564183b731188cb6e169d6460a0bc29abc1aaf8",
+ "/data/cropped_images_zips/s06/microwave_use_01.zip": "bf34df05cb6d6fb14e605a8560584b48d9e3215ade121e0dd17c0ed5381cac90",
+ "/data/cropped_images_zips/s06/microwave_use_02.zip": "aaa5f21e9da7d0d0d8bf38cd4c9bb73594354e9d9d94072c25b03f2de4982e59",
+ "/data/cropped_images_zips/s06/mixer_grab_01.zip": "8d0ad024821757afb414125f579b8f0e5a14ce0829232f85c68007f548658f52",
+ "/data/cropped_images_zips/s06/mixer_use_01.zip": "a2a8202451823904c6f27b85f6f90c74a1e00af96a3092236dadbec621fe5e66",
+ "/data/cropped_images_zips/s06/mixer_use_02.zip": "fa5d6a69aa8f0cc41e15900c160daf3d196a303bd08c1f9b2f5907c0b7c7d764",
+ "/data/cropped_images_zips/s06/notebook_grab_01.zip": "a757f9c19f2fde0bd98d6c462f3f047ecbe03ea1c12dcae7ed8665c2a6a0dc65",
+ "/data/cropped_images_zips/s06/notebook_use_01.zip": "ee16d81977516dcd1b58ab3fdad303c4c76b7f6ea1cf50dfb9ac79337b5f1bfd",
+ "/data/cropped_images_zips/s06/notebook_use_02.zip": "c65231d67c2095a8a6e809550eb6f5635e97d0486ad5fe61fd76300d2d6eb8ba",
+ "/data/cropped_images_zips/s06/phone_grab_01.zip": "e9a323844520d01d68497fc1974658a83b199d903b41f6dfec63f0c35a0d5463",
+ "/data/cropped_images_zips/s06/phone_use_01.zip": "c6a1271b1602f1fe364091aafd126436d59708ab4611c58a18d8c66a5fec9efc",
+ "/data/cropped_images_zips/s06/phone_use_02.zip": "0e90c5d87555b819a9c222f06ba6a3ed24be3e782c0b4765267d8c3f74502143",
+ "/data/cropped_images_zips/s06/scissors_grab_01.zip": "fcaef37125953b97b3d25e6df15a75ae61998e50083efe0cf1428da7c9a0b9ae",
+ "/data/cropped_images_zips/s06/scissors_use_01.zip": "4f55d2559a1728e7651acd035eb92d9b32c9e82e64e9f9023d84ab8c7c7484e8",
+ "/data/cropped_images_zips/s06/waffleiron_grab_01_retake.zip": "a796c888f3f062f076bd5494a4ed918666297f9856f0c3a0b8664108b2e6be33",
+ "/data/cropped_images_zips/s06/waffleiron_use_01.zip": "37753b5825ae936a66ac588dd8dd2e21ddc66ca38daf1cc19b400ad23f807ed3",
+ "/data/cropped_images_zips/s06/waffleiron_use_02.zip": "6339e792a39fc75615475c5ccfc0b572ff82e319b779046cbb7b2a4736eaf91f",
+ "/data/cropped_images_zips/s06/waffleiron_use_03.zip": "0caa080fb039ec512533c5095b00ecf784984f19b0b1f37851a53776e134bf1f",
+ "/data/cropped_images_zips/s06/waffleiron_use_04.zip": "2f4f52a2297abaf35a10a757ed370e547b4cecaf6b0b59b03911159d910e8c4a",
+ "/data/cropped_images_zips/s07/box_grab_01.zip": "309188942edc68dbb75c77141588ac3c4c3bb8d665dee35c1d3fa8e71537ef95",
+ "/data/cropped_images_zips/s07/box_use_01.zip": "46df381eb8aaab085854d2c1bd1acd24f396de42b66d3fcffc7a493f1e2bf291",
+ "/data/cropped_images_zips/s07/box_use_02.zip": "e11a45a9908ddbcb975474ce76eaac747c6d85998b0046d408a0eacf9cdd83d7",
+ "/data/cropped_images_zips/s07/capsulemachine_use_01.zip": "e697979909896923c1d41d0ec6022fb115739ab65da7fa6f96584cb433789de7",
+ "/data/cropped_images_zips/s07/capsulemachine_use_02.zip": "ad3f8c293142d3e69ad8b462c5d11c3897211655d68d8db4bea048c07f64f83c",
+ "/data/cropped_images_zips/s07/espressomachine_grab_01.zip": "51d80f7b907566491b7c5f9fe06e59bbc18fe4631aaae7074bbc6c1b30a94b2a",
+ "/data/cropped_images_zips/s07/espressomachine_use_01.zip": "79feb668c3f6018e503cca03fbde385245c220bf5a8077b44faceb65c8b0bc01",
+ "/data/cropped_images_zips/s07/espressomachine_use_02.zip": "e88f51094eaf79e51a250681fba897e3045c52db9ab24104c0b2cba3e69509bd",
+ "/data/cropped_images_zips/s07/ketchup_grab_01.zip": "13b063459e091df99cb10572bb9112a442fdebf68dd5ba057ae97c16098025ab",
+ "/data/cropped_images_zips/s07/ketchup_use_02.zip": "63f2809ed75058573119fd7ddad8ae40ba8d2af40c96d9ddf474cf69329fc264",
+ "/data/cropped_images_zips/s07/laptop_grab_01.zip": "7e5d0dda3905f7509deb57f47ecb5f9a725e22146bf2851d1756f0d959f47f0a",
+ "/data/cropped_images_zips/s07/laptop_use_01.zip": "d374f6318b8dcfde9d95e75898505f075f169ca353a24fe93730f690db77d414",
+ "/data/cropped_images_zips/s07/laptop_use_02.zip": "a7b820d4e7eb760aa5c70600764c4019f1369e12c5c997fab4cbafe37ca82753",
+ "/data/cropped_images_zips/s07/microwave_grab_01.zip": "388742f13c8a1d7859f71d25082456482c6972c3461b5c9297c5b7decb7e1e5e",
+ "/data/cropped_images_zips/s07/microwave_use_01.zip": "137fbdf08fb93ac97a8db6cf766a2870b115730c31af01718678074fa0d2af4b",
+ "/data/cropped_images_zips/s07/microwave_use_02.zip": "1a2af71af87c5f761653de6ea31d48e01c579091b487608efd79bd187e88b3c6",
+ "/data/cropped_images_zips/s07/mixer_grab_01_retake.zip": "a43f9dce11cda7890b96af383e18c1c696431f9acd88e34cc93e8e44e6482fb9",
+ "/data/cropped_images_zips/s07/mixer_use_01.zip": "462f105cdaef2f697c8ec0b5590abf441bef2251a8615ca582c1e093eb98c549",
+ "/data/cropped_images_zips/s07/notebook_use_01.zip": "061ab9d5139827f89b231c4b1711222a3a91d681c1c6bfef33b807bc0318a810",
+ "/data/cropped_images_zips/s07/notebook_use_02.zip": "bacdcff55a6eac092f58219b483b9db8c6280f5bb67b97589b52b23c61aa29a2",
+ "/data/cropped_images_zips/s07/phone_grab_01.zip": "00f9c727403a00a34ac4b360f59c113250f48719785e744ef572d4bba0c92175",
+ "/data/cropped_images_zips/s07/phone_use_01.zip": "196c7bae70d1cecad1eceeed1e8613826310896389145233b44dbe327f4c1740",
+ "/data/cropped_images_zips/s07/phone_use_02.zip": "ec1b438db6c4413fb9c21b489a46f74ae7de467c138d5cd2e813e8faae0db126",
+ "/data/cropped_images_zips/s07/scissors_grab_01.zip": "8688b9f194328c490268a45333ccfae9c2785a00587594bc350b665ad6e5857c",
+ "/data/cropped_images_zips/s07/scissors_use_01.zip": "5dc79de1966dfc2db47dfe60c513f7091b21cb0e2ee2023581e02218252b7dd7",
+ "/data/cropped_images_zips/s07/scissors_use_02.zip": "a84d9bea48087f6cec4673f316a385f507f42d8b8766969dfcb684b25123deb6",
+ "/data/cropped_images_zips/s07/waffleiron_grab_01.zip": "5a8b924f2ee6ca1df969db616ddbb48ff7f2cd108864b3d405fa5c273b46faeb",
+ "/data/cropped_images_zips/s07/waffleiron_use_01.zip": "de7fc7c1e1d648293b735fb2c89103e057700dbb6960acb597ec8ee90278bd9f",
+ "/data/cropped_images_zips/s07/waffleiron_use_02.zip": "6b8f17dba03b83529800f01a9c9c6e3ee4c61e3aabfafeb934fd4429cfdece9d",
+ "/data/cropped_images_zips/s08/box_use_01.zip": "1a55c8285d3d353c1209a0dd2309aac964624a98dfd37116efe64dae713e45ca",
+ "/data/cropped_images_zips/s08/box_use_02.zip": "343a3f893118f3539df61ff6e964fc739a81c90ad9e8a1d9ee37c8d89f8632ff",
+ "/data/cropped_images_zips/s08/box_use_03.zip": "e6247f0986de14267b28583720fc1ea27b406c2c2bd72be0c07cac5157516f8f",
+ "/data/cropped_images_zips/s08/box_use_04.zip": "9e52940634949523dc2988bbc336f9d3e69273ff52c01e5b3f05df77d5359e90",
+ "/data/cropped_images_zips/s08/capsulemachine_use_01.zip": "6c10f595a2f40f52e02a01988d9a11112b53988a1aca2e09d42368050fdb0eef",
+ "/data/cropped_images_zips/s08/capsulemachine_use_02.zip": "e55c30a28542e40bdb05d2af86b800be8133f6b2b169c90bd62f4b77be6c1f90",
+ "/data/cropped_images_zips/s08/capsulemachine_use_03.zip": "b26b7de5296e1935e3dfba9b0a8fb2e9c17a4cbd5639d7b950989c26cfae43ea",
+ "/data/cropped_images_zips/s08/capsulemachine_use_04.zip": "6c02f5225aff26a6ee44a1c14141784d00f3a7e11ca7880e766ab5eb3b3b8a00",
+ "/data/cropped_images_zips/s08/espressomachine_use_01.zip": "2cf1753bdcb20ebc77d5423a2b47d07be9ba9d916b4315203fb6835b47f67043",
+ "/data/cropped_images_zips/s08/espressomachine_use_02.zip": "55ddf0ef52b8aaa3bf093c006501bee66aee305dd0cd63fb9ffd4168b11bb0b3",
+ "/data/cropped_images_zips/s08/espressomachine_use_03.zip": "a5b910d9f97723e2be5cc710cdde00d53c60470dc7f7f57606f533721394247e",
+ "/data/cropped_images_zips/s08/ketchup_use_01.zip": "43174098088099c778b9aee559abf55ba80e0d44f78c5e7b8c6aed7af58fcbfb",
+ "/data/cropped_images_zips/s08/ketchup_use_02.zip": "b299b34988c59400281c4fb243be799f18fab3f192dc388cc7adf421f3f8ce2e",
+ "/data/cropped_images_zips/s08/ketchup_use_03.zip": "0ec7082b6ce5c05448cab732d6305482bfd6a159a695d46f9b17ae42b2b1b3da",
+ "/data/cropped_images_zips/s08/ketchup_use_04.zip": "ee1226d03bd8187b015d2e6e19d8f1ffdb181e5e0d94d4603efa6e2c96e535fa",
+ "/data/cropped_images_zips/s08/laptop_use_01.zip": "2ea30aab8312c973cce5dc3295175ce7f6fd92accc257af7fde1d56070395e8e",
+ "/data/cropped_images_zips/s08/laptop_use_02.zip": "77ae6674c6c812e00612035c657cf7c05e73d323ff3e832f2b7280a156c254f8",
+ "/data/cropped_images_zips/s08/laptop_use_03.zip": "77cc8e7230171ea8ade757abe13738280a5b7108b651eb18f8baac8f1e957b66",
+ "/data/cropped_images_zips/s08/laptop_use_04.zip": "b2d75ce1adb703a88bc931e229bdf7ea514addce3002ac48b73df3bf08977e8d",
+ "/data/cropped_images_zips/s08/microwave_use_01.zip": "1a375ac2fd517b0178c62dcf2e0027c767509c3b84ad745edbaf0f81175445e8",
+ "/data/cropped_images_zips/s08/microwave_use_02.zip": "e09800b2bbd41f700efaaa31f997b910d2312094a010871b4545dd717d4a4c29",
+ "/data/cropped_images_zips/s08/microwave_use_03.zip": "a8af4d139acb00b99bfbfde8d0373dbad8b146df541336550ef5a43401fc3e78",
+ "/data/cropped_images_zips/s08/microwave_use_04.zip": "b8827ec824b0bbc117f65b2b2e36cb9468357c494312fd69143702c20283fca9",
+ "/data/cropped_images_zips/s08/mixer_use_01.zip": "088272a634fbacc41e6c5749c816df4ec389a07ed0694b693dc22b24c1e057b6",
+ "/data/cropped_images_zips/s08/mixer_use_02.zip": "e02d7346516c6ecb2a44d849e0d85181dea1510d6d25ea13dafceb79fd7d83fa",
+ "/data/cropped_images_zips/s08/mixer_use_03.zip": "ed5f0f8d7cf2dfdef3ac5e4231fe0d47ed9eb39c9d99b73f183501d739711e16",
+ "/data/cropped_images_zips/s08/mixer_use_04.zip": "209c014ef5200339a5bd1953b047eb608a3281405cdc473cc1c4c7c870b9fa56",
+ "/data/cropped_images_zips/s08/notebook_use_01.zip": "386c533f4e49e030987459cdc4d5b84bf906cee492575a64037c279a35b594a0",
+ "/data/cropped_images_zips/s08/notebook_use_02.zip": "7027e5d0dbf92224e9af626bb68eb6dc7f9ff7cb9f51e7620d06ec452095163e",
+ "/data/cropped_images_zips/s08/notebook_use_03.zip": "d19d48c74bab2890d72b79bc2bd1191e65d8dfeba2113a5f0823829aced44cf4",
+ "/data/cropped_images_zips/s08/notebook_use_04.zip": "ae25d15de161bed5be8d5bdbda5acd4b1b8ec7ef215d30cf93e8d5a3dc27c3aa",
+ "/data/cropped_images_zips/s08/phone_use_01.zip": "de4716d844e13e3f090c34528dd9f101a230d644c045058f31386d2c57f5176b",
+ "/data/cropped_images_zips/s08/phone_use_02.zip": "959abe12912290b4a5df4270b24dd2eb98fcaf45dc1cc59c4c71fac2cc50055b",
+ "/data/cropped_images_zips/s08/phone_use_03.zip": "10abbcef86166b779723bede58c418722f7e678ed991ecf328a1fe0867310eb8",
+ "/data/cropped_images_zips/s08/phone_use_04.zip": "cd04dcf0393dd281be03e1f1e81d107fb32c8aaf04927c4314db224b548336e4",
+ "/data/cropped_images_zips/s08/scissors_use_01.zip": "329afef4a2c72fe8d5dedf6634270142c7bb430f44e0ce6aad192f0cc51263ad",
+ "/data/cropped_images_zips/s08/scissors_use_03.zip": "1fd7a45d5612bd9f401faafcda29ec3b6daae2b0b69762942e6b51b3c6e7e885",
+ "/data/cropped_images_zips/s08/scissors_use_04.zip": "c801ba1ec8ef80413726cdc13ca3b4b216773c3f9ed872ecdc0a7515e3d22881",
+ "/data/cropped_images_zips/s08/waffleiron_use_01.zip": "af20b1970d4a9d3c1acf34c971b3868e9c3e1b8fb517ac080e2d3ec073c05d0b",
+ "/data/cropped_images_zips/s08/waffleiron_use_02.zip": "08dbdbacfb4225acb7cecf2a64f7934949cc3b9f99f151a06cc38c73e8917697",
+ "/data/cropped_images_zips/s08/waffleiron_use_03.zip": "80b747cb0fe0ca6d2f24db6f2fad1a6ab6e06973bc714db18d15ab28659e1517",
+ "/data/cropped_images_zips/s08/waffleiron_use_04.zip": "1af8d979213e4f4002b0b066be5f41f265eaf4c36b39739748522bdd4fc7bfc8",
+ "/data/cropped_images_zips/s09/capsulemachine_use_01.zip": "aa8d682acc8af081f662f4af5fd7b6f7f01be8f22e08db11cdbaf707824e5c92",
+ "/data/cropped_images_zips/s09/capsulemachine_use_02.zip": "a533b1bbdf5fe09c1ba5dc0cbff719e6eafda89cd7f3af435490cc6ac76ba7d0",
+ "/data/cropped_images_zips/s09/capsulemachine_use_03.zip": "2f35843e1bf8301cd593d8a23865d44769197055261aee78301e047533693b09",
+ "/data/cropped_images_zips/s09/capsulemachine_use_04.zip": "ede112b37696e4cf8a19ad82db4135a2868e23404d4ccf5d311b873ba6aff3ff",
+ "/data/cropped_images_zips/s09/espressomachine_use_01.zip": "215eaa7ada54e3dd561b833f16dd81f9a89761939549b58578f48eb9ebc74243",
+ "/data/cropped_images_zips/s09/espressomachine_use_02.zip": "d75c19634f2c4faabfcba681c1cac31346b2e3ad44a150903b72b08f2959a9eb",
+ "/data/cropped_images_zips/s09/espressomachine_use_03.zip": "886c488549e3e3fd840bfb9512398bd5071eaec3e5478c886a2121bc276cd89a",
+ "/data/cropped_images_zips/s09/espressomachine_use_04.zip": "11fac0da9804b0a0b381646eaa31ee25b63cf5b13380e684975157c605497adf",
+ "/data/cropped_images_zips/s09/ketchup_use_01.zip": "8b21b8f42bc066738159ea5f1c0a60bd9659937f59c33ccae9ba6d2c09eb2c9f",
+ "/data/cropped_images_zips/s09/ketchup_use_02.zip": "d42684a1916447291a5165182d5d8f315152c5518bccbcc11969320d0de063c7",
+ "/data/cropped_images_zips/s09/ketchup_use_03.zip": "c33902ec5edf5cd600a7b85ff5bae98e93e332280756c1900779c514d99e9ea8",
+ "/data/cropped_images_zips/s09/ketchup_use_04.zip": "5b12db0c9be0d8e860e9fd86bce819b14f1d8e08d8cc6d2ecd09fdeedfe9e17d",
+ "/data/cropped_images_zips/s09/laptop_use_01.zip": "ac5bebd92cba2df99fe87931c5436269dd44de430752b72de074a74526179d81",
+ "/data/cropped_images_zips/s09/laptop_use_02.zip": "46159551288169c5d0cc3651b872e76722d7a04eba4289489e357f706369f322",
+ "/data/cropped_images_zips/s09/laptop_use_03.zip": "5b1f79152099effa945fb91ccd2f4eb2a1617d82b52e836484b6e915ae4e59b6",
+ "/data/cropped_images_zips/s09/laptop_use_04.zip": "c271e74164327c7f7265ffc8379e103e513dec17630d194106493aa1fbbb6ebc",
+ "/data/cropped_images_zips/s09/microwave_use_01.zip": "d8bc7d3b9c347d7cea5d152bee23d4586d62df3cf9d0e9ba9a0062744cd837f8",
+ "/data/cropped_images_zips/s09/microwave_use_02.zip": "854afd77b87875b763b98dc611713a1c9dd2f2f765eb6a6b9e25a355d6d8db76",
+ "/data/cropped_images_zips/s09/microwave_use_03.zip": "411d97ae72559ff3decff9a978a6c2e73376846db2c8efc10310d4974aa75f1f",
+ "/data/cropped_images_zips/s09/microwave_use_04.zip": "0d2997b712dbd65652af2a0b419f443af04bf1d61f458a36a9e0dc5bdb8c1ee5",
+ "/data/cropped_images_zips/s09/mixer_use_01.zip": "7fe27956d1def30be12bc89577304370221636cf9a29c37762b0a875d055f448",
+ "/data/cropped_images_zips/s09/mixer_use_02.zip": "d21d8f1c45a99547ebca99963530d1035b712c1fd0cf514563f08a79d6881016",
+ "/data/cropped_images_zips/s09/mixer_use_03.zip": "2c44a618bbbb1870abcad32b53e6f30437e817f6213d2566c4db5b6bc6d29436",
+ "/data/cropped_images_zips/s09/mixer_use_04.zip": "a9cb0e62d225d36b4fee8c63b4ada9518c52f051efc22d64e5c97438701bead4",
+ "/data/cropped_images_zips/s09/notebook_use_01.zip": "dba5777b356ba46e743fb8d97a4cce90959bc5489dca265d813ee69dafe6cbb1",
+ "/data/cropped_images_zips/s09/notebook_use_02.zip": "7e1acf7df53a2c3b849f3778f861aef689e85a161bb5059e695f41c4d5115c4f",
+ "/data/cropped_images_zips/s09/notebook_use_03.zip": "eca2dd2127299322691c5e6a25812e5108a28559b0ac1ebdd3cf9c3ffa16e7ef",
+ "/data/cropped_images_zips/s09/notebook_use_04.zip": "2baf1e09e673219c89886fd855b2a7888ebdcd0261c200669a727b6a03faee24",
+ "/data/cropped_images_zips/s09/phone_use_02.zip": "db65a80ea1bdc049563396aac3fdfa653e18e06e0f35313945c35875ada87c70",
+ "/data/cropped_images_zips/s09/phone_use_03.zip": "1dc78581b1c1581d2b0928471b430cb358654018627400ddb83dd0e99367680a",
+ "/data/cropped_images_zips/s09/scissors_use_01.zip": "a5add3870e1e780dcdec70981105422e013a41b8ed3d5bb60587584b90823078",
+ "/data/cropped_images_zips/s09/scissors_use_02.zip": "d3b00fbb0e604a6a4487d7fff876c153100d6112c3bc4a3a49628e13d7b0734c",
+ "/data/cropped_images_zips/s09/scissors_use_04.zip": "33450eb69caf7731213b642b7027010054ab4b8ae4b155aa9b710bc0ed81ff52",
+ "/data/cropped_images_zips/s09/waffleiron_use_01.zip": "106fda8d8bbc2b384d0acb8ebd6656b1748df9841fc82fecb33c66bec5761eea",
+ "/data/cropped_images_zips/s09/waffleiron_use_02.zip": "941ab93be999beac2a289ac58fe97d0ccbb504dc2022108a736ce0b0f04a79de",
+ "/data/cropped_images_zips/s09/waffleiron_use_03.zip": "b99d6d085806e1c586d1d559bf5efe300db96fdf24271578c5670e90b8bf296b",
+ "/data/cropped_images_zips/s09/waffleiron_use_04.zip": "9b8680a319bd5fc6a2dcc9a0ef5e5481148feaf3d93126ef86d902333077a4a7",
+ "/data/cropped_images_zips/s10/box_use_01.zip": "c48e46eeb92abd5be267098540c6c46afed098c5ed3be8fcf6c505806df0f2df",
+ "/data/cropped_images_zips/s10/box_use_02.zip": "508ff2c5f5b3bb0c7d8b3c052c33027ec2fd85e5ca25d1aca8dd7e6846d40a22",
+ "/data/cropped_images_zips/s10/capsulemachine_use_01.zip": "332eaf11bce46a87061791c1deb100dbfbcd0016cea2dbddfeb95a641b64c09d",
+ "/data/cropped_images_zips/s10/capsulemachine_use_02.zip": "97f2a0c1ca30ed19e45076652837f007f02d645bfdb290e71efcc7185c66a3f4",
+ "/data/cropped_images_zips/s10/espressomachine_use_02.zip": "e633cb3113971153b3c3873e25513429110c4c260833e83ac91830e9a2fc0a1c",
+ "/data/cropped_images_zips/s10/ketchup_grab_01.zip": "35d4932574a30d667012dae22a78a944eb268756a5f22f0d3d99d34aa75fddad",
+ "/data/cropped_images_zips/s10/ketchup_use_01.zip": "d1a848a0fc8a393852d60dfc63c23743af77b4aa705eeb2e9752cba8f7f2059c",
+ "/data/cropped_images_zips/s10/ketchup_use_02.zip": "9e79868cffbb0e4b031e2c70641485e1615ce969e23ce64b108abf10e66d2148",
+ "/data/cropped_images_zips/s10/laptop_use_01.zip": "f2478046973425bc3f10249689ed38072e4cdf2a572338e6f6055a8b986bfe73",
+ "/data/cropped_images_zips/s10/laptop_use_02.zip": "0cfc19578322900be356eb9df77a6076588fa60dfd123bae527c755e3960664c",
+ "/data/cropped_images_zips/s10/mixer_use_01.zip": "7da1fd5dcdd4c9ec72a63597b51b875fb71e595f388fc909284ae324b66e9708",
+ "/data/cropped_images_zips/s10/mixer_use_02.zip": "c1f84ca05de82d9aba99fa1bb4285c12a4ddc96f99ed7df455c2d1f4d1e25c55",
+ "/data/cropped_images_zips/s10/notebook_use_01.zip": "82a65cbf5da18ab8e27a569f4f1ff9ef1d6897732d5543275e08c7677ad0447e",
+ "/data/cropped_images_zips/s10/notebook_use_02.zip": "fb829e1cff66537797f5db29165c871ff076e6ead1704b44ddae14a7dd23ac34",
+ "/data/cropped_images_zips/s10/phone_grab_01.zip": "ea54eb09f433e3b0482ab0d1c7625f5307b69e44576f3e6f5d7072765ee33fb6",
+ "/data/cropped_images_zips/s10/phone_use_01_retake.zip": "5a70f41036c551de459804276b05f5e25c8b41f414d91d8b63f2ac8950d4795c",
+ "/data/cropped_images_zips/s10/phone_use_02.zip": "7d7e8e519538870de4186f6dc92867f7dcb6060b06564b94dc73cb0346eac5a9",
+ "/data/cropped_images_zips/s10/scissors_use_02.zip": "3f6600d506a1e35ae3d3dc6f1611fba5b62bdfc1373838662a686bc59ccfd6a9",
+ "/data/cropped_images_zips/s10/waffleiron_grab_01.zip": "71719c6d2a43d8c585ea3fe983a4a8fa26140eb5c7b36c53cef74d2443f9754d",
+ "/data/cropped_images_zips/s10/waffleiron_use_01.zip": "b63731fa3c19410717fa2ebc557a55d549bf21c27a0d786aad290743fe30817d",
+ "/data/cropped_images_zips/s10/waffleiron_use_02.zip": "0358f2d5b5248c254d8d98fcb31578b95f206a72aa73dc39a12a9df408acb988",
+ "/data/feat.zip": "8b5fede5424af61fbb538915a98a552cf917c9ccff3e320fc5e49a07d06137ac",
+ "/data/images_zips/s01/box_grab_01.zip": "447a3b7cd9508d86300ce860e47e8bf1f054239cc927523deb1424bc073a2f82",
+ "/data/images_zips/s01/box_use_01.zip": "4f30f582a300450e0bfdc79c1ac5379921364fef9673bf7559ba67e5ef1ed63f",
+ "/data/images_zips/s01/box_use_02.zip": "ce977833b0a2b7a678f223aa658818bbbe5c47407e7baf7235c9030cd2cac3ea",
+ "/data/images_zips/s01/capsulemachine_grab_01.zip": "07eea5cbd604ef7e2457899ba0500e6bc67f1f621f0642a4b1f04458618f13a9",
+ "/data/images_zips/s01/capsulemachine_use_01.zip": "69703db7022d8163997038d2eaef000d5ac15e4edcb91f1e736ea67084053f3d",
+ "/data/images_zips/s01/capsulemachine_use_02.zip": "e98eafd2905f3c9ea43aa8c6f08bdff12d4098655a164a592b2c16f829839106",
+ "/data/images_zips/s01/espressomachine_grab_01.zip": "37dcca3c873021a3778d6d024ef1a1bf3d1c19a55c165c9c45bc9b465ce08bcf",
+ "/data/images_zips/s01/espressomachine_use_01.zip": "1c50f599dd62c1ea70c0f052fc92ea8c7e9fc2929b49235178eac07f6038680d",
+ "/data/images_zips/s01/espressomachine_use_02.zip": "71b2efa1abf736c072a72b89c153ce8cba24a3c6960a2cfd00f0280eb92b2e40",
+ "/data/images_zips/s01/ketchup_grab_01.zip": "aa7c4d38724ed4af2f4321cf147798045eac469b3ab82a33054fb671adb000fc",
+ "/data/images_zips/s01/ketchup_use_01.zip": "5b367327e567a2d568af1c64df5cdef918109e44eda37528c96e616a34785e65",
+ "/data/images_zips/s01/ketchup_use_02.zip": "085e565c7bd132511e5b1118846eb7cb2ad83d1ee7464267e2f6edb96c5ee3c3",
+ "/data/images_zips/s01/laptop_grab_01.zip": "a69427381c93141de107ea059c1c447b06edbfcfff816d4c0f302fd2f99224ea",
+ "/data/images_zips/s01/laptop_use_01.zip": "781d15083274e06a111ef87f5e6344f91d88e5344b786f96bd10d5d3d65bfa71",
+ "/data/images_zips/s01/laptop_use_02.zip": "134c5738295fff0919179ac7170c81b5314935087bf8404b5bdfd9a355c2a226",
+ "/data/images_zips/s01/laptop_use_03.zip": "2140a132bd5d5d7a94d77945c06507b3653a7f78b666c4a699896efbe4a4c29e",
+ "/data/images_zips/s01/laptop_use_04.zip": "4ccb60873021623e30dca3ffb0fee6dcefa71745c127007319b9d8d0015a44dc",
+ "/data/images_zips/s01/microwave_grab_01.zip": "1d3dfa9708c51fb04ef22bf2f06c4112632349f82d30086f2726424f1107a84f",
+ "/data/images_zips/s01/microwave_use_01.zip": "2dbb40962a9493147c72aa5967caefb529954bf06e02506410b7c291a7dc2180",
+ "/data/images_zips/s01/microwave_use_02.zip": "71fd43cca294ae853700c52c299299b265d802d6d50a2b46f72cb43edcb6276b",
+ "/data/images_zips/s01/mixer_grab_01.zip": "caff4fcf6f647fed90bc830a57b10c96173fdfc3ce1f1c947b795d3f74b4e3fd",
+ "/data/images_zips/s01/mixer_use_01.zip": "18183b300dc2ef764ba733ebd44bb18da29612c89dc052ce574df87cb173ea83",
+ "/data/images_zips/s01/mixer_use_02.zip": "5476cabb20627b3a749be4b416d334e164c0cef79943fdc11920cc26f14d8cf4",
+ "/data/images_zips/s01/notebook_use_01.zip": "ea058b8ae2c197b4637400cf212c13e5dc90cfafed2e5bbdac2d090515f75a58",
+ "/data/images_zips/s01/notebook_use_02.zip": "abd67d734bc09d947e117369288d0ffdcb7600402b0a12b7cd1aa05a82fc2143",
+ "/data/images_zips/s01/phone_grab_01.zip": "1395ad6f94097d5db43936ae8ea611fe1ad0d9df60b82561d4600702a663d2b7",
+ "/data/images_zips/s01/phone_use_01.zip": "25c94f6dcd11c0acfddc0aec01d8f1fc434b1846844a5ef6a8416342dc0face4",
+ "/data/images_zips/s01/phone_use_02.zip": "a9d8525f8b04dabba555acc3ed1e4b5b3fd4b55c088719bc2c80bdefae088345",
+ "/data/images_zips/s01/scissors_grab_01.zip": "c7f22168c1ac1b958945b1579e5b00f5360ffeb6f277b01431827999ce0dc440",
+ "/data/images_zips/s01/scissors_use_01.zip": "8134c176a34a03a593e9d74e6f033a247fe1adf4430633d01c588c282d5df732",
+ "/data/images_zips/s01/scissors_use_02.zip": "dd6519567aa9bf6ae9023e9b9faeebe7825c9cd015cd1b3a0d52f98db7f6d582",
+ "/data/images_zips/s01/waffleiron_grab_01.zip": "2305c404cf87a1e212d3c3fd2863a2f269e69af9a299edcf3fe8a6629711a3dd",
+ "/data/images_zips/s01/waffleiron_use_01.zip": "66b2efd559dab9786d4e1708f6766be9bda78b78cfe72213306f74b3564662a2",
+ "/data/images_zips/s01/waffleiron_use_02.zip": "ff39446dc9bd6cf14cbf6ac229b77b9bcf21bb32c829b72e5931b1daff3ffa96",
+ "/data/images_zips/s02/box_grab_01.zip": "208587e8b7ea2e2f2368d27af459d43a6323b252daf9cf79452ee68264ef3180",
+ "/data/images_zips/s02/box_use_01.zip": "57c638decaa3d9283db92d6030accb252720a153ae572180fa298be6ac64f56d",
+ "/data/images_zips/s02/box_use_02.zip": "0ae3bb70a94b71f279bbe13f8b49eebf6c8af2a5c7edd172c95a8a9e14507bce",
+ "/data/images_zips/s02/capsulemachine_grab_01.zip": "081a09e61cdd5c7adf0f24e94c3619aefa3a8b1bfb496d23418619142f1547cb",
+ "/data/images_zips/s02/capsulemachine_use_01.zip": "ade958a8fd97552448d8694a18bdc9835e87e5abd391fb49263086853981bf17",
+ "/data/images_zips/s02/capsulemachine_use_02.zip": "e9a03c9a11f404419add62176ed799cf0848a66c147dc1f7141055896356aedc",
+ "/data/images_zips/s02/espressomachine_grab_01.zip": "5aca5dea96ab563feef83f3d4f838f597a79a95cd54b981b38cb9d7c9e871c11",
+ "/data/images_zips/s02/espressomachine_use_01.zip": "d904ed280b9eca056573d5c18106592232b44d5f565dcfbcc53611b1233d5fb5",
+ "/data/images_zips/s02/espressomachine_use_02.zip": "bb9df76b57567a07ef70d5fb3df1a7dc49e755a582dad066ea8e0a312fe447f0",
+ "/data/images_zips/s02/ketchup_grab_01.zip": "67bee5b1bbc810ecbc3f51c8ede2a9c1c50f7304bf3e47cc417554357b3481d0",
+ "/data/images_zips/s02/ketchup_use_01.zip": "798dbb45bf70224a167345b57afe9c9d191072d9361d53095592bb9bf6218d04",
+ "/data/images_zips/s02/ketchup_use_02.zip": "091d774822129920cf4d77202b0a36c21fa0df2ea72dc7e56cd7c2e5e131035d",
+ "/data/images_zips/s02/ketchup_use_03.zip": "7c804bf85bf86ace199e970ab4ea829f0135a95a7ac7a1727516eea46141ed93",
+ "/data/images_zips/s02/ketchup_use_04.zip": "14ebbfcab2262d439c0d473135d9acd6c9df0b3b7bb256973e7ff405894f274c",
+ "/data/images_zips/s02/laptop_grab_01.zip": "d161246a2c03e9c0561e6aead79f7cc401cb7da55bc1767c398a0300b9289f4e",
+ "/data/images_zips/s02/laptop_use_01.zip": "d3b8871d206f9481a3cc2f32b09afbf0de62a67a003bbe9cd649cfa7ae8def5a",
+ "/data/images_zips/s02/laptop_use_02.zip": "63ad02819c82785039308c613e014b66afc48d6f6717faa6172c1a60098d3b63",
+ "/data/images_zips/s02/microwave_grab_01.zip": "1d7f21707970cfa0d86b3a53873dcfbc20530835f4ed1a0855dc28e066a5ac2b",
+ "/data/images_zips/s02/microwave_use_01.zip": "baa91b46a089a27ea70d93a02c8d3c6c128de5e089eb975b48a4cb74994e568f",
+ "/data/images_zips/s02/microwave_use_01_retake.zip": "bd8541483248f1e8b49fbc69aec0f91b3a0695986fb0f23237e80c30cb2e10b9",
+ "/data/images_zips/s02/microwave_use_02.zip": "d32e08788a6c028242c9dc7094fe744e84a199e2592f5f81c21a3b71d5d4ec33",
+ "/data/images_zips/s02/mixer_grab_01.zip": "eb694eb8f14ce55214b0d6b71856644a3002ee2df1c2cf9c4cd9ae12e6ec049f",
+ "/data/images_zips/s02/mixer_use_01.zip": "d296d815aa67ab69568d12a7ca1a30a7c0f1ee086ba3752ebd058a8495937bd9",
+ "/data/images_zips/s02/mixer_use_02.zip": "82e10d86ba812c3018f8f43162356a4cd3311e846265ff98c2a6c85f6a0a59ce",
+ "/data/images_zips/s02/mixer_use_03.zip": "4ec2625269e70132c5b81efef654599f18addf75431828c08ddfe102dc6ae363",
+ "/data/images_zips/s02/mixer_use_04.zip": "1cad45a006e3381a96beeb8a20123113c9c55697164d541ad455745aec7be38e",
+ "/data/images_zips/s02/notebook_use_01.zip": "077e20788e57875334a68b2bff684dd538e6d7974dd9d2126747d7093124afc4",
+ "/data/images_zips/s02/notebook_use_02.zip": "118e06111842b73d348b39f7b7f04d2eddd21b8f78145d20b889da880de1c0f6",
+ "/data/images_zips/s02/phone_grab_01.zip": "b8ff7faeb5b9c8efad5dc35a93a8d67ec729e33bd4bfd65f36759ad0057e3aec",
+ "/data/images_zips/s02/phone_use_01.zip": "7bde76466b0417f2205eec7e81cbcf09b2584554d36ca7376d927e746cde7552",
+ "/data/images_zips/s02/phone_use_02.zip": "c8f00e8fb7451fd39f234a0c5fba34cddb39c4ecca99e889e87600d385e94d51",
+ "/data/images_zips/s02/scissors_grab_01.zip": "5a82611cc8679b5b38289a90011c8e13312406d90cd42788f2b31e55c242314b",
+ "/data/images_zips/s02/scissors_use_01.zip": "11905f132f520b8d7d1c3e92b8e5b75fad6cfd7a1bfe79dfb4fce50635f53699",
+ "/data/images_zips/s02/scissors_use_02.zip": "d3c36a42c8d232ed47fe6efe466cb9a2a86c8460557f3b5eb29b5dd785482261",
+ "/data/images_zips/s02/waffleiron_grab_01.zip": "362c12c39ef509e822ac8c65b395f48fe7be3ab53b34d9ec35f0516c43d0b112",
+ "/data/images_zips/s02/waffleiron_use_01.zip": "d575a2ec310bfccdb457625a70a9ab1c89995bae834c6079603e7d621bdac7bf",
+ "/data/images_zips/s02/waffleiron_use_02.zip": "75be89d1b5bb32cf5df6268466b60e9d4041faf17f030b2af506467a49ebb270",
+ "/data/images_zips/s03/box_grab_01.zip": "04dba856c3fc6f3793d4fa385bff9ad4a35a743f79e7b57d3c7169cb30da1542",
+ "/data/images_zips/s03/box_use_01.zip": "2a1181c96081b3d3ed387befb9a3ff37d1da7b74e567020d825fa28408f6135f",
+ "/data/images_zips/s03/box_use_01_retake.zip": "b4a222a02583429834505379e956860ce910b8223f695954c757a87a2628061e",
+ "/data/images_zips/s03/box_use_02.zip": "33d36b90e3773170851aba607e2406a027ebfcec681ff3f5b9bf340ac171f8ef",
+ "/data/images_zips/s03/capsulemachine_grab_01.zip": "19d84c3a6ec1788c1b27c232311d2a99aa2a025de913770bcd230576d17a405c",
+ "/data/images_zips/s03/capsulemachine_use_01.zip": "c007a2430407f1882b4040462da456eef1465db50b12d63d35158f667036bc90",
+ "/data/images_zips/s03/capsulemachine_use_02.zip": "932114be57e264676453f1e25e23582290747a0c4bb0b2702d45e204a9dd6496",
+ "/data/images_zips/s03/capsulemachine_use_03.zip": "4e302374e20646240b89d84cd126e6ebd3520ede7b98ead533bafd4ae0fae0fe",
+ "/data/images_zips/s03/capsulemachine_use_04.zip": "290da2473d1de4c0fde23bf0f93b499fca2e68688fc0aa48f707ff7e3bb39779",
+ "/data/images_zips/s03/espressomachine_grab_01.zip": "f8941b48f69c554a3f17cf05505ee8dcd88e06f3b7c76591b30bf60827eb72ae",
+ "/data/images_zips/s03/espressomachine_use_01.zip": "91df87fa3c503db27b6ad9fabfdbbee0eb634d5e272058c172a775f630d3bdb6",
+ "/data/images_zips/s03/espressomachine_use_02_retake.zip": "4778a7045fb0d4e1538ed5b13ac30083bd335b9a6794bf4de3f83ad55f7f97d7",
+ "/data/images_zips/s03/ketchup_grab_01.zip": "ed84c9cf9772a78904c70bc7229729b7b9cf2f19c3a79ede667e3e94f493639b",
+ "/data/images_zips/s03/ketchup_use_01.zip": "ab4f432fa2672287960d458074b70217feaad2dbef6766e249d0e31b83d983ac",
+ "/data/images_zips/s03/ketchup_use_02.zip": "5319c8aef0738ee6a8291d99bb14349a348804c2eb56ffda88470493fc41d81c",
+ "/data/images_zips/s03/laptop_grab_01.zip": "192a0b17232954c3d8c91d5a7af19a5ad51204016177c59ccf985b8314877c08",
+ "/data/images_zips/s03/laptop_use_01.zip": "4e47e109168c101e651c532140fd66a41c3274aa835fe2ec9ccc20e27eb91dc8",
+ "/data/images_zips/s03/laptop_use_02.zip": "aa730fd6d900b21f1e2808ae451f6b55a8671b15a3c4048fe74d98ac4e448a34",
+ "/data/images_zips/s03/microwave_grab_01.zip": "e9aca38076c9f92fdecd496c4a21069828df38dbc03fb8ec38dcd1ec70d9161b",
+ "/data/images_zips/s03/microwave_use_01.zip": "9b7d896176f32dc0d9ee84b591457c1759fba204e8c46527f4bfcd77b2f6068f",
+ "/data/images_zips/s03/microwave_use_02.zip": "b556073028053e8704c36c0119970235b7b23ded8fe7e388b890e77bf6573403",
+ "/data/images_zips/s03/mixer_grab_01.zip": "0799f8f148228696eab6164a3d0eaa6c628140c3aa5a34c34502c49996393f92",
+ "/data/images_zips/s03/mixer_use_01.zip": "811c5fa75ee09e9796a570192347e293150f112a5e379652a870993bb0139a2d",
+ "/data/images_zips/s03/mixer_use_02.zip": "7cc505bf67819bd6836c099c5a08c66b874c73f678938a209f2b3966d222a090",
+ "/data/images_zips/s03/notebook_grab_01.zip": "da203142657b40d681116b0c354aae92dfa25d70dd9871fb13dcb7934222375e",
+ "/data/images_zips/s03/notebook_use_01.zip": "8e31f8c1951bb72d1805725d4f7bc8d6e9de2d31e80f1692e24f3a8a95eee456",
+ "/data/images_zips/s03/notebook_use_02.zip": "4288152984b403c5d8dedfc3b47ee63d0a59a419458ccfa3e1fab83186164d89",
+ "/data/images_zips/s03/notebook_use_03.zip": "e8495819fbc74d30edf2143069d4b1d33488c09b16a854bb605fb42bbf8979ec",
+ "/data/images_zips/s03/notebook_use_04.zip": "c96360446969a300edd6d090e27cb16b18cb648c80de40ccc940577c7ed5b3cb",
+ "/data/images_zips/s03/phone_grab_01.zip": "81c8152a10d801803d791213fd5f69d339cf939ee7195ad335a7b3fcc6eb3e7d",
+ "/data/images_zips/s03/phone_use_01_retake.zip": "af5ad5ac733d04b32db09f58a382c7074129d7ddccb8428632d84ed9d22f7052",
+ "/data/images_zips/s03/phone_use_02.zip": "7008cbcba96d199d6bf00629348d423fef57f8d252b177d9cfa829226fb0e951",
+ "/data/images_zips/s03/scissors_grab_01.zip": "bd4dbed416c7d75782db434a8866b3be39f68a520aabba754310dc5931cbc848",
+ "/data/images_zips/s03/scissors_use_01.zip": "56b77a136b85301ce33bedf68c1727a8cef5cbaba6a6d32974de4c6b4b3b316d",
+ "/data/images_zips/s03/scissors_use_02.zip": "b3a2d0346a0377f4e57990a8a7757061c6516540d6f27f9fc19494fb940084cc",
+ "/data/images_zips/s03/waffleiron_grab_01.zip": "d789e88232ce84b11ea05bae21e22f5ad596009d1fc514b290056792a8d5a136",
+ "/data/images_zips/s03/waffleiron_use_01.zip": "96e82043ce5cbdf3534317086a68888a207c64a991ed189ec8bf14dfa00e6b34",
+ "/data/images_zips/s03/waffleiron_use_02.zip": "9ebf1172764859cefc27cfb6e72299da3dcbfd06ea0deb1c6e3f9b165bf73d73",
+ "/data/images_zips/s04/box_grab_01.zip": "4889a7ca23f4d43ade74c79141bf075b406c730ddcc4f2c6f14249d573d76d54",
+ "/data/images_zips/s04/box_use_01.zip": "ada2aaf061439f3fc5c399578d5b82e953c4aafb114790eb25c6794739ff453a",
+ "/data/images_zips/s04/box_use_02.zip": "02f1b58728ee14ab03c9fb184ac66d347869077a2a6c011a1794abecd2dade85",
+ "/data/images_zips/s04/capsulemachine_grab_01.zip": "5c08dd286a21110d21b67f631c8f7f28a5e005a75566762fa5aba598dc1c3be2",
+ "/data/images_zips/s04/capsulemachine_use_01.zip": "43d55f35ade0648a3143d6b94e3b71eddae948eaf06133cce97cdba8fd29494b",
+ "/data/images_zips/s04/capsulemachine_use_02.zip": "d22fe13fff579bee5a617cc7a8d0dd51634c3d1a8900730b59eda78a359061cc",
+ "/data/images_zips/s04/espressomachine_grab_01.zip": "17c599451f5c5f59829dcb85367caa2e3bb56626a6def885af1b9ad091bcb61c",
+ "/data/images_zips/s04/espressomachine_use_01_retake.zip": "480e5621e50c44ceaffd68146243b7ab487997161711648ce68e6e2848c0937e",
+ "/data/images_zips/s04/espressomachine_use_02.zip": "7522e700c95152dce9ba316e82e0cebae43078fad099dc2880808841b1feeae0",
+ "/data/images_zips/s04/ketchup_grab_01.zip": "47dce7f8ca9e0bece3478373e7cccbfdb04becfff3467529fa8ba86e6d1658b9",
+ "/data/images_zips/s04/ketchup_use_02.zip": "46e6473b52dc337a97c4515231a5efdc99c5aec83520a591aa6d6908beb0a31b",
+ "/data/images_zips/s04/laptop_grab_01.zip": "16efb870f1fbc920bdd62ca4202974e3a40c843234a2869df9eeb9154c47d12c",
+ "/data/images_zips/s04/laptop_use_01.zip": "097fedf590668298b3b184a8b3d48700670b0371d3ffa57792e5509fdec77df3",
+ "/data/images_zips/s04/laptop_use_02.zip": "4bdb5f23fd86b1cd8c8692172a521b50632032f353f612eef52a5106d9ba28a3",
+ "/data/images_zips/s04/microwave_grab_01.zip": "5300224df25eb15c69ed33c3fb5c5dfa6e103f5c5ed4580b71f43592b02dae40",
+ "/data/images_zips/s04/microwave_use_01.zip": "5dd1df08f052d0b1e603516b344561b38717bb7b7ccf6a965bf0ba047f8029ff",
+ "/data/images_zips/s04/microwave_use_02.zip": "97adcc4271fcd2640311dbfd638ca7619a09609af52d799bb31461a18c26d359",
+ "/data/images_zips/s04/mixer_grab_01.zip": "b63e901912ef05c10a8318e82384084f807f7218f25c2b56d19e30e0325e518f",
+ "/data/images_zips/s04/mixer_use_01.zip": "c85baab0457394b7de119a71b322826d8452eeb5d2e79de25ba4957245fe33e4",
+ "/data/images_zips/s04/mixer_use_02.zip": "b14982646821465340d0479e138b591c8dde98129951cac0c33a458b3be0098d",
+ "/data/images_zips/s04/notebook_grab_01.zip": "91c6b37228ae7e61215d9bb4056fb5a716390d9681d3b7d297ffaa2322ad1286",
+ "/data/images_zips/s04/notebook_use_01.zip": "bde12acbd2ffce861eeef4bad849396318d4b69efd666c02f5cd704f4ad44563",
+ "/data/images_zips/s04/notebook_use_02.zip": "d807be6a5c7e83675084e489f0c791fca96f8a42db7796c1aceefcc7018a4dea",
+ "/data/images_zips/s04/phone_grab_01.zip": "4d7c7913375c8ac170d41c7a5a230589e0d0d0a9e6b928cf608bfa4d7eea813e",
+ "/data/images_zips/s04/phone_use_01.zip": "d9f50f1c5f4cc26040201de7f792d16e0e58e44419b538b2846e802e3ca5d1f4",
+ "/data/images_zips/s04/phone_use_02.zip": "b531e4d5ef6470f01542cb0d008c5f8df0f84f8c7111a2ddb1f8654599ba7cfd",
+ "/data/images_zips/s04/scissors_grab_01.zip": "15580ad7db38d5835da89a913ac2310608c1ba0124a2a043b962707da8ad7eca",
+ "/data/images_zips/s04/scissors_use_01.zip": "fdde8105ef804de6ddccf6fc57db366f828b093e03b23b98893d65e217d39d60",
+ "/data/images_zips/s04/scissors_use_02.zip": "3e4466b82186a8389e9551a6516c2a14ba7ac59d598d5091507b79beeb648d9b",
+ "/data/images_zips/s04/waffleiron_use_01.zip": "0513f248e46e6d7d0c4f107edd0651e6753c014e46ffee629a7d5e1e4f268d09",
+ "/data/images_zips/s04/waffleiron_use_02.zip": "e7d1e4dfb06af0c8fa9b6e964c1b61c74157e02a85ba63f92151c38053209f6e",
+ "/data/images_zips/s05/box_grab_01.zip": "82b00ecb05a7347e7cc87876779ebe2368ace2bda2b331b69d2b7ff4f993feff",
+ "/data/images_zips/s05/box_use_01.zip": "95d69b1b8408251f133ba28d40f60f303fe189dff5c07ffb8f312c1a57d8b8f6",
+ "/data/images_zips/s05/box_use_02.zip": "8edb8d44406673c57a7acdd63bc5dba66536dd1305f96b07621181f0ac1f587e",
+ "/data/images_zips/s05/capsulemachine_grab_01.zip": "fb37330f270914dbad54f3dc76e40eff18f620bdae0eff5ab4df066e52c469fa",
+ "/data/images_zips/s05/capsulemachine_use_01.zip": "1d7a07ee780546d8eae57ffba2591c4861569397cc18b71d1b46ee1cad5f5d6c",
+ "/data/images_zips/s05/capsulemachine_use_02.zip": "c8674bd6baf097017acaec39d2cd5200196b7f2a8fed61ea6ec1c9e247f3b80d",
+ "/data/images_zips/s05/espressomachine_grab_01.zip": "5545b01fc51fecc35eb83d81743c0f54a9ddb7f7cdfa65e2ee9ad6d1f4ece4e4",
+ "/data/images_zips/s05/espressomachine_use_01.zip": "f18ca11267357cbcd8e651fd719ea5d5dd69a2c2aae514c6677c67c4f9304b9b",
+ "/data/images_zips/s05/espressomachine_use_02.zip": "0a74a475a3cb8055122f8566177df73d2bb1901ccd2b49c9b37231a9d193b1f6",
+ "/data/images_zips/s05/ketchup_grab_01.zip": "d77a47b83d8ce803bd50ef4f4dd54fa7920bd47645c5abbe4498e59851e11f8c",
+ "/data/images_zips/s05/ketchup_use_01.zip": "57e16b460b34bc603008ca056cb5e2f4886f4ed98b6c6d1589ccad6b6101a7a0",
+ "/data/images_zips/s05/laptop_grab_01.zip": "702c012d565324a6937617f023cc97f8cbeea150fb22a77f81222a82d4988e37",
+ "/data/images_zips/s05/laptop_use_01.zip": "22892589e8e115d753f018969a03b727d7086943c1b336fd4b330b2464cea9e1",
+ "/data/images_zips/s05/laptop_use_02.zip": "181ca6228c36639dafd6fd171eb1519ef582f61cc05364666dbc7037305862f6",
+ "/data/images_zips/s05/microwave_grab_01.zip": "a9927b9e9f8cbf24df441b6052a60365178059659cb8fe879e0a9fc1caf76b2d",
+ "/data/images_zips/s05/microwave_use_01.zip": "660b9dce9d8a11e33b059d0232f43a8265a33275f367b06ed831b3d577dac478",
+ "/data/images_zips/s05/microwave_use_02.zip": "0b4543f1bcf88cfeecdf0a5e0617d2c70722848c2cfcf2eafd1c139b904a70cb",
+ "/data/images_zips/s05/mixer_grab_01.zip": "e0fb6eb162c2145f008606cf81e9753df1b65d193b5b1c03fa6831fe697f6141",
+ "/data/images_zips/s05/mixer_use_01.zip": "f2879e5a237f847ce76e29591f63b255ee4fccd4343589967daa6f6aa79fa75f",
+ "/data/images_zips/s05/mixer_use_02.zip": "b3bca298d1c1dd31712a9fce79141638f3b2949276efec7c30c734ac91abad51",
+ "/data/images_zips/s05/notebook_grab_01.zip": "871cc80405ae1994015179de4ac1513edb393222173f8a43e218c3269afaa154",
+ "/data/images_zips/s05/notebook_use_01.zip": "a483c3d68fcc1b1ca24952b653896b0b6c7ac26911c0cb1ae8ae43d7776330c5",
+ "/data/images_zips/s05/notebook_use_02.zip": "d6be510c55da7ca84cd183669a6307f4e21bf73146617040300f113e9fb77f22",
+ "/data/images_zips/s05/phone_grab_01.zip": "7b1d65ae4e74e4c7b0bee167484b4036e20ffa8865e5ebee4286f2b3571c01cf",
+ "/data/images_zips/s05/phone_use_01.zip": "900bfc6e473cb18c12aa2d4d0aead1cee08b8cdb5708a930812d1c12513ff5fc",
+ "/data/images_zips/s05/phone_use_02.zip": "74264352edff31777a44f3b81ad6f20e63aefb5858eab2238b96fce496567d2c",
+ "/data/images_zips/s05/phone_use_03.zip": "f52dfe8621f06aed5d6d04f8e3c629176612231848d9bc2305a6fbf1ce0f59a8",
+ "/data/images_zips/s05/phone_use_04.zip": "1bba8b4394b64d91eb7b13dbc81ec3b75beac84be8b6b8d1445119ffa8e6f12e",
+ "/data/images_zips/s05/scissors_grab_01.zip": "63097f42c283dd18b56fbb8364286b06a0c632f92b9739c90a79b74d021da82a",
+ "/data/images_zips/s05/scissors_use_01_retake.zip": "03fe4fcd61443e8d3e04abbcf658ef4278d28c95e8f566d8e21eb2f2b0c123f4",
+ "/data/images_zips/s05/scissors_use_02.zip": "ae69be0b9a460b63b954f93b2719a24e97c1c73205ddb92da375313ef6e6d39b",
+ "/data/images_zips/s05/waffleiron_grab_01.zip": "917c258d18e45e2c1cf1a7d3bb7c8fbe7b87b45b11cfbe95cf180a7788a0ed18",
+ "/data/images_zips/s05/waffleiron_use_01.zip": "21ac5cfa2d7e4573048f5233a70a8ec3b20392a2a390241541a03b251c936c47",
+ "/data/images_zips/s05/waffleiron_use_02.zip": "58ff4e92cef99c9646719247905af1cb11f2ee08a01b13b61a6e4f5659009d2c",
+ "/data/images_zips/s06/box_use_01.zip": "6b463197bfef52f6db98a4b7bc6a1b45de28fc501b15992354ea53b1073f371f",
+ "/data/images_zips/s06/box_use_02.zip": "70d78f18f02e9bc54816bad832316aa3fb7430b537f5c4fbfca04a29da2d3d2e",
+ "/data/images_zips/s06/box_use_03.zip": "e094e1f49932cf4fe6a0a9ea1baf04c45cd9e4ac42bdf8d41285cefa4da79a70",
+ "/data/images_zips/s06/box_use_04.zip": "fbed77c495fa729904843c951b457a5df709e6eec44bcc72e75ecf0e15825ef2",
+ "/data/images_zips/s06/capsulemachine_grab_01.zip": "88cb3c12095785505a167aace6a0eb6c17531573593bbd67705f294a8076f743",
+ "/data/images_zips/s06/capsulemachine_use_01.zip": "1f2056f283ef8dfe989e559c70f543a58bcd0942cca91919761d8fa25d4f33d8",
+ "/data/images_zips/s06/capsulemachine_use_02.zip": "903bad16ed246c220d491853be7f99a6b59069aeb3044a2c140b9356e0bed2d1",
+ "/data/images_zips/s06/espressomachine_grab_01.zip": "617d5bce851a0eacd4fa1a78040637352b40a6fb50e3f61e9c4043255baea79d",
+ "/data/images_zips/s06/espressomachine_use_01.zip": "1df8b5e430b07a64e0cbe692786b1ea43e3e35335f86bebafcbc6d3dc2994c48",
+ "/data/images_zips/s06/espressomachine_use_02.zip": "07917d1c0595a564f866e1610c1e2467b3c16c8532322808050ac0352d2105c8",
+ "/data/images_zips/s06/espressomachine_use_03.zip": "806a8df829039dac92f1d5a56456eaca2ad0ac36daf8cd8642c741b662ef8d94",
+ "/data/images_zips/s06/espressomachine_use_04.zip": "46ad6db7bac68d39908f711185162c7f0ee7d2a0b7afeb97ae86ca9e2acfd9ee",
+ "/data/images_zips/s06/ketchup_grab_01_retake2.zip": "4dda38862a3c8be76630804354dda75ed6d02ccd39dcc9145961d6cad0240856",
+ "/data/images_zips/s06/ketchup_use_01.zip": "95d321d529c4d25e85b2e1da802752ef1c2a66db72f43e88a1ef43bd5453d744",
+ "/data/images_zips/s06/ketchup_use_02.zip": "a4db602dd174d05eaa5725896b1583f0311fd1fadbcdd78a333771e50db3a934",
+ "/data/images_zips/s06/laptop_use_01_retake.zip": "21e53a1fba90d506edb93aaf7db068e5e8dccf19649031b59c100e9e92ef93c5",
+ "/data/images_zips/s06/laptop_use_02_retake.zip": "3c3b10b92e2ea1fe48c4ae34b28ff2534430f59aeddb9388bf7d9ed6f03ad540",
+ "/data/images_zips/s06/microwave_grab_01.zip": "cbd6b7177425b5334728de289e46da71b60da1c95b318d4ce6074bc3c4ad5fb0",
+ "/data/images_zips/s06/microwave_use_01.zip": "1561652026e2f65068d78a06f8bcfd8bf330c3d4a91dc506a2d10f30ef9223e3",
+ "/data/images_zips/s06/microwave_use_02.zip": "ba673c392bf89b7efff9c4e47af33c590dddd1158f0cc82c4e02a381f71fe20a",
+ "/data/images_zips/s06/mixer_grab_01.zip": "2f9ce113e2f331286ff5957b9b0d19d24a1ca6f05c1aaab033d05c9b9fa1b187",
+ "/data/images_zips/s06/mixer_use_01.zip": "7b90f5d3da8cc2b911c0bb2093d348c39ea01f3a8dc252f2adfd70868b5c5987",
+ "/data/images_zips/s06/mixer_use_02.zip": "923314cd76465ba4183ea20b1c27f39447ceaf5b64c6b54fcdbc23cb7e7b0d57",
+ "/data/images_zips/s06/notebook_grab_01.zip": "82092a8684d44586f962398f630ec76c53fdfd4f3b31468dea05e67a363ac8c9",
+ "/data/images_zips/s06/notebook_use_01.zip": "152bbbbee6c373c3269d3c41975beb5f844150edc3ef5142123e2b607684fdbd",
+ "/data/images_zips/s06/notebook_use_02.zip": "5b40f821c6bec0ba599084930a0ca4598e4d347b7c1867c9f2de3a1b574cc3be",
+ "/data/images_zips/s06/phone_grab_01.zip": "6729349699f42ca60f25dd0556e419c585d9882509e4a1753cff1477429929ab",
+ "/data/images_zips/s06/phone_use_01.zip": "d3f3661bd0975718267b0497076639a99944001dac2ba8db184ec72dce4cb480",
+ "/data/images_zips/s06/phone_use_02.zip": "79e9a6de85de2cecbe92b8290401f26f1c22e97497feae8f8fca5b4058ba96dd",
+ "/data/images_zips/s06/scissors_grab_01.zip": "7aed64e38a36ee6063fff1090155ae678a6292523e16f68586603b1457799f4d",
+ "/data/images_zips/s06/scissors_use_01.zip": "ed8420af0b0c14ac0ef555c2e98f89e4dde51d6d4be0dccb8e6c0385fea84bc8",
+ "/data/images_zips/s06/waffleiron_grab_01_retake.zip": "bc36d47b6fb99b73e83938d29d449e3d4691fd02e3ce49d8cb9562ac02a9b790",
+ "/data/images_zips/s06/waffleiron_use_01.zip": "c2e0a23102609b77af4314d44bc260f53b1b7162ce0ec189cc5f60e8b0186b62",
+ "/data/images_zips/s06/waffleiron_use_02.zip": "59a559fe6448724c50d5a3e5ec45a514c7a12a7705d05883e585f3f63582ee25",
+ "/data/images_zips/s06/waffleiron_use_03.zip": "29beb0ded9153a54c27dc49b810a6a765bbfa2f3856caf5d771ba4a8688cad5b",
+ "/data/images_zips/s06/waffleiron_use_04.zip": "2e3339f89fb6bf36645c9bab8ccd856d1fa62d77ffb20b5aad9fae3294e09b45",
+ "/data/images_zips/s07/box_grab_01.zip": "299f2b5f7e10b67d0f1228764da4f9054324deccfde64ab8320f18aafc81a6fe",
+ "/data/images_zips/s07/box_use_01.zip": "81400e88ef221870fcf9dd569e8f4cffeb3adc9a58fcd1ee601ea41ad4ffdb4e",
+ "/data/images_zips/s07/box_use_02.zip": "bc79116aa8624609e325d8673e72147d1137cf600de27a7ed78f1792f57ef631",
+ "/data/images_zips/s07/capsulemachine_use_01.zip": "8848746d94f0dac86b810c91a1981cab632109c1a1ddf57630fb284c5198e28d",
+ "/data/images_zips/s07/capsulemachine_use_02.zip": "cb71ad270fa9cf2f0a1f0504d6e1f1b5b2f177c10a7b00ed2882a38f0a4424c9",
+ "/data/images_zips/s07/espressomachine_grab_01.zip": "e6bc34cfc46d0bb0504fa3e339600a9d2aae10ee5d8446daab4dce4c17814f17",
+ "/data/images_zips/s07/espressomachine_use_01.zip": "aebdceb4d2762d7e4d4c31d5a209515ba79cb2b774c1522930ee50380499ae51",
+ "/data/images_zips/s07/espressomachine_use_02.zip": "6b1e196782ba54cf6c18f71ebf2d59b25750b5aa852b008a8fd72b80e01cd50c",
+ "/data/images_zips/s07/ketchup_grab_01.zip": "89dad49c6bd45408ff0c9c47dcf4c55c8344d9791220827585bda8972105e42d",
+ "/data/images_zips/s07/ketchup_use_02.zip": "920e5d7d3024aaaa46f03b3e520acb00b71d481a95683a67fdce3b5bef85ed63",
+ "/data/images_zips/s07/laptop_grab_01.zip": "2b495991d464e747c49afa8980fb936832d32d0014e71850a7c1f5cc2751d260",
+ "/data/images_zips/s07/laptop_use_01.zip": "b602ff79a55039df175e71f0ac85f9f55671119fc66bc412cf810f00f1e70c0e",
+ "/data/images_zips/s07/laptop_use_02.zip": "007e12e559f73a10f43a8bd1fde651c2398eb950a289578a7ead4076c50d558e",
+ "/data/images_zips/s07/microwave_grab_01.zip": "c8799620b507cdd05d6bc56c33004ab218bc1f7a0ae06902e4daacff4a41fa0c",
+ "/data/images_zips/s07/microwave_use_01.zip": "4888070475b61052684a5487665d972c997b2b6571385f290683c249d0de4052",
+ "/data/images_zips/s07/microwave_use_02.zip": "2f10e172fbb285b436946b4973f0c3c7007931b33be7913d5ddedfa16e10eb06",
+ "/data/images_zips/s07/mixer_grab_01_retake.zip": "bac6e67c33e01cbe27c8c9579e4b85aeac314e0d03791b6df74a5a620aa55f19",
+ "/data/images_zips/s07/mixer_use_01.zip": "74100e346882d4c99f88323db5a0760ecf350ef22714472cdb7cf25ea92beee3",
+ "/data/images_zips/s07/notebook_use_01.zip": "008b04f7aeb905ea440b4c730a9fb8c4a7f66a1f415956187739f2ca8faedb44",
+ "/data/images_zips/s07/notebook_use_02.zip": "9da62a6ddfa5731f31665b6f0e130178b078eea65789aefe9aaec1da874789ba",
+ "/data/images_zips/s07/phone_grab_01.zip": "1037bb5eb1edaa68040c38be1ba615bde3ab5b73629eb598a915a4fa34b4bd76",
+ "/data/images_zips/s07/phone_use_01.zip": "df2d5a76c09ed7ce83803b9c9b806c9342352a29ca119fcdf9dba52743385b17",
+ "/data/images_zips/s07/phone_use_02.zip": "e3cec58aee651aa563e93c334097ab434abccce593035a8507fbbfe3917f1f36",
+ "/data/images_zips/s07/scissors_grab_01.zip": "fec8b570a80c8e68b578129b71da19a3e6be5fe5f1ad18888735c2d990ac85af",
+ "/data/images_zips/s07/scissors_use_01.zip": "699c7cd10f9599b01409e034d74c71ef0fb943aa99bfef21f8f16e0e4a439ef2",
+ "/data/images_zips/s07/scissors_use_02.zip": "5d1b6e161bf794a079f14c1725043d2e7096d2a3a53c8fe206333c283bcca674",
+ "/data/images_zips/s07/waffleiron_grab_01.zip": "319de697070c61437a5f125f0be88c03c251e0121805c3cecf970938acc73a80",
+ "/data/images_zips/s07/waffleiron_use_01.zip": "7aee39ddfaad39a58e3b9e55d9ca76635ad36baf5807919d829206a5a363e66b",
+ "/data/images_zips/s07/waffleiron_use_02.zip": "0d3387b78b7ca59695a59402bde8d29ce9e5e004596e59c38001dd5ca067538d",
+ "/data/images_zips/s08/box_use_01.zip": "2bdb810416c5e02ced853d808cbef57f00e911611847ce77454083f8f197f03e",
+ "/data/images_zips/s08/box_use_02.zip": "b417697a349eac6ad06ddc102794b2b3a5218fed0d75d1f10479f7c4ee1b6799",
+ "/data/images_zips/s08/box_use_03.zip": "4b7fc78e7da04c2d0e64d1d73cfaf21fb8b2556b89abc472adde139eb8b6da98",
+ "/data/images_zips/s08/box_use_04.zip": "147f798344d0e6db671746b09eb7d6c053cf3665437a1d8eca09bb0cec3957bb",
+ "/data/images_zips/s08/capsulemachine_use_01.zip": "48651128e44040a2e70daa8c0cf261b089a83ff98d36d9b95c942ca42c879982",
+ "/data/images_zips/s08/capsulemachine_use_02.zip": "99207eb2da0e33c81cbca62c62a58460736ea09496a4c423a8f1e54a744c5155",
+ "/data/images_zips/s08/capsulemachine_use_03.zip": "17e0f9cdec408824267ea77edbf9098d0ce007854550b935f5c77091697f09e7",
+ "/data/images_zips/s08/capsulemachine_use_04.zip": "056e15627ffa960acc8c1f562a399737f8ead366a5d6f8176502fe3a9aec98c3",
+ "/data/images_zips/s08/espressomachine_use_01.zip": "fcee66af529662d6cb846cdd52f94e425a7c5fd6ac5025780c6827e57ed0b565",
+ "/data/images_zips/s08/espressomachine_use_02.zip": "9ea96164373cc2a1bf9ebad21653519ea077cd609c447e38e162cd6b3a6ddd04",
+ "/data/images_zips/s08/espressomachine_use_03.zip": "508c0a8c23eecfc3e0bb22ef208ac7783fba90fd914095e57f85593f9572c4fc",
+ "/data/images_zips/s08/ketchup_use_01.zip": "2b5a000fefd1a03bfc298dfcff8dd3e6048776d578366c8551dc33fcc4c7e6ee",
+ "/data/images_zips/s08/ketchup_use_02.zip": "632d38a2ba4f9e2c2e00b1dadf184381adec6eb2314bf9d817a3162117b81d62",
+ "/data/images_zips/s08/ketchup_use_03.zip": "b0ccf9eae94dd4e97cad94883b69b3fd68c4d92285b7b09da00bceb35e5398fc",
+ "/data/images_zips/s08/ketchup_use_04.zip": "eba945f370faf4b175479bce718e4610c06bce55b0205409e9a68e9d1f1bdde8",
+ "/data/images_zips/s08/laptop_use_01.zip": "a46f69ab1b710631eda86478786229086faaf144c641a3746b3868d8b668c9b7",
+ "/data/images_zips/s08/laptop_use_02.zip": "f97e4ae55d69d540b75c91b755dc5ee61c03f0e67f3bde5855b2ce095b24e85c",
+ "/data/images_zips/s08/laptop_use_03.zip": "699b4af15f0676da20be05174f5fa063f792e702c9f595a6777dffb815a7fa83",
+ "/data/images_zips/s08/laptop_use_04.zip": "ea95ee0a5e16172b8030ae1ff3fb6545512c116dccfe3718263c592edc71b83c",
+ "/data/images_zips/s08/microwave_use_01.zip": "6dac33951a78586166646deefe030cb260dba53ba3a0357f9088fdf5dd537aab",
+ "/data/images_zips/s08/microwave_use_02.zip": "753f3c42cedda9c38d732dc5ca580da58694cdbd26e96144bf71d1985bc4f26f",
+ "/data/images_zips/s08/microwave_use_03.zip": "e5438efad792fce40bac109fa27f8b79dc024db4c73cd87ee71a9663821ff4ef",
+ "/data/images_zips/s08/microwave_use_04.zip": "542d1fe7d88003891107eb362d17fbc6bd44a172cdf07ec9034c1bd335a021e7",
+ "/data/images_zips/s08/mixer_use_01.zip": "0043b8b9977d2bf5b48ddd90beca01dec15ed556e6e5493c3a8cba580d22a4d7",
+ "/data/images_zips/s08/mixer_use_02.zip": "19ce68bb320b34ecba0cab76d5526fdcd2a044c53adb56a4b57b546d877a5df3",
+ "/data/images_zips/s08/mixer_use_03.zip": "03a62032927d79457f111bc255d37bddab107fef0129556aed9f8c89dffca757",
+ "/data/images_zips/s08/mixer_use_04.zip": "ef4e5e856cf9c06120f63fd9e0ffc021469e51a529a48432f4202295974e4acb",
+ "/data/images_zips/s08/notebook_use_01.zip": "b0f57250ca25da01b0de042e853e201bc3452571e35dbbd1eb22589f303ab80a",
+ "/data/images_zips/s08/notebook_use_02.zip": "b1d73801eb2f70ce4313e94ca69c3af5b6d4729017fa66c7368fe07f68cfefbd",
+ "/data/images_zips/s08/notebook_use_03.zip": "6dc482f37e7af05b5a9ab05c0b732f683a3638837ab714a6d8a546902711053d",
+ "/data/images_zips/s08/notebook_use_04.zip": "1217da839a05c869dfd95d31bf3c647aa982c01ea4b40f3243c39f0fe11f351e",
+ "/data/images_zips/s08/phone_use_01.zip": "7272b0911aa0febefaabd69ee0cc73cee5d10f235e36e26128871586acb356fb",
+ "/data/images_zips/s08/phone_use_02.zip": "73f086f235e389e9da46ff008ad8e7a54657074c3267c4757fc969797f019c1f",
+ "/data/images_zips/s08/phone_use_03.zip": "464d72173300a4757d6ed803d90c6b593f7c088adbb0d292049bf9efcc2d9126",
+ "/data/images_zips/s08/phone_use_04.zip": "3cadc0b584f0eaf63a18d11df065d826121d5e8664ae21c607b4209e1b3d0069",
+ "/data/images_zips/s08/scissors_use_01.zip": "4148211e6ac6e90d98ddf3b58ce146cfd89a1273974cb12bfd8962e88985744f",
+ "/data/images_zips/s08/scissors_use_03.zip": "71db09677f59f9573732da21374b5c7cd630e39312d2b904a1f01cdadf764fc9",
+ "/data/images_zips/s08/scissors_use_04.zip": "edb8021d0659fc0a16ca011555f7745a0ef6010c920e07202f750e66aa05bade",
+ "/data/images_zips/s08/waffleiron_use_01.zip": "062cfbb863cd94e345844e96c94ff0dfa31c6d9b509e5c2b869fa5138446644c",
+ "/data/images_zips/s08/waffleiron_use_02.zip": "c4c8ff1d12074cfa96cd86481891fe51cb7c36783678aefbedbb398b84939001",
+ "/data/images_zips/s08/waffleiron_use_03.zip": "9b4f46a6f222231e60402e178756e58e019e89c6e42377c17e1e3db4edf33407",
+ "/data/images_zips/s08/waffleiron_use_04.zip": "d7913b7e67f4138ce18e151646117f3e84933caa2f9400ea8f4f6d5789cc7230",
+ "/data/images_zips/s09/capsulemachine_use_01.zip": "b8d16dcc865f2e766b5366e1502d37df9231860dfbf03bc503914e048c43883f",
+ "/data/images_zips/s09/capsulemachine_use_02.zip": "5929902bdf6577cdbc77d11dd79dd576e4e6f0d4f33b131849b5956d752d4c7c",
+ "/data/images_zips/s09/capsulemachine_use_03.zip": "1444855812569cc151761ea4f5bfa01565d53af7bb0ab19fc8c8dabb67109468",
+ "/data/images_zips/s09/capsulemachine_use_04.zip": "a4cd208e749cc7fd71c8b620bd68fc566b339c1735b30da351c30aac0e70ad37",
+ "/data/images_zips/s09/espressomachine_use_01.zip": "f02aef30970ca46eeca8adb83397e0e375c003b16dac5ba07e39e7e23a646aab",
+ "/data/images_zips/s09/espressomachine_use_02.zip": "4e305bf9869600005cece74b0fdd42dcc6ac9aa5239e0039be17f24cc151bd96",
+ "/data/images_zips/s09/espressomachine_use_03.zip": "f413072132e5e1ec0e0b26a2f09683f720bb6fb958d63140932e6d87639d2a73",
+ "/data/images_zips/s09/espressomachine_use_04.zip": "242eba67040613f8576e40b5b80bc671c9d4c6674a747d5a8b4c9f1ad9978de4",
+ "/data/images_zips/s09/ketchup_use_01.zip": "d8fa006e851f76814d01f589d12e088cbba245c20159ca054e12b9959b7d0d7b",
+ "/data/images_zips/s09/ketchup_use_02.zip": "2a0c42660e9848a971d57a02645e84bf6ff7f0ce8a1131236baeafa82900ff96",
+ "/data/images_zips/s09/ketchup_use_03.zip": "d97b4a7651f7a7c9cff1704ca4199bcf76d5a44581a06b21082673f0e2d317bb",
+ "/data/images_zips/s09/ketchup_use_04.zip": "0a9799cf0e72e8a1321dee88111ffe2d8f6f0f50a824ab25f0053a837487c810",
+ "/data/images_zips/s09/laptop_use_01.zip": "18e62a4279a1d05f5a8ae6229b3fa4a49dfe6762b8a711b0c67eab3367eabbe8",
+ "/data/images_zips/s09/laptop_use_02.zip": "2a18707a493500d42ba07f179fdae97f96e5e931ef64214c6993020797686ab9",
+ "/data/images_zips/s09/laptop_use_03.zip": "c91cba7c38478cbbda653365ed002373cdaa9e0869a0c6a3c235c879a0d2710f",
+ "/data/images_zips/s09/laptop_use_04.zip": "06ecae87866e3d81b063ba409f3a6eef9b2b9615952a3980fd28443a697a9579",
+ "/data/images_zips/s09/microwave_use_01.zip": "4af19a133abfa1c3cded4c193c8a7d447a08127447b8d3dcc931728c5d4e4c12",
+ "/data/images_zips/s09/microwave_use_02.zip": "b9c851c4caf20343d19d16568b1c076e277e42e8b4d4606190e6d41e4bd09dc3",
+ "/data/images_zips/s09/microwave_use_03.zip": "132beb65e4a414cd61ff76b9dcfd7c1ec9e777dce621bb76ccfcf72cc11d9519",
+ "/data/images_zips/s09/microwave_use_04.zip": "1816bf6922a855256aac39b94d832960aa45c8aa9b5d950587a074b95d7a1b70",
+ "/data/images_zips/s09/mixer_use_01.zip": "80f8f94f2bdbeda4c3c5661dd89d5ab2472e6b0d73e29ffa1838897dd8819ffb",
+ "/data/images_zips/s09/mixer_use_02.zip": "be648b54f672559cd9c6e2989a6d767eb41dddab6edc7e4381bdc7cbeb20ff43",
+ "/data/images_zips/s09/mixer_use_03.zip": "91b69b38bd8476dc8c61025c5530fcb58e12c732b6245bb86be41bdb91ae0380",
+ "/data/images_zips/s09/mixer_use_04.zip": "a687aac4bfbc8e6a4eea863da4b095528ab8e21818951e39ebeed5114a920941",
+ "/data/images_zips/s09/notebook_use_01.zip": "73370f1e1b2db46753a0e0c8fa79ad85204b1b478ce84c7b707be0cbe3a73407",
+ "/data/images_zips/s09/notebook_use_02.zip": "93dda98786b39fa53487d4f98918fbe5c4d26f3fbd520aba7d67f29082fa891a",
+ "/data/images_zips/s09/notebook_use_03.zip": "a78b8ba9488d515b381af8c83dea0ca51139a5b9543c1ea974fbf35b9c5a6474",
+ "/data/images_zips/s09/notebook_use_04.zip": "d53b500247adb901d7c8380a6cdc7d3694842556fc81e865a25184cd8d6d5f4d",
+ "/data/images_zips/s09/phone_use_02.zip": "c4d7dc0f5c64cd20de8ad45dfcedc7acd53938b3a3efbb82b6dfc04199012d02",
+ "/data/images_zips/s09/phone_use_03.zip": "e61555d98e6e59ccfd3391add8d01acbb21d90a7042875ea080eae7f7e7716fb",
+ "/data/images_zips/s09/scissors_use_01.zip": "4818605ca5cbdd5aa4481225a2a1fde681d50f18046b3fdfa9c64564c812faae",
+ "/data/images_zips/s09/scissors_use_02.zip": "82c544357209153b81be5e126b1cff9e9430189fd5de9dfde381956c1d83dce7",
+ "/data/images_zips/s09/scissors_use_04.zip": "93d8795a8391af7a4e820f077c0ffb6724c6433351597e8b424b259769b6ff99",
+ "/data/images_zips/s09/waffleiron_use_01.zip": "fab7fb6e3a676a3fefd13f1d462177a23a4b6b86575f0a383dd4243687148b83",
+ "/data/images_zips/s09/waffleiron_use_02.zip": "60ec2ddd430c2c145a71706ffc4f3915a6c2d5e4faa6d699427be443cb4867a3",
+ "/data/images_zips/s09/waffleiron_use_03.zip": "069cbbba5cd89eb2e818bc33a634621f793ed11beaa5934978b102965c97f5b9",
+ "/data/images_zips/s09/waffleiron_use_04.zip": "3bc05a3b80c51dcb0fbb719e3eaf42832a745aceb6a7b91670019d697d8d32ff",
+ "/data/images_zips/s10/box_use_01.zip": "0e6f560c385d9369ba4693faccc4a12ae73609afc335054ea670b819e1fe6839",
+ "/data/images_zips/s10/box_use_02.zip": "e101d2176a76678751e2c5aabc3d489a3dad8635073c7bd9d44fd7f67da09994",
+ "/data/images_zips/s10/capsulemachine_use_01.zip": "aa1d4b03f77980db0397e0307835dc0ebbf33ebff976d9165b18cb99b5d6f4c3",
+ "/data/images_zips/s10/capsulemachine_use_02.zip": "ebbeb02e19e64a5442f51e892f359a7550f840acc47ec8bdc9c68c6673502ec4",
+ "/data/images_zips/s10/espressomachine_use_02.zip": "ada60d66790189e9c1d1abb998c899ed9f17b5cd6f6fa7598dc954175fb7bf99",
+ "/data/images_zips/s10/ketchup_grab_01.zip": "db89a1acef284c70fabae93d4518f7b2230bf986a740f64f5ccd6cd66fb0b5e5",
+ "/data/images_zips/s10/ketchup_use_01.zip": "0b49bdd4fb19ff8a1a7c688d000fb7e1c4b5b240daace22b5089f62ae7be3ecc",
+ "/data/images_zips/s10/ketchup_use_02.zip": "353896d31101ec6db41d8234a98cfc84df50d42808fda7185ae85317d9a21df1",
+ "/data/images_zips/s10/laptop_use_01.zip": "6a118c6930d9dae5157dde79eb7dfa9bfba98872c464a3e95fcd524818b446d8",
+ "/data/images_zips/s10/laptop_use_02.zip": "56e2e83d4e4aaa671f737d41dcb2ca02654855de2328375874e5d8a885cc4754",
+ "/data/images_zips/s10/mixer_use_01.zip": "004d09d3692250e19c07f5b92997690fb06e3ee4e51198663d461c2379949fc4",
+ "/data/images_zips/s10/mixer_use_02.zip": "3129f044bdc6a74a5fd8fece261a50fafff734f97ad1e6fc4fd3cf12465127e1",
+ "/data/images_zips/s10/notebook_use_01.zip": "0cb1d394c198e14887318bb447d2ff5251c783fad0282821273588ca9ec9c89e",
+ "/data/images_zips/s10/notebook_use_02.zip": "4332911ede67e37b36e0d2b12a6d69961f3197234d0f13cf12f816873e51cccc",
+ "/data/images_zips/s10/phone_grab_01.zip": "61b06e59cbf846aa8c28f7afd0dd133c133c1547a78a5a356ee1345852fe93f2",
+ "/data/images_zips/s10/phone_use_01_retake.zip": "99fbb8ff4aed036e86146ed437016b82cf4fc44d0edf04d85d476fbd204800bb",
+ "/data/images_zips/s10/phone_use_02.zip": "e716eb3a913e72d8a216149f230195fd98c33a40bfa30c3576a86c67d13939a9",
+ "/data/images_zips/s10/scissors_use_02.zip": "81ba2074dc77652076b2108834e5e7bc72447461e08d4690a3fc2eb1c7f0a905",
+ "/data/images_zips/s10/waffleiron_grab_01.zip": "d8d9d06ec0262849f4af91d81d79159f26fba6149740c5af2aa8d309d99446f2",
+ "/data/images_zips/s10/waffleiron_use_01.zip": "9f27657cc3dbdf2c00c2490cae33bc9f6d82e6086f008fda15d281c708cd1723",
+ "/data/images_zips/s10/waffleiron_use_02.zip": "0d40959f519372c18fb4b5d926566ba9e62733596a276a24214ab2aeb9b411f2",
+ "/data/meta.zip": "2ec627bcb8f17be33defc985a79d1dd744ee44b1f1b5732ed163ecef217c0c6e",
+ "/data/raw_seqs.zip": "3c74f8cdb5fb4f521d99132faf0471432bab5db97c7653493b01920d2ad48535",
+ "/data/splits.zip": "69e939f4969191a482597bd9a4356190a2732ac066ed0cda113f2bfd4f0386a7",
+ "/data/splits_json.zip": "99af5f6759c727df07ef897db6d293cc866d465e6b3d6095579fcb62ac831b7d",
+ "/models.zip": "609bbe22b0305bd6173d8d436bb049ce4708001041c72695cb96d98be8dc3259"
+}
\ No newline at end of file
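
For reference, the checksum map that ends above pairs one-to-one with the per-sequence download URLs that follow: after fetching a zip, its SHA-256 digest can be compared against the recorded value. A minimal verification sketch in Python (illustrative only; the checksum-file path `bash/assets/checksum.json` and the `downloads/` root are assumptions, not fixed by this patch):

import hashlib
import json
from pathlib import Path

def sha256sum(path: Path, chunk_size: int = 1 << 20) -> str:
    # Stream the file in 1 MiB chunks so large image zips don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Assumed locations: adjust to wherever the checksum file and downloaded zips live.
checksums = json.loads(Path("bash/assets/checksum.json").read_text())
root = Path("downloads")
for rel_path, expected in checksums.items():
    local = root / rel_path.lstrip("/")  # keys are rooted like "/data/images_zips/..."
    if local.exists():
        status = "OK " if sha256sum(local) == expected else "MISMATCH"
        print(f"{status} {rel_path}")
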
diff --git a/bash/assets/urls/cropped_images.txt b/bash/assets/urls/cropped_images.txt
new file mode 100644
index 0000000..db86ec9
--- /dev/null
+++ b/bash/assets/urls/cropped_images.txt
@@ -0,0 +1,339 @@
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/box_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/box_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/box_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/capsulemachine_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/capsulemachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/capsulemachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/espressomachine_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/espressomachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/espressomachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/ketchup_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/ketchup_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/ketchup_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/laptop_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/laptop_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/laptop_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/laptop_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/laptop_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/microwave_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/microwave_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/microwave_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/mixer_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/mixer_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/mixer_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/notebook_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/notebook_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/phone_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/phone_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/phone_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/scissors_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/scissors_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/scissors_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/waffleiron_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/waffleiron_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s01/waffleiron_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/box_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/box_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/box_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/capsulemachine_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/capsulemachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/capsulemachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/espressomachine_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/espressomachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/espressomachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/ketchup_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/ketchup_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/ketchup_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/ketchup_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/ketchup_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/laptop_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/laptop_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/laptop_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/microwave_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/microwave_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/microwave_use_01_retake.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/microwave_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/mixer_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/mixer_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/mixer_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/mixer_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/mixer_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/notebook_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/notebook_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/phone_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/phone_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/phone_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/scissors_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/scissors_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/scissors_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/waffleiron_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/waffleiron_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s02/waffleiron_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/box_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/box_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/box_use_01_retake.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/box_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/capsulemachine_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/capsulemachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/capsulemachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/capsulemachine_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/capsulemachine_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/espressomachine_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/espressomachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/espressomachine_use_02_retake.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/ketchup_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/ketchup_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/ketchup_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/laptop_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/laptop_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/laptop_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/microwave_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/microwave_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/microwave_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/mixer_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/mixer_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/mixer_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/notebook_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/notebook_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/notebook_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/notebook_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/notebook_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/phone_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/phone_use_01_retake.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/phone_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/scissors_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/scissors_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/scissors_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/waffleiron_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/waffleiron_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s03/waffleiron_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/box_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/box_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/box_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/capsulemachine_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/capsulemachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/capsulemachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/espressomachine_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/espressomachine_use_01_retake.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/espressomachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/ketchup_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/ketchup_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/laptop_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/laptop_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/laptop_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/microwave_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/microwave_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/microwave_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/mixer_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/mixer_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/mixer_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/notebook_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/notebook_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/notebook_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/phone_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/phone_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/phone_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/scissors_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/scissors_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/scissors_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/waffleiron_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s04/waffleiron_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/box_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/box_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/box_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/capsulemachine_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/capsulemachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/capsulemachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/espressomachine_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/espressomachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/espressomachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/ketchup_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/ketchup_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/laptop_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/laptop_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/laptop_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/microwave_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/microwave_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/microwave_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/mixer_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/mixer_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/mixer_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/notebook_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/notebook_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/notebook_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/phone_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/phone_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/phone_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/phone_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/phone_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/scissors_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/scissors_use_01_retake.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/scissors_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/waffleiron_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/waffleiron_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s05/waffleiron_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/box_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/box_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/box_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/box_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/capsulemachine_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/capsulemachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/capsulemachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/espressomachine_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/espressomachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/espressomachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/espressomachine_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/espressomachine_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/ketchup_grab_01_retake2.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/ketchup_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/ketchup_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/laptop_use_01_retake.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/laptop_use_02_retake.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/microwave_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/microwave_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/microwave_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/mixer_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/mixer_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/mixer_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/notebook_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/notebook_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/notebook_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/phone_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/phone_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/phone_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/scissors_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/scissors_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/waffleiron_grab_01_retake.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/waffleiron_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/waffleiron_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/waffleiron_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s06/waffleiron_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s07/box_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s07/box_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s07/box_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s07/capsulemachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s07/capsulemachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s07/espressomachine_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s07/espressomachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s07/espressomachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s07/ketchup_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s07/ketchup_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s07/laptop_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s07/laptop_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s07/laptop_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s07/microwave_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s07/microwave_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s07/microwave_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s07/mixer_grab_01_retake.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s07/mixer_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s07/notebook_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s07/notebook_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s07/phone_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s07/phone_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s07/phone_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s07/scissors_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s07/scissors_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s07/scissors_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s07/waffleiron_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s07/waffleiron_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s07/waffleiron_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/box_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/box_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/box_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/box_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/capsulemachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/capsulemachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/capsulemachine_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/capsulemachine_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/espressomachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/espressomachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/espressomachine_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/ketchup_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/ketchup_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/ketchup_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/ketchup_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/laptop_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/laptop_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/laptop_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/laptop_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/microwave_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/microwave_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/microwave_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/microwave_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/mixer_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/mixer_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/mixer_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/mixer_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/notebook_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/notebook_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/notebook_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/notebook_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/phone_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/phone_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/phone_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/phone_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/scissors_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/scissors_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/scissors_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/waffleiron_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/waffleiron_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/waffleiron_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s08/waffleiron_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/capsulemachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/capsulemachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/capsulemachine_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/capsulemachine_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/espressomachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/espressomachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/espressomachine_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/espressomachine_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/ketchup_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/ketchup_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/ketchup_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/ketchup_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/laptop_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/laptop_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/laptop_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/laptop_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/microwave_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/microwave_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/microwave_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/microwave_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/mixer_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/mixer_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/mixer_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/mixer_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/notebook_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/notebook_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/notebook_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/notebook_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/phone_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/phone_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/scissors_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/scissors_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/scissors_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/waffleiron_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/waffleiron_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/waffleiron_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s09/waffleiron_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s10/box_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s10/box_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s10/capsulemachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s10/capsulemachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s10/espressomachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s10/ketchup_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s10/ketchup_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s10/ketchup_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s10/laptop_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s10/laptop_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s10/mixer_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s10/mixer_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s10/notebook_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s10/notebook_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s10/phone_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s10/phone_use_01_retake.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s10/phone_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s10/scissors_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s10/waffleiron_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s10/waffleiron_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/cropped_images_zips/s10/waffleiron_use_02.zip
\ No newline at end of file
diff --git a/bash/assets/urls/feat.txt b/bash/assets/urls/feat.txt
new file mode 100644
index 0000000..8cffee7
--- /dev/null
+++ b/bash/assets/urls/feat.txt
@@ -0,0 +1 @@
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/feat.zip
\ No newline at end of file
diff --git a/bash/assets/urls/images.txt b/bash/assets/urls/images.txt
new file mode 100644
index 0000000..41f780d
--- /dev/null
+++ b/bash/assets/urls/images.txt
@@ -0,0 +1,339 @@
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/box_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/box_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/box_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/capsulemachine_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/capsulemachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/capsulemachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/espressomachine_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/espressomachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/espressomachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/ketchup_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/ketchup_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/ketchup_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/laptop_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/laptop_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/laptop_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/laptop_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/laptop_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/microwave_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/microwave_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/microwave_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/mixer_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/mixer_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/mixer_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/notebook_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/notebook_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/phone_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/phone_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/phone_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/scissors_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/scissors_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/scissors_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/waffleiron_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/waffleiron_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s01/waffleiron_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/box_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/box_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/box_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/capsulemachine_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/capsulemachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/capsulemachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/espressomachine_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/espressomachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/espressomachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/ketchup_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/ketchup_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/ketchup_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/ketchup_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/ketchup_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/laptop_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/laptop_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/laptop_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/microwave_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/microwave_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/microwave_use_01_retake.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/microwave_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/mixer_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/mixer_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/mixer_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/mixer_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/mixer_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/notebook_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/notebook_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/phone_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/phone_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/phone_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/scissors_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/scissors_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/scissors_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/waffleiron_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/waffleiron_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s02/waffleiron_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/box_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/box_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/box_use_01_retake.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/box_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/capsulemachine_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/capsulemachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/capsulemachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/capsulemachine_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/capsulemachine_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/espressomachine_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/espressomachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/espressomachine_use_02_retake.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/ketchup_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/ketchup_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/ketchup_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/laptop_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/laptop_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/laptop_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/microwave_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/microwave_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/microwave_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/mixer_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/mixer_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/mixer_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/notebook_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/notebook_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/notebook_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/notebook_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/notebook_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/phone_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/phone_use_01_retake.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/phone_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/scissors_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/scissors_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/scissors_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/waffleiron_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/waffleiron_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s03/waffleiron_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/box_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/box_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/box_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/capsulemachine_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/capsulemachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/capsulemachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/espressomachine_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/espressomachine_use_01_retake.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/espressomachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/ketchup_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/ketchup_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/laptop_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/laptop_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/laptop_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/microwave_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/microwave_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/microwave_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/mixer_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/mixer_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/mixer_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/notebook_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/notebook_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/notebook_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/phone_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/phone_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/phone_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/scissors_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/scissors_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/scissors_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/waffleiron_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s04/waffleiron_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/box_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/box_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/box_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/capsulemachine_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/capsulemachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/capsulemachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/espressomachine_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/espressomachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/espressomachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/ketchup_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/ketchup_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/laptop_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/laptop_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/laptop_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/microwave_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/microwave_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/microwave_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/mixer_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/mixer_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/mixer_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/notebook_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/notebook_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/notebook_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/phone_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/phone_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/phone_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/phone_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/phone_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/scissors_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/scissors_use_01_retake.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/scissors_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/waffleiron_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/waffleiron_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s05/waffleiron_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/box_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/box_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/box_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/box_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/capsulemachine_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/capsulemachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/capsulemachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/espressomachine_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/espressomachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/espressomachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/espressomachine_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/espressomachine_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/ketchup_grab_01_retake2.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/ketchup_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/ketchup_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/laptop_use_01_retake.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/laptop_use_02_retake.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/microwave_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/microwave_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/microwave_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/mixer_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/mixer_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/mixer_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/notebook_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/notebook_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/notebook_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/phone_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/phone_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/phone_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/scissors_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/scissors_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/waffleiron_grab_01_retake.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/waffleiron_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/waffleiron_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/waffleiron_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s06/waffleiron_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s07/box_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s07/box_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s07/box_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s07/capsulemachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s07/capsulemachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s07/espressomachine_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s07/espressomachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s07/espressomachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s07/ketchup_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s07/ketchup_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s07/laptop_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s07/laptop_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s07/laptop_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s07/microwave_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s07/microwave_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s07/microwave_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s07/mixer_grab_01_retake.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s07/mixer_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s07/notebook_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s07/notebook_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s07/phone_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s07/phone_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s07/phone_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s07/scissors_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s07/scissors_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s07/scissors_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s07/waffleiron_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s07/waffleiron_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s07/waffleiron_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/box_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/box_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/box_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/box_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/capsulemachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/capsulemachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/capsulemachine_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/capsulemachine_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/espressomachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/espressomachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/espressomachine_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/ketchup_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/ketchup_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/ketchup_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/ketchup_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/laptop_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/laptop_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/laptop_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/laptop_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/microwave_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/microwave_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/microwave_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/microwave_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/mixer_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/mixer_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/mixer_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/mixer_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/notebook_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/notebook_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/notebook_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/notebook_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/phone_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/phone_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/phone_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/phone_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/scissors_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/scissors_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/scissors_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/waffleiron_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/waffleiron_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/waffleiron_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s08/waffleiron_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/capsulemachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/capsulemachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/capsulemachine_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/capsulemachine_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/espressomachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/espressomachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/espressomachine_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/espressomachine_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/ketchup_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/ketchup_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/ketchup_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/ketchup_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/laptop_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/laptop_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/laptop_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/laptop_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/microwave_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/microwave_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/microwave_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/microwave_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/mixer_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/mixer_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/mixer_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/mixer_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/notebook_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/notebook_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/notebook_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/notebook_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/phone_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/phone_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/scissors_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/scissors_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/scissors_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/waffleiron_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/waffleiron_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/waffleiron_use_03.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s09/waffleiron_use_04.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s10/box_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s10/box_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s10/capsulemachine_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s10/capsulemachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s10/espressomachine_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s10/ketchup_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s10/ketchup_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s10/ketchup_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s10/laptop_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s10/laptop_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s10/mixer_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s10/mixer_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s10/notebook_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s10/notebook_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s10/phone_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s10/phone_use_01_retake.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s10/phone_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s10/scissors_use_02.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s10/waffleiron_grab_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s10/waffleiron_use_01.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/images_zips/s10/waffleiron_use_02.zip
\ No newline at end of file
diff --git a/bash/assets/urls/mano.txt b/bash/assets/urls/mano.txt
new file mode 100644
index 0000000..5892fc8
--- /dev/null
+++ b/bash/assets/urls/mano.txt
@@ -0,0 +1 @@
+https://download.is.tue.mpg.de/download.php?domain=mano&resume=1&sfile=mano_v1_2.zip
\ No newline at end of file
diff --git a/bash/assets/urls/misc.txt b/bash/assets/urls/misc.txt
new file mode 100644
index 0000000..41a616a
--- /dev/null
+++ b/bash/assets/urls/misc.txt
@@ -0,0 +1,3 @@
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/meta.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/splits_json.zip
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/raw_seqs.zip
\ No newline at end of file
diff --git a/bash/assets/urls/models.txt b/bash/assets/urls/models.txt
new file mode 100644
index 0000000..a7a710b
--- /dev/null
+++ b/bash/assets/urls/models.txt
@@ -0,0 +1 @@
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/models.zip
diff --git a/bash/assets/urls/smplx.txt b/bash/assets/urls/smplx.txt
new file mode 100644
index 0000000..7692afc
--- /dev/null
+++ b/bash/assets/urls/smplx.txt
@@ -0,0 +1 @@
+https://download.is.tue.mpg.de/download.php?domain=smplx&sfile=models_smplx_v1_1.zip
\ No newline at end of file
diff --git a/bash/assets/urls/splits.txt b/bash/assets/urls/splits.txt
new file mode 100644
index 0000000..3f39b21
--- /dev/null
+++ b/bash/assets/urls/splits.txt
@@ -0,0 +1 @@
+https://download.is.tue.mpg.de/download.php?domain=arctic&resume=1&sfile=arctic_release/c7216c3b205186106a1f8326ed7b948f838e4907e69b21c8b3c87bb69d87206e/v1_0/data/splits.zip
\ No newline at end of file
diff --git a/bash/clean_downloads.sh b/bash/clean_downloads.sh
new file mode 100755
index 0000000..61b9a40
--- /dev/null
+++ b/bash/clean_downloads.sh
@@ -0,0 +1 @@
+find downloads unpack render_out outputs -delete # clear dry run data
diff --git a/bash/download_baselines.sh b/bash/download_baselines.sh
new file mode 100755
index 0000000..cb2490f
--- /dev/null
+++ b/bash/download_baselines.sh
@@ -0,0 +1,6 @@
+#!/bin/bash
+set -e
+
+echo "Downloading model weights"
+mkdir -p downloads/
+python scripts_data/download_data.py --url_file ./bash/assets/urls/models.txt --out_folder downloads
diff --git a/bash/download_body_models.sh b/bash/download_body_models.sh
new file mode 100755
index 0000000..c3d1b25
--- /dev/null
+++ b/bash/download_body_models.sh
@@ -0,0 +1,18 @@
+#!/bin/bash
+set -e
+
+echo "Downloading SMPLX"
+mkdir -p downloads
+python scripts_data/download_data.py --url_file ./bash/assets/urls/smplx.txt --out_folder downloads
+unzip downloads/models_smplx_v1_1.zip
+mv models body_models
+
+echo "Downloading MANO"
+python scripts_data/download_data.py --url_file ./bash/assets/urls/mano.txt --out_folder downloads
+
+mkdir -p unpack
+cd downloads
+unzip mano_v1_2.zip
+mv mano_v1_2/models ../body_models/mano
+cd ..
+mv body_models unpack
diff --git a/bash/download_cropped_images.sh b/bash/download_cropped_images.sh
new file mode 100755
index 0000000..30c53fd
--- /dev/null
+++ b/bash/download_cropped_images.sh
@@ -0,0 +1,6 @@
+#!/bin/bash
+set -e
+
+echo "Downloading cropped images"
+mkdir -p downloads/data/cropped_images_zips
+python scripts_data/download_data.py --url_file ./bash/assets/urls/cropped_images.txt --out_folder downloads/data/cropped_images_zips
diff --git a/bash/download_dry_run.sh b/bash/download_dry_run.sh
new file mode 100755
index 0000000..faa25d0
--- /dev/null
+++ b/bash/download_dry_run.sh
@@ -0,0 +1,34 @@
+#!/bin/bash
+set -e
+
+echo "Downloading model weights"
+mkdir -p downloads/
+python scripts_data/download_data.py --url_file ./bash/assets/urls/models.txt --out_folder downloads --dry_run
+
+echo "Downloading smaller files"
+mkdir -p downloads/data
+python scripts_data/download_data.py --url_file ./bash/assets/urls/misc.txt --out_folder downloads/data --dry_run
+
+echo "Downloading cropped images"
+mkdir -p downloads/data/cropped_images_zips
+python scripts_data/download_data.py --url_file ./bash/assets/urls/cropped_images.txt --out_folder downloads/data/cropped_images_zips --dry_run
+
+echo "Downloading full resolution images"
+mkdir -p downloads/data/images_zips
+python scripts_data/download_data.py --url_file ./bash/assets/urls/images.txt --out_folder downloads/data/images_zips --dry_run
+
+echo "Downloading SMPLX"
+mkdir -p downloads
+python scripts_data/download_data.py --url_file ./bash/assets/urls/smplx.txt --out_folder downloads
+unzip downloads/models_smplx_v1_1.zip
+mv models body_models
+
+echo "Downloading MANO"
+python scripts_data/download_data.py --url_file ./bash/assets/urls/mano.txt --out_folder downloads
+
+mkdir -p unpack
+cd downloads
+unzip mano_v1_2.zip
+mv mano_v1_2/models ../body_models/mano
+cd ..
+mv body_models unpack
diff --git a/bash/download_feat.sh b/bash/download_feat.sh
new file mode 100755
index 0000000..85d3d8d
--- /dev/null
+++ b/bash/download_feat.sh
@@ -0,0 +1,6 @@
+#!/bin/bash
+set -e
+
+echo "Downloading feature files"
+mkdir -p downloads/data
+python scripts_data/download_data.py --url_file ./bash/assets/urls/feat.txt --out_folder downloads/data
\ No newline at end of file
diff --git a/bash/download_images.sh b/bash/download_images.sh
new file mode 100755
index 0000000..5d286a9
--- /dev/null
+++ b/bash/download_images.sh
@@ -0,0 +1,6 @@
+#!/bin/bash
+set -e
+
+echo "Downloading full resolution images"
+mkdir -p downloads/data/images_zips
+python scripts_data/download_data.py --url_file ./bash/assets/urls/images.txt --out_folder downloads/data/images_zips
diff --git a/bash/download_misc.sh b/bash/download_misc.sh
new file mode 100755
index 0000000..ef32b15
--- /dev/null
+++ b/bash/download_misc.sh
@@ -0,0 +1,6 @@
+#!/bin/bash
+set -e
+
+echo "Downloading smaller files"
+mkdir -p downloads/data
+python scripts_data/download_data.py --url_file ./bash/assets/urls/misc.txt --out_folder downloads/data
diff --git a/bash/download_splits.sh b/bash/download_splits.sh
new file mode 100755
index 0000000..caf10d7
--- /dev/null
+++ b/bash/download_splits.sh
@@ -0,0 +1,6 @@
+#!/bin/bash
+set -e
+
+echo "Downloading preprocessed splits"
+mkdir -p downloads/data
+python scripts_data/download_data.py --url_file ./bash/assets/urls/splits.txt --out_folder downloads/data
\ No newline at end of file
diff --git a/common/.gitignore b/common/.gitignore
new file mode 100644
index 0000000..bee8a64
--- /dev/null
+++ b/common/.gitignore
@@ -0,0 +1 @@
+__pycache__
diff --git a/common/__init__.py b/common/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/common/abstract_pl.py b/common/abstract_pl.py
new file mode 100644
index 0000000..a09eca1
--- /dev/null
+++ b/common/abstract_pl.py
@@ -0,0 +1,180 @@
+import time
+
+import numpy as np
+import pytorch_lightning as pl
+import torch
+import torch.optim as optim
+
+import common.pl_utils as pl_utils
+from common.comet_utils import log_dict
+from common.pl_utils import avg_losses_cpu, push_checkpoint_metric
+from common.xdict import xdict
+
+
+class AbstractPL(pl.LightningModule):
+ def __init__(
+ self,
+ args,
+ push_images_fn,
+ tracked_metric,
+ metric_init_val,
+ high_loss_val,
+ ):
+ super().__init__()
+ self.experiment = args.experiment
+ self.args = args
+ self.tracked_metric = tracked_metric
+ self.metric_init_val = metric_init_val
+
+ self.started_training = False
+ self.loss_dict_vec = []
+ self.push_images = push_images_fn
+ self.vis_train_batches = []
+ self.vis_val_batches = []
+ self.high_loss_val = high_loss_val
+ self.max_vis_examples = 20
+ self.val_step_outputs = []
+ self.test_step_outputs = []
+
+ def set_training_flags(self):
+ self.started_training = True
+
+ def load_from_ckpt(self, ckpt_path):
+ sd = torch.load(ckpt_path)["state_dict"]
+ print(self.load_state_dict(sd))
+
+ def training_step(self, batch, batch_idx):
+ self.set_training_flags()
+ if len(self.vis_train_batches) < self.num_vis_train:
+ self.vis_train_batches.append(batch)
+ inputs, targets, meta_info = batch
+
+ out = self.forward(inputs, targets, meta_info, "train")
+ loss = out["loss"]
+
+ loss = {k: loss[k].mean().view(-1) for k in loss}
+ total_loss = sum(loss[k] for k in loss)
+
+ loss_dict = {"total_loss": total_loss, "loss": total_loss}
+ loss_dict.update(loss)
+
+ for k, v in loss_dict.items():
+ if k != "loss":
+ loss_dict[k] = v.detach()
+
+ log_every = self.args.log_every
+ self.loss_dict_vec.append(loss_dict)
+ self.loss_dict_vec = self.loss_dict_vec[len(self.loss_dict_vec) - log_every :]
+ if batch_idx % log_every == 0 and batch_idx != 0:
+ running_loss_dict = avg_losses_cpu(self.loss_dict_vec)
+ running_loss_dict = xdict(running_loss_dict).postfix("__train")
+ log_dict(self.experiment, running_loss_dict, step=self.global_step)
+ return loss_dict
+
+ def on_train_epoch_end(self):
+ self.experiment.log_epoch_end(self.current_epoch)
+
+ def validation_step(self, batch, batch_idx):
+ if len(self.vis_val_batches) < self.num_vis_val:
+ self.vis_val_batches.append(batch)
+ out = self.inference_step(batch, batch_idx)
+ self.val_step_outputs.append(out)
+ return out
+
+ def on_validation_epoch_end(self):
+ outputs = self.val_step_outputs
+ outputs = self.inference_epoch_end(outputs, postfix="__val")
+ self.log("loss__val", outputs["loss__val"])
+ self.val_step_outputs.clear() # free memory
+ return outputs
+
+ def inference_step(self, batch, batch_idx):
+ if self.training:
+ self.eval()
+ with torch.no_grad():
+ inputs, targets, meta_info = batch
+ out, loss = self.forward(inputs, targets, meta_info, "test")
+ return {"out_dict": out, "loss": loss}
+
+ def inference_epoch_end(self, out_list, postfix):
+ if not self.started_training:
+ self.started_training = True
+ result = push_checkpoint_metric(self.tracked_metric, self.metric_init_val)
+ return result
+
+ # unpack
+ outputs, loss_dict = pl_utils.reform_outputs(out_list)
+
+ if "test" in postfix:
+ per_img_metric_dict = {}
+ for k, v in outputs.items():
+ if "metric." in k:
+ per_img_metric_dict[k] = np.array(v)
+
+ metric_dict = {}
+ for k, v in outputs.items():
+ if "metric." in k:
+ metric_dict[k] = np.nanmean(np.array(v))
+
+ loss_metric_dict = {}
+ loss_metric_dict.update(metric_dict)
+ loss_metric_dict.update(loss_dict)
+ loss_metric_dict = xdict(loss_metric_dict).postfix(postfix)
+
+ log_dict(
+ self.experiment,
+ loss_metric_dict,
+ step=self.global_step,
+ )
+
+ if self.args.interface_p is None and "test" not in postfix:
+ result = push_checkpoint_metric(
+ self.tracked_metric, loss_metric_dict[self.tracked_metric]
+ )
+ self.log(self.tracked_metric, result[self.tracked_metric])
+
+ if not self.args.no_vis:
+ print("Rendering train images")
+ self.visualize_batches(self.vis_train_batches, "_train", False)
+ print("Rendering val images")
+ self.visualize_batches(self.vis_val_batches, "_val", False)
+
+ if "test" in postfix:
+ return (
+ outputs,
+ {"per_img_metric_dict": per_img_metric_dict},
+ metric_dict,
+ )
+ return loss_metric_dict
+
+ def configure_optimizers(self):
+ optimizer = torch.optim.Adam(self.parameters(), lr=self.args.lr)
+ scheduler = optim.lr_scheduler.MultiStepLR(
+ optimizer, self.args.lr_dec_epoch, gamma=self.args.lr_decay, verbose=True
+ )
+ return [optimizer], [scheduler]
+
+ def visualize_batches(self, batches, postfix, no_tqdm=True):
+ im_list = []
+ if self.training:
+ self.eval()
+
+ tic = time.time()
+ for batch_idx, batch in enumerate(batches):
+ with torch.no_grad():
+ inputs, targets, meta_info = batch
+ vis_dict = self.forward(inputs, targets, meta_info, "vis")
+ for vis_fn in self.vis_fns:
+ curr_im_list = vis_fn(
+ vis_dict,
+ self.max_vis_examples,
+ self.renderer,
+ postfix=postfix,
+ no_tqdm=no_tqdm,
+ )
+ im_list += curr_im_list
+ print("Rendering: %d/%d" % (batch_idx + 1, len(batches)))
+
+ self.push_images(self.experiment, im_list, self.global_step)
+ print("Done rendering (%.1fs)" % (time.time() - tic))
+ return im_list
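+
+
+# Hedged usage sketch (not part of the original class): AbstractPL is meant to
+# be subclassed; forward() and the attributes referenced above (num_vis_train,
+# num_vis_val, vis_fns, renderer) must be provided by the subclass. The names
+# and values below are hypothetical.
+#
+# class MyModelPL(AbstractPL):
+#     def __init__(self, args, push_images_fn):
+#         super().__init__(args, push_images_fn, "loss__val", 1e4, 1e4)
+#         self.num_vis_train = 1  # training batches kept for visualization
+#         self.num_vis_val = 1    # validation batches kept for visualization
+#         self.vis_fns = []       # rendering callbacks used in visualize_batches
+#         self.renderer = None    # renderer handed to the callbacks
+#
+#     def forward(self, inputs, targets, meta_info, mode):
+#         ...  # {"loss": {...}} in "train" mode; (out, loss) in "test" mode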
diff --git a/common/args_utils.py b/common/args_utils.py
new file mode 100644
index 0000000..2ddba2d
--- /dev/null
+++ b/common/args_utils.py
@@ -0,0 +1,15 @@
+from loguru import logger
+
+
+def set_default_params(args, default_args):
+    # If a value is not set via argparse, use the default value;
+    # otherwise, keep the value given on the command line.
+ custom_dict = {}
+ for key, val in args.items():
+ if val is None:
+ args[key] = default_args[key]
+ else:
+ custom_dict[key] = val
+
+ logger.info(f"Using custom values: {custom_dict}")
+ return args
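+
+
+# A minimal sketch of the merge semantics above, with hypothetical keys: values
+# left as None by argparse fall back to the defaults; explicit values win.
+if __name__ == "__main__":
+    defaults = {"lr": 1e-5, "batch_size": 64}
+    merged = set_default_params({"lr": None, "batch_size": 16}, defaults)
+    assert merged == {"lr": 1e-5, "batch_size": 16}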
diff --git a/common/body_models.py b/common/body_models.py
new file mode 100644
index 0000000..f444da4
--- /dev/null
+++ b/common/body_models.py
@@ -0,0 +1,146 @@
+import json
+
+import numpy as np
+import torch
+from smplx import MANO
+
+from common.mesh import Mesh
+
+
+class MANODecimator:
+ def __init__(self):
+ data = np.load(
+ "./data/arctic_data/data/meta/mano_decimator_195.npy", allow_pickle=True
+ ).item()
+ mydata = {}
+ for key, val in data.items():
+ # only consider decimation matrix so far
+ if "D" in key:
+ mydata[key] = torch.FloatTensor(val)
+ self.data = mydata
+
+ def downsample(self, verts, is_right):
+ dev = verts.device
+ flag = "right" if is_right else "left"
+ if self.data[f"D_{flag}"].device != dev:
+ self.data[f"D_{flag}"] = self.data[f"D_{flag}"].to(dev)
+ D = self.data[f"D_{flag}"]
+ batch_size = verts.shape[0]
+ D_batch = D[None, :, :].repeat(batch_size, 1, 1)
+ verts_sub = torch.bmm(D_batch, verts)
+ return verts_sub
+
+
+MODEL_DIR = "./data/body_models/mano"
+
+SEAL_FACES_R = [
+ [120, 108, 778],
+ [108, 79, 778],
+ [79, 78, 778],
+ [78, 121, 778],
+ [121, 214, 778],
+ [214, 215, 778],
+ [215, 279, 778],
+ [279, 239, 778],
+ [239, 234, 778],
+ [234, 92, 778],
+ [92, 38, 778],
+ [38, 122, 778],
+ [122, 118, 778],
+ [118, 117, 778],
+ [117, 119, 778],
+ [119, 120, 778],
+]
+
+# vertex ids around the ring of the wrist
+CIRCLE_V_ID = np.array(
+ [108, 79, 78, 121, 214, 215, 279, 239, 234, 92, 38, 122, 118, 117, 119, 120],
+ dtype=np.int64,
+)
+
+
+def seal_mano_mesh(v3d, faces, is_rhand):
+ # v3d: B, 778, 3
+ # faces: 1538, 3
+ # output: v3d(B, 779, 3); faces (1554, 3)
+
+ seal_faces = torch.LongTensor(np.array(SEAL_FACES_R)).to(faces.device)
+ if not is_rhand:
+ # left hand
+ seal_faces = seal_faces[:, np.array([1, 0, 2])] # invert face normal
+ centers = v3d[:, CIRCLE_V_ID].mean(dim=1)[:, None, :]
+ sealed_vertices = torch.cat((v3d, centers), dim=1)
+ faces = torch.cat((faces, seal_faces), dim=0)
+ return sealed_vertices, faces
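+
+
+# A quick shape check for the sealing above (dummy vertices; real inputs come
+# from the MANO layers below): 778 verts / 1538 faces become 779 / 1554.
+def _demo_seal_mano_mesh():
+    v3d = torch.zeros(2, 778, 3)
+    faces = torch.zeros(1538, 3, dtype=torch.long)
+    v3d_sealed, faces_sealed = seal_mano_mesh(v3d, faces, is_rhand=True)
+    assert v3d_sealed.shape == (2, 779, 3)
+    assert faces_sealed.shape == (1554, 3)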
+
+
+def build_layers(device=None):
+ from common.object_tensors import ObjectTensors
+
+ layers = {
+ "right": build_mano_aa(True),
+ "left": build_mano_aa(False),
+ "object_tensors": ObjectTensors(),
+ }
+
+ if device is not None:
+ layers["right"] = layers["right"].to(device)
+ layers["left"] = layers["left"].to(device)
+ layers["object_tensors"].to(device)
+ return layers
+
+
+MANO_MODEL_DIR = "./data/body_models/mano"
+SMPLX_MODEL_P = {
+ "male": "./data/body_models/smplx/SMPLX_MALE.npz",
+ "female": "./data/body_models/smplx/SMPLX_FEMALE.npz",
+ "neutral": "./data/body_models/smplx/SMPLX_NEUTRAL.npz",
+}
+
+
+def build_smplx(batch_size, gender, vtemplate):
+ import smplx
+
+ subj_m = smplx.create(
+ model_path=SMPLX_MODEL_P[gender],
+ model_type="smplx",
+ gender=gender,
+ num_pca_comps=45,
+ v_template=vtemplate,
+ flat_hand_mean=True,
+ use_pca=False,
+ batch_size=batch_size,
+ # batch_size=320,
+ )
+ return subj_m
+
+
+def build_subject_smplx(batch_size, subject_id):
+ with open("./data/arctic_data/data/meta/misc.json", "r") as f:
+ misc = json.load(f)
+ vtemplate_p = f"./data/arctic_data/data/meta/subject_vtemplates/{subject_id}.obj"
+ mesh = Mesh(filename=vtemplate_p)
+ vtemplate = mesh.v
+ gender = misc[subject_id]["gender"]
+ return build_smplx(batch_size, gender, vtemplate)
+
+
+def build_mano_aa(is_rhand, create_transl=False, flat_hand=False):
+ return MANO(
+ MODEL_DIR,
+ create_transl=create_transl,
+ use_pca=False,
+ flat_hand_mean=flat_hand,
+ is_rhand=is_rhand,
+ )
+
+
+def construct_layers(dev):
+ mano_layers = {
+ "right": build_mano_aa(True, create_transl=True, flat_hand=False),
+ "left": build_mano_aa(False, create_transl=True, flat_hand=False),
+ "smplx": build_smplx(1, "neutral", None),
+ }
+ for layer in mano_layers.values():
+ layer.to(dev)
+ return mano_layers
diff --git a/common/camera.py b/common/camera.py
new file mode 100644
index 0000000..1d105c8
--- /dev/null
+++ b/common/camera.py
@@ -0,0 +1,474 @@
+"""
+Useful geometric operations, e.g., perspective projection and a differentiable
+Rodrigues formula. Parts of the code are taken from
+https://github.com/MandyMo/pytorch_HMR
+"""
+import numpy as np
+import torch
+
+
+def perspective_to_weak_perspective_torch(
+ perspective_camera,
+ focal_length,
+ img_res,
+):
+    # Convert a perspective camera translation [tx, ty, tz] to a weak-perspective
+    # camera [s, tx, ty] given the bounding box size
+ # if isinstance(focal_length, torch.Tensor):
+ # focal_length = focal_length[:, 0]
+
+ tx = perspective_camera[:, 0]
+ ty = perspective_camera[:, 1]
+ tz = perspective_camera[:, 2]
+
+ weak_perspective_camera = torch.stack(
+ [2 * focal_length / (img_res * tz + 1e-9), tx, ty],
+ dim=-1,
+ )
+ return weak_perspective_camera
+
+
+def convert_perspective_to_weak_perspective(
+ perspective_camera,
+ focal_length,
+ img_res,
+):
+    # Convert a perspective camera translation [tx, ty, tz] to a weak-perspective
+    # camera [s, tx, ty] given the bounding box size
+ # if isinstance(focal_length, torch.Tensor):
+ # focal_length = focal_length[:, 0]
+
+ weak_perspective_camera = torch.stack(
+ [
+ 2 * focal_length / (img_res * perspective_camera[:, 2] + 1e-9),
+ perspective_camera[:, 0],
+ perspective_camera[:, 1],
+ ],
+ dim=-1,
+ )
+ return weak_perspective_camera
+
+
+def convert_weak_perspective_to_perspective(
+ weak_perspective_camera, focal_length, img_res
+):
+ # Convert Weak Perspective Camera [s, tx, ty] to camera translation [tx, ty, tz]
+ # in 3D given the bounding box size
+ # This camera translation can be used in a full perspective projection
+ # if isinstance(focal_length, torch.Tensor):
+ # focal_length = focal_length[:, 0]
+
+ perspective_camera = torch.stack(
+ [
+ weak_perspective_camera[:, 1],
+ weak_perspective_camera[:, 2],
+ 2 * focal_length / (img_res * weak_perspective_camera[:, 0] + 1e-9),
+ ],
+ dim=-1,
+ )
+ return perspective_camera
+
+
+def get_default_cam_t(f, img_res):
+ cam = torch.tensor([[5.0, 0.0, 0.0]])
+ return convert_weak_perspective_to_perspective(cam, f, img_res)
+
+
+def estimate_translation_np(S, joints_2d, joints_conf, focal_length, img_size):
+ """Find camera translation that brings 3D joints S closest to 2D the corresponding joints_2d.
+ Input:
+ S: (25, 3) 3D joint locations
+ joints: (25, 3) 2D joint locations and confidence
+ Returns:
+ (3,) camera translation vector
+ """
+ num_joints = S.shape[0]
+ # focal length
+
+ f = np.array([focal_length[0], focal_length[1]])
+ # optical center
+ center = np.array([img_size[1] / 2.0, img_size[0] / 2.0])
+
+ # transformations
+ Z = np.reshape(np.tile(S[:, 2], (2, 1)).T, -1)
+ XY = np.reshape(S[:, 0:2], -1)
+ O = np.tile(center, num_joints)
+ F = np.tile(f, num_joints)
+ weight2 = np.reshape(np.tile(np.sqrt(joints_conf), (2, 1)).T, -1)
+
+ # least squares
+ Q = np.array(
+ [
+ F * np.tile(np.array([1, 0]), num_joints),
+ F * np.tile(np.array([0, 1]), num_joints),
+ O - np.reshape(joints_2d, -1),
+ ]
+ ).T
+ c = (np.reshape(joints_2d, -1) - O) * Z - F * XY
+
+ # weighted least squares
+ W = np.diagflat(weight2)
+ Q = np.dot(W, Q)
+ c = np.dot(W, c)
+
+ # square matrix
+ A = np.dot(Q.T, Q)
+ b = np.dot(Q.T, c)
+
+ # solution
+ trans = np.linalg.solve(A, b)
+
+ return trans
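+
+
+# A hedged sanity check for the weighted least-squares solve above: project
+# random 3D points with a known translation and recover it (noiseless data, so
+# the solve is exact up to numerical precision; values are illustrative).
+def _demo_estimate_translation_np():
+    rng = np.random.RandomState(0)
+    S = rng.randn(25, 3)
+    gt_trans = np.array([0.05, -0.02, 3.0])
+    focal_length, img_size = (1000.0, 1000.0), (224, 224)
+    X = S + gt_trans  # points in the camera frame
+    joints_2d = np.stack(
+        [
+            focal_length[0] * X[:, 0] / X[:, 2] + img_size[1] / 2.0,
+            focal_length[1] * X[:, 1] / X[:, 2] + img_size[0] / 2.0,
+        ],
+        axis=1,
+    )
+    trans = estimate_translation_np(S, joints_2d, np.ones(25), focal_length, img_size)
+    assert np.allclose(trans, gt_trans, atol=1e-4)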
+
+
+def estimate_translation(
+ S,
+ joints_2d,
+ focal_length,
+ img_size,
+ use_all_joints=False,
+ rotation=None,
+ pad_2d=False,
+):
+ """Find camera translation that brings 3D joints S closest to 2D the corresponding joints_2d.
+ Input:
+ S: (B, 49, 3) 3D joint locations
+ joints: (B, 49, 3) 2D joint locations and confidence
+ Returns:
+ (B, 3) camera translation vectors
+ """
+ if pad_2d:
+ batch, num_pts = joints_2d.shape[:2]
+ joints_2d_pad = torch.ones((batch, num_pts, 3))
+ joints_2d_pad[:, :, :2] = joints_2d
+ joints_2d_pad = joints_2d_pad.to(joints_2d.device)
+ joints_2d = joints_2d_pad
+
+ device = S.device
+
+ if rotation is not None:
+ S = torch.einsum("bij,bkj->bki", rotation, S)
+
+ # Use only joints 25:49 (GT joints)
+ if use_all_joints:
+ S = S.cpu().numpy()
+ joints_2d = joints_2d.cpu().numpy()
+ else:
+ S = S[:, 25:, :].cpu().numpy()
+ joints_2d = joints_2d[:, 25:, :].cpu().numpy()
+
+ joints_conf = joints_2d[:, :, -1]
+ joints_2d = joints_2d[:, :, :-1]
+ trans = np.zeros((S.shape[0], 3), dtype=np.float32)
+ # Find the translation for each example in the batch
+ for i in range(S.shape[0]):
+ S_i = S[i]
+ joints_i = joints_2d[i]
+ conf_i = joints_conf[i]
+ trans[i] = estimate_translation_np(
+ S_i, joints_i, conf_i, focal_length=focal_length, img_size=img_size
+ )
+ return torch.from_numpy(trans).to(device)
+
+
+def estimate_translation_cam(
+ S, joints_2d, focal_length, img_size, use_all_joints=False, rotation=None
+):
+ """Find camera translation that brings 3D joints S closest to 2D the corresponding joints_2d.
+ Input:
+ S: (B, 49, 3) 3D joint locations
+ joints: (B, 49, 3) 2D joint locations and confidence
+ Returns:
+ (B, 3) camera translation vectors
+ """
+
+ def estimate_translation_np(S, joints_2d, joints_conf, focal_length, img_size):
+ """Find camera translation that brings 3D joints S closest to 2D the corresponding joints_2d.
+ Input:
+ S: (25, 3) 3D joint locations
+ joints: (25, 3) 2D joint locations and confidence
+ Returns:
+ (3,) camera translation vector
+ """
+
+ num_joints = S.shape[0]
+ # focal length
+ f = np.array([focal_length[0], focal_length[1]])
+ # optical center
+ center = np.array([img_size[0] / 2.0, img_size[1] / 2.0])
+
+ # transformations
+ Z = np.reshape(np.tile(S[:, 2], (2, 1)).T, -1)
+ XY = np.reshape(S[:, 0:2], -1)
+ O = np.tile(center, num_joints)
+ F = np.tile(f, num_joints)
+ weight2 = np.reshape(np.tile(np.sqrt(joints_conf), (2, 1)).T, -1)
+
+ # least squares
+ Q = np.array(
+ [
+ F * np.tile(np.array([1, 0]), num_joints),
+ F * np.tile(np.array([0, 1]), num_joints),
+ O - np.reshape(joints_2d, -1),
+ ]
+ ).T
+ c = (np.reshape(joints_2d, -1) - O) * Z - F * XY
+
+ # weighted least squares
+ W = np.diagflat(weight2)
+ Q = np.dot(W, Q)
+ c = np.dot(W, c)
+
+ # square matrix
+ A = np.dot(Q.T, Q)
+ b = np.dot(Q.T, c)
+
+ # solution
+ trans = np.linalg.solve(A, b)
+
+ return trans
+
+ device = S.device
+
+ if rotation is not None:
+ S = torch.einsum("bij,bkj->bki", rotation, S)
+
+ # Use only joints 25:49 (GT joints)
+ if use_all_joints:
+ S = S.cpu().numpy()
+ joints_2d = joints_2d.cpu().numpy()
+ else:
+ S = S[:, 25:, :].cpu().numpy()
+ joints_2d = joints_2d[:, 25:, :].cpu().numpy()
+
+ joints_conf = joints_2d[:, :, -1]
+ joints_2d = joints_2d[:, :, :-1]
+ trans = np.zeros((S.shape[0], 3), dtype=np.float32)
+ # Find the translation for each example in the batch
+ for i in range(S.shape[0]):
+ S_i = S[i]
+ joints_i = joints_2d[i]
+ conf_i = joints_conf[i]
+ trans[i] = estimate_translation_np(
+ S_i, joints_i, conf_i, focal_length=focal_length, img_size=img_size
+ )
+ return torch.from_numpy(trans).to(device)
+
+
+def get_coord_maps(size=56):
+ xx_ones = torch.ones([1, size], dtype=torch.int32)
+ xx_ones = xx_ones.unsqueeze(-1)
+
+ xx_range = torch.arange(size, dtype=torch.int32).unsqueeze(0)
+ xx_range = xx_range.unsqueeze(1)
+
+ xx_channel = torch.matmul(xx_ones, xx_range)
+ xx_channel = xx_channel.unsqueeze(-1)
+
+ yy_ones = torch.ones([1, size], dtype=torch.int32)
+ yy_ones = yy_ones.unsqueeze(1)
+
+ yy_range = torch.arange(size, dtype=torch.int32).unsqueeze(0)
+ yy_range = yy_range.unsqueeze(-1)
+
+ yy_channel = torch.matmul(yy_range, yy_ones)
+ yy_channel = yy_channel.unsqueeze(-1)
+
+ xx_channel = xx_channel.permute(0, 3, 1, 2)
+ yy_channel = yy_channel.permute(0, 3, 1, 2)
+
+ xx_channel = xx_channel.float() / (size - 1)
+ yy_channel = yy_channel.float() / (size - 1)
+
+ xx_channel = xx_channel * 2 - 1
+ yy_channel = yy_channel * 2 - 1
+
+ out = torch.cat([xx_channel, yy_channel], dim=1)
+ return out
+
+
+def look_at(eye, at=np.array([0, 0, 0]), up=np.array([0, 0, 1]), eps=1e-5):
+ at = at.astype(float).reshape(1, 3)
+ up = up.astype(float).reshape(1, 3)
+
+ eye = eye.reshape(-1, 3)
+ up = up.repeat(eye.shape[0] // up.shape[0], axis=0)
+ eps = np.array([eps]).reshape(1, 1).repeat(up.shape[0], axis=0)
+
+ z_axis = eye - at
+ z_axis /= np.max(np.stack([np.linalg.norm(z_axis, axis=1, keepdims=True), eps]))
+
+ x_axis = np.cross(up, z_axis)
+ x_axis /= np.max(np.stack([np.linalg.norm(x_axis, axis=1, keepdims=True), eps]))
+
+ y_axis = np.cross(z_axis, x_axis)
+ y_axis /= np.max(np.stack([np.linalg.norm(y_axis, axis=1, keepdims=True), eps]))
+
+ r_mat = np.concatenate(
+ (x_axis.reshape(-1, 3, 1), y_axis.reshape(-1, 3, 1), z_axis.reshape(-1, 3, 1)),
+ axis=2,
+ )
+
+ return r_mat
+
+
+def to_sphere(u, v):
+ theta = 2 * np.pi * u
+ phi = np.arccos(1 - 2 * v)
+ cx = np.sin(phi) * np.cos(theta)
+ cy = np.sin(phi) * np.sin(theta)
+ cz = np.cos(phi)
+ s = np.stack([cx, cy, cz])
+ return s
+
+
+def sample_on_sphere(range_u=(0, 1), range_v=(0, 1)):
+ u = np.random.uniform(*range_u)
+ v = np.random.uniform(*range_v)
+ return to_sphere(u, v)
+
+
+def sample_pose_on_sphere(range_v=(0, 1), range_u=(0, 1), radius=1, up=[0, 1, 0]):
+ # sample location on unit sphere
+ loc = sample_on_sphere(range_u, range_v)
+
+ # sample radius if necessary
+ if isinstance(radius, tuple):
+ radius = np.random.uniform(*radius)
+
+ loc = loc * radius
+ R = look_at(loc, up=np.array(up))[0]
+
+ RT = np.concatenate([R, loc.reshape(3, 1)], axis=1)
+ RT = torch.Tensor(RT.astype(np.float32))
+ return RT
+
+
+def rectify_pose(camera_r, body_aa, rotate_x=False):
+    # NOTE: batch_rodrigues / batch_rot2aa are not defined in this file; they
+    # are expected to come from the repo's rotation utilities.
+    body_r = batch_rodrigues(body_aa).reshape(-1, 3, 3)
+
+ if rotate_x:
+ rotate_x = torch.tensor([[[1.0, 0.0, 0.0], [0.0, -1.0, 0.0], [0.0, 0.0, -1.0]]])
+ body_r = body_r @ rotate_x
+
+ final_r = camera_r @ body_r
+ body_aa = batch_rot2aa(final_r)
+ return body_aa
+
+
+def estimate_translation_k_np(S, joints_2d, joints_conf, K):
+ """Find camera translation that brings 3D joints S closest to 2D the corresponding joints_2d.
+ Input:
+ S: (25, 3) 3D joint locations
+ joints: (25, 3) 2D joint locations and confidence
+ Returns:
+ (3,) camera translation vector
+ """
+ num_joints = S.shape[0]
+ # focal length
+
+ focal = np.array([K[0, 0], K[1, 1]])
+ # optical center
+ center = np.array([K[0, 2], K[1, 2]])
+
+ # transformations
+ Z = np.reshape(np.tile(S[:, 2], (2, 1)).T, -1)
+ XY = np.reshape(S[:, 0:2], -1)
+ O = np.tile(center, num_joints)
+ F = np.tile(focal, num_joints)
+ weight2 = np.reshape(np.tile(np.sqrt(joints_conf), (2, 1)).T, -1)
+
+ # least squares
+ Q = np.array(
+ [
+ F * np.tile(np.array([1, 0]), num_joints),
+ F * np.tile(np.array([0, 1]), num_joints),
+ O - np.reshape(joints_2d, -1),
+ ]
+ ).T
+ c = (np.reshape(joints_2d, -1) - O) * Z - F * XY
+
+ # weighted least squares
+ W = np.diagflat(weight2)
+ Q = np.dot(W, Q)
+ c = np.dot(W, c)
+
+ # square matrix
+ A = np.dot(Q.T, Q)
+ b = np.dot(Q.T, c)
+
+ # solution
+ trans = np.linalg.solve(A, b)
+
+ return trans
+
+
+def estimate_translation_k(
+ S,
+ joints_2d,
+ K,
+ use_all_joints=False,
+ rotation=None,
+ pad_2d=False,
+):
+ """Find camera translation that brings 3D joints S closest to 2D the corresponding joints_2d.
+ Input:
+ S: (B, 49, 3) 3D joint locations
+ joints: (B, 49, 3) 2D joint locations and confidence
+ Returns:
+ (B, 3) camera translation vectors
+ """
+ if pad_2d:
+ batch, num_pts = joints_2d.shape[:2]
+ joints_2d_pad = torch.ones((batch, num_pts, 3))
+ joints_2d_pad[:, :, :2] = joints_2d
+ joints_2d_pad = joints_2d_pad.to(joints_2d.device)
+ joints_2d = joints_2d_pad
+
+ device = S.device
+
+ if rotation is not None:
+ S = torch.einsum("bij,bkj->bki", rotation, S)
+
+ # Use only joints 25:49 (GT joints)
+ if use_all_joints:
+ S = S.cpu().numpy()
+ joints_2d = joints_2d.cpu().numpy()
+ else:
+ S = S[:, 25:, :].cpu().numpy()
+ joints_2d = joints_2d[:, 25:, :].cpu().numpy()
+
+ joints_conf = joints_2d[:, :, -1]
+ joints_2d = joints_2d[:, :, :-1]
+ trans = np.zeros((S.shape[0], 3), dtype=np.float32)
+ # Find the translation for each example in the batch
+ for i in range(S.shape[0]):
+ S_i = S[i]
+ joints_i = joints_2d[i]
+ conf_i = joints_conf[i]
+ K_i = K[i]
+ trans[i] = estimate_translation_k_np(S_i, joints_i, conf_i, K_i)
+ return torch.from_numpy(trans).to(device)
+
+
+def weak_perspective_to_perspective_torch(
+ weak_perspective_camera, focal_length, img_res, min_s
+):
+ # Convert Weak Perspective Camera [s, tx, ty] to camera translation [tx, ty, tz]
+ # in 3D given the bounding box size
+ # This camera translation can be used in a full perspective projection
+ s = weak_perspective_camera[:, 0]
+ s = torch.clamp(s, min_s)
+ tx = weak_perspective_camera[:, 1]
+ ty = weak_perspective_camera[:, 2]
+ perspective_camera = torch.stack(
+ [
+ tx,
+ ty,
+ 2 * focal_length / (img_res * s + 1e-9),
+ ],
+ dim=-1,
+ )
+ return perspective_camera
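+
+
+# A hedged round-trip check for the two conversions above: mapping a
+# weak-perspective camera [s, tx, ty] to a translation [tx, ty, tz] and back
+# should be (nearly) the identity. Values are illustrative.
+def _demo_weak_perspective_round_trip():
+    focal_length, img_res = 1000.0, 224
+    wp = torch.tensor([[2.5, 0.1, -0.2]])
+    persp = convert_weak_perspective_to_perspective(wp, focal_length, img_res)
+    wp_back = convert_perspective_to_weak_perspective(persp, focal_length, img_res)
+    assert torch.allclose(wp, wp_back, atol=1e-5)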
diff --git a/common/comet_utils.py b/common/comet_utils.py
new file mode 100644
index 0000000..36c7af2
--- /dev/null
+++ b/common/comet_utils.py
@@ -0,0 +1,158 @@
+import json
+import os
+import os.path as op
+import time
+
+import comet_ml
+import numpy as np
+import torch
+from loguru import logger
+from tqdm import tqdm
+
+from src.datasets.dataset_utils import copy_repo_arctic
+
+# folder used for debugging
+DUMMY_EXP = "xxxxxxxxx"
+
+
+def add_paths(args):
+ exp_key = args.exp_key
+ args_p = f"./logs/{exp_key}/args.json"
+ ckpt_p = f"./logs/{exp_key}/checkpoints/last.ckpt"
+ if not op.exists(ckpt_p) or DUMMY_EXP in ckpt_p:
+ ckpt_p = ""
+ if args.resume_ckpt != "":
+ ckpt_p = args.resume_ckpt
+ args.ckpt_p = ckpt_p
+ args.log_dir = f"./logs/{exp_key}"
+
+ if args.infer_ckpt != "":
+ basedir = "/".join(args.infer_ckpt.split("/")[:2])
+ basename = op.basename(args.infer_ckpt).replace(".ckpt", ".params.pt")
+ args.interface_p = op.join(basedir, basename)
+ args.args_p = args_p
+ if args.cluster:
+ args.run_p = op.join(args.log_dir, "condor", "run.sh")
+ args.submit_p = op.join(args.log_dir, "condor", "submit.sub")
+ args.repo_p = op.join(args.log_dir, "repo")
+
+ return args
+
+
+def save_args(args, save_keys):
+ args_save = {}
+ for key, val in args.items():
+ if key in save_keys:
+ args_save[key] = val
+ with open(args.args_p, "w") as f:
+ json.dump(args_save, f, indent=4)
+ logger.info(f"Saved args at {args.args_p}")
+
+
+def create_files(args):
+ os.makedirs(args.log_dir, exist_ok=True)
+ if args.cluster:
+ os.makedirs(op.dirname(args.run_p), exist_ok=True)
+ copy_repo_arctic(args.exp_key)
+
+
+def log_exp_meta(args):
+ tags = [args.method]
+ logger.info(f"Experiment tags: {tags}")
+ args.experiment.set_name(args.exp_key)
+ args.experiment.add_tags(tags)
+ args.experiment.log_parameters(args)
+
+
+def init_experiment(args):
+ if args.resume_ckpt != "":
+ args.exp_key = args.resume_ckpt.split("/")[1]
+ if args.fast_dev_run:
+ args.exp_key = DUMMY_EXP
+ if args.exp_key == "":
+ args.exp_key = generate_exp_key()
+ args = add_paths(args)
+ if op.exists(args.args_p) and args.exp_key not in [DUMMY_EXP]:
+ with open(args.args_p, "r") as f:
+ args_disk = json.load(f)
+ if "comet_key" in args_disk.keys():
+ args.comet_key = args_disk["comet_key"]
+
+ create_files(args)
+
+ project_name = args.project
+ disabled = args.mute
+ comet_url = args["comet_key"] if "comet_key" in args.keys() else None
+
+ api_key = os.environ["COMET_API_KEY"]
+ workspace = os.environ["COMET_WORKSPACE"]
+ if not args.cluster:
+ if comet_url is None:
+ experiment = comet_ml.Experiment(
+ api_key=api_key,
+ workspace=workspace,
+ project_name=project_name,
+ disabled=disabled,
+ display_summary_level=0,
+ )
+ args.comet_key = experiment.get_key()
+ else:
+ experiment = comet_ml.ExistingExperiment(
+ previous_experiment=comet_url,
+ api_key=api_key,
+ project_name=project_name,
+ workspace=workspace,
+ disabled=disabled,
+ display_summary_level=0,
+ )
+
+ device = "cuda" if torch.cuda.is_available() else "cpu"
+ logger.add(
+ os.path.join(args.log_dir, "train.log"),
+ level="INFO",
+ colorize=True,
+ )
+ logger.info(torch.cuda.get_device_properties(device))
+ args.gpu = torch.cuda.get_device_properties(device).name
+ else:
+ experiment = None
+ args.experiment = experiment
+ return experiment, args
+
+
+def log_dict(experiment, metric_dict, step, postfix=None):
+ if experiment is None:
+ return
+ for key, value in metric_dict.items():
+ if postfix is not None:
+ key = key + postfix
+ if isinstance(value, torch.Tensor) and len(value.view(-1)) == 1:
+ value = value.item()
+
+ if isinstance(value, (int, float, np.float32)):
+ experiment.log_metric(key, value, step=step)
+
+
+def generate_exp_key():
+ import random
+
+    rand_bits = random.getrandbits(128)  # avoid shadowing the builtin `hash`
+    key = "%032x" % rand_bits
+    key = key[:9]
+ return key
+
+
+def push_images(experiment, all_im_list, global_step=None, no_tqdm=False, verbose=True):
+ if verbose:
+ print("Pushing PIL images")
+ tic = time.time()
+ iterator = all_im_list if no_tqdm else tqdm(all_im_list)
+ for im in iterator:
+ im_np = np.array(im["im"])
+ if "fig_name" in im.keys():
+ experiment.log_image(im_np, im["fig_name"], step=global_step)
+ else:
+ experiment.log_image(im_np, "unnamed", step=global_step)
+ if verbose:
+ toc = time.time()
+ print("Done pushing PIL images (%.1fs)" % (toc - tic))
diff --git a/common/data_utils.py b/common/data_utils.py
new file mode 100644
index 0000000..fda2514
--- /dev/null
+++ b/common/data_utils.py
@@ -0,0 +1,371 @@
+"""
+This file contains functions that are used to perform data augmentation.
+"""
+import cv2
+import numpy as np
+import torch
+from loguru import logger
+
+
+def get_transform(center, scale, res, rot=0):
+ """Generate transformation matrix."""
+ h = 200 * scale
+ t = np.zeros((3, 3))
+ t[0, 0] = float(res[1]) / h
+ t[1, 1] = float(res[0]) / h
+ t[0, 2] = res[1] * (-float(center[0]) / h + 0.5)
+ t[1, 2] = res[0] * (-float(center[1]) / h + 0.5)
+ t[2, 2] = 1
+ if not rot == 0:
+ rot = -rot # To match direction of rotation from cropping
+ rot_mat = np.zeros((3, 3))
+ rot_rad = rot * np.pi / 180
+ sn, cs = np.sin(rot_rad), np.cos(rot_rad)
+ rot_mat[0, :2] = [cs, -sn]
+ rot_mat[1, :2] = [sn, cs]
+ rot_mat[2, 2] = 1
+ # Need to rotate around center
+ t_mat = np.eye(3)
+ t_mat[0, 2] = -res[1] / 2
+ t_mat[1, 2] = -res[0] / 2
+ t_inv = t_mat.copy()
+ t_inv[:2, 2] *= -1
+ t = np.dot(t_inv, np.dot(rot_mat, np.dot(t_mat, t)))
+ return t
+
+
+def transform(pt, center, scale, res, invert=0, rot=0):
+ """Transform pixel location to different reference."""
+ t = get_transform(center, scale, res, rot=rot)
+ if invert:
+ t = np.linalg.inv(t)
+ new_pt = np.array([pt[0] - 1, pt[1] - 1, 1.0]).T
+ new_pt = np.dot(t, new_pt)
+ return new_pt[:2].astype(int) + 1
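+
+
+# A hedged example of the crop transform: map a pixel into a 224x224 crop and
+# back with invert=1. transform() rounds to integer pixels, so allow a small
+# discrepancy; the numbers are illustrative.
+def _demo_transform_round_trip():
+    center, scale, res = [320, 240], 1.5, [224, 224]
+    pt_crop = transform([300, 250], center, scale, res)
+    pt_back = transform(pt_crop, center, scale, res, invert=1)
+    assert np.abs(np.asarray(pt_back) - np.array([300, 250])).max() <= 2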
+
+
+def rotate_2d(pt_2d, rot_rad):
+ x = pt_2d[0]
+ y = pt_2d[1]
+ sn, cs = np.sin(rot_rad), np.cos(rot_rad)
+ xx = x * cs - y * sn
+ yy = x * sn + y * cs
+ return np.array([xx, yy], dtype=np.float32)
+
+
+def gen_trans_from_patch_cv(
+ c_x, c_y, src_width, src_height, dst_width, dst_height, scale, rot, inv=False
+):
+ # augment size with scale
+ src_w = src_width * scale
+ src_h = src_height * scale
+ src_center = np.array([c_x, c_y], dtype=np.float32)
+
+ # augment rotation
+ rot_rad = np.pi * rot / 180
+ src_downdir = rotate_2d(np.array([0, src_h * 0.5], dtype=np.float32), rot_rad)
+ src_rightdir = rotate_2d(np.array([src_w * 0.5, 0], dtype=np.float32), rot_rad)
+
+ dst_w = dst_width
+ dst_h = dst_height
+ dst_center = np.array([dst_w * 0.5, dst_h * 0.5], dtype=np.float32)
+ dst_downdir = np.array([0, dst_h * 0.5], dtype=np.float32)
+ dst_rightdir = np.array([dst_w * 0.5, 0], dtype=np.float32)
+
+ src = np.zeros((3, 2), dtype=np.float32)
+ src[0, :] = src_center
+ src[1, :] = src_center + src_downdir
+ src[2, :] = src_center + src_rightdir
+
+ dst = np.zeros((3, 2), dtype=np.float32)
+ dst[0, :] = dst_center
+ dst[1, :] = dst_center + dst_downdir
+ dst[2, :] = dst_center + dst_rightdir
+
+ if inv:
+ trans = cv2.getAffineTransform(np.float32(dst), np.float32(src))
+ else:
+ trans = cv2.getAffineTransform(np.float32(src), np.float32(dst))
+
+ trans = trans.astype(np.float32)
+ return trans
+
+
+def generate_patch_image(
+ cvimg,
+ bbox,
+ scale,
+ rot,
+ out_shape,
+ interpl_strategy,
+ gauss_kernel=5,
+ gauss_sigma=8.0,
+):
+ img = cvimg.copy()
+
+ bb_c_x = float(bbox[0])
+ bb_c_y = float(bbox[1])
+ bb_width = float(bbox[2])
+ bb_height = float(bbox[3])
+
+ trans = gen_trans_from_patch_cv(
+ bb_c_x, bb_c_y, bb_width, bb_height, out_shape[1], out_shape[0], scale, rot
+ )
+
+ # anti-aliasing
+ blur = cv2.GaussianBlur(img, (gauss_kernel, gauss_kernel), gauss_sigma)
+ img_patch = cv2.warpAffine(
+ blur, trans, (int(out_shape[1]), int(out_shape[0])), flags=interpl_strategy
+ )
+ img_patch = img_patch.astype(np.float32)
+ inv_trans = gen_trans_from_patch_cv(
+ bb_c_x,
+ bb_c_y,
+ bb_width,
+ bb_height,
+ out_shape[1],
+ out_shape[0],
+ scale,
+ rot,
+ inv=True,
+ )
+
+ return img_patch, trans, inv_trans
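+
+
+# A hedged example: cut a 224x224 patch around a bbox center of a synthetic
+# image. trans/inv_trans are the 2x3 affine maps between image and patch space.
+def _demo_generate_patch_image():
+    img = np.zeros((480, 640, 3), dtype=np.float32)
+    bbox = [320.0, 240.0, 200.0, 200.0]  # (cx, cy, width, height), illustrative
+    patch, trans, inv_trans = generate_patch_image(
+        img, bbox, 1.0, 0.0, (224, 224), cv2.INTER_CUBIC
+    )
+    assert patch.shape == (224, 224, 3)
+    assert trans.shape == (2, 3) and inv_trans.shape == (2, 3)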
+
+
+def augm_params(is_train, flip_prob, noise_factor, rot_factor, scale_factor):
+ """Get augmentation parameters."""
+ flip = 0 # flipping
+ pn = np.ones(3) # per channel pixel-noise
+ rot = 0 # rotation
+ sc = 1 # scaling
+ if is_train:
+        # We flip with probability flip_prob
+ if np.random.uniform() <= flip_prob:
+ flip = 1
+ assert False, "Flipping not supported"
+
+        # Each channel is multiplied with a number
+        # in the range [1 - noise_factor, 1 + noise_factor]
+ pn = np.random.uniform(1 - noise_factor, 1 + noise_factor, 3)
+
+        # The rotation is a number in the range [-2 * rot_factor, 2 * rot_factor]
+ rot = min(
+ 2 * rot_factor,
+ max(
+ -2 * rot_factor,
+ np.random.randn() * rot_factor,
+ ),
+ )
+
+        # The scale is multiplied with a number
+        # in the range [1 - scale_factor, 1 + scale_factor]
+ sc = min(
+ 1 + scale_factor,
+ max(
+ 1 - scale_factor,
+ np.random.randn() * scale_factor + 1,
+ ),
+ )
+ # but it is zero with probability 3/5
+ if np.random.uniform() <= 0.6:
+ rot = 0
+
+ augm_dict = {}
+ augm_dict["flip"] = flip
+ augm_dict["pn"] = pn
+ augm_dict["rot"] = rot
+ augm_dict["sc"] = sc
+ return augm_dict
+
+
+def rgb_processing(is_train, rgb_img, center, bbox_dim, augm_dict, img_res):
+ rot = augm_dict["rot"]
+ sc = augm_dict["sc"]
+ pn = augm_dict["pn"]
+ scale = sc * bbox_dim
+
+ crop_dim = int(scale * 200)
+ # faster cropping!!
+ rgb_img = generate_patch_image(
+ rgb_img,
+ [center[0], center[1], crop_dim, crop_dim],
+ 1.0,
+ rot,
+ [img_res, img_res],
+ cv2.INTER_CUBIC,
+ )[0]
+
+ # in the rgb image we add pixel noise in a channel-wise manner
+ rgb_img[:, :, 0] = np.minimum(255.0, np.maximum(0.0, rgb_img[:, :, 0] * pn[0]))
+ rgb_img[:, :, 1] = np.minimum(255.0, np.maximum(0.0, rgb_img[:, :, 1] * pn[1]))
+ rgb_img[:, :, 2] = np.minimum(255.0, np.maximum(0.0, rgb_img[:, :, 2] * pn[2]))
+ rgb_img = np.transpose(rgb_img.astype("float32"), (2, 0, 1)) / 255.0
+ return rgb_img
+
+
+def transform_kp2d(kp2d, bbox):
+ # bbox: (cx, cy, scale) in the original image space
+ # scale is normalized
+ assert isinstance(kp2d, np.ndarray)
+ assert len(kp2d.shape) == 2
+ cx, cy, scale = bbox
+ s = 200 * scale # to px
+ cap_dim = 1000 # px
+ factor = cap_dim / (1.5 * s)
+ kp2d_cropped = np.copy(kp2d)
+ kp2d_cropped[:, 0] -= cx - 1.5 / 2 * s
+ kp2d_cropped[:, 1] -= cy - 1.5 / 2 * s
+ kp2d_cropped[:, 0] *= factor
+ kp2d_cropped[:, 1] *= factor
+ return kp2d_cropped
+
+
+def j2d_processing(kp, center, bbox_dim, augm_dict, img_res):
+ """Process gt 2D keypoints and apply all augmentation transforms."""
+ scale = augm_dict["sc"] * bbox_dim
+ rot = augm_dict["rot"]
+
+ nparts = kp.shape[0]
+ for i in range(nparts):
+ kp[i, 0:2] = transform(
+ kp[i, 0:2] + 1,
+ center,
+ scale,
+ [img_res, img_res],
+ rot=rot,
+ )
+ # convert to normalized coordinates
+ kp = normalize_kp2d_np(kp, img_res)
+ kp = kp.astype("float32")
+ return kp
+
+
+def pose_processing(pose, augm_dict):
+ """Process SMPL theta parameters and apply all augmentation transforms."""
+ rot = augm_dict["rot"]
+    # rotation of the pose parameters
+    pose[:3] = rot_aa(pose[:3], rot)
+    # flipping is not supported (see augm_params); cast pose (72,) to float32
+    pose = pose.astype("float32")
+ return pose
+
+
+def rot_aa(aa, rot):
+ """Rotate axis angle parameters."""
+ # pose parameters
+ R = np.array(
+ [
+ [np.cos(np.deg2rad(-rot)), -np.sin(np.deg2rad(-rot)), 0],
+ [np.sin(np.deg2rad(-rot)), np.cos(np.deg2rad(-rot)), 0],
+ [0, 0, 1],
+ ]
+ )
+ # find the rotation of the body in camera frame
+ per_rdg, _ = cv2.Rodrigues(aa)
+ # apply the global rotation to the global orientation
+ resrot, _ = cv2.Rodrigues(np.dot(R, per_rdg))
+ aa = (resrot.T)[0]
+ return aa
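+
+
+# A hedged sanity check for rot_aa: rotating a zero pose by 90 degrees in-plane
+# yields a pure z-axis rotation of -pi/2, matching the sign convention of R.
+def _demo_rot_aa():
+    aa = rot_aa(np.zeros(3), 90.0)
+    assert np.allclose(aa, [0.0, 0.0, -np.pi / 2.0], atol=1e-6)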
+
+
+def denormalize_images(images):
+ images = images * torch.tensor([0.229, 0.224, 0.225], device=images.device).reshape(
+ 1, 3, 1, 1
+ )
+ images = images + torch.tensor([0.485, 0.456, 0.406], device=images.device).reshape(
+ 1, 3, 1, 1
+ )
+ return images
+
+
+def read_img(img_fn, dummy_shape):
+ try:
+ cv_img = _read_img(img_fn)
+    except Exception:
+ logger.warning(f"Unable to load {img_fn}")
+ cv_img = np.zeros(dummy_shape, dtype=np.float32)
+ return cv_img, False
+ return cv_img, True
+
+
+def _read_img(img_fn):
+ img = cv2.cvtColor(cv2.imread(img_fn), cv2.COLOR_BGR2RGB)
+ return img.astype(np.float32)
+
+
+def normalize_kp2d_np(kp2d: np.ndarray, img_res):
+ assert kp2d.shape[1] == 3
+ kp2d_normalized = kp2d.copy()
+ kp2d_normalized[:, :2] = 2.0 * kp2d[:, :2] / img_res - 1.0
+ return kp2d_normalized
+
+
+def unnormalize_2d_kp(kp_2d_np: np.ndarray, res):
+ assert kp_2d_np.shape[1] == 3
+ kp_2d = np.copy(kp_2d_np)
+ kp_2d[:, :2] = 0.5 * res * (kp_2d[:, :2] + 1)
+ return kp_2d
+
+
+def normalize_kp2d(kp2d: torch.Tensor, img_res):
+ assert len(kp2d.shape) == 3
+ kp2d_normalized = kp2d.clone()
+ kp2d_normalized[:, :, :2] = 2.0 * kp2d[:, :, :2] / img_res - 1.0
+ return kp2d_normalized
+
+
+def unormalize_kp2d(kp2d_normalized: torch.Tensor, img_res):
+ assert len(kp2d_normalized.shape) == 3
+ assert kp2d_normalized.shape[2] == 2
+ kp2d = kp2d_normalized.clone()
+ kp2d = 0.5 * img_res * (kp2d + 1)
+ return kp2d
+
+
+def get_wp_intrix(fixed_focal: float, img_res):
+    # construct weak perspective on patch
+ camera_center = np.array([img_res // 2, img_res // 2])
+ intrx = torch.zeros([3, 3])
+ intrx[0, 0] = fixed_focal
+ intrx[1, 1] = fixed_focal
+ intrx[2, 2] = 1.0
+ intrx[0, -1] = camera_center[0]
+ intrx[1, -1] = camera_center[1]
+ return intrx
+
+
+def get_aug_intrix(
+ intrx, fixed_focal: float, img_res, use_gt_k, bbox_cx, bbox_cy, scale
+):
+ """
+ This function returns camera intrinsics under scaling.
+ If use_gt_k, the GT K is used, but scaled based on the amount of scaling in the patch.
+ Else, we construct an intrinsic camera with a fixed focal length and fixed camera center.
+ """
+
+ if not use_gt_k:
+        # construct weak perspective on patch
+ intrx = get_wp_intrix(fixed_focal, img_res)
+ else:
+ # update the GT intrinsics (full image space)
+ # such that it matches the scale of the patch
+
+ dim = scale * 200.0 # bbox size
+ k_scale = float(img_res) / dim # resized_dim / bbox_size in full image space
+ """
+ # x1 and y1: top-left corner of bbox
+ intrinsics after data augmentation
+ fx' = k*fx
+ fy' = k*fy
+ cx' = k*(cx - x1)
+ cy' = k*(cy - y1)
+ """
+ intrx[0, 0] *= k_scale # k*fx
+ intrx[1, 1] *= k_scale # k*fy
+ intrx[0, 2] -= bbox_cx - dim / 2.0
+ intrx[1, 2] -= bbox_cy - dim / 2.0
+ intrx[0, 2] *= k_scale
+ intrx[1, 2] *= k_scale
+ return intrx
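+
+
+# A hedged example of the use_gt_k branch above: with a bbox of 112 px
+# (scale * 200) resized to a 224 px patch, k = 2, so the focal lengths double
+# and the principal point is re-expressed in patch coordinates.
+def _demo_get_aug_intrix():
+    K = torch.eye(3)
+    K[0, 0] = K[1, 1] = 1000.0  # fx, fy
+    K[0, 2], K[1, 2] = 320.0, 240.0  # cx, cy
+    K_aug = get_aug_intrix(
+        K, fixed_focal=1000.0, img_res=224, use_gt_k=True,
+        bbox_cx=320.0, bbox_cy=240.0, scale=0.56,
+    )
+    assert K_aug[0, 0] == 2000.0  # k * fx
+    assert abs(K_aug[0, 2] - 112.0) < 1e-4  # bbox centered on (cx, cy)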
diff --git a/common/ld_utils.py b/common/ld_utils.py
new file mode 100644
index 0000000..9ef3b22
--- /dev/null
+++ b/common/ld_utils.py
@@ -0,0 +1,116 @@
+import itertools
+
+import numpy as np
+import torch
+
+
+def sort_dict(disordered):
+ sorted_dict = {k: disordered[k] for k in sorted(disordered)}
+ return sorted_dict
+
+
+def prefix_dict(mydict, prefix):
+ out = {prefix + k: v for k, v in mydict.items()}
+ return out
+
+
+def postfix_dict(mydict, postfix):
+ out = {k + postfix: v for k, v in mydict.items()}
+ return out
+
+
+def unsort(L, sort_idx):
+ assert isinstance(sort_idx, list)
+ assert isinstance(L, list)
+ LL = zip(sort_idx, L)
+ LL = sorted(LL, key=lambda x: x[0])
+ _, L = zip(*LL)
+ return list(L)
+
+
+def cat_dl(out_list, dim, verbose=True, squeeze=True):
+ out = {}
+ for key, val in out_list.items():
+ if isinstance(val[0], torch.Tensor):
+ out[key] = torch.cat(val, dim=dim)
+ if squeeze:
+ out[key] = out[key].squeeze()
+ elif isinstance(val[0], np.ndarray):
+ out[key] = np.concatenate(val, axis=dim)
+ if squeeze:
+ out[key] = np.squeeze(out[key])
+ elif isinstance(val[0], list):
+ out[key] = sum(val, [])
+ else:
+ if verbose:
+ print(f"Ignoring {key} undefined type {type(val[0])}")
+ return out
+
+
+def stack_dl(out_list, dim, verbose=True, squeeze=True):
+ out = {}
+ for key, val in out_list.items():
+ if isinstance(val[0], torch.Tensor):
+ out[key] = torch.stack(val, dim=dim)
+ if squeeze:
+ out[key] = out[key].squeeze()
+ elif isinstance(val[0], np.ndarray):
+ out[key] = np.stack(val, axis=dim)
+ if squeeze:
+ out[key] = np.squeeze(out[key])
+ elif isinstance(val[0], list):
+ out[key] = sum(val, [])
+ else:
+ out[key] = val
+ if verbose:
+ print(f"Processing {key} undefined type {type(val[0])}")
+ return out
+
+
+def add_prefix_postfix(mydict, prefix="", postfix=""):
+ assert isinstance(mydict, dict)
+ return dict((prefix + key + postfix, value) for (key, value) in mydict.items())
+
+
+def ld2dl(LD):
+    """
+    Convert a list of dicts (with the same keys) to a dict of lists.
+    """
+    assert isinstance(LD, list)
+    assert isinstance(LD[0], dict)
+    dict_list = {k: [dic[k] for dic in LD] for k in LD[0]}
+    return dict_list
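+
+
+# A small example of ld2dl (e.g., aggregating per-step dicts):
+def _demo_ld2dl():
+    steps = [{"loss": 0.5, "acc": 0.7}, {"loss": 0.4, "acc": 0.8}]
+    assert ld2dl(steps) == {"loss": [0.5, 0.4], "acc": [0.7, 0.8]}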
+
+
+class NameSpace(object):
+ def __init__(self, adict):
+ self.__dict__.update(adict)
+
+
+def dict2ns(mydict):
+ """
+    Convert a dict object to a namespace.
+ """
+ return NameSpace(mydict)
+
+
+def ld2dev(ld, dev):
+ """
+ Convert tensors in a list or dict to a device recursively
+ """
+ if isinstance(ld, torch.Tensor):
+ return ld.to(dev)
+ if isinstance(ld, dict):
+ for k, v in ld.items():
+ ld[k] = ld2dev(v, dev)
+ return ld
+ if isinstance(ld, list):
+ return [ld2dev(x, dev) for x in ld]
+ return ld
+
+
+def all_comb_dict(hyper_dict):
+ assert isinstance(hyper_dict, dict)
+ keys, values = zip(*hyper_dict.items())
+ permute_dicts = [dict(zip(keys, v)) for v in itertools.product(*values)]
+ return permute_dicts
diff --git a/common/list_utils.py b/common/list_utils.py
new file mode 100644
index 0000000..4e6a321
--- /dev/null
+++ b/common/list_utils.py
@@ -0,0 +1,52 @@
+import math
+
+
+def chunks_by_len(L, n):
+    """
+    Split a list into (at most) n chunks of roughly equal size.
+    """
+    chunk_size = int(math.ceil(float(len(L)) / n))
+    splits = [L[x : x + chunk_size] for x in range(0, len(L), chunk_size)]
+    return splits
+
+
+def chunks_by_size(L, n):
+ """Yield successive n-sized chunks from lst."""
+ seqs = []
+ for i in range(0, len(L), n):
+ seqs.append(L[i : i + n])
+ return seqs
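+
+
+# A small example contrasting the two helpers: chunks_by_len fixes the number
+# of chunks, chunks_by_size fixes the chunk size.
+def _demo_chunking():
+    L = list(range(7))
+    assert chunks_by_len(L, 2) == [[0, 1, 2, 3], [4, 5, 6]]  # 2 chunks
+    assert chunks_by_size(L, 2) == [[0, 1], [2, 3], [4, 5], [6]]  # size-2 chunks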
+
+
+def unsort(L, sort_idx):
+ assert isinstance(sort_idx, list)
+ assert isinstance(L, list)
+ LL = zip(sort_idx, L)
+ LL = sorted(LL, key=lambda x: x[0])
+ _, L = zip(*LL)
+ return list(L)
+
+
+def add_prefix_postfix(mydict, prefix="", postfix=""):
+ assert isinstance(mydict, dict)
+ return dict((prefix + key + postfix, value) for (key, value) in mydict.items())
+
+
+def ld2dl(LD):
+    """
+    Convert a list of dicts (with the same keys) to a dict of lists.
+    """
+    assert isinstance(LD, list)
+    assert isinstance(LD[0], dict)
+    dict_list = {k: [dic[k] for dic in LD] for k in LD[0]}
+    return dict_list
+
+
+def chunks(lst, n):
+ """Yield successive n-sized chunks from lst."""
+ seqs = []
+ for i in range(0, len(lst), n):
+ seqs.append(lst[i : i + n])
+ seqs_chunked = sum(seqs, [])
+ assert set(seqs_chunked) == set(lst)
+ return seqs
diff --git a/common/mesh.py b/common/mesh.py
new file mode 100644
index 0000000..47c4bb6
--- /dev/null
+++ b/common/mesh.py
@@ -0,0 +1,94 @@
+import numpy as np
+import trimesh
+
+colors = {
+ "pink": [1.00, 0.75, 0.80],
+ "purple": [0.63, 0.13, 0.94],
+ "red": [1.0, 0.0, 0.0],
+ "green": [0.0, 1.0, 0.0],
+ "yellow": [1.0, 1.0, 0],
+ "brown": [1.00, 0.25, 0.25],
+ "blue": [0.0, 0.0, 1.0],
+ "white": [1.0, 1.0, 1.0],
+ "orange": [1.00, 0.65, 0.00],
+ "grey": [0.75, 0.75, 0.75],
+ "black": [0.0, 0.0, 0.0],
+}
+
+
+class Mesh(trimesh.Trimesh):
+ def __init__(
+ self,
+ filename=None,
+ v=None,
+ f=None,
+ vc=None,
+ fc=None,
+ process=False,
+ visual=None,
+ **kwargs
+ ):
+ if filename is not None:
+ mesh = trimesh.load(filename, process=process)
+ v = mesh.vertices
+ f = mesh.faces
+ visual = mesh.visual
+
+ super(Mesh, self).__init__(
+ vertices=v, faces=f, visual=visual, process=process, **kwargs
+ )
+
+ self.v = self.vertices
+ self.f = self.faces
+ assert self.v is self.vertices
+ assert self.f is self.faces
+
+ if vc is not None:
+ self.set_vc(vc)
+ self.vc = self.visual.vertex_colors
+ assert self.vc is self.visual.vertex_colors
+ if fc is not None:
+ self.set_fc(fc)
+ self.fc = self.visual.face_colors
+ assert self.fc is self.visual.face_colors
+
+ def rot_verts(self, vertices, rxyz):
+ return np.array(vertices * rxyz.T)
+
+ def colors_like(self, color, array, ids):
+ color = np.array(color)
+
+ if color.max() <= 1.0:
+ color = color * 255
+        color = color.astype(np.uint8)  # uint8: values may reach 255
+
+ n_color = color.shape[0]
+ n_ids = ids.shape[0]
+
+ new_color = np.array(array)
+ if n_color <= 4:
+ new_color[ids, :n_color] = np.repeat(color[np.newaxis], n_ids, axis=0)
+ else:
+ new_color[ids, :] = color
+
+ return new_color
+
+ def set_vc(self, vc, vertex_ids=None):
+ all_ids = np.arange(self.vertices.shape[0])
+ if vertex_ids is None:
+ vertex_ids = all_ids
+
+ vertex_ids = all_ids[vertex_ids]
+ new_vc = self.colors_like(vc, self.visual.vertex_colors, vertex_ids)
+ self.visual.vertex_colors[:] = new_vc
+
+ def set_fc(self, fc, face_ids=None):
+ if face_ids is None:
+ face_ids = np.arange(self.faces.shape[0])
+
+ new_fc = self.colors_like(fc, self.visual.face_colors, face_ids)
+ self.visual.face_colors[:] = new_fc
+
+ @staticmethod
+ def cat(meshes):
+ return trimesh.util.concatenate(meshes)
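+
+
+# A hedged usage example: build a single-triangle Mesh and paint its vertices
+# with a named color from the palette above (RGBA is stored per vertex).
+if __name__ == "__main__":
+    m = Mesh(
+        v=np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]),
+        f=np.array([[0, 1, 2]]),
+        vc=colors["red"],
+    )
+    assert m.visual.vertex_colors.shape == (3, 4)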
diff --git a/common/metrics.py b/common/metrics.py
new file mode 100644
index 0000000..a940e69
--- /dev/null
+++ b/common/metrics.py
@@ -0,0 +1,51 @@
+import math
+
+import numpy as np
+import torch
+
+
+def compute_v2v_dist_no_reduce(v3d_cam_gt, v3d_cam_pred, is_valid):
+ assert isinstance(v3d_cam_gt, list)
+ assert isinstance(v3d_cam_pred, list)
+ assert len(v3d_cam_gt) == len(v3d_cam_pred)
+ assert len(v3d_cam_gt) == len(is_valid)
+ v2v = []
+ for v_gt, v_pred, valid in zip(v3d_cam_gt, v3d_cam_pred, is_valid):
+ if valid:
+ dist = ((v_gt - v_pred) ** 2).sum(dim=1).sqrt().cpu().numpy() # meter
+ else:
+ dist = None
+ v2v.append(dist)
+ return v2v
+
+
+def compute_joint3d_error(joints3d_cam_gt, joints3d_cam_pred, valid_jts):
+ valid_jts = valid_jts.view(-1)
+ assert joints3d_cam_gt.shape == joints3d_cam_pred.shape
+ assert joints3d_cam_gt.shape[0] == valid_jts.shape[0]
+ dist = ((joints3d_cam_gt - joints3d_cam_pred) ** 2).sum(dim=2).sqrt()
+ invalid_idx = torch.nonzero((1 - valid_jts).long()).view(-1)
+ dist[invalid_idx, :] = float("nan")
+ dist = dist.cpu().numpy()
+ return dist
+
+
+def compute_mrrpe(root_r_gt, root_l_gt, root_r_pred, root_l_pred, is_valid):
+ rel_vec_gt = root_l_gt - root_r_gt
+ rel_vec_pred = root_l_pred - root_r_pred
+
+ invalid_idx = torch.nonzero((1 - is_valid).long()).view(-1)
+ mrrpe = ((rel_vec_pred - rel_vec_gt) ** 2).sum(dim=1).sqrt()
+ mrrpe[invalid_idx] = float("nan")
+ mrrpe = mrrpe.cpu().numpy()
+ return mrrpe
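+
+
+# A hedged example: MRRPE is the error of the left root's position relative to
+# the right root; frames flagged invalid come back as NaN.
+def _demo_compute_mrrpe():
+    root_r = torch.zeros(2, 3)
+    root_l_gt = torch.tensor([[0.2, 0.0, 0.0], [0.2, 0.0, 0.0]])
+    root_l_pred = torch.tensor([[0.2, 0.1, 0.0], [0.2, 0.0, 0.0]])
+    is_valid = torch.tensor([1.0, 0.0])
+    mrrpe = compute_mrrpe(root_r, root_l_gt, root_r, root_l_pred, is_valid)
+    assert abs(mrrpe[0] - 0.1) < 1e-6 and np.isnan(mrrpe[1])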
+
+
+def compute_arti_deg_error(pred_radian, gt_radian):
+ assert pred_radian.shape == gt_radian.shape
+
+ # articulation error in degree
+ pred_degree = pred_radian / math.pi * 180 # degree
+ gt_degree = gt_radian / math.pi * 180 # degree
+ err_deg = torch.abs(pred_degree - gt_degree).tolist()
+ return np.array(err_deg, dtype=np.float32)
diff --git a/common/np_utils.py b/common/np_utils.py
new file mode 100644
index 0000000..35663ac
--- /dev/null
+++ b/common/np_utils.py
@@ -0,0 +1,7 @@
+import numpy as np
+
+
+def permute_np(x, idx):
+ original_perm = tuple(range(len(x.shape)))
+ x = np.moveaxis(x, original_perm, idx)
+ return x
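+
+
+# A small example: permute_np moves axis i of x to position idx[i] (idx is the
+# destination, i.e. the inverse of np.transpose's convention).
+if __name__ == "__main__":
+    x = np.zeros((2, 3, 4))
+    assert permute_np(x, (1, 2, 0)).shape == (4, 2, 3)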
diff --git a/common/object_tensors.py b/common/object_tensors.py
new file mode 100644
index 0000000..530f976
--- /dev/null
+++ b/common/object_tensors.py
@@ -0,0 +1,293 @@
+import json
+import os.path as op
+import sys
+
+import numpy as np
+import torch
+import torch.nn as nn
+import trimesh
+from easydict import EasyDict
+from scipy.spatial.distance import cdist
+
+sys.path = [".."] + sys.path
+import common.thing as thing
+from common.rot import axis_angle_to_quaternion, quaternion_apply
+from common.torch_utils import pad_tensor_list
+from common.xdict import xdict
+
+# objects to consider for training so far
+OBJECTS = [
+ "capsulemachine",
+ "box",
+ "ketchup",
+ "laptop",
+ "microwave",
+ "mixer",
+ "notebook",
+ "espressomachine",
+ "waffleiron",
+ "scissors",
+ "phone",
+]
+
+
+class ObjectTensors(nn.Module):
+ def __init__(self):
+ super(ObjectTensors, self).__init__()
+ self.obj_tensors = thing.thing2dev(construct_obj_tensors(OBJECTS), "cpu")
+ self.dev = None
+
+ def forward_7d_batch(
+ self,
+ angles: (None, torch.Tensor),
+ global_orient: (None, torch.Tensor),
+ transl: (None, torch.Tensor),
+ query_names: list,
+ fwd_template: bool,
+ ):
+ self._sanity_check(angles, global_orient, transl, query_names, fwd_template)
+
+ # store output
+ out = xdict()
+
+ # meta info
+ obj_idx = np.array(
+ [self.obj_tensors["names"].index(name) for name in query_names]
+ )
+ out["diameter"] = self.obj_tensors["diameter"][obj_idx]
+ out["f"] = self.obj_tensors["f"][obj_idx]
+ out["f_len"] = self.obj_tensors["f_len"][obj_idx]
+ out["v_len"] = self.obj_tensors["v_len"][obj_idx]
+
+ max_len = out["v_len"].max()
+ out["v"] = self.obj_tensors["v"][obj_idx][:, :max_len]
+ out["mask"] = self.obj_tensors["mask"][obj_idx][:, :max_len]
+ out["v_sub"] = self.obj_tensors["v_sub"][obj_idx]
+ out["parts_ids"] = self.obj_tensors["parts_ids"][obj_idx][:, :max_len]
+ out["parts_sub_ids"] = self.obj_tensors["parts_sub_ids"][obj_idx]
+
+ if fwd_template:
+ return out
+
+ # articulation + global rotation
+ quat_arti = axis_angle_to_quaternion(self.obj_tensors["z_axis"] * angles)
+ quat_global = axis_angle_to_quaternion(global_orient.view(-1, 3))
+
+ # mm
+ # collect entities to be transformed
+ tf_dict = xdict()
+ tf_dict["v_top"] = out["v"].clone()
+ tf_dict["v_sub_top"] = out["v_sub"].clone()
+ tf_dict["v_bottom"] = out["v"].clone()
+ tf_dict["v_sub_bottom"] = out["v_sub"].clone()
+ tf_dict["bbox_top"] = self.obj_tensors["bbox_top"][obj_idx]
+ tf_dict["bbox_bottom"] = self.obj_tensors["bbox_bottom"][obj_idx]
+ tf_dict["kp_top"] = self.obj_tensors["kp_top"][obj_idx]
+ tf_dict["kp_bottom"] = self.obj_tensors["kp_bottom"][obj_idx]
+
+ # articulate top parts
+ for key, val in tf_dict.items():
+ if "top" in key:
+ val_rot = quaternion_apply(quat_arti[:, None, :], val)
+ tf_dict.overwrite(key, val_rot)
+
+ # global rotation for all
+ for key, val in tf_dict.items():
+ val_rot = quaternion_apply(quat_global[:, None, :], val)
+ if transl is not None:
+ val_rot = val_rot + transl[:, None, :]
+ tf_dict.overwrite(key, val_rot)
+
+ # prep output
+ top_idx = out["parts_ids"] == 1
+ v_tensor = tf_dict["v_bottom"].clone()
+ v_tensor[top_idx, :] = tf_dict["v_top"][top_idx, :]
+
+ top_idx = out["parts_sub_ids"] == 1
+ v_sub_tensor = tf_dict["v_sub_bottom"].clone()
+ v_sub_tensor[top_idx, :] = tf_dict["v_sub_top"][top_idx, :]
+
+ bbox = torch.cat((tf_dict["bbox_top"], tf_dict["bbox_bottom"]), dim=1)
+ kp3d = torch.cat((tf_dict["kp_top"], tf_dict["kp_bottom"]), dim=1)
+
+ out.overwrite("v", v_tensor)
+ out.overwrite("v_sub", v_sub_tensor)
+ out.overwrite("bbox3d", bbox)
+ out.overwrite("kp3d", kp3d)
+ return out
+
+ def forward(self, angles, global_orient, transl, query_names):
+ out = self.forward_7d_batch(
+ angles, global_orient, transl, query_names, fwd_template=False
+ )
+ return out
+
+ def forward_template(self, query_names):
+ out = self.forward_7d_batch(
+ angles=None,
+ global_orient=None,
+ transl=None,
+ query_names=query_names,
+ fwd_template=True,
+ )
+ return out
+
+ def to(self, dev):
+ self.obj_tensors = thing.thing2dev(self.obj_tensors, dev)
+ self.dev = dev
+
+ def _sanity_check(self, angles, global_orient, transl, query_names, fwd_template):
+ # shape checks only; inputs are not modified (transl is assumed to be in meters)
+ if not fwd_template:
+ batch_size = angles.shape[0]
+ assert angles.shape == (batch_size, 1)
+ assert global_orient.shape == (batch_size, 3)
+ if transl is not None:
+ assert isinstance(transl, torch.Tensor)
+ assert transl.shape == (batch_size, 3)
+ assert len(query_names) == batch_size
+
+
+def construct_obj(object_model_p):
+ # load vtemplate
+ mesh_p = op.join(object_model_p, "mesh.obj")
+ parts_p = op.join(object_model_p, f"parts.json")
+ json_p = op.join(object_model_p, "object_params.json")
+ obj_name = op.basename(object_model_p)
+
+ top_sub_p = f"./data/arctic_data/data/meta/object_vtemplates/{obj_name}/top_keypoints_300.json"
+ bottom_sub_p = top_sub_p.replace("top_", "bottom_")
+ with open(top_sub_p, "r") as f:
+ sub_top = np.array(json.load(f)["keypoints"])
+
+ with open(bottom_sub_p, "r") as f:
+ sub_bottom = np.array(json.load(f)["keypoints"])
+ sub_v = np.concatenate((sub_top, sub_bottom), axis=0)
+
+ with open(parts_p, "r") as f:
+ parts = np.array(json.load(f), dtype=bool) # np.bool was removed in NumPy 1.24
+
+ assert op.exists(mesh_p), f"Not found: {mesh_p}"
+
+ mesh = trimesh.exchange.load.load_mesh(mesh_p, process=False)
+ mesh_v = mesh.vertices
+
+ mesh_f = torch.LongTensor(mesh.faces)
+ vidx = np.argmin(cdist(sub_v, mesh_v, metric="euclidean"), axis=1)
+ parts_sub = parts[vidx]
+
+ vsk = object_model_p.split("/")[-1]
+
+ with open(json_p, "r") as f:
+ params = json.load(f)
+ rest = EasyDict()
+ rest.top = np.array(params["mocap_top"])
+ rest.bottom = np.array(params["mocap_bottom"])
+ bbox_top = np.array(params["bbox_top"])
+ bbox_bottom = np.array(params["bbox_bottom"])
+ kp_top = np.array(params["keypoints_top"])
+ kp_bottom = np.array(params["keypoints_bottom"])
+
+ np.random.seed(1)
+
+ obj = EasyDict()
+ obj.name = vsk
+ obj.obj_name = "".join([i for i in vsk if not i.isdigit()])
+ obj.v = torch.FloatTensor(mesh_v)
+ obj.v_sub = torch.FloatTensor(sub_v)
+ obj.f = torch.LongTensor(mesh_f)
+ obj.parts = torch.LongTensor(parts)
+ obj.parts_sub = torch.LongTensor(parts_sub)
+
+ with open("./data/arctic_data/data/meta/object_meta.json", "r") as f:
+ object_meta = json.load(f)
+ obj.diameter = torch.FloatTensor(np.array(object_meta[obj.obj_name]["diameter"]))
+ obj.bbox_top = torch.FloatTensor(bbox_top)
+ obj.bbox_bottom = torch.FloatTensor(bbox_bottom)
+ obj.kp_top = torch.FloatTensor(kp_top)
+ obj.kp_bottom = torch.FloatTensor(kp_bottom)
+ obj.mocap_top = torch.FloatTensor(np.array(params["mocap_top"]))
+ obj.mocap_bottom = torch.FloatTensor(np.array(params["mocap_bottom"]))
+ return obj
+
+
+def construct_obj_tensors(object_names):
+ obj_list = []
+ for k in object_names:
+ object_model_p = f"./data/arctic_data/data/meta/object_vtemplates/%s" % (k)
+ obj = construct_obj(object_model_p)
+ obj_list.append(obj)
+
+ bbox_top_list = []
+ bbox_bottom_list = []
+ mocap_top_list = []
+ mocap_bottom_list = []
+ kp_top_list = []
+ kp_bottom_list = []
+ v_list = []
+ v_sub_list = []
+ f_list = []
+ parts_list = []
+ parts_sub_list = []
+ diameter_list = []
+ for obj in obj_list:
+ v_list.append(obj.v)
+ v_sub_list.append(obj.v_sub)
+ f_list.append(obj.f)
+
+ # root_list.append(obj.root)
+ bbox_top_list.append(obj.bbox_top)
+ bbox_bottom_list.append(obj.bbox_bottom)
+ kp_top_list.append(obj.kp_top)
+ kp_bottom_list.append(obj.kp_bottom)
+ mocap_top_list.append(obj.mocap_top / 1000)
+ mocap_bottom_list.append(obj.mocap_bottom / 1000)
+ parts_list.append(obj.parts + 1)
+ parts_sub_list.append(obj.parts_sub + 1)
+ diameter_list.append(obj.diameter)
+
+ v_list, v_len_list = pad_tensor_list(v_list)
+ p_list, p_len_list = pad_tensor_list(parts_list)
+ ps_list = torch.stack(parts_sub_list, dim=0)
+ assert (p_len_list - v_len_list).sum() == 0
+
+ max_len = v_len_list.max()
+ mask = torch.zeros(len(obj_list), max_len)
+ for idx, vlen in enumerate(v_len_list):
+ mask[idx, :vlen] = 1.0
+
+ v_sub_list = torch.stack(v_sub_list, dim=0)
+ diameter_list = torch.stack(diameter_list, dim=0)
+
+ f_list, f_len_list = pad_tensor_list(f_list)
+
+ bbox_top_list = torch.stack(bbox_top_list, dim=0)
+ bbox_bottom_list = torch.stack(bbox_bottom_list, dim=0)
+ kp_top_list = torch.stack(kp_top_list, dim=0)
+ kp_bottom_list = torch.stack(kp_bottom_list, dim=0)
+
+ obj_tensors = {}
+ obj_tensors["names"] = object_names
+ obj_tensors["parts_ids"] = p_list
+ obj_tensors["parts_sub_ids"] = ps_list
+
+ obj_tensors["v"] = v_list.float() / 1000
+ obj_tensors["v_sub"] = v_sub_list.float() / 1000
+ obj_tensors["v_len"] = v_len_list
+ obj_tensors["f"] = f_list
+ obj_tensors["f_len"] = f_len_list
+ obj_tensors["diameter"] = diameter_list.float()
+
+ obj_tensors["mask"] = mask
+ obj_tensors["bbox_top"] = bbox_top_list.float() / 1000
+ obj_tensors["bbox_bottom"] = bbox_bottom_list.float() / 1000
+ obj_tensors["kp_top"] = kp_top_list.float() / 1000
+ obj_tensors["kp_bottom"] = kp_bottom_list.float() / 1000
+ obj_tensors["mocap_top"] = mocap_top_list
+ obj_tensors["mocap_bottom"] = mocap_bottom_list
+ obj_tensors["z_axis"] = torch.FloatTensor(np.array([0, 0, -1])).view(1, 3)
+ return obj_tensors
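+
+
+# Usage sketch (illustrative only; requires the ARCTIC meta files under
+# ./data/arctic_data/data/meta). Shapes follow _sanity_check above:
+#
+# object_tensors = ObjectTensors()
+# object_tensors.to("cpu")
+# out = object_tensors(
+#     angles=torch.zeros(2, 1),  # articulation angle in radians
+#     global_orient=torch.zeros(2, 3),  # axis-angle rotation
+#     transl=torch.zeros(2, 3),  # translation in meters
+#     query_names=["box", "laptop"],
+# )
+# out["v"] is (2, max_num_verts, 3), zero-padded; see out["mask"] and out["v_len"]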
diff --git a/common/pl_utils.py b/common/pl_utils.py
new file mode 100644
index 0000000..9158fc2
--- /dev/null
+++ b/common/pl_utils.py
@@ -0,0 +1,63 @@
+import random
+import time
+
+import torch
+
+import common.thing as thing
+from common.ld_utils import ld2dl
+
+
+def reweight_loss_by_keys(loss_dict, keys, alpha):
+ for key in keys:
+ val, weight = loss_dict[key]
+ weight_new = weight * alpha
+ loss_dict[key] = (val, weight_new)
+ return loss_dict
+
+
+def select_loss_group(groups, agent_id, alphas):
+ random.seed(1)
+ random.shuffle(groups)
+
+ keys = groups[agent_id % len(groups)]
+
+ random.seed(time.time())
+ alpha = random.choice(alphas)
+ random.seed(1)
+ return keys, alpha
+
+
+def push_checkpoint_metric(key, val):
+ val = float(val)
+ checkpt_metric = torch.FloatTensor([val])
+ result = {key: checkpt_metric}
+ return result
+
+
+def avg_losses_cpu(outputs):
+ outputs = ld2dl(outputs)
+ for key, val in outputs.items():
+ val = [v.cpu() for v in val]
+ val = torch.cat(val, dim=0).view(-1)
+ outputs[key] = val.mean()
+ return outputs
+
+
+def reform_outputs(out_list):
+ out_list_dict = ld2dl(out_list)
+ outputs = ld2dl(out_list_dict["out_dict"])
+ losses = ld2dl(out_list_dict["loss"])
+
+ for k, tensor in outputs.items():
+ if isinstance(tensor[0], list):
+ outputs[k] = sum(tensor, [])
+ else:
+ outputs[k] = torch.cat(tensor)
+
+ for k, tensor in losses.items():
+ tensor = [ten.view(-1) for ten in tensor]
+ losses[k] = torch.cat(tensor)
+
+ outputs = {k: thing.thing2np(v) for k, v in outputs.items()}
+ loss_dict = {k: v.mean().item() for k, v in losses.items()}
+ return outputs, loss_dict
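+
+
+# A minimal sketch of reweight_loss_by_keys (added for illustration): each
+# loss entry is a (value, weight) tuple and alpha rescales the weight.
+if __name__ == "__main__":
+ loss_dict = {"loss/kp2d": (torch.tensor(0.3), 1.0)}
+ loss_dict = reweight_loss_by_keys(loss_dict, ["loss/kp2d"], alpha=0.5)
+ assert loss_dict["loss/kp2d"][1] == 0.5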
diff --git a/common/rend_utils.py b/common/rend_utils.py
new file mode 100644
index 0000000..06e9131
--- /dev/null
+++ b/common/rend_utils.py
@@ -0,0 +1,139 @@
+import copy
+import os
+
+import numpy as np
+import pyrender
+import trimesh
+
+# offline rendering
+os.environ["PYOPENGL_PLATFORM"] = "egl"
+
+
+def flip_meshes(meshes):
+ rot = trimesh.transformations.rotation_matrix(np.radians(180), [1, 0, 0])
+ for mesh in meshes:
+ mesh.apply_transform(rot)
+ return meshes
+
+
+def color2material(mesh_color: list):
+ material = pyrender.MetallicRoughnessMaterial(
+ metallicFactor=0.1,
+ alphaMode="OPAQUE",
+ baseColorFactor=(
+ mesh_color[0] / 255.0,
+ mesh_color[1] / 255.0,
+ mesh_color[2] / 255.0,
+ 0.5,
+ ),
+ )
+ return material
+
+
+class Renderer:
+ def __init__(self, img_res: int) -> None:
+ self.renderer = pyrender.OffscreenRenderer(
+ viewport_width=img_res, viewport_height=img_res, point_size=1.0
+ )
+
+ self.img_res = img_res
+
+ def render_meshes_pose(
+ self,
+ meshes,
+ image=None,
+ cam_transl=None,
+ cam_center=None,
+ K=None,
+ materials=None,
+ sideview_angle=None,
+ ):
+ # unpack
+ if cam_transl is not None:
+ cam_trans = np.copy(cam_transl)
+ cam_trans[0] *= -1.0
+ else:
+ cam_trans = None
+ meshes = copy.deepcopy(meshes)
+ meshes = flip_meshes(meshes)
+
+ if sideview_angle is not None:
+ # center around the final mesh
+ anchor_mesh = meshes[-1]
+ center = anchor_mesh.vertices.mean(axis=0)
+
+ rot = trimesh.transformations.rotation_matrix(
+ np.radians(sideview_angle), [0, 1, 0]
+ )
+ out_meshes = []
+ for mesh in copy.deepcopy(meshes):
+ mesh.vertices -= center
+ mesh.apply_transform(rot)
+ mesh.vertices += center
+ # further away to see more
+ mesh.vertices += np.array([0, 0, -0.10])
+ out_meshes.append(mesh)
+ meshes = out_meshes
+
+ # setting up
+ self.create_scene()
+ self.setup_light()
+ self.position_camera(cam_trans, K)
+ if materials is not None:
+ meshes = [
+ pyrender.Mesh.from_trimesh(mesh, material=material)
+ for mesh, material in zip(meshes, materials)
+ ]
+ else:
+ meshes = [pyrender.Mesh.from_trimesh(mesh) for mesh in meshes]
+
+ for mesh in meshes:
+ self.scene.add(mesh)
+
+ color, valid_mask = self.render_rgb()
+ if image is None:
+ output_img = color[:, :, :3]
+ else:
+ output_img = self.overlay_image(color, valid_mask, image)
+ rend_img = (output_img * 255).astype(np.uint8)
+ return rend_img
+
+ def render_rgb(self):
+ color, rend_depth = self.renderer.render(
+ self.scene, flags=pyrender.RenderFlags.RGBA
+ )
+ color = color.astype(np.float32) / 255.0
+ valid_mask = (rend_depth > 0)[:, :, None]
+ return color, valid_mask
+
+ def overlay_image(self, color, valid_mask, image):
+ output_img = color[:, :, :3] * valid_mask + (1 - valid_mask) * image
+ return output_img
+
+ def position_camera(self, cam_transl, K):
+ camera_pose = np.eye(4)
+ if cam_transl is not None:
+ camera_pose[:3, 3] = cam_transl
+
+ fx = K[0, 0]
+ fy = K[1, 1]
+ cx = K[0, 2]
+ cy = K[1, 2]
+ camera = pyrender.IntrinsicsCamera(fx=fx, fy=fy, cx=cx, cy=cy)
+ self.scene.add(camera, pose=camera_pose)
+
+ def setup_light(self):
+ light = pyrender.DirectionalLight(color=[1.0, 1.0, 1.0], intensity=1)
+ light_pose = np.eye(4)
+
+ light_pose[:3, 3] = np.array([0, -1, 1])
+ self.scene.add(light, pose=light_pose)
+
+ light_pose[:3, 3] = np.array([0, 1, 1])
+ self.scene.add(light, pose=light_pose)
+
+ light_pose[:3, 3] = np.array([1, 1, 2])
+ self.scene.add(light, pose=light_pose)
+
+ def create_scene(self):
+ self.scene = pyrender.Scene(ambient_light=(0.5, 0.5, 0.5))
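+
+
+# Usage sketch (illustrative only; requires an EGL-capable machine). K below
+# is a hypothetical 3x3 intrinsics matrix and `meshes` a list of trimesh meshes:
+#
+# renderer = Renderer(img_res=512)
+# K = np.array([[1000.0, 0, 256.0], [0, 1000.0, 256.0], [0, 0, 1.0]])
+# rend_img = renderer.render_meshes_pose(meshes, cam_transl=np.zeros(3), K=K)
+# rend_img is a (512, 512, 3) uint8 image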
diff --git a/common/rot.py b/common/rot.py
new file mode 100644
index 0000000..0a25709
--- /dev/null
+++ b/common/rot.py
@@ -0,0 +1,782 @@
+import math
+
+import cv2
+import numpy as np
+import torch
+from torch.nn import functional as F
+
+"""
+Taken from https://pytorch3d.readthedocs.io/en/latest/_modules/pytorch3d/transforms/rotation_conversions.html
+Just to avoid installing pytorch3d at times
+"""
+
+
+def standardize_quaternion(quaternions: torch.Tensor) -> torch.Tensor:
+ """
+ Convert a unit quaternion to a standard form: one in which the real
+ part is non negative.
+
+ Args:
+ quaternions: Quaternions with real part first,
+ as tensor of shape (..., 4).
+
+ Returns:
+ Standardized quaternions as tensor of shape (..., 4).
+ """
+ return torch.where(quaternions[..., 0:1] < 0, -quaternions, quaternions)
+
+
+def quaternion_multiply(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
+ """
+ Multiply two quaternions representing rotations, returning the quaternion
+ representing their composition, i.e. the versor with nonnegative real part.
+ Usual torch rules for broadcasting apply.
+
+ Args:
+ a: Quaternions as tensor of shape (..., 4), real part first.
+ b: Quaternions as tensor of shape (..., 4), real part first.
+
+ Returns:
+ The product of a and b, a tensor of quaternions of shape (..., 4).
+ """
+ ab = quaternion_raw_multiply(a, b)
+ return standardize_quaternion(ab)
+
+
+def _sqrt_positive_part(x: torch.Tensor) -> torch.Tensor:
+ """
+ Returns torch.sqrt(torch.max(0, x))
+ but with a zero subgradient where x is 0.
+ """
+ ret = torch.zeros_like(x)
+ positive_mask = x > 0
+ ret[positive_mask] = torch.sqrt(x[positive_mask])
+ return ret
+
+
+def quaternion_to_axis_angle(quaternions: torch.Tensor) -> torch.Tensor:
+ """
+ Convert rotations given as quaternions to axis/angle.
+
+ Args:
+ quaternions: quaternions with real part first,
+ as tensor of shape (..., 4).
+
+ Returns:
+ Rotations given as a vector in axis angle form, as a tensor
+ of shape (..., 3), where the magnitude is the angle
+ turned anticlockwise in radians around the vector's
+ direction.
+ """
+ norms = torch.norm(quaternions[..., 1:], p=2, dim=-1, keepdim=True)
+ half_angles = torch.atan2(norms, quaternions[..., :1])
+ angles = 2 * half_angles
+ eps = 1e-6
+ small_angles = angles.abs() < eps
+ sin_half_angles_over_angles = torch.empty_like(angles)
+ sin_half_angles_over_angles[~small_angles] = (
+ torch.sin(half_angles[~small_angles]) / angles[~small_angles]
+ )
+ # for x small, sin(x/2) is about x/2 - (x/2)^3/6
+ # so sin(x/2)/x is about 1/2 - (x*x)/48
+ sin_half_angles_over_angles[small_angles] = (
+ 0.5 - (angles[small_angles] * angles[small_angles]) / 48
+ )
+ return quaternions[..., 1:] / sin_half_angles_over_angles
+
+
+def quaternion_to_matrix(quaternions: torch.Tensor) -> torch.Tensor:
+ """
+ Convert rotations given as quaternions to rotation matrices.
+
+ Args:
+ quaternions: quaternions with real part first,
+ as tensor of shape (..., 4).
+
+ Returns:
+ Rotation matrices as tensor of shape (..., 3, 3).
+ """
+ r, i, j, k = torch.unbind(quaternions, -1)
+ # pyre-fixme[58]: `/` is not supported for operand types `float` and `Tensor`.
+ two_s = 2.0 / (quaternions * quaternions).sum(-1)
+
+ o = torch.stack(
+ (
+ 1 - two_s * (j * j + k * k),
+ two_s * (i * j - k * r),
+ two_s * (i * k + j * r),
+ two_s * (i * j + k * r),
+ 1 - two_s * (i * i + k * k),
+ two_s * (j * k - i * r),
+ two_s * (i * k - j * r),
+ two_s * (j * k + i * r),
+ 1 - two_s * (i * i + j * j),
+ ),
+ -1,
+ )
+ return o.reshape(quaternions.shape[:-1] + (3, 3))
+
+
+def matrix_to_quaternion(matrix: torch.Tensor) -> torch.Tensor:
+ """
+ Convert rotations given as rotation matrices to quaternions.
+
+ Args:
+ matrix: Rotation matrices as tensor of shape (..., 3, 3).
+
+ Returns:
+ quaternions with real part first, as tensor of shape (..., 4).
+ """
+ if matrix.size(-1) != 3 or matrix.size(-2) != 3:
+ raise ValueError(f"Invalid rotation matrix shape {matrix.shape}.")
+
+ batch_dim = matrix.shape[:-2]
+ m00, m01, m02, m10, m11, m12, m20, m21, m22 = torch.unbind(
+ matrix.reshape(batch_dim + (9,)), dim=-1
+ )
+
+ q_abs = _sqrt_positive_part(
+ torch.stack(
+ [
+ 1.0 + m00 + m11 + m22,
+ 1.0 + m00 - m11 - m22,
+ 1.0 - m00 + m11 - m22,
+ 1.0 - m00 - m11 + m22,
+ ],
+ dim=-1,
+ )
+ )
+
+ # we produce the desired quaternion multiplied by each of r, i, j, k
+ quat_by_rijk = torch.stack(
+ [
+ # pyre-fixme[58]: `**` is not supported for operand types `Tensor` and
+ # `int`.
+ torch.stack([q_abs[..., 0] ** 2, m21 - m12, m02 - m20, m10 - m01], dim=-1),
+ # pyre-fixme[58]: `**` is not supported for operand types `Tensor` and
+ # `int`.
+ torch.stack([m21 - m12, q_abs[..., 1] ** 2, m10 + m01, m02 + m20], dim=-1),
+ # pyre-fixme[58]: `**` is not supported for operand types `Tensor` and
+ # `int`.
+ torch.stack([m02 - m20, m10 + m01, q_abs[..., 2] ** 2, m12 + m21], dim=-1),
+ # pyre-fixme[58]: `**` is not supported for operand types `Tensor` and
+ # `int`.
+ torch.stack([m10 - m01, m20 + m02, m21 + m12, q_abs[..., 3] ** 2], dim=-1),
+ ],
+ dim=-2,
+ )
+
+ # We floor here at 0.1 but the exact level is not important; if q_abs is small,
+ # the candidate won't be picked.
+ flr = torch.tensor(0.1).to(dtype=q_abs.dtype, device=q_abs.device)
+ quat_candidates = quat_by_rijk / (2.0 * q_abs[..., None].max(flr))
+
+ # if not for numerical problems, quat_candidates[i] should be same (up to a sign),
+ # forall i; we pick the best-conditioned one (with the largest denominator)
+
+ return quat_candidates[
+ F.one_hot(q_abs.argmax(dim=-1), num_classes=4) > 0.5, :
+ ].reshape(batch_dim + (4,))
+
+
+def matrix_to_axis_angle(matrix: torch.Tensor) -> torch.Tensor:
+ """
+ Convert rotations given as rotation matrices to axis/angle.
+
+ Args:
+ matrix: Rotation matrices as tensor of shape (..., 3, 3).
+
+ Returns:
+ Rotations given as a vector in axis angle form, as a tensor
+ of shape (..., 3), where the magnitude is the angle
+ turned anticlockwise in radians around the vector's
+ direction.
+ """
+ return quaternion_to_axis_angle(matrix_to_quaternion(matrix))
+
+
+def rot_aa(aa, rot):
+ """Rotate axis angle parameters."""
+ # pose parameters
+ R = np.array(
+ [
+ [np.cos(np.deg2rad(-rot)), -np.sin(np.deg2rad(-rot)), 0],
+ [np.sin(np.deg2rad(-rot)), np.cos(np.deg2rad(-rot)), 0],
+ [0, 0, 1],
+ ]
+ )
+ # find the rotation of the body in camera frame
+ per_rdg, _ = cv2.Rodrigues(aa)
+ # apply the global rotation to the global orientation
+ resrot, _ = cv2.Rodrigues(np.dot(R, per_rdg))
+ aa = (resrot.T)[0]
+ return aa
+
+
+def quat2mat(quat):
+ """
+ This function is borrowed from https://github.com/MandyMo/pytorch_HMR/blob/master/src/util.py#L50
+ Convert quaternion coefficients to rotation matrix.
+ Args:
+ quat: size = [batch_size, 4] 4 <===>(w, x, y, z)
+ Returns:
+ Rotation matrix corresponding to the quaternion -- size = [batch_size, 3, 3]
+ """
+ norm_quat = quat
+ norm_quat = norm_quat / norm_quat.norm(p=2, dim=1, keepdim=True)
+ w, x, y, z = norm_quat[:, 0], norm_quat[:, 1], norm_quat[:, 2], norm_quat[:, 3]
+
+ batch_size = quat.size(0)
+
+ w2, x2, y2, z2 = w.pow(2), x.pow(2), y.pow(2), z.pow(2)
+ wx, wy, wz = w * x, w * y, w * z
+ xy, xz, yz = x * y, x * z, y * z
+
+ rotMat = torch.stack(
+ [
+ w2 + x2 - y2 - z2,
+ 2 * xy - 2 * wz,
+ 2 * wy + 2 * xz,
+ 2 * wz + 2 * xy,
+ w2 - x2 + y2 - z2,
+ 2 * yz - 2 * wx,
+ 2 * xz - 2 * wy,
+ 2 * wx + 2 * yz,
+ w2 - x2 - y2 + z2,
+ ],
+ dim=1,
+ ).view(batch_size, 3, 3)
+ return rotMat
+
+
+def batch_aa2rot(axisang):
+ # This function is borrowed from https://github.com/MandyMo/pytorch_HMR/blob/master/src/util.py#L37
+ assert len(axisang.shape) == 2
+ assert axisang.shape[1] == 3
+ # axisang N x 3
+ axisang_norm = torch.norm(axisang + 1e-8, p=2, dim=1)
+ angle = torch.unsqueeze(axisang_norm, -1)
+ axisang_normalized = torch.div(axisang, angle)
+ angle = angle * 0.5
+ v_cos = torch.cos(angle)
+ v_sin = torch.sin(angle)
+ quat = torch.cat([v_cos, v_sin * axisang_normalized], dim=1)
+ rot_mat = quat2mat(quat)
+ rot_mat = rot_mat.view(rot_mat.shape[0], 9)
+ return rot_mat
+
+
+def batch_rot2aa(Rs):
+ assert len(Rs.shape) == 3
+ assert Rs.shape[1] == Rs.shape[2]
+ assert Rs.shape[1] == 3
+
+ """
+ Rs is B x 3 x 3
+ void cMathUtil::RotMatToAxisAngle(const tMatrix& mat, tVector& out_axis,
+ double& out_theta)
+ {
+ double c = 0.5 * (mat(0, 0) + mat(1, 1) + mat(2, 2) - 1);
+ c = cMathUtil::Clamp(c, -1.0, 1.0);
+
+ out_theta = std::acos(c);
+
+ if (std::abs(out_theta) < 0.00001)
+ {
+ out_axis = tVector(0, 0, 1, 0);
+ }
+ else
+ {
+ double m21 = mat(2, 1) - mat(1, 2);
+ double m02 = mat(0, 2) - mat(2, 0);
+ double m10 = mat(1, 0) - mat(0, 1);
+ double denom = std::sqrt(m21 * m21 + m02 * m02 + m10 * m10);
+ out_axis[0] = m21 / denom;
+ out_axis[1] = m02 / denom;
+ out_axis[2] = m10 / denom;
+ out_axis[3] = 0;
+ }
+ }
+ """
+ cos = 0.5 * (torch.stack([torch.trace(x) for x in Rs]) - 1)
+ cos = torch.clamp(cos, -1, 1)
+
+ theta = torch.acos(cos)
+
+ m21 = Rs[:, 2, 1] - Rs[:, 1, 2]
+ m02 = Rs[:, 0, 2] - Rs[:, 2, 0]
+ m10 = Rs[:, 1, 0] - Rs[:, 0, 1]
+ denom = torch.sqrt(m21 * m21 + m02 * m02 + m10 * m10)
+
+ axis0 = torch.where(torch.abs(theta) < 0.00001, m21, m21 / denom)
+ axis1 = torch.where(torch.abs(theta) < 0.00001, m02, m02 / denom)
+ axis2 = torch.where(torch.abs(theta) < 0.00001, m10, m10 / denom)
+
+ return theta.unsqueeze(1) * torch.stack([axis0, axis1, axis2], 1)
+
+
+def batch_rodrigues(theta):
+ """Convert axis-angle representation to rotation matrix.
+ Args:
+ theta: size = [B, 3]
+ Returns:
+ Rotation matrix corresponding to the quaternion -- size = [B, 3, 3]
+ """
+ l1norm = torch.norm(theta + 1e-8, p=2, dim=1)
+ angle = torch.unsqueeze(l1norm, -1)
+ normalized = torch.div(theta, angle)
+ angle = angle * 0.5
+ v_cos = torch.cos(angle)
+ v_sin = torch.sin(angle)
+ quat = torch.cat([v_cos, v_sin * normalized], dim=1)
+ return quat_to_rotmat(quat)
+
+
+def quat_to_rotmat(quat):
+ """Convert quaternion coefficients to rotation matrix.
+ Args:
+ quat: size = [B, 4] 4 <===>(w, x, y, z)
+ Returns:
+ Rotation matrix corresponding to the quaternion -- size = [B, 3, 3]
+ """
+ norm_quat = quat
+ norm_quat = norm_quat / norm_quat.norm(p=2, dim=1, keepdim=True)
+ w, x, y, z = norm_quat[:, 0], norm_quat[:, 1], norm_quat[:, 2], norm_quat[:, 3]
+
+ B = quat.size(0)
+
+ w2, x2, y2, z2 = w.pow(2), x.pow(2), y.pow(2), z.pow(2)
+ wx, wy, wz = w * x, w * y, w * z
+ xy, xz, yz = x * y, x * z, y * z
+
+ rotMat = torch.stack(
+ [
+ w2 + x2 - y2 - z2,
+ 2 * xy - 2 * wz,
+ 2 * wy + 2 * xz,
+ 2 * wz + 2 * xy,
+ w2 - x2 + y2 - z2,
+ 2 * yz - 2 * wx,
+ 2 * xz - 2 * wy,
+ 2 * wx + 2 * yz,
+ w2 - x2 - y2 + z2,
+ ],
+ dim=1,
+ ).view(B, 3, 3)
+ return rotMat
+
+
+def rot6d_to_rotmat(x):
+ """Convert 6D rotation representation to 3x3 rotation matrix.
+ Based on Zhou et al., "On the Continuity of Rotation Representations in Neural Networks", CVPR 2019
+ Input:
+ (B,6) Batch of 6-D rotation representations
+ Output:
+ (B,3,3) Batch of corresponding rotation matrices
+ """
+ x = x.reshape(-1, 3, 2)
+ a1 = x[:, :, 0]
+ a2 = x[:, :, 1]
+ b1 = F.normalize(a1)
+ b2 = F.normalize(a2 - torch.einsum("bi,bi->b", b1, a2).unsqueeze(-1) * b1)
+ b3 = torch.cross(b1, b2)
+ return torch.stack((b1, b2, b3), dim=-1)
+
+
+def rotmat_to_rot6d(x):
+ rotmat = x.reshape(-1, 3, 3)
+ rot6d = rotmat[:, :, :2].reshape(x.shape[0], -1)
+ return rot6d
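+
+
+# Round-trip sketch (added for illustration): for a valid rotation matrix,
+# the Gram-Schmidt step in rot6d_to_rotmat recovers the input exactly:
+#
+# R = batch_rodrigues(torch.tensor([[0.1, 0.2, 0.3]])) # (1, 3, 3)
+# R_rec = rot6d_to_rotmat(rotmat_to_rot6d(R))
+# torch.allclose(R, R_rec, atol=1e-5) # True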
+
+
+def rotation_matrix_to_angle_axis(rotation_matrix):
+ """
+ This function is borrowed from https://github.com/kornia/kornia
+
+ Convert 3x4 rotation matrix to Rodrigues vector
+
+ Args:
+ rotation_matrix (Tensor): rotation matrix.
+
+ Returns:
+ Tensor: Rodrigues vector transformation.
+
+ Shape:
+ - Input: :math:`(N, 3, 4)`
+ - Output: :math:`(N, 3)`
+
+ Example:
+ >>> input = torch.rand(2, 3, 4) # Nx4x4
+ >>> output = tgm.rotation_matrix_to_angle_axis(input) # Nx3
+ """
+ if rotation_matrix.shape[1:] == (3, 3):
+ rot_mat = rotation_matrix.reshape(-1, 3, 3)
+ hom = (
+ torch.tensor([0, 0, 1], dtype=torch.float32, device=rotation_matrix.device)
+ .reshape(1, 3, 1)
+ .expand(rot_mat.shape[0], -1, -1)
+ )
+ rotation_matrix = torch.cat([rot_mat, hom], dim=-1)
+
+ quaternion = rotation_matrix_to_quaternion(rotation_matrix)
+ aa = quaternion_to_angle_axis(quaternion)
+ aa[torch.isnan(aa)] = 0.0
+ return aa
+
+
+def quaternion_to_angle_axis(quaternion: torch.Tensor) -> torch.Tensor:
+ """
+ This function is borrowed from https://github.com/kornia/kornia
+
+ Convert quaternion vector to angle axis of rotation.
+
+ Adapted from ceres C++ library: ceres-solver/include/ceres/rotation.h
+
+ Args:
+ quaternion (torch.Tensor): tensor with quaternions.
+
+ Return:
+ torch.Tensor: tensor with angle axis of rotation.
+
+ Shape:
+ - Input: :math:`(*, 4)` where `*` means, any number of dimensions
+ - Output: :math:`(*, 3)`
+
+ Example:
+ >>> quaternion = torch.rand(2, 4) # Nx4
+ >>> angle_axis = tgm.quaternion_to_angle_axis(quaternion) # Nx3
+ """
+ if not torch.is_tensor(quaternion):
+ raise TypeError(
+ "Input type is not a torch.Tensor. Got {}".format(type(quaternion))
+ )
+
+ if not quaternion.shape[-1] == 4:
+ raise ValueError(
+ "Input must be a tensor of shape Nx4 or 4. Got {}".format(quaternion.shape)
+ )
+ # unpack input and compute conversion
+ q1: torch.Tensor = quaternion[..., 1]
+ q2: torch.Tensor = quaternion[..., 2]
+ q3: torch.Tensor = quaternion[..., 3]
+ sin_squared_theta: torch.Tensor = q1 * q1 + q2 * q2 + q3 * q3
+
+ sin_theta: torch.Tensor = torch.sqrt(sin_squared_theta)
+ cos_theta: torch.Tensor = quaternion[..., 0]
+ two_theta: torch.Tensor = 2.0 * torch.where(
+ cos_theta < 0.0,
+ torch.atan2(-sin_theta, -cos_theta),
+ torch.atan2(sin_theta, cos_theta),
+ )
+
+ k_pos: torch.Tensor = two_theta / sin_theta
+ k_neg: torch.Tensor = 2.0 * torch.ones_like(sin_theta)
+ k: torch.Tensor = torch.where(sin_squared_theta > 0.0, k_pos, k_neg)
+
+ angle_axis: torch.Tensor = torch.zeros_like(quaternion)[..., :3]
+ angle_axis[..., 0] += q1 * k
+ angle_axis[..., 1] += q2 * k
+ angle_axis[..., 2] += q3 * k
+ return angle_axis
+
+
+def rotation_matrix_to_quaternion(rotation_matrix, eps=1e-6):
+ """
+ This function is borrowed from https://github.com/kornia/kornia
+
+ Convert 3x4 rotation matrix to 4d quaternion vector
+
+ This algorithm is based on algorithm described in
+ https://github.com/KieranWynn/pyquaternion/blob/master/pyquaternion/quaternion.py#L201
+
+ Args:
+ rotation_matrix (Tensor): the rotation matrix to convert.
+
+ Return:
+ Tensor: the rotation in quaternion
+
+ Shape:
+ - Input: :math:`(N, 3, 4)`
+ - Output: :math:`(N, 4)`
+
+ Example:
+ >>> input = torch.rand(4, 3, 4) # Nx3x4
+ >>> output = tgm.rotation_matrix_to_quaternion(input) # Nx4
+ """
+ if not torch.is_tensor(rotation_matrix):
+ raise TypeError(
+ "Input type is not a torch.Tensor. Got {}".format(type(rotation_matrix))
+ )
+
+ if len(rotation_matrix.shape) > 3:
+ raise ValueError(
+ "Input size must be a three dimensional tensor. Got {}".format(
+ rotation_matrix.shape
+ )
+ )
+ if not rotation_matrix.shape[-2:] == (3, 4):
+ raise ValueError(
+ "Input size must be a N x 3 x 4 tensor. Got {}".format(
+ rotation_matrix.shape
+ )
+ )
+
+ rmat_t = torch.transpose(rotation_matrix, 1, 2)
+
+ mask_d2 = rmat_t[:, 2, 2] < eps
+
+ mask_d0_d1 = rmat_t[:, 0, 0] > rmat_t[:, 1, 1]
+ mask_d0_nd1 = rmat_t[:, 0, 0] < -rmat_t[:, 1, 1]
+
+ t0 = 1 + rmat_t[:, 0, 0] - rmat_t[:, 1, 1] - rmat_t[:, 2, 2]
+ q0 = torch.stack(
+ [
+ rmat_t[:, 1, 2] - rmat_t[:, 2, 1],
+ t0,
+ rmat_t[:, 0, 1] + rmat_t[:, 1, 0],
+ rmat_t[:, 2, 0] + rmat_t[:, 0, 2],
+ ],
+ -1,
+ )
+ t0_rep = t0.repeat(4, 1).t()
+
+ t1 = 1 - rmat_t[:, 0, 0] + rmat_t[:, 1, 1] - rmat_t[:, 2, 2]
+ q1 = torch.stack(
+ [
+ rmat_t[:, 2, 0] - rmat_t[:, 0, 2],
+ rmat_t[:, 0, 1] + rmat_t[:, 1, 0],
+ t1,
+ rmat_t[:, 1, 2] + rmat_t[:, 2, 1],
+ ],
+ -1,
+ )
+ t1_rep = t1.repeat(4, 1).t()
+
+ t2 = 1 - rmat_t[:, 0, 0] - rmat_t[:, 1, 1] + rmat_t[:, 2, 2]
+ q2 = torch.stack(
+ [
+ rmat_t[:, 0, 1] - rmat_t[:, 1, 0],
+ rmat_t[:, 2, 0] + rmat_t[:, 0, 2],
+ rmat_t[:, 1, 2] + rmat_t[:, 2, 1],
+ t2,
+ ],
+ -1,
+ )
+ t2_rep = t2.repeat(4, 1).t()
+
+ t3 = 1 + rmat_t[:, 0, 0] + rmat_t[:, 1, 1] + rmat_t[:, 2, 2]
+ q3 = torch.stack(
+ [
+ t3,
+ rmat_t[:, 1, 2] - rmat_t[:, 2, 1],
+ rmat_t[:, 2, 0] - rmat_t[:, 0, 2],
+ rmat_t[:, 0, 1] - rmat_t[:, 1, 0],
+ ],
+ -1,
+ )
+ t3_rep = t3.repeat(4, 1).t()
+
+ mask_c0 = mask_d2 * mask_d0_d1
+ mask_c1 = mask_d2 * ~mask_d0_d1
+ mask_c2 = ~mask_d2 * mask_d0_nd1
+ mask_c3 = ~mask_d2 * ~mask_d0_nd1
+ mask_c0 = mask_c0.view(-1, 1).type_as(q0)
+ mask_c1 = mask_c1.view(-1, 1).type_as(q1)
+ mask_c2 = mask_c2.view(-1, 1).type_as(q2)
+ mask_c3 = mask_c3.view(-1, 1).type_as(q3)
+
+ q = q0 * mask_c0 + q1 * mask_c1 + q2 * mask_c2 + q3 * mask_c3
+ q /= torch.sqrt(
+ t0_rep * mask_c0
+ + t1_rep * mask_c1
+ + t2_rep * mask_c2 # noqa
+ + t3_rep * mask_c3
+ ) # noqa
+ q *= 0.5
+ return q
+
+
+def batch_euler2matrix(r):
+ return quaternion_to_rotation_matrix(euler_to_quaternion(r))
+
+
+def euler_to_quaternion(r):
+ x = r[..., 0]
+ y = r[..., 1]
+ z = r[..., 2]
+
+ z = z / 2.0
+ y = y / 2.0
+ x = x / 2.0
+ cz = torch.cos(z)
+ sz = torch.sin(z)
+ cy = torch.cos(y)
+ sy = torch.sin(y)
+ cx = torch.cos(x)
+ sx = torch.sin(x)
+ quaternion = torch.zeros_like(r.repeat(1, 2))[..., :4].to(r.device)
+ quaternion[..., 0] += cx * cy * cz - sx * sy * sz
+ quaternion[..., 1] += cx * sy * sz + cy * cz * sx
+ quaternion[..., 2] += cx * cz * sy - sx * cy * sz
+ quaternion[..., 3] += cx * cy * sz + sx * cz * sy
+ return quaternion
+
+
+def quaternion_to_rotation_matrix(quat):
+ """Convert quaternion coefficients to rotation matrix.
+ Args:
+ quat: size = [B, 4] 4 <===>(w, x, y, z)
+ Returns:
+ Rotation matrix corresponding to the quaternion -- size = [B, 3, 3]
+ """
+ norm_quat = quat
+ norm_quat = norm_quat / norm_quat.norm(p=2, dim=1, keepdim=True)
+ w, x, y, z = norm_quat[:, 0], norm_quat[:, 1], norm_quat[:, 2], norm_quat[:, 3]
+
+ B = quat.size(0)
+
+ w2, x2, y2, z2 = w.pow(2), x.pow(2), y.pow(2), z.pow(2)
+ wx, wy, wz = w * x, w * y, w * z
+ xy, xz, yz = x * y, x * z, y * z
+
+ rotMat = torch.stack(
+ [
+ w2 + x2 - y2 - z2,
+ 2 * xy - 2 * wz,
+ 2 * wy + 2 * xz,
+ 2 * wz + 2 * xy,
+ w2 - x2 + y2 - z2,
+ 2 * yz - 2 * wx,
+ 2 * xz - 2 * wy,
+ 2 * wx + 2 * yz,
+ w2 - x2 - y2 + z2,
+ ],
+ dim=1,
+ ).view(B, 3, 3)
+ return rotMat
+
+
+def euler_angles_from_rotmat(R):
+ """
+ Compute Euler angles for rotations around the x, y, z axes
+ from a rotation matrix.
+ R: batched rotation matrices
+ https://www.gregslabaugh.net/publications/euler.pdf
+ """
+ r21 = np.round(R[:, 2, 0].item(), 4)
+ if abs(r21) != 1:
+ y_angle1 = -1 * torch.asin(R[:, 2, 0])
+ y_angle2 = math.pi + torch.asin(R[:, 2, 0])
+ cy1, cy2 = torch.cos(y_angle1), torch.cos(y_angle2)
+
+ x_angle1 = torch.atan2(R[:, 2, 1] / cy1, R[:, 2, 2] / cy1)
+ x_angle2 = torch.atan2(R[:, 2, 1] / cy2, R[:, 2, 2] / cy2)
+ z_angle1 = torch.atan2(R[:, 1, 0] / cy1, R[:, 0, 0] / cy1)
+ z_angle2 = torch.atan2(R[:, 1, 0] / cy2, R[:, 0, 0] / cy2)
+
+ s1 = (x_angle1, y_angle1, z_angle1)
+ s2 = (x_angle2, y_angle2, z_angle2)
+ s = (s1, s2)
+
+ else:
+ z_angle = torch.tensor([0], device=R.device).float()
+ if r21 == -1:
+ y_angle = torch.tensor([math.pi / 2], device=R.device).float()
+ x_angle = z_angle + torch.atan2(R[:, 0, 1], R[:, 0, 2])
+ else:
+ y_angle = -torch.tensor([math.pi / 2], device=R.device).float()
+ x_angle = -z_angle + torch.atan2(-R[:, 0, 1], R[:, 0, 2])
+ s = ((x_angle, y_angle, z_angle),)
+ return s
+
+
+def quaternion_raw_multiply(a, b):
+ """
+ Source: https://github.com/facebookresearch/pytorch3d/blob/main/pytorch3d/transforms/rotation_conversions.py
+ Multiply two quaternions.
+ Usual torch rules for broadcasting apply.
+
+ Args:
+ a: Quaternions as tensor of shape (..., 4), real part first.
+ b: Quaternions as tensor of shape (..., 4), real part first.
+
+ Returns:
+ The product of a and b, a tensor of quaternions shape (..., 4).
+ """
+ aw, ax, ay, az = torch.unbind(a, -1)
+ bw, bx, by, bz = torch.unbind(b, -1)
+ ow = aw * bw - ax * bx - ay * by - az * bz
+ ox = aw * bx + ax * bw + ay * bz - az * by
+ oy = aw * by - ax * bz + ay * bw + az * bx
+ oz = aw * bz + ax * by - ay * bx + az * bw
+ return torch.stack((ow, ox, oy, oz), -1)
+
+
+def quaternion_invert(quaternion):
+ """
+ Source: https://github.com/facebookresearch/pytorch3d/blob/main/pytorch3d/transforms/rotation_conversions.py
+ Given a quaternion representing rotation, get the quaternion representing
+ its inverse.
+
+ Args:
+ quaternion: Quaternions as tensor of shape (..., 4), with real part
+ first, which must be versors (unit quaternions).
+
+ Returns:
+ The inverse, a tensor of quaternions of shape (..., 4).
+ """
+
+ return quaternion * quaternion.new_tensor([1, -1, -1, -1])
+
+
+def quaternion_apply(quaternion, point):
+ """
+ Source: https://github.com/facebookresearch/pytorch3d/blob/main/pytorch3d/transforms/rotation_conversions.py
+ Apply the rotation given by a quaternion to a 3D point.
+ Usual torch rules for broadcasting apply.
+
+ Args:
+ quaternion: Tensor of quaternions, real part first, of shape (..., 4).
+ point: Tensor of 3D points of shape (..., 3).
+
+ Returns:
+ Tensor of rotated points of shape (..., 3).
+ """
+ if point.size(-1) != 3:
+ raise ValueError(f"Points are not in 3D, f{point.shape}.")
+ real_parts = point.new_zeros(point.shape[:-1] + (1,))
+ point_as_quaternion = torch.cat((real_parts, point), -1)
+ out = quaternion_raw_multiply(
+ quaternion_raw_multiply(quaternion, point_as_quaternion),
+ quaternion_invert(quaternion),
+ )
+ return out[..., 1:]
+
+
+def axis_angle_to_quaternion(axis_angle: torch.Tensor) -> torch.Tensor:
+ """
+ Source: https://github.com/facebookresearch/pytorch3d/blob/main/pytorch3d/transforms/rotation_conversions.py
+ Convert rotations given as axis/angle to quaternions.
+ Args:
+ axis_angle: Rotations given as a vector in axis angle form,
+ as a tensor of shape (..., 3), where the magnitude is
+ the angle turned anticlockwise in radians around the
+ vector's direction.
+ Returns:
+ quaternions with real part first, as tensor of shape (..., 4).
+ """
+ angles = torch.norm(axis_angle, p=2, dim=-1, keepdim=True)
+ half_angles = angles * 0.5
+ eps = 1e-6
+ small_angles = angles.abs() < eps
+ sin_half_angles_over_angles = torch.empty_like(angles)
+ sin_half_angles_over_angles[~small_angles] = (
+ torch.sin(half_angles[~small_angles]) / angles[~small_angles]
+ )
+ # for x small, sin(x/2) is about x/2 - (x/2)^3/6
+ # so sin(x/2)/x is about 1/2 - (x*x)/48
+ sin_half_angles_over_angles[small_angles] = (
+ 0.5 - (angles[small_angles] * angles[small_angles]) / 48
+ )
+ quaternions = torch.cat(
+ [torch.cos(half_angles), axis_angle * sin_half_angles_over_angles], dim=-1
+ )
+ return quaternions
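+
+
+# A minimal consistency check (added for illustration): rotating a point with
+# quaternion_apply should match multiplying by the equivalent rotation matrix.
+if __name__ == "__main__":
+ aa = torch.tensor([[0.0, 0.0, np.pi / 2]]) # 90 degrees about z
+ quat = axis_angle_to_quaternion(aa)
+ pts = torch.tensor([[[1.0, 0.0, 0.0]]]) # (1, 1, 3)
+ out_q = quaternion_apply(quat[:, None, :], pts)
+ out_m = torch.einsum("bij,bnj->bni", quaternion_to_matrix(quat), pts)
+ assert torch.allclose(out_q, out_m, atol=1e-6) # both are ~(0, 1, 0)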
diff --git a/common/sys_utils.py b/common/sys_utils.py
new file mode 100644
index 0000000..204b47b
--- /dev/null
+++ b/common/sys_utils.py
@@ -0,0 +1,44 @@
+import os
+import os.path as op
+import shutil
+from glob import glob
+
+from loguru import logger
+
+
+def copy(src, dst):
+ if os.path.islink(src):
+ linkto = os.readlink(src)
+ os.symlink(linkto, dst)
+ else:
+ if os.path.isdir(src):
+ shutil.copytree(src, dst)
+ else:
+ shutil.copy(src, dst)
+
+
+def copy_repo(src_files, dst_folder, filter_keywords):
+ src_files = [
+ f for f in src_files if not any(keyword in f for keyword in filter_keywords)
+ ]
+ dst_files = [op.join(dst_folder, op.basename(f)) for f in src_files]
+ for src_f, dst_f in zip(src_files, dst_files):
+ logger.info(f"FROM: {src_f}\nTO:{dst_f}")
+ copy(src_f, dst_f)
+
+
+def mkdir(directory):
+ if not os.path.exists(directory):
+ os.makedirs(directory)
+
+
+def mkdir_p(exp_path):
+ os.makedirs(exp_path, exist_ok=True)
+
+
+def count_files(path):
+ """
+ Non-recursively count number of files in a folder.
+ """
+ files = glob(path)
+ return len(files)
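+
+
+# Usage sketch (illustrative): the argument is a glob pattern rather than a
+# plain folder, e.g. count_files("./logs/*") or count_files("./logs/*.txt").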
diff --git a/common/thing.py b/common/thing.py
new file mode 100644
index 0000000..79924bf
--- /dev/null
+++ b/common/thing.py
@@ -0,0 +1,66 @@
+import numpy as np
+import torch
+
+"""
+This file stores functions for conversion between numpy and torch, torch, list, etc.
+Also deal with general operations such as to(dev), detach, etc.
+"""
+
+
+def thing2list(thing):
+ if isinstance(thing, torch.Tensor):
+ return thing.tolist()
+ if isinstance(thing, np.ndarray):
+ return thing.tolist()
+ if isinstance(thing, dict):
+ return {k: thing2list(v) for k, v in md.items()}
+ if isinstance(thing, list):
+ return [thing2list(ten) for ten in thing]
+ return thing
+
+
+def thing2dev(thing, dev):
+ if hasattr(thing, "to"):
+ thing = thing.to(dev)
+ return thing
+ if isinstance(thing, list):
+ return [thing2dev(ten, dev) for ten in thing]
+ if isinstance(thing, tuple):
+ return tuple(thing2dev(list(thing), dev))
+ if isinstance(thing, dict):
+ return {k: thing2dev(v, dev) for k, v in thing.items()}
+ if isinstance(thing, torch.Tensor):
+ return thing.to(dev)
+ return thing
+
+
+def thing2np(thing):
+ if isinstance(thing, list):
+ return np.array(thing)
+ if isinstance(thing, torch.Tensor):
+ return thing.cpu().detach().numpy()
+ if isinstance(thing, dict):
+ return {k: thing2np(v) for k, v in thing.items()}
+ return thing
+
+
+def thing2torch(thing):
+ if isinstance(thing, list):
+ return torch.tensor(np.array(thing))
+ if isinstance(thing, np.ndarray):
+ return torch.from_numpy(thing)
+ if isinstance(thing, dict):
+ return {k: thing2torch(v) for k, v in thing.items()}
+ return thing
+
+
+def detach_thing(thing):
+ if isinstance(thing, torch.Tensor):
+ return thing.cpu().detach()
+ if isinstance(thing, list):
+ return [detach_thing(ten) for ten in thing]
+ if isinstance(thing, tuple):
+ return tuple(detach_thing(list(thing)))
+ if isinstance(thing, dict):
+ return {k: detach_thing(v) for k, v in thing.items()}
+ return thing
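+
+
+# A minimal self-check (added for illustration): thing2dev recurses into
+# nested containers, moving tensors and leaving other values untouched.
+if __name__ == "__main__":
+ nested = {"a": torch.zeros(2), "b": [torch.ones(3), "keep-me"]}
+ moved = thing2dev(nested, "cpu")
+ assert moved["a"].device.type == "cpu"
+ assert moved["b"][1] == "keep-me"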
diff --git a/common/torch_utils.py b/common/torch_utils.py
new file mode 100644
index 0000000..37f09b8
--- /dev/null
+++ b/common/torch_utils.py
@@ -0,0 +1,212 @@
+import random
+
+import numpy as np
+import torch
+import torch.nn as nn
+import torch.optim as optim
+
+from common.ld_utils import unsort as unsort_list
+
+
+# pytorch implementation for np.nanmean
+# https://github.com/pytorch/pytorch/issues/21987#issuecomment-539402619
+def nanmean(v, *args, inplace=False, **kwargs):
+ if not inplace:
+ v = v.clone()
+ is_nan = torch.isnan(v)
+ v[is_nan] = 0
+ return v.sum(*args, **kwargs) / (~is_nan).float().sum(*args, **kwargs)
+
+
+def grad_norm(model):
+ # compute norm of gradient for a model
+ total_norm = None
+ for p in model.parameters():
+ if p.grad is not None:
+ if total_norm is None:
+ total_norm = 0
+ param_norm = p.grad.detach().data.norm(2)
+ total_norm += param_norm.item() ** 2
+
+ if total_norm is not None:
+ total_norm = total_norm ** (1.0 / 2)
+ else:
+ total_norm = 0.0
+ return total_norm
+
+
+def pad_tensor_list(v_list: list):
+ dev = v_list[0].device
+ num_meshes = len(v_list)
+ num_dim = 1 if len(v_list[0].shape) == 1 else v_list[0].shape[1]
+ v_len_list = []
+ for verts in v_list:
+ v_len_list.append(verts.shape[0])
+
+ pad_len = max(v_len_list)
+ dtype = v_list[0].dtype
+ if num_dim == 1:
+ padded_tensor = torch.zeros(num_meshes, pad_len, dtype=dtype)
+ else:
+ padded_tensor = torch.zeros(num_meshes, pad_len, num_dim, dtype=dtype)
+ for idx, (verts, v_len) in enumerate(zip(v_list, v_len_list)):
+ padded_tensor[idx, :v_len] = verts
+ padded_tensor = padded_tensor.to(dev)
+ v_len_list = torch.LongTensor(v_len_list).to(dev)
+ return padded_tensor, v_len_list
+
+
+def unpad_vtensor(
+ vtensor: (torch.Tensor), lens: (torch.LongTensor, torch.cuda.LongTensor)
+):
+ tensors_list = []
+ for verts, vlen in zip(vtensor, lens):
+ tensors_list.append(verts[:vlen])
+ return tensors_list
+
+
+def one_hot_embedding(labels, num_classes):
+ """Embedding labels to one-hot form.
+ Args:
+ labels: (LongTensor) class labels, sized [N, D1, D2, ..].
+ num_classes: (int) number of classes.
+ Returns:
+ (tensor) encoded labels, sized [N, D1, D2, .., Dk, #classes].
+ """
+ y = torch.eye(num_classes).float()
+ return y[labels]
+
+
+def unsort(ten, sort_idx):
+ """
+ Unsort a tensor of shape (N, *) using the sort_idx list(N).
+ Return a tensor of the pre-sorting order in shape (N, *)
+ """
+ assert isinstance(ten, torch.Tensor)
+ assert isinstance(sort_idx, list)
+ assert ten.shape[0] == len(sort_idx)
+
+ out_list = list(torch.chunk(ten, ten.size(0), dim=0))
+ out_list = unsort_list(out_list, sort_idx)
+ out_list = torch.cat(out_list, dim=0)
+ return out_list
+
+
+def all_comb(X, Y):
+ """
+ Returns all possible combinations of elements in X and Y.
+ X: (n_x, d_x)
+ Y: (n_y, d_y)
+ Output: Z: (n_x*x_y, d_x+d_y)
+ Example:
+ X = tensor([[8, 8, 8],
+ [7, 5, 9]])
+ Y = tensor([[3, 8, 7, 7],
+ [3, 7, 9, 9],
+ [6, 4, 3, 7]])
+ Z = tensor([[8, 8, 8, 3, 8, 7, 7],
+ [8, 8, 8, 3, 7, 9, 9],
+ [8, 8, 8, 6, 4, 3, 7],
+ [7, 5, 9, 3, 8, 7, 7],
+ [7, 5, 9, 3, 7, 9, 9],
+ [7, 5, 9, 6, 4, 3, 7]])
+ """
+ assert len(X.size()) == 2
+ assert len(Y.size()) == 2
+ X1 = X.unsqueeze(1)
+ Y1 = Y.unsqueeze(0)
+ X2 = X1.repeat(1, Y.shape[0], 1)
+ Y2 = Y1.repeat(X.shape[0], 1, 1)
+ Z = torch.cat([X2, Y2], -1)
+ Z = Z.view(-1, Z.shape[-1])
+ return Z
+
+
+def toggle_parameters(model, requires_grad):
+ """
+ Set all weights to requires_grad or not.
+ """
+ for param in model.parameters():
+ param.requires_grad = requires_grad
+
+
+def detach_tensor(ten):
+ """This function move tensor to cpu and convert to numpy"""
+ if isinstance(ten, torch.Tensor):
+ return ten.cpu().detach().numpy()
+ return ten
+
+
+def count_model_parameters(model):
+ """
+ Return the amount of parameters that requries gradients.
+ """
+ return sum(p.numel() for p in model.parameters() if p.requires_grad)
+
+
+def reset_all_seeds(seed):
+ """Reset all seeds for reproduciability."""
+ random.seed(seed)
+ torch.manual_seed(seed)
+ np.random.seed(seed)
+
+
+def get_activation(name):
+ """This function return an activation constructor by name."""
+ if name == "tanh":
+ return nn.Tanh()
+ elif name == "sigmoid":
+ return nn.Sigmoid()
+ elif name == "relu":
+ return nn.ReLU()
+ elif name == "selu":
+ return nn.SELU()
+ elif name == "relu6":
+ return nn.ReLU6()
+ elif name == "softplus":
+ return nn.Softplus()
+ elif name == "softshrink":
+ return nn.Softshrink()
+ else:
+ print("Undefined activation: %s" % (name))
+ assert False
+
+
+def stack_ll_tensors(tensor_list_list):
+ """
+ Recursively stack a list of lists of lists .. whose elements are tensors with the same shape
+ """
+ if isinstance(tensor_list_list, torch.Tensor):
+ return tensor_list_list
+ assert isinstance(tensor_list_list, list)
+ if isinstance(tensor_list_list[0], torch.Tensor):
+ return torch.stack(tensor_list_list)
+
+ stacked_tensor = []
+ for tensor_list in tensor_list_list:
+ stacked_tensor.append(stack_ll_tensors(tensor_list))
+ stacked_tensor = torch.stack(stacked_tensor)
+ return stacked_tensor
+
+
+def get_optim(name):
+ """This function return an optimizer constructor by name."""
+ if name == "adam":
+ return optim.Adam
+ elif name == "rmsprop":
+ return optim.RMSprop
+ elif name == "sgd":
+ return optim.SGD
+ else:
+ print("Undefined optim: %s" % (name))
+ assert False
+
+
+def decay_lr(optimizer, gamma):
+ """
+ Decay the learning rate by gamma
+ """
+ assert isinstance(gamma, float)
+ assert 0 <= gamma and gamma <= 1.0
+ for param_group in optimizer.param_groups:
+ param_group["lr"] *= gamma
diff --git a/common/transforms.py b/common/transforms.py
new file mode 100644
index 0000000..b6366aa
--- /dev/null
+++ b/common/transforms.py
@@ -0,0 +1,300 @@
+import numpy as np
+import torch
+
+import common.data_utils as data_utils
+from common.np_utils import permute_np
+
+"""
+Useful geometric operations, e.g. Perspective projection and a differentiable Rodrigues formula
+Parts of the code are taken from https://github.com/MandyMo/pytorch_HMR
+"""
+
+
+def to_xy_batch(x_homo):
+ assert isinstance(x_homo, (torch.FloatTensor, torch.cuda.FloatTensor))
+ assert x_homo.shape[2] == 3
+ assert len(x_homo.shape) == 3
+ x = x_homo[:, :, :2] / x_homo[:, :, 2:3]
+ return x
+
+
+# VR Distortion Correction Using Vertex Displacement
+# https://stackoverflow.com/questions/44489686/camera-lens-distortion-in-opengl
+def distort_pts3d_all(_pts_cam, dist_coeffs):
+ # Egocentric cameras commonly have heavy distortion. This function maps
+ # points from the undistorted camera coordinates to the distorted ones so
+ # that their 2D projections match the observed pixels.
+ pts_cam = _pts_cam.clone().double()
+ z = pts_cam[:, :, 2]
+
+ z_inv = 1 / z
+
+ x1 = pts_cam[:, :, 0] * z_inv
+ y1 = pts_cam[:, :, 1] * z_inv
+
+ # precalculations
+ x1_2 = x1 * x1
+ y1_2 = y1 * y1
+ x1_y1 = x1 * y1
+ r2 = x1_2 + y1_2
+ r4 = r2 * r2
+ r6 = r4 * r2
+
+ r_dist = (1 + dist_coeffs[0] * r2 + dist_coeffs[1] * r4 + dist_coeffs[4] * r6) / (
+ 1 + dist_coeffs[5] * r2 + dist_coeffs[6] * r4 + dist_coeffs[7] * r6
+ )
+
+ # full (rational + tangential) distortion
+ x2 = x1 * r_dist + 2 * dist_coeffs[2] * x1_y1 + dist_coeffs[3] * (r2 + 2 * x1_2)
+ y2 = y1 * r_dist + 2 * dist_coeffs[3] * x1_y1 + dist_coeffs[2] * (r2 + 2 * y1_2)
+ # denormalize for projection (which is a linear operation)
+ cam_pts_dist = torch.stack([x2 * z, y2 * z, z], dim=2).float()
+ return cam_pts_dist
+
+
+def rigid_tf_torch_batch(points, R, T):
+ """
+ Performs rigid transformation to incoming points but batched
+ Q = (points*R.T) + T
+ points: (batch, num, 3)
+ R: (batch, 3, 3)
+ T: (batch, 3, 1)
+ out: (batch, num, 3)
+ """
+ points_out = torch.bmm(R, points.permute(0, 2, 1)) + T
+ points_out = points_out.permute(0, 2, 1)
+ return points_out
+
+
+def solve_rigid_tf_np(A: np.ndarray, B: np.ndarray):
+ """
+ “Least-Squares Fitting of Two 3-D Point Sets”, Arun, K. S. , May 1987
+ Input: expects Nx3 matrix of points
+ Returns R,t
+ R = 3x3 rotation matrix
+ t = 3x1 column vector
+
+ This function should be a fix for compute_rigid_tf when the det == -1
+ """
+
+ assert A.shape == B.shape
+ A = A.T
+ B = B.T
+
+ num_rows, num_cols = A.shape
+ if num_rows != 3:
+ raise Exception(f"matrix A is not 3xN, it is {num_rows}x{num_cols}")
+
+ num_rows, num_cols = B.shape
+ if num_rows != 3:
+ raise Exception(f"matrix B is not 3xN, it is {num_rows}x{num_cols}")
+
+ # find mean column wise
+ centroid_A = np.mean(A, axis=1)
+ centroid_B = np.mean(B, axis=1)
+
+ # ensure centroids are 3x1
+ centroid_A = centroid_A.reshape(-1, 1)
+ centroid_B = centroid_B.reshape(-1, 1)
+
+ # subtract mean
+ Am = A - centroid_A
+ Bm = B - centroid_B
+
+ H = Am @ np.transpose(Bm)
+
+ # find rotation
+ U, S, Vt = np.linalg.svd(H)
+ R = Vt.T @ U.T
+
+ # special reflection case
+ if np.linalg.det(R) < 0:
+ Vt[2, :] *= -1
+ R = Vt.T @ U.T
+
+ t = -R @ centroid_A + centroid_B
+
+ return R, t
+
+
+def batch_solve_rigid_tf(A, B):
+ """
+ “Least-Squares Fitting of Two 3-D Point Sets”, Arun, K. S. , May 1987
+ Input: expects BxNx3 matrix of points
+ Returns R,t
+ R = Bx3x3 rotation matrix
+ t = Bx3x1 column vector
+ """
+
+ assert A.shape == B.shape
+ dev = A.device
+ A = A.cpu().numpy()
+ B = B.cpu().numpy()
+ A = permute_np(A, (0, 2, 1))
+ B = permute_np(B, (0, 2, 1))
+
+ batch, num_rows, num_cols = A.shape
+ if num_rows != 3:
+ raise Exception(f"matrix A is not 3xN, it is {num_rows}x{num_cols}")
+
+ _, num_rows, num_cols = B.shape
+ if num_rows != 3:
+ raise Exception(f"matrix B is not 3xN, it is {num_rows}x{num_cols}")
+
+ # find mean column wise
+ centroid_A = np.mean(A, axis=2)
+ centroid_B = np.mean(B, axis=2)
+
+ # ensure centroids are 3x1
+ centroid_A = centroid_A.reshape(batch, -1, 1)
+ centroid_B = centroid_B.reshape(batch, -1, 1)
+
+ # subtract mean
+ Am = A - centroid_A
+ Bm = B - centroid_B
+
+ H = np.matmul(Am, permute_np(Bm, (0, 2, 1)))
+
+ # find rotation
+ U, S, Vt = np.linalg.svd(H)
+ R = np.matmul(permute_np(Vt, (0, 2, 1)), permute_np(U, (0, 2, 1)))
+
+ # special reflection case
+ neg_idx = np.linalg.det(R) < 0
+ if neg_idx.sum() > 0:
+ raise Exception(
+ f"some rotation matrices are not orthogonal; make sure implementation is correct for such case: {neg_idx}"
+ )
+ Vt[neg_idx, 2, :] *= -1
+ R[neg_idx, :, :] = np.matmul(
+ permute_np(Vt[neg_idx], (0, 2, 1)), permute_np(U[neg_idx], (0, 2, 1))
+ )
+
+ t = np.matmul(-R, centroid_A) + centroid_B
+
+ R = torch.FloatTensor(R).to(dev)
+ t = torch.FloatTensor(t).to(dev)
+ return R, t
+
+
+def rigid_tf_np(points, R, T):
+ """
+ Performs rigid transformation to incoming points
+ Q = (points*R.T) + T
+ points: (num, 3)
+ R: (3, 3)
+ T: (1, 3)
+
+ out: (num, 3)
+ """
+
+ assert isinstance(points, np.ndarray)
+ assert isinstance(R, np.ndarray)
+ assert isinstance(T, np.ndarray)
+ assert len(points.shape) == 2
+ assert points.shape[1] == 3
+ assert R.shape == (3, 3)
+ assert T.shape == (1, 3)
+ points_new = np.matmul(R, points.T).T + T
+ return points_new
+
+
+def transform_points(world2cam_mat, pts):
+ """
+ Map points from one coord to another based on the 4x4 matrix.
+ e.g., map points from world to camera coord.
+ pts: (N, 3), in METERS!!
+ world2cam_mat: (4, 4)
+ Output: points in cam coord (N, 3)
+ We follow this convention:
+ | R T | |pt|
+ | 0 1 | * | 1|
+ i.e. we rotate first then translate as T is the camera translation not position.
+ """
+ assert isinstance(pts, (torch.FloatTensor, torch.cuda.FloatTensor))
+ assert isinstance(world2cam_mat, (torch.FloatTensor, torch.cuda.FloatTensor))
+ assert world2cam_mat.shape == (4, 4)
+ assert len(pts.shape) == 2
+ assert pts.shape[1] == 3
+ pts_homo = to_homo(pts)
+
+ # mocap to cam
+ pts_cam_homo = torch.matmul(world2cam_mat, pts_homo.T).T
+ pts_cam = to_xyz(pts_cam_homo)
+
+ assert pts_cam.shape[1] == 3
+ return pts_cam
+
+
+def transform_points_batch(world2cam_mat, pts):
+ """
+ Map points from one coord to another based on the 4x4 matrix.
+ e.g., map points from world to camera coord.
+ pts: (B, N, 3), in METERS!!
+ world2cam_mat: (B, 4, 4)
+ Output: points in cam coord (B, N, 3)
+ We follow this convention:
+ | R T | |pt|
+ | 0 1 | * | 1|
+ i.e. we rotate first then translate as T is the camera translation not position.
+ """
+ assert isinstance(pts, (torch.FloatTensor, torch.cuda.FloatTensor))
+ assert isinstance(world2cam_mat, (torch.FloatTensor, torch.cuda.FloatTensor))
+ assert world2cam_mat.shape[1:] == (4, 4)
+ assert len(pts.shape) == 3
+ assert pts.shape[2] == 3
+ batch_size = pts.shape[0]
+ pts_homo = to_homo_batch(pts)
+
+ # mocap to cam
+ pts_cam_homo = torch.bmm(world2cam_mat, pts_homo.permute(0, 2, 1)).permute(0, 2, 1)
+ pts_cam = to_xyz_batch(pts_cam_homo)
+
+ assert pts_cam.shape[2] == 3
+ return pts_cam
+
+
+def project2d_batch(K, pts_cam):
+ """
+ K: (B, 3, 3)
+ pts_cam: (B, N, 3)
+ """
+
+ assert isinstance(K, (torch.FloatTensor, torch.cuda.FloatTensor))
+ assert isinstance(pts_cam, (torch.FloatTensor, torch.cuda.FloatTensor))
+ assert K.shape[1:] == (3, 3)
+ assert pts_cam.shape[2] == 3
+ assert len(pts_cam.shape) == 3
+ pts2d_homo = torch.bmm(K, pts_cam.permute(0, 2, 1)).permute(0, 2, 1)
+ pts2d = to_xy_batch(pts2d_homo)
+ return pts2d
+
+
+def project2d_norm_batch(K, pts_cam, patch_width):
+ """
+ K: (B, 3, 3)
+ pts_cam: (B, N, 3)
+ """
+
+ assert isinstance(K, (torch.FloatTensor, torch.cuda.FloatTensor))
+ assert isinstance(pts_cam, (torch.FloatTensor, torch.cuda.FloatTensor))
+ assert K.shape[1:] == (3, 3)
+ assert pts_cam.shape[2] == 3
+ assert len(pts_cam.shape) == 3
+ v2d = project2d_batch(K, pts_cam)
+ v2d_norm = data_utils.normalize_kp2d(v2d, patch_width)
+ return v2d_norm
+
+
+def project2d(K, pts_cam):
+ assert isinstance(K, (torch.FloatTensor, torch.cuda.FloatTensor))
+ assert isinstance(pts_cam, (torch.FloatTensor, torch.cuda.FloatTensor))
+ assert K.shape == (3, 3)
+ assert pts_cam.shape[1] == 3
+ assert len(pts_cam.shape) == 2
+ pts2d_homo = torch.matmul(K, pts_cam.T).T
+ pts2d = to_xy(pts2d_homo)
+ return pts2d
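+
+
+# Pipeline sketch (illustrative): map world points to camera coordinates and
+# project them to pixels; world2cam and K below are hypothetical parameters.
+#
+# pts_world = torch.rand(2, 100, 3).float() # meters
+# world2cam = torch.eye(4)[None].repeat(2, 1, 1) # (2, 4, 4)
+# K = torch.eye(3)[None].repeat(2, 1, 1) # (2, 3, 3)
+# pts_cam = transform_points_batch(world2cam, pts_world)
+# pts2d = project2d_batch(K, pts_cam) # (2, 100, 2)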
diff --git a/common/viewer.py b/common/viewer.py
new file mode 100644
index 0000000..c956937
--- /dev/null
+++ b/common/viewer.py
@@ -0,0 +1,275 @@
+import os
+import os.path as op
+import re
+from abc import abstractmethod
+
+import matplotlib.cm as cm
+import numpy as np
+from aitviewer.headless import HeadlessRenderer
+from aitviewer.renderables.billboard import Billboard
+from aitviewer.renderables.meshes import Meshes
+from aitviewer.scene.camera import OpenCVCamera
+from aitviewer.scene.material import Material
+from aitviewer.utils.so3 import aa2rot_numpy
+from aitviewer.viewer import Viewer
+from easydict import EasyDict as edict
+from loguru import logger
+from PIL import Image
+from tqdm import tqdm
+
+cmap = cm.get_cmap("plasma")
+materials = {
+ "none": None,
+ "white": Material(color=(1.0, 1.0, 1.0, 1.0), ambient=0.2),
+ "red": Material(color=(0.969, 0.106, 0.059, 1.0), ambient=0.2),
+ "blue": Material(color=(0.0, 0.0, 1.0, 1.0), ambient=0.2),
+ "green": Material(color=(1.0, 0.0, 0.0, 1.0), ambient=0.2),
+ "cyan": Material(color=(0.051, 0.659, 0.051, 1.0), ambient=0.2),
+ "light-blue": Material(color=(0.588, 0.5647, 0.9725, 1.0), ambient=0.2),
+ "cyan-light": Material(color=(0.051, 0.659, 0.051, 1.0), ambient=0.2),
+ "dark-light": Material(color=(0.404, 0.278, 0.278, 1.0), ambient=0.2),
+ "rice": Material(color=(0.922, 0.922, 0.102, 1.0), ambient=0.2),
+}
+
+
+class ViewerData(edict):
+ """
+ Interface to standardize viewer data.
+ """
+
+ def __init__(self, Rt, K, cols, rows, imgnames=None):
+ self.imgnames = imgnames
+ self.Rt = Rt
+ self.K = K
+ self.num_frames = Rt.shape[0]
+ self.cols = cols
+ self.rows = rows
+ self.validate_format()
+
+ def validate_format(self):
+ assert len(self.Rt.shape) == 3
+ assert self.Rt.shape[0] == self.num_frames
+ assert self.Rt.shape[1] == 3
+ assert self.Rt.shape[2] == 4
+
+ assert len(self.K.shape) == 2
+ assert self.K.shape[0] == 3
+ assert self.K.shape[1] == 3
+ if self.imgnames is not None:
+ assert self.num_frames == len(self.imgnames)
+ assert self.num_frames > 0
+ im_p = self.imgnames[0]
+ assert op.exists(im_p), f"Image path {im_p} does not exist"
+
+
+class ARCTICViewer:
+ def __init__(
+ self,
+ render_types=["rgb", "depth", "mask"],
+ interactive=True,
+ size=(2024, 2024),
+ ):
+ if not interactive:
+ v = HeadlessRenderer()
+ else:
+ v = Viewer(size=size)
+
+ self.v = v
+ self.interactive = interactive
+ # self.layers = layers
+ self.render_types = render_types
+
+ def view_interactive(self):
+ self.v.run()
+
+ def view_fn_headless(self, num_iter, out_folder):
+ v = self.v
+
+ v._init_scene()
+
+ logger.info("Rendering to video")
+ if "video" in self.render_types:
+ vid_p = op.join(out_folder, "video.mp4")
+ v.save_video(video_dir=vid_p)
+
+ pbar = tqdm(range(num_iter))
+ for fidx in pbar:
+ out_rgb = op.join(out_folder, "images", f"rgb/{fidx:04d}.png")
+ out_mask = op.join(out_folder, "images", f"mask/{fidx:04d}.png")
+ out_depth = op.join(out_folder, "images", f"depth/{fidx:04d}.npy")
+
+ # render RGB, depth, segmentation masks
+ if "rgb" in self.render_types:
+ v.export_frame(out_rgb)
+ if "depth" in self.render_types:
+ os.makedirs(op.dirname(out_depth), exist_ok=True)
+ render_depth(v, out_depth)
+ if "mask" in self.render_types:
+ os.makedirs(op.dirname(out_mask), exist_ok=True)
+ render_mask(v, out_mask)
+ v.scene.next_frame()
+ logger.info(f"Exported to {out_folder}")
+
+ @abstractmethod
+ def load_data(self):
+ pass
+
+ def check_format(self, batch):
+ meshes_all, data = batch
+ assert isinstance(meshes_all, dict)
+ assert len(meshes_all) > 0
+ for mesh in meshes_all.values():
+ assert isinstance(mesh, Meshes)
+ assert isinstance(data, ViewerData)
+
+ def render_seq(self, batch, out_folder="./render_out"):
+ meshes_all, data = batch
+ self.setup_viewer(data)
+ for mesh in meshes_all.values():
+ self.v.scene.add(mesh)
+ if self.interactive:
+ self.view_interactive()
+ else:
+ num_iter = data["num_frames"]
+ self.view_fn_headless(num_iter, out_folder)
+
+ def setup_viewer(self, data):
+ v = self.v
+ fps = 30
+ if "imgnames" in data:
+ setup_billboard(data, v)
+
+ # camera.show_path()
+ v.run_animations = False # do not autoplay
+ v.playback_fps = fps
+ v.scene.fps = fps
+ v.scene.origin.enabled = False
+ v.scene.floor.enabled = False
+ v.auto_set_floor = False
+ v.scene.floor.position[1] = -3
+ # v.scene.camera.position = np.array((0.0, 0.0, 0))
+ self.v = v
+
+
+def dist2vc(dist_ro, dist_lo, dist_o, _cmap, tf_fn=None):
+ if tf_fn is not None:
+ exp_map = tf_fn
+ else:
+ exp_map = small_exp_map
+ dist_ro = exp_map(dist_ro)
+ dist_lo = exp_map(dist_lo)
+ dist_o = exp_map(dist_o)
+
+ vc_ro = _cmap(dist_ro)
+ vc_lo = _cmap(dist_lo)
+ vc_o = _cmap(dist_o)
+ return vc_ro, vc_lo, vc_o
+
+
+def small_exp_map(_dist):
+ dist = np.copy(_dist)
+ # dist = 1.0 - np.clip(dist, 0, 0.1) / 0.1
+ dist = np.exp(-20.0 * dist)
+ return dist
+
+
+def construct_viewer_meshes(data, draw_edges=False, flat_shading=True):
+ rotation_flip = aa2rot_numpy(np.array([1, 0, 0]) * np.pi)
+ meshes = {}
+ for key, val in data.items():
+ if "object" in key:
+ flat_shading = False
+ else:
+ flat_shading = flat_shading
+ v3d = val["v3d"]
+ meshes[key] = Meshes(
+ v3d,
+ val["f3d"],
+ vertex_colors=val["vc"],
+ name=val["name"],
+ flat_shading=flat,
+ draw_edges=draw_edges,
+ material=materials[val["color"]],
+ rotation=rotation_flip,
+ )
+ return meshes
+
+
+def setup_viewer(
+ v, shared_folder_p, video, images_path, data, flag, seq_name, side_angle
+):
+ fps = 10
+ cols, rows = 224, 224
+ focal = 1000.0
+
+ # setup image paths
+ regex = re.compile(r"(\d*)$")
+
+ def sort_key(x):
+ name = os.path.splitext(x)[0]
+ return int(regex.search(name).group(0))
+
+ # setup billboard
+ images_path = op.join(shared_folder_p, "images")
+ images_paths = [
+ os.path.join(images_path, f)
+ for f in sorted(os.listdir(images_path), key=sort_key)
+ ]
+ assert len(images_paths) > 0
+
+ cam_t = data[f"{flag}.object.cam_t"]
+ num_frames = min(cam_t.shape[0], len(images_paths))
+ cam_t = cam_t[:num_frames]
+ # setup camera
+ K = np.array([[focal, 0, rows / 2.0], [0, focal, cols / 2.0], [0, 0, 1]])
+ Rt = np.zeros((num_frames, 3, 4))
+ Rt[:, :, 3] = cam_t
+ Rt[:, :3, :3] = np.eye(3)
+ Rt[:, 1:3, :3] *= -1.0
+
+ camera = OpenCVCamera(K, Rt, cols, rows, viewer=v)
+ if side_angle is None:
+ billboard = Billboard.from_camera_and_distance(
+ camera, 10.0, cols, rows, images_paths
+ )
+ v.scene.add(billboard)
+ v.scene.add(camera)
+ v.run_animations = True # autoplay
+ v.playback_fps = fps
+ v.scene.fps = fps
+ v.scene.origin.enabled = False
+ v.scene.floor.enabled = False
+ v.auto_set_floor = False
+ v.scene.floor.position[1] = -3
+ v.set_temp_camera(camera)
+ # v.scene.camera.position = np.array((0.0, 0.0, 0))
+ return v
+
+
+def render_depth(v, depth_p):
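+ # save the viewer's depth buffer as float16 to reduce disk usage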
+ depth = np.array(v.get_depth()).astype(np.float16)
+ np.save(depth_p, depth)
+
+
+def render_mask(v, mask_p):
+ mask = np.array(v.get_mask()).astype(np.uint8)
+ mask = Image.fromarray(mask)
+ mask.save(mask_p)
+
+
+def setup_billboard(data, v):
+ images_paths = data.imgnames
+ K = data.K
+ Rt = data.Rt
+ rows = data.rows
+ cols = data.cols
+ camera = OpenCVCamera(K, Rt, cols, rows, viewer=v)
+ if images_paths is not None:
+ billboard = Billboard.from_camera_and_distance(
+ camera, 10.0, cols, rows, images_paths
+ )
+ v.scene.add(billboard)
+ v.scene.add(camera)
+ v.scene.camera.load_cam()
+ v.set_temp_camera(camera)
diff --git a/common/vis_utils.py b/common/vis_utils.py
new file mode 100644
index 0000000..a222a10
--- /dev/null
+++ b/common/vis_utils.py
@@ -0,0 +1,129 @@
+import matplotlib.cm as cm
+import matplotlib.pyplot as plt
+import numpy as np
+from PIL import Image
+
+# connection between the 8 points of 3d bbox
+BONES_3D_BBOX = [
+ (0, 1),
+ (1, 2),
+ (2, 3),
+ (3, 0),
+ (0, 4),
+ (1, 5),
+ (2, 6),
+ (3, 7),
+ (4, 5),
+ (5, 6),
+ (6, 7),
+ (7, 4),
+]
+
+
+def plot_2d_bbox(bbox_2d, bones, color, ax):
+ if ax is None:
+ axx = plt
+ else:
+ axx = ax
+ colors = cm.rainbow(np.linspace(0, 1, len(bbox_2d)))
+ for pt, c in zip(bbox_2d, colors):
+ axx.scatter(pt[0], pt[1], color=c, s=50)
+
+ if bones is None:
+ bones = BONES_3D_BBOX
+ for bone in bones:
+ sidx, eidx = bone
+ # bottom of bbox is white
+ if min(sidx, eidx) >= 4:
+ color = "w"
+ axx.plot(
+ [bbox_2d[sidx][0], bbox_2d[eidx][0]],
+ [bbox_2d[sidx][1], bbox_2d[eidx][1]],
+ color,
+ )
+ return axx
+
+
+# http://www.icare.univ-lille1.fr/tutorials/convert_a_matplotlib_figure
+def fig2data(fig):
+ """
+ @brief Convert a Matplotlib figure to a 3D
+ numpy array with RGBA channels and return it
+ @param fig a matplotlib figure
+ @return a numpy 3D array of RGBA values
+ """
+ # draw the renderer
+ fig.canvas.draw()
+
+ # Get the RGBA buffer from the figure
+ w, h = fig.canvas.get_width_height()
+ buf = np.frombuffer(fig.canvas.tostring_argb(), dtype=np.uint8)
+ buf.shape = (h, w, 4) # buffer rows correspond to the figure height
+
+ # canvas.tostring_argb give pixmap in ARGB mode.
+ # Roll the ALPHA channel to have it in RGBA mode
+ buf = np.roll(buf, 3, axis=2)
+ return buf
+
+
+# http://www.icare.univ-lille1.fr/tutorials/convert_a_matplotlib_figure
+def fig2img(fig):
+ """
+ @brief Convert a Matplotlib figure to a PIL Image
+ in RGBA format and return it
+ @param fig a matplotlib figure
+ @return a Python Imaging Library ( PIL ) image
+ """
+ # put the figure pixmap into a numpy array
+ buf = fig2data(fig)
+ h, w, _ = buf.shape
+ return Image.frombytes("RGBA", (w, h), buf.tobytes())
+
+
+def concat_pil_images(images):
+ """
+ Concatenate a list of PIL images horizontally (side by side)
+ """
+ assert isinstance(images, list)
+ widths, heights = zip(*(i.size for i in images))
+
+ total_width = sum(widths)
+ max_height = max(heights)
+
+ new_im = Image.new("RGB", (total_width, max_height))
+
+ x_offset = 0
+ for im in images:
+ new_im.paste(im, (x_offset, 0))
+ x_offset += im.size[0]
+ return new_im
+
+
+def stack_pil_images(images):
+ """
+ Stack a list of PIL images vertically (one on top of another)
+ """
+ assert isinstance(images, list)
+ widths, heights = zip(*(i.size for i in images))
+
+ total_height = sum(heights)
+ max_width = max(widths)
+
+ new_im = Image.new("RGB", (max_width, total_height))
+
+ y_offset = 0
+ for im in images:
+ new_im.paste(im, (0, y_offset))
+ y_offset += im.size[1]
+ return new_im
+
+
+def im_list_to_plt(image_list, figsize, title_list=None):
+ fig, axes = plt.subplots(nrows=1, ncols=len(image_list), figsize=figsize)
+ if len(image_list) == 1:
+ axes = [axes] # plt.subplots returns a bare Axes for a single column
+ for idx, (ax, im) in enumerate(zip(axes, image_list)):
+ ax.imshow(im)
+ if title_list is not None:
+ ax.set_title(title_list[idx])
+ fig.tight_layout()
+ im = fig2img(fig)
+ plt.close(fig)
+ return im
diff --git a/common/xdict.py b/common/xdict.py
new file mode 100644
index 0000000..46e7d0c
--- /dev/null
+++ b/common/xdict.py
@@ -0,0 +1,288 @@
+import numpy as np
+import torch
+
+import common.thing as thing
+
+
+def _print_stat(key, thing):
+ """
+ Helper function for printing statistics about a key-value pair in an xdict.
+ """
+ mytype = type(thing)
+ if isinstance(thing, (list, tuple)):
+ print("{:<20}: {:<30}\t{:}".format(key, len(thing), mytype))
+ elif isinstance(thing, (torch.Tensor)):
+ dev = thing.device
+ shape = str(thing.shape).replace(" ", "")
+ print("{:<20}: {:<30}\t{:}\t{}".format(key, shape, mytype, dev))
+ elif isinstance(thing, np.ndarray):
+ shape = str(thing.shape).replace(" ", "")
+ print("{:<20}: {:<30}\t{:}".format(key, shape, mytype))
+ else:
+ print("{:<20}: {:}".format(key, mytype))
+
+
+class xdict(dict):
+ """
+ A subclass of Python's built-in dict class, which provides additional methods for manipulating and operating on dictionaries.
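+
+ Example (a minimal sketch of the interface):
+ >>> d = xdict({"pred.mano.pose": 1, "pred.mano.shape": 2, "gt.obj.rot": 3})
+ >>> d.search("pred.").replace_keys("mano.", "").sorted_keys()
+ ['pred.pose', 'pred.shape']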
+ """
+
+ def __init__(self, mydict=None):
+ """
+ Constructor for the xdict class. Creates a new xdict object and optionally initializes it with key-value pairs from the provided dictionary mydict. If mydict is not provided, an empty xdict is created.
+ """
+ if mydict is None:
+ return
+
+ for k, v in mydict.items():
+ super().__setitem__(k, v)
+
+ def subset(self, keys):
+ """
+ Returns a new xdict object containing only the key-value pairs with keys in the provided list 'keys'.
+ """
+ out_dict = {}
+ for k in keys:
+ out_dict[k] = self[k]
+ return xdict(out_dict)
+
+ def __setitem__(self, key, val):
+ """
+ Overrides the dict.__setitem__ method to raise an assertion error if a key already exists.
+ """
+ assert key not in self.keys(), f"Key already exists {key}"
+ super().__setitem__(key, val)
+
+ def search(self, keyword, replace_to=None):
+ """
+ Returns a new xdict object containing only the key-value pairs with keys that contain the provided keyword.
+ """
+ out_dict = {}
+ for k in self.keys():
+ if keyword in k:
+ if replace_to is None:
+ out_dict[k] = self[k]
+ else:
+ out_dict[k.replace(keyword, replace_to)] = self[k]
+ return xdict(out_dict)
+
+ def rm(self, keyword, keep_list=None, verbose=False):
+ """
+ Returns a new xdict object with keys that contain keyword removed. Keys in keep_list are excluded from the removal.
+ """
+ keep_list = [] if keep_list is None else keep_list
+ out_dict = {}
+ for k in self.keys():
+ if keyword not in k or k in keep_list:
+ out_dict[k] = self[k]
+ else:
+ if verbose:
+ print(f"Removing: {k}")
+ return xdict(out_dict)
+
+ def overwrite(self, k, v):
+ """
+ The original assignment operation of Python dict
+ """
+ super().__setitem__(k, v)
+
+ def merge(self, dict2):
+ """
+ Same as dict.update(), but raises an assertion error if there are duplicate keys between the two dictionaries.
+
+ Args:
+ dict2 (dict or xdict): The dictionary or xdict instance to merge with.
+
+ Raises:
+ AssertionError: If dict2 is not a dictionary or xdict instance.
+ AssertionError: If there are duplicate keys between the two instances.
+ """
+ assert isinstance(dict2, (dict, xdict))
+ mykeys = set(self.keys())
+ intersect = mykeys.intersection(set(dict2.keys()))
+ assert len(intersect) == 0, f"Merge failed: duplicate keys ({intersect})"
+ self.update(dict2)
+
+ def mul(self, scalar):
+ """
+ Multiplies each value (could be tensor, np.array, list) in the xdict instance by the provided scalar.
+
+ Args:
+ scalar (float): The scalar to multiply the values by.
+
+ Raises:
+ AssertionError: If scalar is not an int or float.
+ """
+ if isinstance(scalar, int):
+ scalar = 1.0 * scalar
+ assert isinstance(scalar, float)
+ out_dict = {}
+ for k in self.keys():
+ if isinstance(self[k], list):
+ out_dict[k] = [v * scalar for v in self[k]]
+ else:
+ out_dict[k] = self[k] * scalar
+ return xdict(out_dict)
+
+ def prefix(self, text):
+ """
+ Adds a prefix to each key in the xdict instance.
+
+ Args:
+ text (str): The prefix to add.
+
+ Returns:
+ xdict: The xdict instance with the added prefix.
+ """
+ out_dict = {}
+ for k in self.keys():
+ out_dict[text + k] = self[k]
+ return xdict(out_dict)
+
+ def replace_keys(self, str_src, str_tar):
+ """
+ Replaces a substring in all keys of the xdict instance.
+
+ Args:
+ str_src (str): The substring to replace.
+ str_tar (str): The replacement string.
+
+ Returns:
+ xdict: The xdict instance with the replaced keys.
+ """
+ out_dict = {}
+ for k in self.keys():
+ old_key = k
+ new_key = old_key.replace(str_src, str_tar)
+ out_dict[new_key] = self[k]
+ return xdict(out_dict)
+
+ def postfix(self, text):
+ """
+ Adds a postfix to each key in the xdict instance.
+
+ Args:
+ text (str): The postfix to add.
+
+ Returns:
+ xdict: The xdict instance with the added postfix.
+ """
+ out_dict = {}
+ for k in self.keys():
+ out_dict[k + text] = self[k]
+ return xdict(out_dict)
+
+ def sorted_keys(self):
+ """
+ Returns a sorted list of the keys in the xdict instance.
+
+ Returns:
+ list: A sorted list of keys in the xdict instance.
+ """
+ return sorted(list(self.keys()))
+
+ def to(self, dev):
+ """
+ Moves the xdict instance to a specific device.
+
+ Args:
+ dev (torch.device): The device to move the instance to.
+
+ Returns:
+ xdict: The xdict instance moved to the specified device.
+ """
+ if dev is None:
+ return self
+ raw_dict = dict(self)
+ return xdict(thing.thing2dev(raw_dict, dev))
+
+ def to_torch(self):
+ """
+ Converts elements in the xdict to Torch tensors and returns a new xdict.
+
+ Returns:
+ xdict: A new xdict with Torch tensors as values.
+ """
+ return xdict(thing.thing2torch(self))
+
+ def to_np(self):
+ """
+ Converts elements in the xdict to numpy arrays and returns a new xdict.
+
+ Returns:
+ xdict: A new xdict with numpy arrays as values.
+ """
+ return xdict(thing.thing2np(self))
+
+ def tolist(self):
+ """
+ Converts elements in the xdict to Python lists and returns a new xdict.
+
+ Returns:
+ xdict: A new xdict with Python lists as values.
+ """
+ return xdict(thing.thing2list(self))
+
+ def print_stat(self):
+ """
+ Prints statistics for each item in the xdict.
+ """
+ for k, v in self.items():
+ _print_stat(k, v)
+
+ def detach(self):
+ """
+ Detaches all Torch tensors in the xdict from the computational graph and moves them to the CPU.
+ Non-tensor objects are ignored.
+
+ Returns:
+ xdict: A new xdict with detached Torch tensors as values.
+ """
+ return xdict(thing.detach_thing(self))
+
+ def has_invalid(self):
+ """
+ Checks if any of the Torch tensors in the xdict contain NaN or Inf values.
+
+ Returns:
+ bool: True if at least one tensor contains NaN or Inf values, False otherwise.
+ """
+ for k, v in self.items():
+ if isinstance(v, torch.Tensor):
+ if torch.isnan(v).any():
+ print(f"{k} contains nan values")
+ return True
+ if torch.isinf(v).any():
+ print(f"{k} contains inf values")
+ return True
+ return False
+
+ def apply(self, operation, criterion=None):
+ """
+ Applies an operation to the values in the xdict, based on an optional criterion.
+
+ Args:
+ operation (callable): A callable object that takes a single argument and returns a value.
+ criterion (callable, optional): A callable object that takes two arguments (key and value) and returns a boolean.
+
+ Returns:
+ xdict: A new xdict with the same keys as the original, but with the values modified by the operation.
+ """
+ out = {}
+ for k, v in self.items():
+ if criterion is None or criterion(k, v):
+ out[k] = operation(v)
+ return xdict(out)
+
+ def save(self, path, dev=None, verbose=True):
+ """
+ Saves the xdict to disk as a Torch tensor.
+
+ Args:
+ path (str): The path to save the xdict.
+ dev (torch.device, optional): The device to use for saving the tensor (default is CPU).
+ verbose (bool, optional): Whether to print a message indicating that the xdict has been saved (default is True).
+ """
+ if verbose:
+ print(f"Saving to {path}")
+ torch.save(self.to(dev), path)
diff --git a/docs/data/README.md b/docs/data/README.md
new file mode 100644
index 0000000..3075e5f
--- /dev/null
+++ b/docs/data/README.md
@@ -0,0 +1,195 @@
+# ARCTIC dataset
+
+## Table of contents
+
+- [Overview](#overview)
+- [Sanity check](#sanity-check-on-the-data-pipeline)
+- [Download full ARCTIC](#download-full-arctic)
+- [Data documentation](data_doc.md)
+- [Data preprocessing and splitting](processing.md)
+- [Data visualization](visualize.md)
+
+## Overview
+
+So far ARCTIC provides the following data and models:
+- `arctic_data/data/images [649G]`: Full 2K-resolution images
+- `arctic_data/data/cropped_images [116G]`: loosely cropped version of the original images around the object center for fast image loading
+- `arctic_data/data/raw_seqs [215M]`: raw GT sequences in world coordinates (e.g., MANO and SMPL-X parameters, egocentric camera trajectory, object poses)
+- `arctic_data/data/splits [18G]`: pre-built splits, created by aggregating the processed sequences required by each split
+- `arctic_data/data/feat [14G]`: validation-set image features needed for the LSTM models
+- `arctic_data/data/splits_json [40K]`: JSON files that define the splits
+- `arctic_data/data/meta [91M]`: camera parameters, object info, subject info, personalized subject vtemplates, object templates
+- `arctic_data/models [6G]`: weights of our CVPR baselines, and the same baselines re-trained after upgrading dependencies
+
+See [`data_doc.md`](./data_doc.md) for an explanation of the data you will download, how the files are related to each other, and details on each file type.
+
+## Sanity check on the data pipeline
+
+This section provides instructions on downloading a mini-version of ARCTIC to test out the downloading, post-processing, and visualization pipeline before downloading the entire dataset.
+
+⚠️ Register accounts on [ARCTIC](https://arctic.is.tue.mpg.de/register.php), [SMPL-X](https://smpl-x.is.tue.mpg.de/), and [MANO](https://mano.is.tue.mpg.de/), and then export your username and password with the following commands:
+
+```bash
+export ARCTIC_USERNAME=
+export ARCTIC_PASSWORD=
+export SMPLX_USERNAME=
+export SMPLX_PASSWORD=
+export MANO_USERNAME=
+export MANO_PASSWORD=
+```
+
+Before starting, check if your credentials are exported correctly (following the commands above).
+
+```bash
+echo $ARCTIC_USERNAME
+echo $ARCTIC_PASSWORD
+echo $SMPLX_USERNAME
+echo $SMPLX_PASSWORD
+echo $MANO_USERNAME
+echo $MANO_PASSWORD
+```
+
+⚠️ If the echo is empty, `export` your credentials following the instructions above before moving forward.
+
+Dry run the download to make sure nothing breaks:
+
+```bash
+chmod +x ./bash/*.sh
+./bash/download_dry_run.sh
+python scripts_data/unzip_download.py # unzip downloaded data
+python scripts_data/checksum.py # verify checksums
+```
+
+After running the above, you should expect:
+
+```
+➜ ls unpack/arctic_data/data
+cropped_images feat images meta raw_seqs splits_json
+
+➜ ls unpack/arctic_data/models
+1f9ac0b15 28bf3642f 3558f1342 40ae50712 423c6057b 546c1e997 58e200d16 701a72569
+```
+
+If this unpacked data is what you want at the end, rename it to `data`, as the code expects the unpacked data to be at `./data`:
+
+```bash
+mv unpack data
+```
+
+To visualize a sequence, you need to:
+1. Process the sequence with `--export_verts`
+2. Launch `scripts_data/visualizer.py`
+3. Hit `<SPACE>` on your keyboard to play the animation.
+
+```bash
+# process a specific seq; exporting the vertices for visualization
+python scripts_data/process_seqs.py --mano_p ./data/arctic_data/data/raw_seqs/s01/capsulemachine_use_01.mano.npy --export_verts
+python scripts_data/visualizer.py --seq_p ./outputs/processed_verts/seqs/s01/capsulemachine_use_01.npy --object --mano
+```
+
+
+Our `scripts_data/visualizer.py` supports the following features (see [`visualize.md`](visualize.md)):
+- Select entities to include in the viewer (MANO/SMPL-X/object/images)
+- Interactive mode vs. headless rendering mode
+- Render segmentation masks, depth maps, and GT meshes overlaid on videos.
+
+## Download full ARCTIC
+
+Depending on your usage of the dataset, we suggest different download protocols. Choose the section below that suits your needs.
+
+Before starting, check if your credentials are exported correctly (following the commands above).
+
+```bash
+echo $ARCTIC_USERNAME
+echo $ARCTIC_PASSWORD
+echo $SMPLX_USERNAME
+echo $SMPLX_PASSWORD
+echo $MANO_USERNAME
+echo $MANO_PASSWORD
+```
+
+⚠️ If the echo is empty, `export` your credentials following the instructions above before moving forward.
+
+Also give execution privileges to the scripts:
+```bash
+chmod +x ./bash/*.sh
+```
+
+### Using the same splits as our CVPR baselines
+
+This is for people who want to compare follow-up work with our ARCTIC baselines. We provide pre-processed data and the same splits as in the paper, so you don't need to preprocess the data yourself.
+
+```bash
+./bash/clean_downloads.sh
+./bash/download_body_models.sh
+./bash/download_cropped_images.sh
+./bash/download_splits.sh
+./bash/download_misc.sh
+./bash/download_baselines.sh # (optional) weights of our pre-trained CVPR baselines
+./bash/download_feat.sh # (optional) images features being used by the pre-trained CVPR LSTM baselines
+python scripts_data/checksum.py # verify checksums; this could take a while
+python scripts_data/unzip_download.py # unzip downloaded data
+```
+
+### Having full control on ARCTIC dataset
+
+This section is what you need if one of the following applies to you:
+- You have custom usage, such as preprocessing the data differently, using different splits, or needing access to the full-resolution images.
+- You want a hard copy of the entire dataset (run all the following commands).
+
+If you do not need images (say, you want to study human motion generation), you can skip the "optional" parts. However, we suggest downloading all data to save yourself from headaches.
+
+Clean download cache:
+```bash
+./bash/clean_downloads.sh
+```
+
+Download "smaller" files (required):
+
+```bash
+./bash/download_body_models.sh
+./bash/download_misc.sh
+```
+
+Download cropped images (optional):
+
+```bash
+./bash/download_cropped_images.sh
+```
+
+Download full-resolution images (optional):
+
+⚠️ If you just want to train and compete with our CVPR models, you only need the cropped images above. The script below downloads the full-resolution images, which can take a while.
+
+```bash
+./bash/download_images.sh
+```
+
+Download pre-processed splits (optional):
+
+```bash
+./bash/download_splits.sh
+```
+
+Download image features used by our CVPR LSTM models (optional):
+
+```bash
+./bash/download_feat.sh
+```
+
+Download our pre-trained CVPR model weights (optional):
+
+```bash
+./bash/download_baselines.sh
+```
+
+After downloading what you need, you can now verify the checksum for corruption, and unzip them all:
+
+```bash
+python scripts_data/checksum.py # verify checksums; this could take a while
+python scripts_data/unzip_download.py # unzip downloaded data
+```
+
+The raw downloaded data can be found under `downloads`. The unzipped data and models can be found under `unpack`. See [`processing.md`](processing.md) for an explanation of how the files are organized and what they represent.
+
+
diff --git a/docs/data/data_doc.md b/docs/data/data_doc.md
new file mode 100644
index 0000000..4037103
--- /dev/null
+++ b/docs/data/data_doc.md
@@ -0,0 +1,198 @@
+# Data documentation
+
+## Overview
+
+Data processing pipeline (`raw_seqs -> processed_seqs -> splits`):
+- `raw_seqs` provides GT such as MANO/SMPL-X parameters in world coordinates.
+- `processed_seqs` is the result of converting the GT parameters from `raw_seqs` into data such as MANO joints in individual camera coordinates, according to the camera parameters in `meta`.
+- `splits` aggregates the `processed_seqs` into training, validation, and test splits. Each split is defined by `splits_json` based on sequence names.
+
+Therefore, depending on your end goal, you might not need all the data we provide:
+- If your goal is to reconstruct hands and objects or to estimate their dense relative distances, as in the setting of the `ArcticNet` and `InterField` baselines in our CVPR paper, we have provided preprocessed data (see [`model/README.md`](../model/README.md)).
+- If your goal is to generate full-body or hand motion interacting with objects, you only need `raw_seqs` and `meta`, as they provide the SMPL-X and MANO GT and the object templates.
+- If you want to customize your usage: for custom data processing, modify the stage `raw_seqs -> processed_seqs`; for custom splits, modify `processed_seqs -> splits` (see advanced usage below).
+
+## Data folder
+
+```
+arctic_data/data/: ROOT
+arctic_data/data/images/
+arctic_data/data/cropped_images/
+arctic_data/data/raw_seqs/
+ s01/
+ *.egocam.dist.npy: egocentric camera poses
+ *.mano.npy: MANO parameters
+ *.object.npy: object parameters
+ *.smplx.npy: SMPLX parameters
+ s02/
+ ..
+ s10/
+arctic_data/data/processed_seqs/
+arctic_data/data/processed_seqs/s01/
+ *.npy: individual processed sequences
+arctic_data/data/processed_seqs/s02
+..
+arctic_data/data/processed_seqs/s10
+arctic_data/data/splits/
+arctic_data/data/splits_json/
+arctic_data/data/meta/
+ misc.json: camera parameters, offsets used to sync images and groundtruth, image sizes, subject gender
+ object_vtemplates: object templates
+ subject_vtemplates: personalized vtemplate for each subject
+```
+
+**Object models**
+
+As stated above, object models are stored under the object template directory. Here we detail each file in the object model directory by taking the "box" as an example.
+
+We have three types of meshes to define the object model:
+- a simplified mesh without texture (used for the numbers in our CVPR paper)
+ - `mesh.obj`: mesh without texture
+ - `parts.json`: one entry per mesh vertex indicating which part the vertex belongs to (1 for top; 0 for bottom); see the sketch below
+- two meshes (top and bottom parts of the object) without texture
+ - `top.obj`: top part object mesh
+ - `bottom.obj`: bottom part object mesh
+- a denser mesh with texture
+ - `mesh_tex.obj`: textured mesh
+ - `material.jpg`: texture map
+ - `material.mtl`: material definition file referencing the texture map
+
+Note that the three types of meshes do not share topology, but they are aligned in the object canonical space such that they overlap with each other.
+
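+For example, a minimal sketch for splitting the vertices of `mesh.obj` into its two parts using `parts.json` (we assume the file is a flat JSON list with one 0/1 label per vertex, per the description above):
+
+```python
+import json
+
+import numpy as np
+
+with open("parts.json") as f:
+    parts = np.array(json.load(f))  # one {0, 1} label per mesh vertex
+top_idx = np.where(parts == 1)[0]  # vertex indices of the top part
+bottom_idx = np.where(parts == 0)[0]  # vertex indices of the bottom part
+```
+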
+Others:
+- `object_params.json`: defines mocap marker locations, 3D bounding box corners, and the object 3D keypoints used for training (16 keypoints per part), all in the object canonical space
+- `top_keypoints_300.json`: 300 keypoints on the top part of the object, selected with farthest point sampling
+- `bottom_keypoints_300.json`: same, for the bottom part
+
+## Model folder
+
+This folder stores the pre-trained models used in our CVPR paper, including their checkpoints and evaluation results (validation set). Each hashcode is the name of a model.
+
+```
+arctic_data/models/
+```
+
+CVPR paper pre-trained weights:
+
+| **EXP** | **Splits** | **Method** |
+|:---------:|:----------:|:---------------:|
+| 3558f1342 | P1 | ArcticNet-SF |
+| 423c6057b | P1 | ArcticNet-LSTM |
+| 28bf3642f | P2 | ArcticNet-SF |
+| 40ae50712 | P2 | ArcticNet-LSTM (dropout on) |
+| 1f9ac0b15 | P1 | InterField-SF |
+| 546c1e997 | P1 | InterField-LSTM |
+| 58e200d16 | P2 | InterField-SF |
+| 701a72569 | P2 | InterField-LSTM |
+
+Weights after code release:
+
+| **EXP** | **Splits** | **Method** |
+|:---------:|:----------:|:---------------:|
+| 66417ff6e | P1 | ArcticNet-SF |
+| fdc34e6c3 | P1 | ArcticNet-LSTM |
+| 7d09884c6 | P2 | ArcticNet-SF |
+| 49abdaee9 | P2 | ArcticNet-LSTM (dropout off) |
+| fb59bac27 | P1 | InterField-SF |
+| 5e6f6aeb9 | P1 | InterField-LSTM |
+| 782c39821 | P2 | InterField-SF |
+| ec90691f8 | P2 | InterField-LSTM |
+
+The model `40ae50712` is the one in our CVPR paper, but it had dropout turned on by mistake. `49abdaee9` is the retrained version with dropout turned off. All other LSTM models are trained without dropout.
+
+## Documentation on file formats
+
+Here we document the details of each field of our dataset.
+
+File: `'./data/arctic_data/data/raw_seqs/s01/box_grab_01.mano.npy'`
+
+- `s01/box_grab_01` denotes: subject 01 and sequence name `box_grab_01`
+- `rot`: MANO global rotation in axis-angle; `frames x 3`
+- `pose`: MANO pose parameters in axis-angle; `frames x 45`
+- `trans`: MANO global translation; `frames x 3`
+- `shape`: MANO shape parameters; `frames x 10`
+- `fitting_err`: MANO fitting error per frame; `frames`
+
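+A minimal inspection sketch (we assume the `.npy` stores a pickled dict keyed by the field names above, possibly nested per hand):
+
+```python
+import numpy as np
+
+seq_p = "./data/arctic_data/data/raw_seqs/s01/box_grab_01.mano.npy"
+mano = np.load(seq_p, allow_pickle=True).item()  # pickled dict
+for key, val in mano.items():
+    # print each field with its shape (or type, for non-array entries)
+    print(key, getattr(val, "shape", type(val)))
+```
+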
+File: `'./data/arctic_data/data/raw_seqs/s01/box_grab_01.smplx.npy'`
+
+- `transl`: SMPL-X global translation; `frames x 3`
+- `global_orient`: SMPL-X global rotation in axis-angle; `frames x 3`
+- `body_pose`: SMPL-X body pose in axis-angle; `frames x 63`
+- `jaw_pose`: SMPL-X jaw pose in axis-angle; `frames x 3`
+- `leye_pose`: SMPL-X left eye pose in axis-angle; `frames x 3`
+- `reye_pose`: SMPL-X right eye pose in axis-angle; `frames x 3`
+- `left_hand_pose`: SMPL-X left hand pose in axis-angle; `frames x 45`
+- `right_hand_pose`: SMPL-X right hand pose in axis-angle; `frames x 45`
+
+File: `'./data/arctic_data/data/raw_seqs/s01/box_grab_01.egocam.dist.npy'`
+
+- `R_k_cam_np`: Rotation matrix for the egocentric camera pose; `frames x 3 x 3`
+- `T_k_cam_np`: Translation for the egocentric camera pose; `frames x 3 x 1`
+- `intrinsics`: Egocentric camera intrinsics; `frames x 3 x 3`
+- `ego_markers.ref`: markers on camera in the canonical frame; `markers x 3`
+- `ego_markers.label`: names for the markers; `markers`
+- `R0`: Rotation for the camera pose in canonical space; `3 x 3`
+- `T0`: Translation for the camera pose in canonical space; `3 x 1`
+- `dist8`: Calibrated distortion parameters; `8`
+
+File: `'./data/arctic_data/data/raw_seqs/s01/box_grab_01.object.npy'`
+
+- Articulated object pose: 7 dimensions (1 dim for articulation, 3 dims for rotation in axis-angle, 3 dims for translation); `frames x 7`
+
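+A minimal sketch for splitting the 7 pose dimensions (the column order follows the description above):
+
+```python
+import numpy as np
+
+obj_p = "./data/arctic_data/data/raw_seqs/s01/box_grab_01.object.npy"
+obj = np.load(obj_p, allow_pickle=True)  # frames x 7
+arti = obj[:, 0]  # articulation angle (radians)
+rot_aa = obj[:, 1:4]  # object global rotation (axis-angle)
+trans = obj[:, 4:7]  # object global translation
+```
+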
+File: `'./data/arctic_data/data/splits/p1_val.npy'`
+
+- `data_dict - s05/box_use_01 - cam_coord`:
+ - About views (10 dimensions): 1 egocentric view, 8 allocentric views, 1 egocentric view with distortion
+ - `joints.right`: right-hand joints in camera coordinates; `frames x views x 21 x 3`
+ - `joints.left`: left-hand joints in camera coordinates; `frames x views x 21 x 3`
+ - `bbox3d`: two 3D bounding boxes around the object parts; `frames x views x 16 x 3`
+ - `kp3d`: 3D keypoints for the two object parts; `frames x views x 32 x 3`
+ - `joints.smplx`: SMPL-X joints in camera coordinates; `frames x views x 127 x 3`
+ - `rot_r_cam`: MANO right-hand rotation in axis-angle; `frames x views x 3`
+ - `rot_l_cam`: MANO left-hand rotation in axis-angle; `frames x views x 3`
+ - `obj_rot_cam`: object rotation in axis-angle; `frames x views x 3`
+ - `is_valid`: indicator of whether the image is valid for evaluation; `frames x views`
+ - `left_valid`: indicator of whether the left hand is valid for evaluation; `frames x views`
+ - `right_valid`: indicator of whether the right hand is valid for evaluation; `frames x views`
+- `data_dict - s05/box_use_01 - 2d`
+ - `joints.right`: 2D projection of right-hand joints in image space; `frames x views x 21 x 2`
+ - `joints.left`: 2D projection of left-hand joints in image space; `frames x views x 21 x 2`
+ - `bbox3d`: 2D projection of 3D bounding box corners in image space; `frames x views x 16 x 2`
+ - `kp3d`: 2D projection of 3D keypoints in image space; `frames x views x 32 x 2`
+ - `joints.smplx`: 2D projection of SMPL-X joints in image space; `frames x views x 127 x 2`
+- `data_dict - s05/box_use_01 - bbox`: squared bounding box on the image for network input; three values `cx, cy, scale`, where `(cx, cy)` is the bbox center and `scale*200` is the bounding box width and height; `frames x views x 3` (see the sketch after this list)
+- `data_dict - s05/box_use_01 - params`
+ - `rot_r`: MANO right-hand rotation (axis-angle) in world coordinate; `frames x 3`
+ - `pose_r`: MANO right-hand pose (axis-angle) in world coordinate; `frames x 45`
+ - `trans_r`: MANO right-hand translation in world coordinate; `frames x 3`
+ - `shape_r`: MANO right-hand shape in world coordinate; `frames x 10`
+ - `fitting_err_r`: Fitting error for MANO; `frames`
+ - `rot_l`: MANO left-hand rotation (axis-angle) in world coordinate; `frames x 3`
+ - `pose_l`: MANO left-hand pose (axis-angle) in world coordinate; `frames x 45`
+ - `trans_l`: MANO left-hand translation in world coordinate; `frames x 3`
+ - `shape_l`: MANO left-hand shape in world coordinate; `frames x 10`
+ - `fitting_err_l`: Fitting error for MANO; `frames`
+ - `smplx_transl`: SMPL-X translation in world coordinate; `frames x 3`
+ - `smplx_global_orient`: SMPL-X global orientation in world coordinate; `frames x 3`
+ - `smplx_body_pose`: SMPL-X body pose (axis-angle) in world coordinate; `frames x 63`
+ - `smplx_jaw_pose`: SMPL-X jaw pose in world coordinate; `frames x 3`
+ - `smplx_leye_pose`: SMPL-X left eye pose in world coordinate; `frames x 3`
+ - `smplx_reye_pose`: SMPL-X right eye pose in world coordinate; `frames x 3`
+ - `smplx_left_hand_pose`: SMPL-X left hand pose in world coordinate; `frames x 45`
+ - `smplx_right_hand_pose`: SMPL-X right hand pose in world coordinate; `frames x 45`
+ - `obj_arti`: object articulation in radian; `frames`
+ - `obj_rot`: object global rotation (axis-angle) in world coordinate; `frames x 3`
+ - `obj_trans`: object translation in world coordinate; `frames x 3`
+ - `world2ego`: egocentric camera pose; transformation matrices from world to the egocentric camera canonical frame; `frames x 4 x 4`
+ - `dist`: distortion parameters for egocentric camera; `8`
+ - `K_ego`: egocentric camera intrinsics; `3 x 3`
+- `imgnames`: paths to images; `num_images`
+
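+As a reference for the `bbox` convention above, a hypothetical helper (not part of the codebase) that converts `cx, cy, scale` into pixel-space corners:
+
+```python
+def bbox_to_corners(cx, cy, scale):
+    # scale * 200 is the side length of the square box in pixels
+    half = 0.5 * scale * 200.0
+    return cx - half, cy - half, cx + half, cy + half
+```
+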
+File: `'./data/arctic_data/data/feat/3558f1342/p1_minival.pt'`
+
+- `3558f1342` is the model name
+- `imgnames`: paths to images (used as a unique key); `num_images`
+- `feat_vec`: extracted image features; `num_images x 2048`
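+
+A minimal loading sketch (assuming the `.pt` file stores a dict with exactly these two keys):
+
+```python
+import torch
+
+feat = torch.load("./data/arctic_data/data/feat/3558f1342/p1_minival.pt")
+print(len(feat["imgnames"]))  # num_images
+print(feat["feat_vec"].shape)  # num_images x 2048
+```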
+
+
+
diff --git a/docs/data/processing.md b/docs/data/processing.md
new file mode 100644
index 0000000..23602d2
--- /dev/null
+++ b/docs/data/processing.md
@@ -0,0 +1,56 @@
+# Data processing & splits
+
+## Data splits
+
+**CVPR paper splits**
+
+- protocol 1: allocentric split (test set GT is hidden)
+- protocol 2: egocentric split (test set GT is hidden)
+
+Note that, as per our evaluation protocol in the paper, allocentric training images in protocol 1 can be used for protocol 2 training. In our paper, we first pre-train on protocol 1 images and then finetune on protocol 2 for the egocentric regressor. If you want to train directly on both allocentric and egocentric training images for protocol 2 evaluation, you can create a custom split.
+
+See [`docs/data/data_doc.md`](data_doc.md) for an explanation of each file in the `arctic_data` folder.
+
+## Advanced usage
+
+### Process raw sequences
+
+```bash
+# process a specific seq; do not save vertices for smaller storage
+python scripts_data/process_seqs.py --mano_p ./unpack/arctic_data/data/raw_seqs/s01/espressomachine_use_01.mano.npy
+
+# process all seqs; do not save vertices for smaller storage
+python scripts_data/process_seqs.py
+
+# process all seqs while exporting the vertices for visualization
+python scripts_data/process_seqs.py --export_verts
+```
+
+### Create data split from processed sequences
+
+Our baselines load the pre-processed splits from `data/arctic_data/data/splits`. In case you need a custom split, you can build one following the example below (shown for the validation set), which generates the split files under `outputs/splits/`.
+
+Build a data split from processed sequence:
+
+```bash
+# Build validation set based on protocol p1 defined at arctic_data/data/splits_json/protocol_p1.json
+python scripts_data/build_splits.py --protocol p1 --split val --process_folder ./outputs/processed/seqs
+
+# Same as above, but build with vertices too
+python scripts_data/build_splits.py --protocol p1 --split val --process_folder ./outputs/processed_verts/seqs
+```
+
+⚠️ The dataloader for the models in our CVPR paper does not require vertices in the split files. If the processed sequences were built with `--export_verts`, this script will aggregate the vertices as well, leading to large storage requirements.
+
+### Crop images for faster data loading
+
+Our images are high-resolution, so if image reading speed limits training on your machine, consider cropping the images around a larger region centered at the bounding boxes to reduce the data-loading cost. We provide a download link for pre-cropped images. For a custom crop, you can use the script below:
+
+```bash
+# crop all images from all sequences using bbox defined in the process folder on a single machine
+python scripts_data/crop_images.py --task_id -1 --process_folder ./outputs/processed/seqs
+
+# crop all images from one sequence using bbox defined in the process folder
+# this is used for cluster preprocessing where AGENT_ID is from 0 to num_nodes-1
+python scripts_data/crop_images.py --task_id AGENT_ID --process_folder ./outputs/processed/seqs
+```
diff --git a/docs/data/visualize.md b/docs/data/visualize.md
new file mode 100644
index 0000000..46e31d6
--- /dev/null
+++ b/docs/data/visualize.md
@@ -0,0 +1,51 @@
+# AIT Viewer with ARCTIC
+
+Our visualization is powered by [AITViewer](https://github.com/eth-ait/aitviewer).
+
+## Examples
+
+```bash
+# render object and MANO for a given sequence
+python scripts_data/visualizer.py --seq_p ./outputs/processed_verts/seqs/s01/capsulemachine_use_01.npy --object --mano
+
+# render object and MANO for a given sequence on view 2
+python scripts_data/visualizer.py --seq_p ./outputs/processed_verts/seqs/s01/capsulemachine_use_01.npy --object --mano --view_idx 2
+
+# render object and MANO for a given sequence on egocentric view
+python scripts_data/visualizer.py --seq_p ./outputs/processed_verts/seqs/s01/capsulemachine_use_01.npy --object --mano --view_idx 0
+
+# render object and MANO for a given sequence on egocentric view while taking lens distortion into account
+python scripts_data/visualizer.py --seq_p ./outputs/processed_verts/seqs/s01/capsulemachine_use_01.npy --object --mano --view_idx 0 --distort
+
+# render in headless mode to obtain RGB images (with meshes), depth, segmentation masks, and mp4 video of the visualization
+python scripts_data/visualizer.py --seq_p ./outputs/processed_verts/seqs/s01/capsulemachine_use_01.npy --object --mano --headless
+
+# render object and SMPLX for a given sequence without images
+python scripts_data/visualizer.py --seq_p ./outputs/processed_verts/seqs/s01/capsulemachine_use_01.npy --object --smplx --no_image
+
+# render all sequences into videos, RGB images with meshes, depth maps, and segmentation masks
+python scripts_data/visualizer.py --object --smplx --headless
+```
+
+## Options
+
+- `view_idx`: camera view to visualize; `0` is for egocentric view; `{1, .., 8}` are for 3rd-person views.
+- `seq_p`: path to processed sequence to visualize. When this option is not specified, the program will run on all sequences (e.g., when you want to render depth masks for all sequences).
+- `headless`: when off, the viewer runs in interactive mode; when on, we render and save images with GT overlays, depth maps, segmentation masks, and videos to disk.
+- `mano`: include MANO in the scene
+- `smplx`: include SMPLX in the scene
+- `object`: include object in the scene
+- `no_image`: do not show images.
+- `distort`: in the egocentric view, lens distortion is severe because the camera is close to the 3D objects, leading to a mismatch between the 3D geometry and the images. When turned on, this option uses the lens distortion parameters to distort the 3D geometry for better GT-image overlap, following ["vertex displacement for distortion correction"](https://stackoverflow.com/questions/44489686/camera-lens-distortion-in-opengl). However, this method creates artifacts when the 3D geometry is very close to the camera.
+
+## Controls to interact with the viewer
+
+[AITViewer](https://github.com/eth-ait/aitviewer) has lots of useful built-in controls. For an explanation of the frontend and controls, visit [here](https://eth-ait.github.io/aitviewer/frontend.html). Here we assume you are in interactive mode (`--headless` is turned off).
+
+- To play/pause the animation, hit `<SPACE>`.
+- To center around an object, click the mesh you want to center on, then press `X`.
+- To go between the previous and the current frame, press `<` and `>`.
+
+More documentation can be found in [aitviewer github](https://github.com/eth-ait/aitviewer) and in [viewer docs](https://eth-ait.github.io/aitviewer/frontend.html).
diff --git a/docs/faq.md b/docs/faq.md
new file mode 100644
index 0000000..3fc56ed
--- /dev/null
+++ b/docs/faq.md
@@ -0,0 +1,21 @@
+# FAQ
+
+QUESTION: **Why do groundtruth hands not have complete overlap with the image in the visualization via `scripts_method/train.py`?**
+ANSWER: Like most hand-object reconstruction methods, ArcticNet does not assume known camera intrinsics; we use a weak-perspective camera model with a fixed focal length. The 2D misalignment between the groundtruth and the image is caused by these weak-perspective camera intrinsics.
+
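+To make the approximation concrete, here is a minimal weak-perspective projection sketch (illustrative only; the names and exact parameterization are assumptions, not the ArcticNet code):
+
+```python
+import numpy as np
+
+
+def weak_perspective_project(joints3d, s, tx, ty):
+    # all points share a single depth, so projection reduces to one 2D
+    # scale s plus a translation (tx, ty); per-point depth variation is
+    # ignored, which is what causes the 2D misalignment
+    return s * joints3d[:, :2] + np.array([tx, ty])
+```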
+
+QUESTION: **Why is there more misalignment in egocentric view?**
+ANSWER: This is mainly caused by lens distortion, which is strong in the egocentric view because the 3D geometry is close to the camera. We estimated distortion parameters from camera calibration sequences; these can be used to apply the distortion effect to the meshes via ["vertex displacement for distortion correction"](https://stackoverflow.com/questions/44489686/camera-lens-distortion-in-opengl).
+
+See the `--distort` flag for details:
+
+```bash
+python scripts_data/visualizer.py --seq_p ./outputs/processed_verts/seqs/s01/capsulemachine_use_01.npy --object --mano --view_idx 0 --distort
+```
+
+QUESTION: **Assertion error related to 21 joints**
+ANSWER: This happens because the `smplx` package returns 16 joints for the MANO hand by default. See the [setup instructions](setup.md) to enable 21 joints.
diff --git a/docs/model/README.md b/docs/model/README.md
new file mode 100644
index 0000000..30a928d
--- /dev/null
+++ b/docs/model/README.md
@@ -0,0 +1,219 @@
+# ARCTIC baselines
+
+## Table of contents
+
+- [Overview](#overview)
+- [Getting started](#getting-started)
+- [Training examples](#training)
+- [Full instructions on training](train.md)
+- [Evaluation](#evaluation)
+- [Visualization examples](#visualization)
+- [Full instructions on visualization](visualize.md)
+- [Details on `extract_predicts.py`](extraction.md)
+
+## Overview
+
+**PyTorch Lightning**: To avoid boilerplate code, we use [pytorch lightning (PL) 2.0.0](https://pytorch-lightning.readthedocs.io/en/2.0.0/common/trainer.html) to handle the main logic for training and evaluation. Feel free to consult the documentation, should you have any questions.
+
+**Comet logger**: To keep track of experiments and visualize results, our code logs experiments using [`comet.ml`](https://comet.ml). If you wish to use your own logging service, you mostly need to modify the code in `common/comet_utils.py`. This code is only meant as a guideline; you are free to modify it to whatever extent you deem necessary.
+
+To configure the comet logger, first register an account and create a private project. An API key will be provided for you to log experiments. Then export the API key and the workspace ID:
+
+```bash
+export COMET_API_KEY="your_api_key_here"
+export COMET_WORKSPACE="your_workspace_here"
+```
+
+It might be a good idea to add these commands to the end of your `~/.bashrc` file, so you don't have to export them every time you log in to your machine.
+
+We call the allocentric split and the egocentric split in our CVPR paper the `p1` split and `p2` split respectively.
+
+Each experiment is tracked with a 9-character ID. When the training procedure starts, a random ID (e.g., `837e1e5b2`) is assigned to the experiment, and a folder (e.g., `logs/837e1e5b2`) is created to store information about the experiment.
+
+## Getting started
+
+Here we assume you have:
+1. Finished setting up the environment.
+2. Downloaded and unzipped the data following [`docs/data/README.md`](../data/README.md).
+3. Finished configuring your logger.
+
+To use the data, you need to move it out of `unpack`:
+
+```bash
+mv unpack data
+```
+
+You should have a file structure like this after running `ls data/*`:
+
+```
+data/arctic_data:
+data models
+
+data/body_models:
+mano smplx
+```
+
+To sanity-check your setup, let's run two dry-run training procedures.
+
+```bash
+# train ArcticNet-SF on a very small ARCTIC dataset (allocentric split)
+python scripts_method/train.py --setup p1 --method arctic_sf -f --mute --num_epoch 2
+```
+
+```bash
+# train Field-SF on a very small ARCTIC dataset (egocentric split)
+python scripts_method/train.py --setup p2 --method field_sf -f --mute --num_epoch 2
+```
+
+Now enable your comet logger by removing `--mute`, which starts logging to the comet server. A URL will be generated so that you can visualize your model's predictions during training, along with the corresponding groundtruth:
+
+```bash
+python scripts_method/train.py --setup p1 --method arctic_sf -f --num_epoch 2
+```
+
+Click on any example under `Graphics` in your comet experiment. If the hands and objects appear in the left column, the groundtruth is (mostly) overlaid on the image, and training finished 2 epochs, then your environment is in good shape.
+
+As a first-timer, it is normal to run into some issues. See our [FAQ](../faq.md) for possible solutions.
+
+## Training
+
+⚠️ As per our CVPR paper, our evaluation protocol requires models to be trained only on the training set. You may not train on the validation set, but you may use it for hyperparameter search. The test set groundtruth is hidden for online evaluation. Here we provide instructions to train on the training set and evaluate on the validation set.
+
+Here we detail some of the options in the `argparse` parsers we use; see `src/parsers` for the full list.
+
+- `-f`: Run on a mini training and validation set (i.e., a dry-run).
+- `--mute`: Do not create a new experiment on the remote comet logger.
+- `--setup`: name of the split to use. A split is a partition of ARCTIC data into training, validation and test sets.
+- `--trainsplit`: Training split to use
+- `--valsplit`: Split to use to evaluate during training
+
+The following commands train the single-frame baselines from our paper in the allocentric setting (i.e., the `p1` split). For the complete training guide for all models, please refer to [`train.md`](train.md).
+
+```bash
+# training on ArcticNet-SF in the allocentric setting in our paper
+python scripts_method/train.py --setup p1 --method arctic_sf --trainsplit train --valsplit minival
+```
+
+```bash
+# training on InterField-SF in the allocentric setting in our paper
+python scripts_method/train.py --setup p1 --method field_sf --trainsplit train --valsplit minival
+```
+
+Since you have the groundtruth in this split, you can view your metrics and losses under `Panels` of your comet experiment. You can also see the visualization of the prediction and groundtruth under `Graphics` in your comet experiment.
+
+## Evaluation
+
+Here we show evaluation steps using our pre-trained models. Copy our pre-trained models to `./logs`:
+
+```bash
+cp -r data/arctic_data/models/* logs/
+```
+
+Our evaluation process consists of two steps: 1) dumping predictions to disk (`extract_predicts.py`); 2) evaluating the predictions against the GT (`evaluate_metrics.py`).
+
+Here we assume you are using the `p1` or `p2` splits and we show instructions for local evaluation.
+
+⚠️ Instructions for `p1` and `p2` evaluation for the test set will be provided once the evaluation server is online.
+
+### Evaluate ArcticNet
+
+Evaluate allocentric ArcticNet-SF on val set on `p1` split:
+```bash
+# PLACEHOLDERS
+python scripts_method/extract_predicts.py --setup p1 --method arctic_sf --load_ckpt logs/3558f1342/checkpoints/last.ckpt --run_on val --extraction_mode eval_pose
+python scripts_method/evaluate_metrics.py --eval_p logs/3558f1342/eval --split val --setup p1 --task pose
+```
+
+Evaluate allocentric ArcticNet-LSTM on val set on `p1` split:
+```bash
+# PLACEHOLDERS
+python scripts_method/extract_predicts.py --setup p1 --method arctic_lstm --load_ckpt logs/423c6057b/checkpoints/last.ckpt --run_on val --extraction_mode eval_pose
+python scripts_method/evaluate_metrics.py --eval_p logs/423c6057b/eval --split val --setup p1 --task pose
+```
+
+Evaluate egocentric ArcticNet-SF on val set on `p2` split:
+```bash
+# PLACEHOLDERS
+python scripts_method/extract_predicts.py --setup p2 --method arctic_sf --load_ckpt logs/28bf3642f/checkpoints/last.ckpt --run_on val --extraction_mode eval_pose
+python scripts_method/evaluate_metrics.py --eval_p logs/28bf3642f/eval --split val --setup p2 --task pose
+```
+
+Evaluate egocentric ArcticNet-LSTM on val set on `p2` split:
+```bash
+# PLACEHOLDERS
+python scripts_method/extract_predicts.py --setup p2 --method arctic_lstm --load_ckpt logs/40ae50712/checkpoints/last.ckpt --run_on val --extraction_mode eval_pose
+python scripts_method/evaluate_metrics.py --eval_p logs/40ae50712/eval --split val --setup p2 --task pose
+```
+
+### Evaluate InterField
+
+Evaluate allocentric InterField-SF on val set on `p1` split:
+```bash
+# PLACEHOLDERS
+python scripts_method/extract_predicts.py --setup p1 --method field_sf --load_ckpt logs/1f9ac0b15/checkpoints/last.ckpt --run_on val --extraction_mode eval_field
+python scripts_method/evaluate_metrics.py --eval_p logs/1f9ac0b15/eval --split val --setup p1 --task field
+```
+
+Evaluate allocentric InterField-LSTM on val set on `p1` split:
+```bash
+# PLACEHOLDERS
+python scripts_method/extract_predicts.py --setup p1 --method field_lstm --load_ckpt logs/546c1e997/checkpoints/last.ckpt --run_on val --extraction_mode eval_field
+python scripts_method/evaluate_metrics.py --eval_p logs/546c1e997/eval --split val --setup p1 --task field
+```
+
+Evaluate egocentric InterField-SF on val set on `p2` split:
+```bash
+# PLACEHOLDERS
+python scripts_method/extract_predicts.py --setup p2 --method field_sf --load_ckpt logs/58e200d16/checkpoints/last.ckpt --run_on val --extraction_mode eval_field
+python scripts_method/evaluate_metrics.py --eval_p logs/58e200d16/eval --split val --setup p2 --task field
+```
+
+Evaluate egocentric InterField-LSTM on val set on `p2` split:
+```bash
+# PLACEHOLDERS
+python scripts_method/extract_predicts.py --setup p2 --method field_lstm --load_ckpt logs/701a72569/checkpoints/last.ckpt --run_on val --extraction_mode eval_field
+python scripts_method/evaluate_metrics.py --eval_p logs/701a72569/eval --split val --setup p2 --task field
+```
+
+For details of `extract_predicts.py`, see [here](extraction.md).
+
+## Visualization
+
+We use `scripts_method/visualizer.py` to visualize model predictions and the corresponding GT. Unlike `scripts_data/visualizer.py`, here we show the actual model inputs and outputs; therefore, the images are not full-resolution.
+
+Options for `visualizer.py`:
+- `--exp_folder`: the path to the experiment; the prediction will be saved there after the extraction.
+- `--seq_name`: sequence to visualize. For example, `s03_box_grab_01_1` denotes the sequence `s03_box_grab_01` and camera id `1`.
+- `--mode`: defines what to visualize; `{gt_mesh, pred_mesh, gt_field_r, gt_field_l, pred_field_r, pred_field_l}`
+- `--headless`: headless rendering. It renders the mesh2rgb overlay video. You can also render other modalities such as segmentation masks.
+
+Examples to visualize pose estimation:
+
+```bash
+# PLACEHOLDERS
+# dump predictions
+python scripts_method/extract_predicts.py --setup p1 --method arctic_sf --load_ckpt logs/3558f1342/checkpoints/last.ckpt --run_on val --extraction_mode vis_pose
+
+# visualize gt_mesh
+python scripts_method/visualizer.py --exp_folder logs/3558f1342 --seq_name s03_box_grab_01_1 --mode gt_mesh
+
+# visualize pred_mesh
+python scripts_method/visualizer.py --exp_folder logs/3558f1342 --seq_name s03_box_grab_01_1 --mode pred_mesh
+```
+
+Examples to visualize interaction field estimation:
+
+```bash
+# PLACEHOLDERS
+# dump predictions
+python scripts_method/extract_predicts.py --setup p1 --method field_sf --load_ckpt logs/1f9ac0b15/checkpoints/last.ckpt --run_on val --extraction_mode vis_field
+
+# visualize gt field for right hand
+python scripts_method/visualizer.py --exp_folder logs/1f9ac0b15 --seq_name s03_box_grab_01_1 --mode gt_field_r
+
+# visualize predicted field for left hand
+python scripts_method/visualizer.py --exp_folder logs/1f9ac0b15 --seq_name s03_box_grab_01_1 --mode pred_field_l
+```
+
+For details of `extract_predicts.py`, see [here](extraction.md).
+
diff --git a/docs/model/extraction.md b/docs/model/extraction.md
new file mode 100644
index 0000000..1668812
--- /dev/null
+++ b/docs/model/extraction.md
@@ -0,0 +1,80 @@
+
+# Extraction
+
+To run our training (for LSTM models), evaluation, and visualization pipelines, we need to save certain predictions to disk in advance. Here we detail the extraction script options.
+
+## Script options
+
+Options:
+- `--setup`: the split to use; `{p1, p2}`
+- `--method`: model name; `{arctic_sf, arctic_lstm, field_sf, field_lstm}`
+- `--load_ckpt`: checkpoint path
+- `--run_on`: split to extract prediction on; `{train, val, test}`
+- `--extraction_mode`: this defines what predicted variables to extract
+
+Explanation of `--setup`:
+- `p1`: allocentric split in our CVPR paper
+- `p2`: egocentric split in our CVPR paper
+
+Explanation of `--extraction_mode`:
+- `eval_pose`: dump the predicted variables needed to evaluate pose reconstruction. The evaluation is done locally (assumes GT is available).
+- `eval_field`: dump the predicted variables needed to evaluate interaction field estimation. The evaluation is done locally (assumes GT is available).
+- `submit_pose`: dump the predicted variables needed to evaluate pose reconstruction. The evaluation is done via a submission server for test set evaluation.
+- `submit_field`: dump the predicted variables needed to evaluate interaction field estimation. The evaluation is done via a submission server for test set evaluation.
+- `feat_pose`: extract image feature vectors for pose estimation (these features are the inputs of the LSTM models, which avoids running a backbone during training for speed).
+- `feat_field`: extract image feature vectors for interaction field estimation.
+- `vis_pose`: extract predictions for visualizing pose estimation in our viewer.
+- `vis_field`: extract predictions for visualizing interaction field estimation in our viewer.
+
+## Extraction examples
+
+Here we show extraction examples using our pre-trained models. To start, copy our pre-trained models to `./logs`:
+
+```bash
+mkdir -p logs
+cp -r data/arctic_data/models/* logs/
+```
+
+**Example**: Suppose that I want to:
+- evaluate the *ArcticNet-SF* pose estimation model (`3558f1342`)
+- run on the *val* set
+- use the split `p1` to evaluate locally (therefore, `eval_pose`)
+- use the checkpoint at `logs/3558f1342/checkpoints/last.ckpt`
+
+```bash
+python scripts_method/extract_predicts.py --setup p1 --method arctic_sf --load_ckpt logs/3558f1342/checkpoints/last.ckpt --run_on val --extraction_mode eval_pose
+```
+
+**Example**: Suppose that I want to:
+- evaluate the *ArcticNet-SF* pose estimation model (`3558f1342`)
+- run on the *test* set
+- use the CVPR split `p1` to evaluate so that we submit to the evaluation server later (therefore, `submit_pose`)
+- use the checkpoint at `logs/3558f1342/checkpoints/last.ckpt`
+
+```bash
+python scripts_method/extract_predicts.py --setup p1 --method arctic_sf --load_ckpt logs/3558f1342/checkpoints/last.ckpt --run_on test --extraction_mode submit_pose
+```
+
+**Example**: Suppose that I want to:
+- visualize the prediction of the *ArcticNet-SF* pose estimation model (`3558f1342`); therefore, `vis_pose`
+- run on the *val* set
+- use the split `p1` to evaluate
+- use the checkpoint at `logs/3558f1342/checkpoints/last.ckpt`
+
+```bash
+python scripts_method/extract_predicts.py --setup p1 --method arctic_sf --load_ckpt logs/3558f1342/checkpoints/last.ckpt --run_on val --extraction_mode vis_pose
+```
+
+**Example**: Suppose that I want to:
+- extract image features for training the *ArcticNet-LSTM* pose estimation model on the training and val sets
+- use the split `p1`
+- we first need to save the visual features of the *ArcticNet-SF* model (`3558f1342`) to disk; therefore, `feat_pose`
+
+```bash
+# extract for training
+python scripts_method/extract_predicts.py --setup p1 --method arctic_sf --load_ckpt logs/3558f1342/checkpoints/last.ckpt --run_on train --extraction_mode feat_pose
+
+# extract for evaluation on val set
+python scripts_method/extract_predicts.py --setup p1 --method arctic_sf --load_ckpt logs/3558f1342/checkpoints/last.ckpt --run_on val --extraction_mode feat_pose
+```
+
diff --git a/docs/model/train.md b/docs/model/train.md
new file mode 100644
index 0000000..e20e0ab
--- /dev/null
+++ b/docs/model/train.md
@@ -0,0 +1,211 @@
+# Training our CVPR baselines
+
+To better illustrate the training process, we give hypothetical names for each model, such as `aaaaaaaaa`.
+
+## ArcticNet
+
+### ArcticNet-SF: Allocentric
+
+Model: `aaaaaaaaa`
+
+```bash
+python scripts_method/train.py --setup p1 --method arctic_sf --trainsplit train --valsplit minival
+```
+
+### ArcticNet-SF: Egocentric
+
+Model: `bbbbbbbbb`
+
+As per our experiment protocol, a model in the egocentric setting has access to both allocentric and egocentric images during training. To speed up training, we therefore finetune a pre-trained allocentric model on the egocentric training images (1 camera).
+
+To train the egocentric model, we do:
+
+```bash
+python scripts_method/train.py --setup p2 --method arctic_sf --trainsplit train --valsplit minival --load_ckpt logs/aaaaaaaaa/checkpoints/last.ckpt
+```
+
+### ArcticNet-LSTM: Allocentric
+
+Model: `ccccccccc`
+
+Maintaining an image backbone in the training loop is extremely costly, so, following [VIBE](https://github.com/mkocabas/VIBE), we first store the image features of each training and validation image to disk and then train an LSTM to regress hand and object poses directly from these features. The following instructions explain how to 1) extract image features to disk with the ArcticNet-SF (`aaaaaaaaa`) backbone; 2) package the feature vectors; 3) train the LSTM model. For details of `extract_predicts.py`, see [here](extraction.md).
+
+Extract image features from `aaaaaaaaa` backbone:
+
+```bash
+# extract image features from aaaaaaaaa on training set
+# this is for training
+python scripts_method/extract_predicts.py --setup p1 --method arctic_sf --load_ckpt logs/aaaaaaaaa/checkpoints/last.ckpt --run_on train --extraction_mode feat_pose
+
+# extract image features from aaaaaaaaa on val set (or test set)
+# this is for evaluation
+python scripts_method/extract_predicts.py --setup p1 --method arctic_sf --load_ckpt logs/aaaaaaaaa/checkpoints/last.ckpt --run_on val --extraction_mode feat_pose
+```
+
+Packaging feature vectors for different splits:
+
+```bash
+python scripts_method/build_feat_split.py --split train --protocol p1 --eval_p logs/aaaaaaaaa/eval
+python scripts_method/build_feat_split.py --split minitrain --protocol p1 --eval_p logs/aaaaaaaaa/eval
+python scripts_method/build_feat_split.py --split val --protocol p1 --eval_p logs/aaaaaaaaa/eval
+python scripts_method/build_feat_split.py --split tinyval --protocol p1 --eval_p logs/aaaaaaaaa/eval
+python scripts_method/build_feat_split.py --split minival --protocol p1 --eval_p logs/aaaaaaaaa/eval
+```
+
+At the end of the packaging, there is a verification process that checks whether each image has a feature vector. If so, `Pass verification` will be printed.
+
+Under `src/parsers/configs/arctic_lstm.py`, update `img_feat_version`. This decides which model's features are used to train the LSTM model;
+the single-frame model's decoder weights are also used to initialize the LSTM decoder. Set the image feature version to `aaaaaaaaa`:
+
+```python
+# allocentric setting
+DEFAULT_ARGS_ALLO["img_feat_version"] = "aaaaaaaaa"
+```
+
+Start training:
+
+```bash
+python scripts_method/train.py --setup p1 --method arctic_lstm
+```
+
+### ArcticNet-LSTM: Egocentric
+
+Model: `ddddddddd`
+
+Extract image features from `bbbbbbbbb` backbone:
+
+```bash
+# extract image features from bbbbbbbbb on training set
+# this is for training
+python scripts_method/extract_predicts.py --setup p2 --method arctic_sf --load_ckpt logs/bbbbbbbbb/checkpoints/last.ckpt --run_on train --extraction_mode feat_pose
+
+# extract image features from bbbbbbbbb on val set (or test set)
+# this is for evaluation
+python scripts_method/extract_predicts.py --setup p2 --method arctic_sf --load_ckpt logs/bbbbbbbbb/checkpoints/last.ckpt --run_on val --extraction_mode feat_pose
+```
+
+Packaging feature vectors for different splits:
+
+```bash
+python scripts_method/build_feat_split.py --split train --protocol p2 --eval_p logs/bbbbbbbbb/eval
+python scripts_method/build_feat_split.py --split minitrain --protocol p2 --eval_p logs/bbbbbbbbb/eval
+python scripts_method/build_feat_split.py --split val --protocol p2 --eval_p logs/bbbbbbbbb/eval
+python scripts_method/build_feat_split.py --split tinyval --protocol p2 --eval_p logs/bbbbbbbbb/eval
+python scripts_method/build_feat_split.py --split minival --protocol p2 --eval_p logs/bbbbbbbbb/eval
+```
+
+At the end of packaging, a verification step checks whether every image has a feature vector. If so, `Passed verification` is printed.
+
+Under `src/parsers/configs/arctic_lstm.py`, update `img_feat_version`. This decides which model's features are used to train the LSTM model;
+the single-frame model's decoder weights are also used to initialize the LSTM decoder. Set the image feature version to `bbbbbbbbb`:
+
+```python
+# egocentric setting
+DEFAULT_ARGS_EGO["img_feat_version"] = "bbbbbbbbb"
+```
+
+Start training:
+
+```bash
+python scripts_method/train.py --setup p2 --method arctic_lstm
+```
+
+## InterField
+
+### InterField-SF: Allocentric
+
+Model: `eeeeeeeee`
+
+```bash
+# train InterField-SF in the allocentric setting (as in our paper)
+python scripts_method/train.py --setup p1 --method field_sf --trainsplit train --valsplit minival
+```
+
+### InterField-SF: Egocentric
+
+Model: `fffffffff`
+
+```bash
+python scripts_method/train.py --setup p2 --method field_sf --trainsplit train --valsplit minival --load_ckpt logs/eeeeeeeee/checkpoints/last.ckpt
+```
+
+### InterField-LSTM: Allocentric
+
+Model: `ggggggggg`
+
+Extract image features from `eeeeeeeee` backbone:
+
+```bash
+# extract image features from eeeeeeeee on training set
+# this is for training
+python scripts_method/extract_predicts.py --setup p1 --method field_sf --load_ckpt logs/eeeeeeeee/checkpoints/last.ckpt --run_on train --extraction_mode feat_pose
+
+# extract image features from eeeeeeeee on val set (or test set)
+# this is for evaluation
+python scripts_method/extract_predicts.py --setup p1 --method field_sf --load_ckpt logs/eeeeeeeee/checkpoints/last.ckpt --run_on val --extraction_mode feat_pose
+```
+
+Packaging feature vectors for different splits:
+
+```bash
+python scripts_method/build_feat_split.py --split train --protocol p1 --eval_p logs/eeeeeeeee/eval
+python scripts_method/build_feat_split.py --split minitrain --protocol p1 --eval_p logs/eeeeeeeee/eval
+python scripts_method/build_feat_split.py --split val --protocol p1 --eval_p logs/eeeeeeeee/eval
+python scripts_method/build_feat_split.py --split tinyval --protocol p1 --eval_p logs/eeeeeeeee/eval
+python scripts_method/build_feat_split.py --split minival --protocol p1 --eval_p logs/eeeeeeeee/eval
+```
+
+At the end of packaging, a verification step checks whether every image has a feature vector. If so, `Passed verification` is printed.
+
+Under `src/parsers/configs/field_lstm.py`, update `img_feat_version`. This decides which model's features are used to train the LSTM model;
+the single-frame model's decoder weights are also used to initialize the LSTM decoder. Set the image feature version to `eeeeeeeee`:
+
+```python
+# allocentric setting
+DEFAULT_ARGS_ALLO["img_feat_version"] = "eeeeeeeee"
+```
+
+Start training:
+
+```bash
+python scripts_method/train.py --setup p1 --method field_lstm
+```
+
+### InterField-LSTM: Egocentric
+
+Model: `hhhhhhhhh`
+
+Extract image features from `fffffffff` backbone:
+
+```bash
+# extract image features from fffffffff on training set
+# this is for training
+python scripts_method/extract_predicts.py --setup p2 --method field_sf --load_ckpt logs/fffffffff/checkpoints/last.ckpt --run_on train --extraction_mode feat_pose
+
+# extract image features from fffffffff on val set (or test set)
+# this is for evaluation
+python scripts_method/extract_predicts.py --setup p2 --method field_sf --load_ckpt logs/fffffffff/checkpoints/last.ckpt --run_on val --extraction_mode feat_pose
+```
+
+Packaging feature vectors for different splits:
+
+```bash
+python scripts_method/build_feat_split.py --split train --protocol p2 --eval_p logs/fffffffff/eval
+python scripts_method/build_feat_split.py --split minitrain --protocol p2 --eval_p logs/fffffffff/eval
+python scripts_method/build_feat_split.py --split val --protocol p2 --eval_p logs/fffffffff/eval
+python scripts_method/build_feat_split.py --split tinyval --protocol p2 --eval_p logs/fffffffff/eval
+python scripts_method/build_feat_split.py --split minival --protocol p2 --eval_p logs/fffffffff/eval
+```
+
+At the end of packaging, a verification step checks whether every image has a feature vector. If so, `Passed verification` is printed.
+
+Under `src/parsers/configs/field_lstm.py`, update `img_feat_version`. This decides which model's features are used to train the LSTM model;
+the single-frame model's decoder weights are also used to initialize the LSTM decoder. Set the image feature version to `fffffffff`:
+
+```python
+# egocentric setting
+DEFAULT_ARGS_EGO["img_feat_version"] = "fffffffff"
+```
+
+Start training:
+
+```bash
+python scripts_method/train.py --setup p2 --method field_lstm
+```
diff --git a/docs/setup.md b/docs/setup.md
new file mode 100644
index 0000000..ba14e44
--- /dev/null
+++ b/docs/setup.md
@@ -0,0 +1,81 @@
+
+
+## Getting started
+
+General Requirements:
+
+- Python 3.10
+- torch 1.13.0
+- CUDA 11.6 (check `nvcc --version`)
+- pytorch3d 0.7.3
+- pytorch-lightning 2.0.0
+- aitviewer 1.8.1
+
+Install the environment:
+
+```bash
+ENV_NAME=arctic_env
+conda create -n $ENV_NAME python=3.10
+conda activate $ENV_NAME
+```
+
+Check your CUDA `nvcc` version:
+
+```bash
+nvcc --version # should be 11.6
+```
+
+You can install `nvcc` and CUDA via the [runfile](https://developer.nvidia.com/cuda-11-6-0-download-archive). If `nvcc --version` still does not report `11.6`, check whether you are referencing the right `nvcc` with `which nvcc`. Assuming you have an NVIDIA driver installed, you usually only need to run the following command to install `nvcc` (as an example):
+
+```bash
+sudo bash cuda_11.6.0_510.39.01_linux.run --toolkit --silent --override
+```
+
+After the installation, make sure the paths point to the current CUDA toolkit location. For example:
+
+```bash
+export CUDA_HOME=/usr/local/cuda-11.6
+export PATH="/usr/local/cuda-11.6/bin:$PATH"
+export CPATH="/usr/local/cuda-11.6/include:$CPATH"
+export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda-11.6/lib64/"
+```
+
+Install packages:
+
+```bash
+pip install -r requirements.txt
+conda install pytorch=1.13.0 torchvision pytorch-cuda=11.6 -c pytorch -c nvidia
+```
+
+Install PyTorch3D:
+
+```bash
+# pytorch3d 0.7.3
+conda install -c fvcore -c iopath -c conda-forge fvcore iopath
+conda install -c bottler nvidiacub
+conda install pytorch3d -c pytorch3d
+```
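+
+As a quick sanity check of the environment, the following minimal sketch prints the installed versions (expected values follow the requirements above):
+
+```python
+# Minimal sketch: verify the installed torch/CUDA/pytorch3d combination.
+import torch
+import pytorch3d
+
+print(torch.__version__)          # expect 1.13.0
+print(torch.version.cuda)         # expect 11.6
+print(torch.cuda.is_available())  # expect True on a GPU machine
+print(pytorch3d.__version__)      # expect 0.7.3
+```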
+
+Install this version of numpy to avoid conflicts:
+
+```bash
+pip install numpy==1.22.4
+```
+
+Modify the `smplx` package to return 21 joints instead of 16:
+
+```bash
+vim /home//anaconda3/envs//lib//site-packages/smplx/body_models.py
+
+# uncomment L1681
+joints = self.vertex_joint_selector(vertices, joints)
+```
+
+If you are unsure where `body_models.py` is located, run the following in a Python shell:
+
+```bash
+python
+>>> import smplx
+>>> print(smplx.__file__)
+```
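+
+To verify that the edit took effect, forward a zero-pose MANO layer and check the joint count. A minimal sketch (the model path is a placeholder for wherever you extracted the MANO models):
+
+```python
+# Minimal sketch: the patched MANO layer should return 21 joints instead of 16.
+import torch
+import smplx
+
+mano = smplx.create("path/to/mano_models", model_type="mano", use_pca=False)  # placeholder path
+out = mano(global_orient=torch.zeros(1, 3), hand_pose=torch.zeros(1, 45))
+print(out.joints.shape)  # (1, 21, 3) after the edit; (1, 16, 3) before
+```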
+
diff --git a/docs/static/misalignment.png b/docs/static/misalignment.png
new file mode 100644
index 0000000..e42707a
Binary files /dev/null and b/docs/static/misalignment.png differ
diff --git a/docs/static/viewer_demo.gif b/docs/static/viewer_demo.gif
new file mode 100644
index 0000000..febfd0c
Binary files /dev/null and b/docs/static/viewer_demo.gif differ
diff --git a/requirements.txt b/requirements.txt
new file mode 100644
index 0000000..ce696b9
--- /dev/null
+++ b/requirements.txt
@@ -0,0 +1,27 @@
+comet_ml==3.32.8
+jpeg4py==0.1.4
+loguru
+matplotlib
+numpy>=1.16.5,<1.23.0
+opencv_python
+chardet
+Pillow
+pyrender
+pytorch_lightning==2.0.0
+scipy
+smplx==0.1.28
+tqdm
+trimesh==3.9.21
+scikit-image
+imgui==1.4.1
+aitviewer==1.8.1
+chumpy
+black
+autopep8
+flake8
+pylint
+isort
+easydict
+pygit2==1.7
+ipdb
+opencv-python-headless
diff --git a/scripts_data/build_splits.py b/scripts_data/build_splits.py
new file mode 100644
index 0000000..1ce1876
--- /dev/null
+++ b/scripts_data/build_splits.py
@@ -0,0 +1,59 @@
+import argparse
+import sys
+
+sys.path = ["."] + sys.path
+from src.arctic.split import build_split
+
+
+def construct_args():
+ parser = argparse.ArgumentParser()
+ parser.add_argument(
+ "--protocol",
+ type=str,
+ default=None,
+ )
+ parser.add_argument(
+ "--split",
+ type=str,
+ choices=["train", "val", "test", "all"],
+ default=None,
+ )
+ parser.add_argument(
+ "--request_keys",
+ type=str,
+ default="cam_coord.2d.bbox.params",
+ help="save data with these keys (separated by .)",
+ )
+ parser.add_argument(
+ "--process_folder", type=str, default="./outputs/processed/seqs"
+ )
+ args = parser.parse_args()
+ return args
+
+
+if __name__ == "__main__":
+ args = construct_args()
+ protocol = args.protocol
+ split = args.split
+ request_keys = args.request_keys.split(".")
+ if protocol == "all":
+ protocols = [
+ "p1", # allocentric
+ "p2", # egocentric
+ ]
+ else:
+ protocols = [protocol]
+
+ if split == "all":
+ if protocol in ["p1", "p2"]:
+ splits = ["train", "val", "test"]
+ else:
+ raise ValueError("Unknown protocol for option 'all'")
+ else:
+ splits = [split]
+
+ for protocol in protocols:
+ for split in splits:
+ if protocol in ["p1", "p2"]:
+ assert split not in ["test"], "val/test are hidden"
+ build_split(protocol, split, request_keys, args.process_folder)
diff --git a/scripts_data/checksum.py b/scripts_data/checksum.py
new file mode 100644
index 0000000..7bd454c
--- /dev/null
+++ b/scripts_data/checksum.py
@@ -0,0 +1,56 @@
+import json
+import os.path as op
+import traceback
+from glob import glob
+from hashlib import sha256
+
+from tqdm import tqdm
+
+
+def main():
+ release_folder = "./downloads"
+
+ print("Globing files...")
+ fnames = glob(op.join(release_folder, "**/*"), recursive=True)
+ print("Number of files to checksum: ", len(fnames))
+ pbar = tqdm(fnames)
+
+ with open("./bash/assets/checksum.json", "r") as f:
+ gt_checksum = json.load(f)
+
+ hash_dict = {}
+ for fname in pbar:
+ if op.isdir(fname):
+ continue
+ if ".zip" not in fname:
+ continue
+ if "models_smplx_v1_1.zip" in fname:
+ continue
+ if "mano_v1_2.zip" in fname:
+ continue
+
+ try:
+ with open(fname, "rb") as f:
+ pbar.set_description(f"Reading {fname}")
+ data = f.read()
+ hashcode = sha256(data).hexdigest()
+ key = fname.replace(release_folder, "")
+ hash_dict[key] = hashcode
+ if hashcode != gt_checksum[key]:
+ print(f"Error: {fname} has different checksum!")
+ else:
+ pbar.set_description(f"Hashcode of {fname} is correct!")
+ # print(f'Hashcode of {fname} is correct!')
+ except Exception:
+ print(f"Error processing {fname}")
+ traceback.print_exc()
+ continue
+
+ out_p = op.join(release_folder, "checksum.json")
+ with open(out_p, "w") as f:
+ json.dump(hash_dict, f, indent=4, sort_keys=True)
+ print(f"Checksum file saved to {out_p}!")
+
+
+if __name__ == "__main__":
+ main()
diff --git a/scripts_data/crop_images.py b/scripts_data/crop_images.py
new file mode 100644
index 0000000..8ecb6d8
--- /dev/null
+++ b/scripts_data/crop_images.py
@@ -0,0 +1,125 @@
+import argparse
+import json
+import os
+import os.path as op
+import time
+import traceback
+from glob import glob
+
+import numpy as np
+from loguru import logger
+from PIL import Image
+from tqdm import tqdm
+
+logger.add("file_{time}.log")
+
+
+EGO_IMAGE_SCALE = 0.3
+
+with open("./arctic_data/meta/misc.json", "r") as f:
+ misc = json.load(f)
+
+
+def transform_image(im, bbox_loose, cap_dim):
+ cx, cy, dim = bbox_loose.copy()
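+ # bbox dims are stored normalized by 200 px; convert back to pixels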
+ dim *= 200
+ im_cropped = im.crop((cx - dim / 2, cy - dim / 2, cx + dim / 2, cy + dim / 2))
+
+ im_cropped_cap = im_cropped.resize((cap_dim, cap_dim))
+ return im_cropped_cap
+
+
+def process_fname(fname, bbox_loose, sid, view_idx, pbar):
+ vidx = int(op.basename(fname).split(".")[0]) - misc[sid]["ioi_offset"]
+ out_p = fname.replace("./data/arctic_data/data/images", "./outputs/croppped_images")
+ num_frames = bbox_loose.shape[0]
+
+ if vidx < 0:
+ # expected
+ return True
+
+ if vidx >= num_frames:
+ # not expected
+ return False
+
+ if op.exists(out_p):
+ return True
+
+ pbar.set_description(f"Croppping {fname}")
+ im = Image.open(fname)
+ if view_idx > 0:
+ im_cap = transform_image(im, bbox_loose[vidx], cap_dim=1000)
+ else:
+ width, height = im.size
+ width_new = int(width * EGO_IMAGE_SCALE)
+ height_new = int(height * EGO_IMAGE_SCALE)
+ im_cap = im.resize((width_new, height_new))
+ out_folder = op.dirname(out_p)
+ if not op.exists(out_folder):
+ os.makedirs(out_folder)
+
+ im_cap.save(out_p)
+ return True
+
+
+def process_seq(seq_p):
+ print(f"Start {seq_p}")
+
+ seq_data = np.load(seq_p, allow_pickle=True).item()
+ sid, seq_name = seq_p.split("/")[-2:]
+
+ seq_name = seq_name.split(".")[0]
+ stamp = time.time()
+
+ for view_idx in range(9):
+ print(f"Processing view#{view_idx}")
+ bbox = seq_data["bbox"][:, view_idx]
+ bbox_loose = bbox.copy()
+ bbox_loose[:, 2] *= 1.5 # 1.5X around the bbox
+
+ fnames = glob(
+ f"./data/arctic_data/data/images/{sid}/{seq_name}/{view_idx}/*.jpg"
+ )
+ fnames = sorted(fnames)
+ if len(fnames) == 0:
+ logger.info(f"No images in {sid}/{seq_name}/{view_idx}")
+
+ pbar = tqdm(fnames)
+ for fname in pbar:
+ try:
+ status = process_fname(fname, bbox_loose, sid, view_idx, pbar)
+ if status is False:
+ logger.info(f"Skip due to no GT: {fname}")
+ except:
+ traceback.print_exc()
+ logger.info(f"Skip due to Exception: {fname}")
+ time.sleep(1.0)
+
+ print(f"Done! Elapsed {time.time() - stamp:.2f}s")
+
+
+def construct_args():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--task_id", type=int, default=None)
+ parser.add_argument(
+ "--process_folder", type=str, default="./outputs/processed/seqs"
+ )
+ args = parser.parse_args()
+ return args
+
+
+if __name__ == "__main__":
+ args = construct_args()
+ seq_ps = glob(op.join(args.process_folder, "*/*.npy"))
+ seq_ps = sorted(seq_ps)
+ assert len(seq_ps) > 0
+
+ if args.task_id < 0:
+ for seq_p in seq_ps:
+ process_seq(seq_p)
+ else:
+ seq_p = seq_ps[args.task_id]
+ process_seq(seq_p)
diff --git a/scripts_data/download_data.py b/scripts_data/download_data.py
new file mode 100644
index 0000000..a75c2a9
--- /dev/null
+++ b/scripts_data/download_data.py
@@ -0,0 +1,105 @@
+import argparse
+import os
+import os.path as op
+import warnings
+
+import requests
+from loguru import logger
+from tqdm import tqdm
+
+warnings.filterwarnings("ignore", message="Unverified HTTPS request")
+
+
+def download_data(url_file, out_folder, dry_run):
+ # Define the username and password
+ if "smplx" in url_file:
+ flag = "SMPLX"
+ elif "mano" in url_file:
+ flag = "MANO"
+ else:
+ flag = "ARCTIC"
+
+ username = os.environ[f"{flag}_USERNAME"]
+ password = os.environ[f"{flag}_PASSWORD"]
+ password_fake = "*" * len(password)
+
+ logger.info(f"Username: {username}")
+ logger.info(f"Password: {password_fake}")
+
+ post_data = {"username": username, "password": password}
+ # Read the URLs from the file
+ with open(url_file, "r") as f:
+ urls = f.readlines()
+
+ # Strip newline characters from the URLs
+ urls = [url.strip() for url in urls]
+
+ if dry_run and "images" in url_file:
+ urls = urls[:5]
+
+ # Loop through the URLs and download the files
+ logger.info(f"Start downloading from {url_file}")
+ pbar = tqdm(urls)
+ for url in pbar:
+ pbar.set_description(f"Downloading {url[-40:]}")
+ # Make a POST request with the username and password
+ response = requests.post(
+ url,
+ data=post_data,
+ stream=True,
+ verify=False,
+ allow_redirects=True,
+ )
+
+ if response.status_code == 401:
+ logger.warning(
+ f"Authentication failed for URLs in {url_file}. Username/password correct?"
+ )
+ break
+
+ # Get the filename from the URL
+ filename = url.split("/")[-1]
+ if "models_smplx_v1_1" in url:
+ filename = "models_smplx_v1_1.zip"
+ elif "mano_v1_2" in url:
+ filename = "mano_v1_2.zip"
+ elif "image" in url:
+ filename = "/".join(url.split("/")[-2:])
+
+ # Stream the response to a file in chunks to avoid holding large zips in memory
+ out_p = op.join(out_folder, filename)
+ os.makedirs(op.dirname(out_p), exist_ok=True)
+ with open(out_p, "wb") as f:
+ for chunk in response.iter_content(chunk_size=1 << 20):
+ f.write(chunk)
+
+ logger.info("Done")
+
+
+def main():
+ parser = argparse.ArgumentParser(description="Download files from a list of URLs")
+ parser.add_argument(
+ "--url_file",
+ type=str,
+ help="Path to file containing list of URLs",
+ required=True,
+ )
+ parser.add_argument(
+ "--out_folder",
+ type=str,
+ help="Path to folder to store downloaded files",
+ required=True,
+ )
+ parser.add_argument(
+ "--dry_run",
+ action="store_true",
+ help="Select top 5 URLs if enabled and 'images' is in url_file",
+ )
+ args = parser.parse_args()
+ if args.dry_run:
+ logger.info("Running in dry-run mode")
+
+ download_data(args.url_file, args.out_folder, args.dry_run)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/scripts_data/process_seqs.py b/scripts_data/process_seqs.py
new file mode 100644
index 0000000..7934cd8
--- /dev/null
+++ b/scripts_data/process_seqs.py
@@ -0,0 +1,70 @@
+import argparse
+import json
+import sys
+import time
+import traceback
+from glob import glob
+
+import numpy as np
+import torch
+from loguru import logger
+from tqdm import tqdm
+
+sys.path = ["."] + sys.path
+from common.body_models import construct_layers
+
+from common.object_tensors import ObjectTensors
+from src.arctic.processing import process_seq
+
+
+def construct_args():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--export_verts", action="store_true")
+ parser.add_argument("--mano_p", type=str, default=None)
+ args = parser.parse_args()
+ return args
+
+
+def main():
+ dev = "cuda:0"
+ args = construct_args()
+
+ with open("./data/arctic_data/data/meta/misc.json", "r") as f:
+ misc = json.load(f)
+
+ statcams = {}
+ for sub in misc.keys():
+ statcams[sub] = {
+ "world2cam": torch.FloatTensor(np.array(misc[sub]["world2cam"])),
+ "intris_mat": torch.FloatTensor(np.array(misc[sub]["intris_mat"])),
+ }
+
+ if args.mano_p is not None:
+ mano_ps = [args.mano_p]
+ else:
+ mano_ps = glob("./data/arctic_data/data/raw_seqs/*/*.mano.npy")
+
+ layers = construct_layers(dev)
+ object_tensor = ObjectTensors()
+ object_tensor.to(dev)
+ layers["object"] = object_tensor
+
+ pbar = tqdm(mano_ps)
+ for mano_p in pbar:
+ pbar.set_description("Processing %s" % mano_p)
+ try:
+ task = [mano_p, dev, statcams, layers, pbar]
+ process_seq(task, export_verts=args.export_verts)
+ except Exception:
+ logger.info(traceback.format_exc())
+ time.sleep(2)
+ logger.info(f"Failed at {mano_p}")
+
+
+if __name__ == "__main__":
+ main()
diff --git a/scripts_data/unzip_download.py b/scripts_data/unzip_download.py
new file mode 100644
index 0000000..ec2409e
--- /dev/null
+++ b/scripts_data/unzip_download.py
@@ -0,0 +1,84 @@
+import os
+import os.path as op
+import zipfile
+from glob import glob
+
+from tqdm import tqdm
+
+
+def unzip(zip_p, out_dir):
+ os.makedirs(out_dir, exist_ok=True)
+ with zipfile.ZipFile(zip_p, "r") as zip_ref:
+ zip_ref.extractall(out_dir)
+
+
+def main():
+ fnames = glob(op.join("downloads/data/", "**/*"), recursive=True)
+
+ full_img_zips = []
+ cropped_images_zips = []
+ misc_zips = []
+ models_zips = []
+ for fname in fnames:
+ if not (".zip" in fname or ".npy" in fname):
+ continue
+ if "/images_zips/" in fname:
+ full_img_zips.append(fname)
+ elif "/cropped_images_zips/" in fname:
+ cropped_images_zips.append(fname)
+ elif "raw_seqs.zip" in fname:
+ misc_zips.append(fname)
+ elif "splits_json.zip" in fname:
+ misc_zips.append(fname)
+ elif "meta.zip" in fname:
+ misc_zips.append(fname)
+ elif "splits.zip" in fname:
+ misc_zips.append(fname)
+ elif "feat.zip" in fname:
+ misc_zips.append(fname)
+ elif "models.zip" in fname:
+ models_zips.append(fname)
+ else:
+ print(f"Unknown zip: {fname}")
+
+ out_dir = "./unpack/arctic_data/data"
+ os.makedirs(out_dir, exist_ok=True)
+
+ # unzip misc files
+ for zip_p in misc_zips:
+ print(f"Unzipping {zip_p} to {out_dir}")
+ unzip(zip_p, out_dir)
+
+ # unzip models files
+ for zip_p in models_zips:
+ model_out = out_dir.replace("/data", "")
+ print(f"Unzipping {zip_p} to {model_out}")
+ unzip(zip_p, model_out)
+
+ # unzip cropped images
+ pbar = tqdm(cropped_images_zips)
+ for zip_p in pbar:
+ out_p = op.join(
+ out_dir,
+ zip_p.replace("downloads/data/", "")
+ .replace(".zip", "")
+ .replace("cropped_images_zips/", "cropped_images/"),
+ )
+ pbar.set_description(f"Unzipping {zip_p} to {out_dir}")
+ unzip(zip_p, out_p)
+
+ # unzip full-resolution images
+ pbar = tqdm(full_img_zips)
+ for zip_p in pbar:
+ pbar.set_description(f"Unzipping {zip_p} to {out_dir}")
+ out_p = op.join(
+ out_dir,
+ zip_p.replace("downloads/data/", "")
+ .replace(".zip", "")
+ .replace("images_zips/", "images/"),
+ )
+ unzip(zip_p, out_p)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/scripts_data/visualizer.py b/scripts_data/visualizer.py
new file mode 100644
index 0000000..d0e68fd
--- /dev/null
+++ b/scripts_data/visualizer.py
@@ -0,0 +1,108 @@
+import argparse
+import json
+import os.path as op
+import random
+import sys
+from glob import glob
+
+import torch
+from easydict import EasyDict
+from loguru import logger
+
+sys.path = ["."] + sys.path
+
+from common.body_models import construct_layers
+from common.viewer import ARCTICViewer
+
+
+class DataViewer(ARCTICViewer):
+ def __init__(
+ self,
+ render_types=["rgb", "depth", "mask"],
+ interactive=True,
+ size=(2024, 2024),
+ ):
+ dev = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+ self.layers = construct_layers(dev)
+ super().__init__(render_types, interactive, size)
+
+ def load_data(
+ self,
+ seq_p,
+ use_mano,
+ use_object,
+ use_smplx,
+ no_image,
+ use_distort,
+ view_idx,
+ subject_meta,
+ ):
+ logger.info("Creating meshes")
+ from src.mesh_loaders.arctic import construct_meshes
+
+ batch = construct_meshes(
+ seq_p,
+ self.layers,
+ use_mano,
+ use_object,
+ use_smplx,
+ no_image,
+ use_distort,
+ view_idx,
+ subject_meta,
+ )
+ self.check_format(batch)
+ logger.info("Done")
+ return batch
+
+
+def parse_args():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--view_idx", type=int, default=1)
+ parser.add_argument("--seq_p", type=str, default=None)
+ parser.add_argument("--headless", action="store_true")
+ parser.add_argument("--mano", action="store_true")
+ parser.add_argument("--smplx", action="store_true")
+ parser.add_argument("--object", action="store_true")
+ parser.add_argument("--no_image", action="store_true")
+ parser.add_argument("--distort", action="store_true")
+ config = parser.parse_args()
+ args = EasyDict(vars(config))
+ return args
+
+
+def main():
+ with open("./data/arctic_data/data/meta/misc.json", "r") as f:
+ subject_meta = json.load(f)
+
+ args = parse_args()
+ random.seed(1)
+
+ viewer = DataViewer(interactive=not args.headless, size=(2024, 2024))
+ if args.seq_p is None:
+ seq_ps = glob("./outputs/processed_verts/seqs/*/*.npy")
+ else:
+ seq_ps = [args.seq_p]
+ assert len(seq_ps) > 0, f"No seqs found on {args.seq_p}"
+
+ for seq_idx, seq_p in enumerate(seq_ps):
+ logger.info(f"Rendering seq#{seq_idx+1}, seq: {seq_p}, view: {args.view_idx}")
+ seq_name = seq_p.split("/")[-1].split(".")[0]
+ batch = viewer.load_data(
+ seq_p,
+ args.mano,
+ args.object,
+ args.smplx,
+ args.no_image,
+ args.distort,
+ args.view_idx,
+ subject_meta,
+ )
+ viewer.render_seq(batch, out_folder=op.join("render_out", seq_name))
+
+
+if __name__ == "__main__":
+ main()
diff --git a/scripts_method/build_feat_split.py b/scripts_method/build_feat_split.py
new file mode 100644
index 0000000..f465214
--- /dev/null
+++ b/scripts_method/build_feat_split.py
@@ -0,0 +1,120 @@
+import argparse
+import json
+import os
+import os.path as op
+from glob import glob
+
+import numpy as np
+import torch
+from easydict import EasyDict
+from tqdm import tqdm
+
+
+def parse_args():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--eval_p", type=str, default="")
+ parser.add_argument("--split", type=str, default="")
+ parser.add_argument("--protocol", type=str, default="")
+ config = parser.parse_args()
+ args = EasyDict(vars(config))
+ return args
+
+
+def check_imgname_match(imgnames_feat, setup, split):
+ print("Verifying")
+ imgnames_feat = ["/".join(imgname.split("/")[-4:]) for imgname in imgnames_feat]
+ data = np.load(
+ op.join(f"data/arctic_data/data/splits/{setup}_{split}.npy"), allow_pickle=True
+ ).item()
+ imgnames = data["imgnames"]
+ imgnames_npy = ["/".join(imgname.split("/")[-4:]) for imgname in imgnames]
+ assert set(imgnames_npy) == set(imgnames_feat)
+ print("Passed verifcation")
+
+
+def main(split, protocol, eval_p):
+ if protocol in ["p1"]:
+ views = [1, 2, 3, 4, 5, 6, 7, 8]
+ elif protocol in ["p2"]:
+ views = [0]
+ else:
+ assert False, "Undefined protocol"
+
+ short_split = split.replace("mini", "").replace("tiny", "")
+ exp_key = eval_p.split("/")[-2]
+
+ load_ps = glob(op.join(eval_p, "*"))
+ with open(
+ f"./data/arctic_data/data/splits_json/protocol_{protocol}.json", "r"
+ ) as f:
+ seq_names = json.load(f)[short_split]
+
+ # needed seq/view pairs
+ seq_view_specs = []
+ for seq_name in seq_names:
+ for view_idx in views:
+ seq_view_specs.append(f"{seq_name}/{view_idx}")
+ seq_view_specs = set(seq_view_specs)
+
+ if "mini" in split:
+ import random
+
+ random.seed(1)
+ random.shuffle(seq_names)
+ seq_names = seq_names[:10]
+
+ if "tiny" in split:
+ import random
+
+ random.seed(1)
+ random.shuffle(seq_names)
+ seq_names = seq_names[:20]
+
+ # filter seqs within split
+ _load_ps = []
+ for load_p in load_ps:
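+ # folder names look like "s01_box_grab_01_1" (subject id + seq name + view id);
+ # recover "s01/box_grab_01" and the integer view id from the basename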
+ curr_seq = list(op.basename(load_p))
+ view_id = int(curr_seq[-1])
+ curr_seq[3] = "/"
+ curr_seq = "".join(curr_seq)[:-2] # rm view id
+ if curr_seq in seq_names and view_id in views:
+ _load_ps.append(load_p)
+
+ load_ps = _load_ps
+ assert len(load_ps) == len(set(load_ps))
+
+ assert len(load_ps) > 0
+ print("Loading image feat")
+ vecs_list = []
+ imgnames_list = []
+ for load_p in tqdm(load_ps):
+ feat_vec = torch.load(op.join(load_p, "preds", "pred.feat_vec.pt"))
+ imgnames = torch.load(op.join(load_p, "meta_info", "meta_info.imgname.pt"))
+ vecs_list.append(feat_vec)
+ imgnames_list.append(imgnames)
+ vecs_list = torch.cat(vecs_list, dim=0)
+ imgnames_list = sum(imgnames_list, [])
+
+ if short_split == split:
+ check_imgname_match(imgnames_list, protocol, split)
+
+ out = {"imgnames": imgnames_list, "feat_vec": vecs_list}
+ out_folder = "./data/arctic_data/data/feat"
+ out_p = op.join(out_folder, exp_key, f"{protocol}_{split}.pt")
+ assert not op.exists(out_p), f"{out_p} already exists"
+ os.makedirs(op.dirname(out_p), exist_ok=True)
+ print(f"Dumping into {out_p}")
+ torch.save(out, out_p)
+
+
+if __name__ == "__main__":
+ args = parse_args()
+ split = args.split
+ if split in ["all"]:
+ splits = ["minitrain", "minival", "tinytest", "tinyval", "train", "val", "test"]
+ else:
+ splits = [split]
+
+ for split in splits:
+ print(f"Processing {split}")
+ main(split, args.protocol, args.eval_p)
diff --git a/scripts_method/evaluate_metrics.py b/scripts_method/evaluate_metrics.py
new file mode 100644
index 0000000..23168e6
--- /dev/null
+++ b/scripts_method/evaluate_metrics.py
@@ -0,0 +1,122 @@
+import json
+import os
+import sys
+
+import torch
+
+sys.path = ["."] + sys.path
+import argparse
+import os.path as op
+
+import numpy as np
+from easydict import EasyDict
+from loguru import logger
+from tqdm import tqdm
+
+import common.thing as thing
+from common.ld_utils import cat_dl, ld2dl
+from common.xdict import xdict
+from src.extraction.interface import prepare_data
+from src.utils.eval_modules import eval_fn_dict
+
+
+def evaluate_results(
+ layers, split, exp_key, setup, device, metrics, data_keys, task, eval_p
+):
+ with open(f"./data/arctic_data/data/splits_json/protocol_{setup}.json", "r") as f:
+ protocols = json.load(f)
+
+ seqs_val = protocols[split]
+
+ if setup in ["p1"]:
+ views = [1, 2, 3, 4, 5, 6, 7, 8]
+ elif setup in ["p2"]:
+ views = [0]
+ else:
+ assert False
+
+ with torch.no_grad():
+ all_metrics = {}
+ pbar = tqdm(seqs_val)
+ for seq_val in pbar:
+ for view in views:
+ curr_seq = seq_val.replace("/", "_") + f"_{view}"
+ pbar.set_description(f"Processing {curr_seq}: load data")
+ data = prepare_data(
+ curr_seq, exp_key, data_keys, layers, device, task, eval_p
+ )
+ pred = data.search("pred.", replace_to="")
+ targets = data.search("targets.", replace_to="")
+ meta_info = data.search("meta_info.", replace_to="")
+ metric_dict = xdict()
+ for metric in metrics:
+ pbar.set_description(f"Processing {curr_seq}: {metric}")
+ # each metric returns a tensor with shape (N, )
+ out = eval_fn_dict[metric](pred, targets, meta_info)
+ metric_dict.merge(out)
+ metric_dict = metric_dict.to_np()
+ all_metrics[curr_seq] = metric_dict
+
+ agg_metrics = cat_dl(ld2dl(list(all_metrics.values())), dim=0)
+ for key, val in agg_metrics.items():
+ agg_metrics[key] = float(np.nanmean(thing.thing2np(val)))
+
+ out_folder = eval_p.replace("/eval", "/results")
+ if not op.exists(out_folder):
+ os.makedirs(out_folder, exist_ok=True)
+ np.save(op.join(out_folder, f"all_metrics_{split}_{setup}.npy"), all_metrics)
+ with open(op.join(out_folder, f"agg_metrics_{split}_{setup}.json"), "w") as f:
+ json.dump(agg_metrics, f, indent=4)
+ logger.info(f"Exported results to {out_folder}")
+
+
+def parse_args():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--task", type=str, default="")
+ parser.add_argument("--eval_p", type=str, default="")
+ parser.add_argument("--split", type=str, default="")
+ parser.add_argument("--setup", type=str, default="")
+ config = parser.parse_args()
+ args = EasyDict(vars(config))
+ return args
+
+
+def main():
+ args = parse_args()
+ from common.body_models import build_layers
+
+ device = "cuda"
+ layers = build_layers(device)
+
+ eval_p = args.eval_p
+ exp_key = eval_p.split("/")[-2]
+ split = args.split
+ setup = args.setup
+
+ if "pose" in args.task:
+ from src.extraction.keys.eval_pose import KEYS
+
+ metrics = [
+ "aae",
+ "mpjpe.ra",
+ "mrrpe",
+ "success_rate",
+ "cdev",
+ "mdev",
+ "acc_err_pose",
+ ]
+ elif "field" in args.task:
+ from src.extraction.keys.eval_field import KEYS
+
+ metrics = ["avg_err_field", "acc_err_field"]
+ else:
+ assert False
+
+ logger.info(f"Evaluating {exp_key} {split} on setup {setup}")
+ evaluate_results(
+ layers, split, exp_key, setup, device, metrics, KEYS, args.task, eval_p
+ )
+
+
+if __name__ == "__main__":
+ main()
diff --git a/scripts_method/extract_predicts.py b/scripts_method/extract_predicts.py
new file mode 100644
index 0000000..7b9d4ad
--- /dev/null
+++ b/scripts_method/extract_predicts.py
@@ -0,0 +1,107 @@
+import json
+import os.path as op
+import sys
+from pprint import pformat
+
+import torch
+from loguru import logger
+from tqdm import tqdm
+
+sys.path.append(".")
+import common.thing as thing
+import src.extraction.interface as interface
+import src.factory as factory
+from common.xdict import xdict
+from src.parsers.parser import construct_args
+
+
+# LSTM models are trained on image features extracted by single-frame models;
+# this specifies the single-frame model whose features each LSTM model was trained on:
+# model_dependencies[lstm_model_id] = single_frame_model_id
+model_dependencies = {
+ "423c6057b": "3558f1342",
+ "40ae50712": "28bf3642f",
+ "546c1e997": "1f9ac0b15",
+ "701a72569": "58e200d16",
+ "fdc34e6c3": "66417ff6e",
+ "49abdaee9": "7d09884c6",
+ "5e6f6aeb9": "fb59bac27",
+ "ec90691f8": "782c39821",
+}
+
+
+def main():
+ args = construct_args()
+
+ args.experiment = None
+ args.exp_key = "xxxxxxx"
+
+ device = "cuda:0"
+ wrapper = factory.fetch_model(args).to(device)
+ assert args.load_ckpt != ""
+ wrapper.load_state_dict(torch.load(args.load_ckpt)["state_dict"])
+ logger.info(f"Loaded weights from {args.load_ckpt}")
+ wrapper.eval()
+ wrapper.to(device)
+ wrapper.model.arti_head.object_tensors.to(device)
+ # wrapper.metric_dict = []
+
+ exp_key = op.abspath(args.load_ckpt).split("/")[-3]
+ if exp_key in model_dependencies.keys():
+ assert (
+ args.img_feat_version == model_dependencies[exp_key]
+ ), f"Image features used for training ({model_dependencies[exp_key]}) do not match the ones used for the current inference ({args.img_feat_version})"
+
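+ # e.g., load_ckpt logs/<exp_key>/checkpoints/last.ckpt writes to logs/<exp_key>/eval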
+ out_dir = op.join(args.load_ckpt.split("checkpoints")[0], "eval")
+
+ with open(
+ f"./data/arctic_data/data/splits_json/protocol_{args.setup}.json",
+ "r",
+ ) as f:
+ seqs = json.load(f)[args.run_on]
+
+ logger.info(f"Hyperparameters: \n {pformat(args)}")
+ logger.info(f"Seqs to process ({len(seqs)}): {seqs}")
+
+ if args.extraction_mode in ["eval_pose"]:
+ from src.extraction.keys.eval_pose import KEYS
+ elif args.extraction_mode in ["eval_field"]:
+ from src.extraction.keys.eval_field import KEYS
+ elif args.extraction_mode in ["submit_pose"]:
+ from src.extraction.keys.submit_pose import KEYS
+ elif args.extraction_mode in ["submit_field"]:
+ from src.extraction.keys.submit_field import KEYS
+ elif args.extraction_mode in ["feat_pose"]:
+ from src.extraction.keys.feat_pose import KEYS
+ elif args.extraction_mode in ["feat_field"]:
+ from src.extraction.keys.feat_field import KEYS
+ elif args.extraction_mode in ["vis_pose"]:
+ from src.extraction.keys.vis_pose import KEYS
+ elif args.extraction_mode in ["vis_field"]:
+ from src.extraction.keys.vis_field import KEYS
+ else:
+ assert False, f"Invalid extract ({args.extraction_mode})"
+
+ for seq_idx, seq in enumerate(seqs):
+ logger.info(f"Processing seq {seq} {seq_idx + 1}/{len(seqs)}")
+ out_list = []
+ val_loader = factory.fetch_dataloader(args, "val", seq)
+ with torch.no_grad():
+ for idx, batch in tqdm(enumerate(val_loader), total=len(val_loader)):
+ batch = thing.thing2dev(batch, device)
+ inputs, targets, meta_info = batch
+ if "submit_" in args.extraction_mode:
+ out_dict = wrapper.inference(inputs, meta_info)
+ else:
+ out_dict = wrapper.forward(inputs, targets, meta_info, "extract")
+ out_dict = xdict(out_dict)
+ out_dict = out_dict.subset(KEYS)
+ out_list.append(out_dict)
+
+ out = interface.std_interface(out_list)
+ interface.save_results(out, out_dir)
+ logger.info("Done")
+
+
+if __name__ == "__main__":
+ main()
diff --git a/scripts_method/train.py b/scripts_method/train.py
new file mode 100644
index 0000000..6084c2f
--- /dev/null
+++ b/scripts_method/train.py
@@ -0,0 +1,75 @@
+import comet_ml
+import os.path as op
+import sys
+from pprint import pformat
+
+import pytorch_lightning as pl
+import torch
+from loguru import logger
+from pytorch_lightning.callbacks import ModelCheckpoint, ModelSummary
+
+sys.path.append(".")
+
+import common.comet_utils as comet_utils
+import src.factory as factory
+from common.torch_utils import reset_all_seeds
+from src.utils.const import args
+
+
+def main(args):
+ if args.experiment is not None:
+ comet_utils.log_exp_meta(args)
+ reset_all_seeds(args.seed)
+ device = "cuda" if torch.cuda.is_available() else "cpu"
+ wrapper = factory.fetch_model(args).to(device)
+ if args.load_ckpt != "":
+ ckpt = torch.load(args.load_ckpt)
+ wrapper.load_state_dict(ckpt["state_dict"])
+ logger.info(f"Loaded weights from {args.load_ckpt}")
+
+ wrapper.model.arti_head.object_tensors.to(device)
+
+ ckpt_callback = ModelCheckpoint(
+ monitor="loss__val",
+ verbose=True,
+ save_top_k=5,
+ mode="min",
+ every_n_epochs=args.eval_every_epoch,
+ save_last=True,
+ dirpath=op.join(args.log_dir, "checkpoints"),
+ )
+
+ pbar_cb = pl.callbacks.progress.TQDMProgressBar(refresh_rate=1)
+
+ model_summary_cb = ModelSummary(max_depth=3)
+ callbacks = [ckpt_callback, pbar_cb, model_summary_cb]
+ trainer = pl.Trainer(
+ gradient_clip_val=args.grad_clip,
+ gradient_clip_algorithm="norm",
+ accumulate_grad_batches=args.acc_grad,
+ devices=1,
+ accelerator="gpu",
+ logger=None,
+ min_epochs=args.num_epoch,
+ max_epochs=args.num_epoch,
+ callbacks=callbacks,
+ log_every_n_steps=args.log_every,
+ default_root_dir=args.log_dir,
+ check_val_every_n_epoch=args.eval_every_epoch,
+ num_sanity_val_steps=0,
+ enable_model_summary=False,
+ )
+
+ reset_all_seeds(args.seed)
+ train_loader = factory.fetch_dataloader(args, "train")
+ logger.info(f"Hyperparameters: \n {pformat(args)}")
+ logger.info("*** Started training ***")
+ reset_all_seeds(args.seed)
+ ckpt_path = None if args.ckpt_p == "" else args.ckpt_p
+ val_loaders = [factory.fetch_dataloader(args, "val")]
+ wrapper.set_training_flags() # load weights if needed
+ trainer.fit(wrapper, train_loader, val_loaders, ckpt_path=ckpt_path)
+
+
+if __name__ == "__main__":
+ main(args)
diff --git a/scripts_method/visualizer.py b/scripts_method/visualizer.py
new file mode 100644
index 0000000..f7f1112
--- /dev/null
+++ b/scripts_method/visualizer.py
@@ -0,0 +1,120 @@
+import argparse
+import sys
+
+from easydict import EasyDict
+
+sys.path = ["."] + sys.path
+import os.path as op
+from glob import glob
+
+import numpy as np
+from loguru import logger
+
+from common.viewer import ARCTICViewer, ViewerData
+from common.xdict import xdict
+
+
+def parse_args():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--exp_folder", type=str, default="")
+ parser.add_argument("--angle", type=float, default=None)
+ parser.add_argument("--zoom_out", type=float, default=0.5)
+ parser.add_argument("--seq_name", type=str, default="")
+ parser.add_argument(
+ "--mode",
+ type=str,
+ default="",
+ choices=[
+ "gt_mesh",
+ "pred_mesh",
+ "gt_field_r",
+ "gt_field_l",
+ "pred_field_r",
+ "pred_field_l",
+ ],
+ )
+ parser.add_argument("--headless", action="store_true")
+ config = parser.parse_args()
+ args = EasyDict(vars(config))
+ return args
+
+
+class MethodViewer(ARCTICViewer):
+ def load_data(self, exp_folder, seq_name, mode):
+ logger.info("Creating meshes")
+
+ # check if we are loading gt or pred
+ if "pred_mesh" in mode or "pred_field" in mode:
+ flag = "pred"
+ elif "gt_mesh" in mode or "gt_field" in mode:
+ flag = "targets"
+ else:
+ assert False, f"Unknown mode {mode}"
+
+ exp_key = exp_folder.split("/")[1]
+ images_path = op.join(exp_folder, "eval", seq_name, "images")
+
+ # load mesh
+ meshes_all = xdict()
+ print(f"Specs: {exp_key} {seq_name} {flag}")
+ if "_mesh" in mode:
+ from src.mesh_loaders.pose import construct_meshes
+
+ meshes, data = construct_meshes(
+ exp_folder, seq_name, flag, None, zoom_out=None
+ )
+ meshes_all.merge(meshes)
+ elif "_field" in mode:
+ from src.mesh_loaders.field import construct_meshes
+
+ meshes, data = construct_meshes(
+ exp_folder, seq_name, flag, mode, None, zoom_out=None
+ )
+ meshes_all.merge(meshes)
+ if "_r" in mode:
+ meshes_all.pop("left", None)
+ if "_l" in mode:
+ meshes_all.pop("right", None)
+ else:
+ assert False, f"Unknown mode {mode}"
+
+ imgnames = sorted(glob(images_path + "/*"))
+ num_frames = min(len(imgnames), data[f"{flag}.object.cam_t"].shape[0])
+
+ # setup camera
+ focal = 1000.0
+ rows = 224
+ cols = 224
+ K = np.array([[focal, 0, rows / 2.0], [0, focal, cols / 2.0], [0, 0, 1]])
+ cam_t = data[f"{flag}.object.cam_t"]
+ cam_t = cam_t[:num_frames]
+ Rt = np.zeros((num_frames, 3, 4))
+ Rt[:, :, 3] = cam_t
+ Rt[:, :3, :3] = np.eye(3)
+ Rt[:, 1:3, :3] *= -1.0
+
+ # pack data
+ data = ViewerData(Rt=Rt, K=K, cols=cols, rows=rows, imgnames=imgnames)
+ batch = meshes_all, data
+ self.check_format(batch)
+ logger.info("Done")
+ return batch
+
+
+def main():
+ args = parse_args()
+ exp_folder = args.exp_folder
+ seq_name = args.seq_name
+ mode = args.mode
+ viewer = MethodViewer(
+ interactive=not args.headless,
+ size=(2048, 2048),
+ render_types=["rgb", "video"],
+ )
+ logger.info(f"Rendering {seq_name} {mode}")
+ batch = viewer.load_data(exp_folder, seq_name, mode)
+ viewer.render_seq(batch, out_folder=op.join(exp_folder, "render", seq_name, mode))
+
+
+if __name__ == "__main__":
+ main()
diff --git a/src/arctic/preprocess_dataset.py b/src/arctic/preprocess_dataset.py
new file mode 100644
index 0000000..54a88f3
--- /dev/null
+++ b/src/arctic/preprocess_dataset.py
@@ -0,0 +1,171 @@
+import numpy as np
+import torch
+
+
+class PreprocessDataset(torch.utils.data.Dataset):
+ def __init__(self, bundle):
+ self.rot_r = bundle["rot_r"]
+ self.pose_r = bundle["pose_r"]
+ self.trans_r = bundle["trans_r"]
+ self.shape_r = bundle["shape_r"]
+ self.fitting_err_r = bundle["fitting_err_r"]
+
+ self.rot_l = bundle["rot_l"]
+ self.pose_l = bundle["pose_l"]
+ self.trans_l = bundle["trans_l"]
+ self.shape_l = bundle["shape_l"]
+ self.fitting_err_l = bundle["fitting_err_l"]
+
+ # smplx
+ self.smplx_transl = bundle["smplx_transl"]
+ self.smplx_global_orient = bundle["smplx_global_orient"]
+ self.smplx_body_pose = bundle["smplx_body_pose"]
+ self.smplx_jaw_pose = bundle["smplx_jaw_pose"]
+ self.smplx_leye_pose = bundle["smplx_leye_pose"]
+ self.smplx_reye_pose = bundle["smplx_reye_pose"]
+ self.smplx_left_hand_pose = bundle["smplx_left_hand_pose"]
+ self.smplx_right_hand_pose = bundle["smplx_right_hand_pose"]
+
+ self.obj_arti = bundle["obj_params"][:, 0] # radian
+ self.obj_rot = bundle["obj_params"][:, 1:4]
+ self.obj_trans = bundle["obj_params"][:, 4:]
+
+ self.world2ego = bundle["world2ego"]
+ self.K_ego = bundle["K_ego"]
+ self.dist = bundle["dist"]
+ self.obj_name = bundle["obj_name"]
+
+ def __getitem__(self, idx):
+ out_dict = {}
+
+ out_dict["rot_r"] = self.rot_r[idx]
+ out_dict["pose_r"] = self.pose_r[idx]
+ out_dict["trans_r"] = self.trans_r[idx]
+ out_dict["shape_r"] = self.shape_r[idx]
+ out_dict["fitting_err_r"] = self.fitting_err_r[idx]
+
+ out_dict["rot_l"] = self.rot_l[idx]
+ out_dict["pose_l"] = self.pose_l[idx]
+ out_dict["trans_l"] = self.trans_l[idx]
+ out_dict["shape_l"] = self.shape_l[idx]
+ out_dict["fitting_err_l"] = self.fitting_err_l[idx]
+
+ # smplx
+ out_dict["smplx_transl"] = self.smplx_transl[idx]
+ out_dict["smplx_global_orient"] = self.smplx_global_orient[idx]
+ out_dict["smplx_body_pose"] = self.smplx_body_pose[idx]
+ out_dict["smplx_jaw_pose"] = self.smplx_jaw_pose[idx]
+ out_dict["smplx_leye_pose"] = self.smplx_leye_pose[idx]
+ out_dict["smplx_reye_pose"] = self.smplx_reye_pose[idx]
+ out_dict["smplx_left_hand_pose"] = self.smplx_left_hand_pose[idx]
+ out_dict["smplx_right_hand_pose"] = self.smplx_right_hand_pose[idx]
+
+ out_dict["obj_arti"] = self.obj_arti[idx]
+ out_dict["obj_rot"] = self.obj_rot[idx]
+ out_dict["obj_trans"] = self.obj_trans[idx] # to meter
+
+ out_dict["world2ego"] = self.world2ego[idx]
+ out_dict["dist"] = self.dist
+ out_dict["K_ego"] = self.K_ego
+ out_dict["query_names"] = self.obj_name
+ return out_dict
+
+ def __len__(self):
+ return self.rot_r.shape[0]
+
+
+def construct_loader(mano_p):
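+ # a raw sequence is stored as several files sharing a prefix:
+ # *.mano.npy, *.object.npy, *.egocam.dist.npy, *.smplx.npy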
+ obj_p = mano_p.replace(".mano.", ".object.")
+ ego_p = mano_p.replace(".mano.", ".egocam.dist.")
+
+ # MANO
+ data = np.load(
+ mano_p,
+ allow_pickle=True,
+ ).item()
+
+ num_frames = len(data["right"]["rot"])
+
+ rot_r = torch.FloatTensor(data["right"]["rot"])
+ pose_r = torch.FloatTensor(data["right"]["pose"])
+ trans_r = torch.FloatTensor(data["right"]["trans"])
+ shape_r = torch.FloatTensor(data["right"]["shape"]).repeat(num_frames, 1)
+ fitting_err_r = data["right"]["fitting_err"]
+
+ rot_l = torch.FloatTensor(data["left"]["rot"])
+ pose_l = torch.FloatTensor(data["left"]["pose"])
+ trans_l = torch.FloatTensor(data["left"]["trans"])
+ shape_l = torch.FloatTensor(data["left"]["shape"]).repeat(num_frames, 1)
+ fitting_err_l = data["left"]["fitting_err"]
+ assert len(fitting_err_l) > 50, f"Failed: {mano_p}"
+ assert len(fitting_err_r) > 50, f"Failed: {mano_p}"
+
+ obj_params = torch.FloatTensor(np.load(obj_p, allow_pickle=True))
+ assert rot_r.shape[0] == obj_params.shape[0]
+
+ obj_name = obj_p.split("/")[-1].split("_")[0]
+
+ ego_p = mano_p.replace("mano.npy", "egocam.dist.npy")
+ egocam = np.load(ego_p, allow_pickle=True).item()
+ R_ego = torch.FloatTensor(egocam["R_k_cam_np"])
+ T_ego = torch.FloatTensor(egocam["T_k_cam_np"])
+ K_ego = torch.FloatTensor(egocam["intrinsics"])
+ dist = torch.FloatTensor(egocam["dist8"])
+
+ num_frames = R_ego.shape[0]
+ world2ego = torch.zeros((num_frames, 4, 4))
+ world2ego[:, :3, :3] = R_ego
+ world2ego[:, :3, 3] = T_ego.view(num_frames, 3)
+ world2ego[:, 3, 3] = 1
+
+ assert torch.isnan(obj_params).sum() == 0
+ assert torch.isinf(obj_params).sum() == 0
+
+ # smplx
+ smplx_p = mano_p.replace(".mano.", ".smplx.")
+ smplx_data = np.load(smplx_p, allow_pickle=True).item()
+
+ smplx_transl = torch.FloatTensor(smplx_data["transl"])
+ smplx_global_orient = torch.FloatTensor(smplx_data["global_orient"])
+ smplx_body_pose = torch.FloatTensor(smplx_data["body_pose"])
+ smplx_jaw_pose = torch.FloatTensor(smplx_data["jaw_pose"])
+ smplx_leye_pose = torch.FloatTensor(smplx_data["leye_pose"])
+ smplx_reye_pose = torch.FloatTensor(smplx_data["reye_pose"])
+ smplx_left_hand_pose = torch.FloatTensor(smplx_data["left_hand_pose"])
+ smplx_right_hand_pose = torch.FloatTensor(smplx_data["right_hand_pose"])
+
+ bundle = {}
+ bundle["rot_r"] = rot_r
+ bundle["pose_r"] = pose_r
+ bundle["trans_r"] = trans_r
+ bundle["shape_r"] = shape_r
+ bundle["fitting_err_r"] = fitting_err_r
+ bundle["rot_l"] = rot_l
+ bundle["pose_l"] = pose_l
+ bundle["trans_l"] = trans_l
+ bundle["shape_l"] = shape_l
+ bundle["fitting_err_l"] = fitting_err_l
+ bundle["smplx_transl"] = smplx_transl
+ bundle["smplx_global_orient"] = smplx_global_orient
+ bundle["smplx_body_pose"] = smplx_body_pose
+ bundle["smplx_jaw_pose"] = smplx_jaw_pose
+ bundle["smplx_leye_pose"] = smplx_leye_pose
+ bundle["smplx_reye_pose"] = smplx_reye_pose
+ bundle["smplx_left_hand_pose"] = smplx_left_hand_pose
+ bundle["smplx_right_hand_pose"] = smplx_right_hand_pose
+ bundle["obj_params"] = obj_params
+ bundle["obj_name"] = obj_name
+ bundle["world2ego"] = world2ego
+ bundle["K_ego"] = K_ego
+ bundle["dist"] = dist
+
+ dataset = PreprocessDataset(bundle)
+
+ dataloader = torch.utils.data.DataLoader(
+ dataset,
+ batch_size=320,
+ shuffle=False,
+ num_workers=0,
+ )
+ return dataloader
diff --git a/src/arctic/processing.py b/src/arctic/processing.py
new file mode 100644
index 0000000..f86d643
--- /dev/null
+++ b/src/arctic/processing.py
@@ -0,0 +1,494 @@
+import json
+import os
+import os.path as op
+import sys
+
+import numpy as np
+import torch
+
+import common.rot as rot
+import common.transforms as tf
+
+sys.path = ["."] + sys.path
+
+import common.body_models as human_models
+import common.ld_utils as ld_utils
+import common.thing as thing
+from common.ld_utils import cat_dl
+from src.arctic.preprocess_dataset import construct_loader
+
+with open("./data/arctic_data/data/meta/misc.json", "r") as f:
+ misc = json.load(f)
+
+IGNORE_KEYS = ["v_len", "bottom_anchor", "f", "f_len", "parts_ids", "mask", "diameter"]
+
+
+def compute_bbox_batch(kp2d, obj_s):
+ assert isinstance(kp2d, torch.Tensor)
+ assert len(kp2d.shape) == 3
+ # (batch, view, 2)
+ x_max = kp2d[:, :, 0].max(dim=1).values
+ x_min = kp2d[:, :, 0].min(dim=1).values
+
+ y_max = kp2d[:, :, 1].max(dim=1).values
+ y_min = kp2d[:, :, 1].min(dim=1).values
+
+ x_dim = x_max - x_min
+ y_dim = y_max - y_min
+
+ obj_scale = torch.maximum(x_dim, y_dim) * obj_s
+
+ bbox = torch.FloatTensor(get_bbox_from_kp2d(kp2d.cpu().numpy()).T).to(kp2d.device)
+
+ cx = bbox[:, 0]
+ cy = bbox[:, 1]
+ bbox_w = bbox[:, 2]
+ bbox_h = bbox[:, 3]
+ bbox_dim = torch.maximum(bbox_w, bbox_h) + obj_scale
+
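+ # bbox scale is normalized by 200 px (de-normalized when cropping images)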
+ scale = bbox_dim / 200.0
+ center = [cx, cy]
+ return scale, center
+
+
+def forward_define_bbox(out_all_2d, obj_s):
+ # statcams bbox
+ kp2d = out_all_2d["verts.object"][:, :9]
+ batch_size = kp2d.shape[0]
+ kp2d = kp2d.reshape(batch_size * 9, -1, 2)
+
+ scale, (cx, cy) = compute_bbox_batch(kp2d, obj_s)
+ scale = scale.view(batch_size, 9)
+ cx = cx.view(batch_size, 9)
+ cy = cy.view(batch_size, 9)
+
+ # egocam bbox: it has fixed dim
+ ego_cx = 2800 / 2.0
+ ego_cy = 2000 / 2.0
+ ego_dim = 2800 / 200.0
+ cx[:, 0] = ego_cx
+ cy[:, 0] = ego_cy
+ scale[:, 0] = ego_dim
+
+ # smallest bbox with size 600px x 600px
+ scale[:, 1:] = torch.clamp(scale[:, 1:], 3.0, None)
+ bbox = torch.stack((cx, cy, scale), dim=2)
+ return bbox
+
+
+def process_batch(
+ batch,
+ layers,
+ smplx_m,
+ world2cam,
+ intris_mat,
+ image_sizes,
+ sid,
+ export_verts,
+):
+ out_world = forward_gt_world(batch, layers, smplx_m)
+ out_all_views = forward_world2cam(batch, out_world, world2cam)
+ out_pts_views = []
+ for out in out_all_views:
+ out_pts = {k: v for k, v in out.items() if "rot" not in k}
+ out_pts_views.append(out_pts)
+ out_all_2d = forward_project2d(batch, out_pts_views, intris_mat)
+ bbox = forward_define_bbox(out_all_2d, obj_s=0.6)
+ out_valid = forward_valid(
+ bbox,
+ out_all_2d["joints.right"],
+ out_all_2d["joints.left"],
+ out_all_2d["verts.object"],
+ image_sizes,
+ sid,
+ )
+
+ if not export_verts:
+ # remove verts from world-coordinate output
+ keys = list(out_world.keys())
+ _out_world = {}
+ for key in keys:
+ if "verts" in key:
+ continue
+ _out_world[key] = out_world[key]
+ out_world = _out_world
+
+ # remove not needed terms in out_all_views
+ keys = list(out_all_views[0].keys())
+ out_views = {}
+ for key in keys:
+ if not export_verts and "verts" in key:
+ continue
+ if "rot" in key:
+ out_views[key] = torch.stack(
+ [out_all_views[idx][key] for idx in range(len(out_all_views) - 1)],
+ dim=1,
+ )
+ else:
+ out_views[key] = torch.stack(
+ [out_all_views[idx][key] for idx in range(len(out_all_views))], dim=1
+ )
+
+ out_views.update(out_valid)
+
+ if not export_verts:
+ # remove not needed terms in 2d
+ out_all_2d = {k: v for k, v in out_all_2d.items() if "verts" not in k}
+ return out_world, out_views, out_all_2d, bbox
+
+
+def transform_mano_rot_cam(rot_r_world, world2cam):
+ world2cam_batch = world2cam[:, :3, :3]
+ quat_world2cam = rot.matrix_to_quaternion(world2cam_batch).cuda()
+ rot_r_quat = rot.axis_angle_to_quaternion(rot_r_world)
+ rot_r_cam = rot.quaternion_to_axis_angle(
+ rot.quaternion_multiply(quat_world2cam, rot_r_quat)
+ )
+ return rot_r_cam
+
+
+def transform_points_dict(world2cam, pts3d_dict):
+ out_all_cam = {}
+ for key, pts_world in pts3d_dict.items():
+ if "rot" not in key and key not in IGNORE_KEYS:
+ out_all_cam[key] = tf.transform_points_batch(world2cam, pts_world)
+ rot_r_cam = transform_mano_rot_cam(pts3d_dict["rot_r"], world2cam)
+ rot_l_cam = transform_mano_rot_cam(pts3d_dict["rot_l"], world2cam)
+ obj_rot_cam = transform_mano_rot_cam(pts3d_dict["obj_rot"], world2cam)
+ out_all_cam["rot_r_cam"] = rot_r_cam
+ out_all_cam["rot_l_cam"] = rot_l_cam
+ out_all_cam["obj_rot_cam"] = obj_rot_cam
+ return out_all_cam
+
+
+def project_2d_dict(K, pts_dict):
+ # project all points in a dict via intrinsics
+ # return dict
+ out_2d = {}
+ for key, pts3d_cam in pts_dict.items():
+ out_2d[key] = tf.project2d_batch(K, pts3d_cam)
+ return out_2d
+
+
+def forward_gt_world(batch, layers, smplx_m):
+ # world coord GT
+ out = layers["right"](
+ global_orient=batch["rot_r"],
+ hand_pose=batch["pose_r"],
+ betas=batch["shape_r"],
+ transl=batch["trans_r"],
+ )
+ mano_r = {"verts.right": out.vertices, "joints.right": out.joints}
+
+ out = layers["left"](
+ global_orient=batch["rot_l"],
+ hand_pose=batch["pose_l"],
+ betas=batch["shape_l"],
+ transl=batch["trans_l"],
+ )
+ mano_l = {"verts.left": out.vertices, "joints.left": out.joints}
+ assert mano_l["joints.left"].shape[1] == 21
+
+ # smplx
+ params = {}
+ params["global_orient"] = batch["smplx_global_orient"]
+ params["body_pose"] = batch["smplx_body_pose"]
+ params["left_hand_pose"] = batch["smplx_left_hand_pose"]
+ params["right_hand_pose"] = batch["smplx_right_hand_pose"]
+ params["jaw_pose"] = batch["smplx_jaw_pose"]
+ params["leye_pose"] = batch["smplx_leye_pose"]
+ params["reye_pose"] = batch["smplx_reye_pose"]
+ params["transl"] = batch["smplx_transl"]
+
+ out = smplx_m(**params)
+ smplx_v = out.vertices
+ smplx_j = out.joints
+
+ smplx_out = {"verts.smplx": smplx_v, "joints.smplx": smplx_j}
+ query_names = batch["query_names"]
+
+ with torch.no_grad():
+ obj_out = layers["object"](
+ angles=batch["obj_arti"].view(-1, 1),
+ global_orient=batch["obj_rot"],
+ transl=batch["obj_trans"] / 1000,
+ query_names=query_names,
+ ) # vicon coord
+
+ # v_sub
+ # parts_sub_ids
+ obj_out.pop("v_sub")
+ obj_out.pop("parts_sub_ids")
+ vo = obj_out["v"]
+ obj_out.pop("v")
+ obj_out["verts.object"] = vo
+
+ out_all = {}
+ out_all.update(mano_r)
+ out_all.update(mano_l)
+ out_all.update(obj_out)
+ out_all.update(smplx_out)
+ out_all["rot_r"] = batch["rot_r"]
+ out_all["rot_l"] = batch["rot_l"]
+ out_all["obj_rot"] = batch["obj_rot"]
+ return out_all
+
+
+def forward_world2cam(batch, out_world, world2cam):
+ # [ego, 8 views, distort ego]
+ batch_size = batch["world2ego"].shape[0]
+
+ # egocentric view: undistorted space
+ out_all_views = []
+ out_all_views.append(transform_points_dict(batch["world2ego"], out_world))
+ # allocentric views
+ for view_idx in range(8):
+ out_all_views.append(
+ transform_points_dict(
+ world2cam[view_idx : view_idx + 1].repeat(batch_size, 1, 1), out_world
+ )
+ )
+ # egocentric view: distorted space
+ out_all_cam = {}
+ for key, pts_world in out_world.items():
+ if "rot" in key or key in IGNORE_KEYS:
+ continue
+
+ pts3d_ego = tf.transform_points_batch(batch["world2ego"], pts_world)
+ dist = batch["dist"][0] # same within subject
+ dist_pts = tf.distort_pts3d_all(pts3d_ego, dist)
+ out_all_cam[key] = dist_pts
+ out_all_views.append(out_all_cam)
+ return out_all_views
+
+
+def forward_project2d(batch, out_all_views, intris_mat):
+ batch_size = batch["K_ego"].shape[0]
+
+ # project 2d: ego undist
+ out_all_2d = []
+ out_all_2d.append(project_2d_dict(batch["K_ego"], out_all_views[0]))
+
+ # project 2d: allocentric
+ for view_idx in range(8):
+ # ego, allo, allo, ..
+ out_2d = project_2d_dict(
+ intris_mat[view_idx : view_idx + 1].repeat(batch_size, 1, 1),
+ out_all_views[view_idx + 1],
+ )
+ out_all_2d.append(out_2d)
+
+ # project 2d: ego dist
+ out_all_2d.append(project_2d_dict(batch["K_ego"], out_all_views[-1]))
+
+ # reform tensors
+ out_all_views = ld_utils.ld2dl(out_all_views)
+ for key, out in out_all_views.items():
+ out_all_views[key] = torch.stack(out, dim=1)
+
+ out_all_2d = ld_utils.ld2dl(out_all_2d)
+ for key, out in out_all_2d.items():
+ out_all_2d[key] = torch.stack(out, dim=1)
+
+ return out_all_2d
+
+
+def get_bbox_from_kp2d(kp_2d):
+ if len(kp_2d.shape) > 2:
+ ul = np.array(
+ [kp_2d[:, :, 0].min(axis=1), kp_2d[:, :, 1].min(axis=1)]
+ ) # upper left
+ lr = np.array(
+ [kp_2d[:, :, 0].max(axis=1), kp_2d[:, :, 1].max(axis=1)]
+ ) # lower right
+ else:
+ ul = np.array([kp_2d[:, 0].min(), kp_2d[:, 1].min()]) # upper left
+ lr = np.array([kp_2d[:, 0].max(), kp_2d[:, 1].max()]) # lower right
+
+ # ul[1] -= (lr[1] - ul[1]) * 0.10 # prevent cutting the head
+ w = lr[0] - ul[0]
+ h = lr[1] - ul[1]
+ c_x, c_y = ul[0] + w / 2, ul[1] + h / 2
+ # to keep the aspect ratio
+ w = h = np.where(w / h > 1, w, h)
+ w = h = h * 1.1
+
+ bbox = np.array([c_x, c_y, w, h]) # shape = (4,N)
+ return bbox
+
+
+def bbox_jts_to_valid(bboxes, j2d):
+ # scale is de-normalized
+ assert isinstance(bboxes, torch.Tensor)
+ assert isinstance(j2d, torch.Tensor)
+ assert bboxes.shape[0] == j2d.shape[0]
+ assert j2d.shape[1] == 9
+ assert bboxes.shape[1] == 9
+ xmin = bboxes[:, :, 0]
+ ymin = bboxes[:, :, 1]
+ xmax = bboxes[:, :, 2]
+ ymax = bboxes[:, :, 3]
+
+ xvalid = (xmin[:, :, None] <= j2d[:, :, :, 0]) * (
+ j2d[:, :, :, 0] <= xmax[:, :, None]
+ )
+
+ yvalid = (ymin[:, :, None] <= j2d[:, :, :, 1]) * (
+ j2d[:, :, :, 1] <= ymax[:, :, None]
+ )
+
+ jts_valid = xvalid * yvalid
+
+ jts_valid = jts_valid.long()
+ return jts_valid
+
+
+def forward_valid(bboxes, joints_right, joints_left, verts_object, image_sizes, sid):
+ view_ind = np.arange(9)
+ view_ind[0] = 9
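+ # index 9 holds the distorted egocentric projection (appended last in forward_project2d);
+ # use it instead of the undistorted ego view (index 0) for the visibility check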
+ j2d_r = joints_right[:, view_ind].clone()
+ j2d_l = joints_left[:, view_ind].clone()
+ v2d_o = verts_object[:, view_ind].clone().mean(dim=2)[:, :, None, :]
+ dev = v2d_o.device
+
+ # prepare bboxes
+ im_sizes = torch.FloatTensor(np.array(image_sizes[sid])).to(dev) # width, height
+ im_w = im_sizes[:, 0]
+ im_h = im_sizes[:, 1]
+
+ bbox_stat = bboxes[:, 1:].clone()
+ bbox_stat[:, :, 2] *= 200
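+ # the bbox scale is stored in units of 200px; convert it back to pixels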
+ num_frames = bbox_stat.shape[0]
+ bboxes_stat = fetch_bbox_stat(bbox_stat, im_w[1:], im_h[1:])
+ bboxes_ego = (
+ torch.FloatTensor(np.array([1, 1, 2800, 2000]))[None, None, :]
+ .repeat(num_frames, 1, 1)
+ .to(dev)
+ )
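+ # the egocentric view is not cropped: use the full 2800x2000 image as its bbox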
+ bboxes_all = torch.cat((bboxes_ego, bboxes_stat), dim=1)
+
+ hand_valid_r = bbox_jts_to_valid(bboxes_all.clone(), j2d_r)
+ hand_valid_l = bbox_jts_to_valid(bboxes_all.clone(), j2d_l)
+ is_valid = bbox_jts_to_valid(bboxes_all.clone(), v2d_o) # center of object verts
+
+ is_valid = is_valid[:, :, 0]
+
+ # right_valid if at least 3 joints are valid and root is inside the bbox
+ # is_valid if center of object is inside bbox
+ right_valid = hand_valid_r[:, :, 0] * (hand_valid_r.sum(dim=2) >= 3).long()
+ left_valid = hand_valid_l[:, :, 0] * (hand_valid_l.sum(dim=2) >= 3).long()
+ out = {"is_valid": is_valid, "left_valid": left_valid, "right_valid": right_valid}
+ return out
+
+
+def fetch_bbox_stat(bbox_stat, im_w, im_h):
+ assert isinstance(bbox_stat, torch.Tensor)
+ assert isinstance(im_w, torch.Tensor)
+ assert isinstance(im_h, torch.Tensor)
+ num_frames, num_views = bbox_stat.shape[:2]
+ assert im_w.shape[0] == num_views
+ assert im_h.shape == im_w.shape
+ cx = bbox_stat[:, :, 0]
+ cy = bbox_stat[:, :, 1]
+ scale = bbox_stat[:, :, 2]
+
+ # limit the bbox corners to the image extent
+ xmin = torch.clamp(cx - scale / 2, 1)
+ ymin = torch.clamp(cy - scale / 2, 1)
+
+ im_w_batch = im_w[None, :].repeat(num_frames, 1)
+ im_h_batch = im_h[None, :].repeat(num_frames, 1)
+
+ xmax = torch.minimum(cx + scale / 2, im_w_batch)
+ ymax = torch.minimum(cy + scale / 2, im_h_batch)
+ boxes = torch.stack((xmin, ymin, xmax, ymax), dim=2)
+ return boxes
+
+
+def process_seq(task, export_verts=False):
+ """
+ Process one sequence
+ """
+
+ with torch.no_grad():
+ mano_p, dev, statcams, layers, pbar = task
+
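+ # "misc" is assumed to be a module-level dict loaded from meta/misc.json
+ # (per-subject camera parameters and image sizes)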
+ image_sizes = {}
+ for sub in misc.keys():
+ image_sizes[sub] = misc[sub]["image_size"]
+ sub = mano_p.split("/")[-2]
+
+ curr_batch_size = None
+
+ cams = statcams[sub]
+ world2cam = cams["world2cam"].to(dev)
+ intris_mat = cams["intris_mat"].to(dev)
+
+ loader = construct_loader(mano_p)
+ out_world_list = []
+ out_views_list = []
+ out_2d_list = []
+ out_bbox_list = []
+ batch_list = []
+ for batch in loader:
+ batch = thing.thing2dev(batch, dev)
+ batch_size = batch["rot_r"].shape[0]
+ if batch_size != curr_batch_size:
+ curr_batch_size = batch_size
+ smplx_m = human_models.build_subject_smplx(curr_batch_size, sub).to(dev)
+ out_world, out_views, out_all_2d, bbox = process_batch(
+ batch,
+ layers,
+ smplx_m,
+ world2cam,
+ intris_mat,
+ image_sizes,
+ sub,
+ export_verts,
+ )
+ out_world_list.append(out_world)
+ out_views_list.append(out_views)
+ out_2d_list.append(out_all_2d)
+ out_bbox_list.append(bbox)
+
+ batch.pop("query_names")
+ batch_list.append(batch)
+
+ out_world_list = ld_utils.ld2dl(out_world_list)
+ out_views_list = ld_utils.ld2dl(out_views_list)
+ out_2d_list = ld_utils.ld2dl(out_2d_list)
+ batch_list = ld_utils.ld2dl(batch_list)
+ out_bbox_list = torch.cat(out_bbox_list, dim=0)
+
+ out_world_list = cat_dl(out_world_list, dim=0)
+ out_views_list = cat_dl(out_views_list, dim=0)
+ out_2d_list = cat_dl(out_2d_list, dim=0)
+ batch_list = cat_dl(batch_list, dim=0)
+
+ out_world_list = thing.thing2np(out_world_list)
+ out_views_list = thing.thing2np(out_views_list)
+ out_2d_list = thing.thing2np(out_2d_list)
+ batch_list = thing.thing2np(batch_list)
+ out_bbox_list = out_bbox_list.cpu().detach().numpy()
+
+ out = {}
+ out["world_coord"] = out_world_list
+ out["cam_coord"] = out_views_list
+ out["2d"] = out_2d_list
+ out["bbox"] = out_bbox_list
+ out["params"] = batch_list
+
+ sid, seqname = mano_p.split("/")[-2:]
+ if export_verts:
+ out_p = f"./outputs/processed_verts/seqs/{sid}/{seqname}"
+ else:
+ out_p = f"./outputs/processed/seqs/{sid}/{seqname}"
+ out_p = out_p.replace(".mano", "").replace("/annot", "")
+ out_folder = op.dirname(out_p)
+
+ if not op.exists(out_folder):
+ os.makedirs(out_folder)
+
+ pbar.set_description(f"Save to {out_p}")
+ np.save(out_p, out)
diff --git a/src/arctic/split.py b/src/arctic/split.py
new file mode 100644
index 0000000..f08582a
--- /dev/null
+++ b/src/arctic/split.py
@@ -0,0 +1,192 @@
+import json
+import os
+import os.path as op
+
+import numpy as np
+from loguru import logger
+from tqdm import tqdm
+
+# view 0 is the egocentric view
+_VIEWS = [0, 1, 2, 3, 4, 5, 6, 7, 8]
+_SUBJECTS = [
+ "s01", # F
+ "s02", # F
+ "s03", # M
+ "s04", # M
+ "s05", # F
+ "s06", # M
+ "s07", # M
+ "s08", # F
+ "s09", # F
+ "s10", # M
+]
+
+
+def get_selected_seqs(setup, split):
+ assert split in ["train", "val", "test"]
+
+ # load seq names from json
+ with open(
+ op.join("./data/arctic_data/data/splits_json/", f"protocol_{setup}.json"), "r"
+ ) as f:
+ splits = json.load(f)
+
+ train_seqs = splits["train"]
+ val_seqs = splits["val"]
+ test_seqs = splits["test"]
+
+ # sanity check no overlap seqs
+ all_seqs = train_seqs + val_seqs + test_seqs
+ val_test_seqs = val_seqs + test_seqs
+ assert len(set(val_test_seqs)) == len(set(val_seqs)) + len(set(test_seqs))
+ for seq in val_test_seqs:
+ if seq not in all_seqs:
+ logger.info(seq)
+ assert False, f"{seq} not in all_seqs"
+
+ train_seqs = [seq for seq in all_seqs if seq not in val_test_seqs]
+ all_seqs = train_seqs + val_test_seqs
+ assert len(all_seqs) == len(set(all_seqs))
+
+ # return
+ if split == "train":
+ return train_seqs
+ if split == "val":
+ return val_seqs
+ if split == "test":
+ return test_seqs
+
+
+def get_selected_views(setup, split):
+ # return view ids to use based on setup and split
+ assert split in ["train", "val", "test"]
+ assert setup in [
+ "p1",
+ "p2",
+ "p1a",
+ "p2a",
+ ]
+ # p1/p1a: allocentric (static) views only
+ if setup in ["p1", "p1a"]:
+ return _VIEWS[1:]
+
+ # p2/p2a: egocentric view only
+ if setup in ["p2", "p2a"]:
+ return _VIEWS[:1]
+
+
+def glob_fnames(num_frames, seq, chosen_views):
+ # construct paths to images
+ sid, seq_name = seq.split("/")
+ folder_p = op.join(f"./data/arctic_data/data/images/{sid}/{seq_name}/")
+
+ # ignore first 10 and last 10 frames as images may be entirely black
+ glob_ps = [
+ op.join(folder_p, "2", "%05d.jpg" % (frame_idx))
+ for frame_idx in range(10, num_frames - 10)
+ ]
+
+ # create jpg paths based on selected views
+ fnames = []
+ for glob_p in glob_ps:
+ for view in chosen_views:
+ new_p = glob_p.replace("/2/", f"/{view}/")
+ fnames.append(new_p)
+
+ assert len(fnames) == len(chosen_views) * len(glob_ps)
+ assert len(fnames) == len(set(fnames))
+ return fnames
+
+
+def sanity_check_splits(protocol):
+ # make sure no overlapping seq
+ train_seqs = get_selected_seqs(protocol, "train")
+ val_seqs = get_selected_seqs(protocol, "val")
+ test_seqs = get_selected_seqs(protocol, "test")
+ all_seqs = list(set(train_seqs + val_seqs + test_seqs))
+ assert len(train_seqs) == len(set(train_seqs))
+ assert len(val_seqs) == len(set(val_seqs))
+ assert len(test_seqs) == len(set(test_seqs))
+
+ train_seqs = set(train_seqs)
+ val_seqs = set(val_seqs)
+ test_seqs = set(test_seqs)
+ assert len(set.intersection(train_seqs, val_seqs)) == 0
+ assert len(set.intersection(train_seqs, test_seqs)) == 0
+ assert len(set.intersection(test_seqs, val_seqs)) == 0
+ assert len(all_seqs) == len(train_seqs) + len(val_seqs) + len(test_seqs)
+
+
+def sanity_check_annot(seq_name, data):
+ # make sure no NaN or Inf
+ num_frames = data["params"]["pose_r"].shape[0]
+ for pkey, side_dict in data.items():
+ if isinstance(side_dict, dict):
+ for key, val in side_dict.items():
+ if "smplx" in key:
+ # smplx distortion can be undefined
+ continue
+ assert np.isnan(val).sum() == 0, f"{seq_name}: {pkey}_{key} has NaN"
+ assert np.isinf(val).sum() == 0, f"{seq_name}: {pkey}_{key} has Inf"
+ assert num_frames == val.shape[0]
+ else:
+ if "smplx" in pkey:
+ # smplx distortion can be undefined
+ continue
+ # use pkey here: "key" is only defined in the dict branch above
+ assert np.isnan(side_dict).sum() == 0, f"{seq_name}: {pkey} has NaN"
+ assert np.isinf(side_dict).sum() == 0, f"{seq_name}: {pkey} has Inf"
+ assert num_frames == side_dict.shape[0]
+
+
+def build_split(protocol, split, request_keys, process_folder):
+ logger.info(">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>")
+ logger.info(f"Constructing split {split} for protocol {protocol}")
+ # extract seq_names
+ # unpack protocol
+ sanity_check_splits(protocol)
+ chosen_seqs = get_selected_seqs(protocol, split)
+ logger.info(f"Chosen {len(chosen_seqs)} seqs:")
+ logger.info(chosen_seqs)
+ chosen_views = get_selected_views(protocol, split)
+ logger.info(f"Chosen {len(chosen_views)} views:")
+ logger.info(chosen_views)
+ fseqs = chosen_seqs
+
+ # keep only the requested keys (e.g., world coordinates are not needed for reconstruction)
+ data_dict = {}
+ for seq in tqdm(fseqs):
+ seq_p = op.join(process_folder, f"{seq}.npy")
+ if "_verts" in seq_p:
+ logger.warning(
+ "Building a split with vertices; this requires a lot of storage"
+ )
+ data = np.load(seq_p, allow_pickle=True).item()
+ sanity_check_annot(seq_p, data)
+ data = {k: v for k, v in data.items() if k in request_keys}
+ data_dict[seq] = data
+
+ logger.info(f"Constructing image filenames from {len(fseqs)} seqs")
+ fnames = []
+ for seq in tqdm(fseqs):
+ fnames.append(
+ glob_fnames(data_dict[seq]["params"]["rot_r"].shape[0], seq, chosen_views)
+ )
+ fnames = sum(fnames, [])
+ assert len(fnames) == len(set(fnames))
+
+ logger.info(f"Done. Total {len(fnames)} images")
+
+ out_data = {}
+ out_data["data_dict"] = data_dict
+ out_data["imgnames"] = fnames
+
+ if "_verts" in process_folder:
+ out_p = f"./outputs/splits_verts/{protocol}_{split}.npy"
+ else:
+ out_p = f"./outputs/splits/{protocol}_{split}.npy"
+ out_folder = op.dirname(out_p)
+ if not op.exists(out_folder):
+ os.makedirs(out_folder)
+ logger.info("Dumping data")
+ np.save(out_p, out_data)
+ logger.info(f"Exported: {out_p}")
diff --git a/src/callbacks/loss/loss_arctic_lstm.py b/src/callbacks/loss/loss_arctic_lstm.py
new file mode 100644
index 0000000..9ea5bd6
--- /dev/null
+++ b/src/callbacks/loss/loss_arctic_lstm.py
@@ -0,0 +1,172 @@
+import torch
+import torch.nn as nn
+from pytorch3d.transforms.rotation_conversions import axis_angle_to_matrix
+
+from src.utils.loss_modules import (
+ compute_contact_devi_loss,
+ hand_kp3d_loss,
+ joints_loss,
+ mano_loss,
+ object_kp3d_loss,
+ vector_loss,
+)
+
+l1_loss = nn.L1Loss(reduction="none")
+mse_loss = nn.MSELoss(reduction="none")
+
+
+def compute_loss(pred, gt, meta_info, args):
+ # unpacking pred and gt
+ pred_betas_r = pred["mano.beta.r"]
+ pred_rotmat_r = pred["mano.pose.r"]
+ pred_joints_r = pred["mano.j3d.cam.r"]
+ pred_projected_keypoints_2d_r = pred["mano.j2d.norm.r"]
+ pred_betas_l = pred["mano.beta.l"]
+ pred_rotmat_l = pred["mano.pose.l"]
+ pred_joints_l = pred["mano.j3d.cam.l"]
+ pred_projected_keypoints_2d_l = pred["mano.j2d.norm.l"]
+ pred_kp2d_o = pred["object.kp2d.norm"]
+ pred_kp3d_o = pred["object.kp3d.cam"]
+ pred_rot = pred["object.rot"].view(-1, 3).float()
+ pred_radian = pred["object.radian"].view(-1).float()
+
+ gt_pose_r = gt["mano.pose.r"]
+ gt_betas_r = gt["mano.beta.r"]
+ gt_joints_r = gt["mano.j3d.cam.r"]
+ gt_keypoints_2d_r = gt["mano.j2d.norm.r"]
+ gt_pose_l = gt["mano.pose.l"]
+ gt_betas_l = gt["mano.beta.l"]
+ gt_joints_l = gt["mano.j3d.cam.l"]
+ gt_keypoints_2d_l = gt["mano.j2d.norm.l"]
+ gt_kp2d_o = torch.cat((gt["object.kp2d.norm.t"], gt["object.kp2d.norm.b"]), dim=1)
+ gt_kp3d_o = gt["object.kp3d.cam"]
+ gt_rot = gt["object.rot"].view(-1, 3).float()
+ gt_radian = gt["object.radian"].view(-1).float()
+
+ is_valid = gt["is_valid"]
+ right_valid = gt["right_valid"]
+ left_valid = gt["left_valid"]
+ joints_valid_r = gt["joints_valid_r"]
+ joints_valid_l = gt["joints_valid_l"]
+
+ # reshape
+ gt_pose_r = axis_angle_to_matrix(gt_pose_r.reshape(-1, 3)).reshape(-1, 16, 3, 3)
+ gt_pose_l = axis_angle_to_matrix(gt_pose_l.reshape(-1, 3)).reshape(-1, 16, 3, 3)
+
+ # Compute loss on MANO parameters
+ loss_regr_pose_r, loss_regr_betas_r = mano_loss(
+ pred_rotmat_r,
+ pred_betas_r,
+ gt_pose_r,
+ gt_betas_r,
+ criterion=mse_loss,
+ is_valid=right_valid,
+ )
+ loss_regr_pose_l, loss_regr_betas_l = mano_loss(
+ pred_rotmat_l,
+ pred_betas_l,
+ gt_pose_l,
+ gt_betas_l,
+ criterion=mse_loss,
+ is_valid=left_valid,
+ )
+
+ # Compute 2D reprojection loss for the keypoints
+ loss_keypoints_r = joints_loss(
+ pred_projected_keypoints_2d_r,
+ gt_keypoints_2d_r,
+ criterion=mse_loss,
+ jts_valid=joints_valid_r,
+ )
+ loss_keypoints_l = joints_loss(
+ pred_projected_keypoints_2d_l,
+ gt_keypoints_2d_l,
+ criterion=mse_loss,
+ jts_valid=joints_valid_l,
+ )
+
+ loss_keypoints_o = vector_loss(
+ pred_kp2d_o, gt_kp2d_o, criterion=mse_loss, is_valid=is_valid
+ )
+
+ # Compute 3D keypoint loss
+ loss_keypoints_3d_r = hand_kp3d_loss(
+ pred_joints_r, gt_joints_r, mse_loss, joints_valid_r
+ )
+ loss_keypoints_3d_l = hand_kp3d_loss(
+ pred_joints_l, gt_joints_l, mse_loss, joints_valid_l
+ )
+ loss_keypoints_3d_o = object_kp3d_loss(pred_kp3d_o, gt_kp3d_o, mse_loss, is_valid)
+
+ loss_radian = vector_loss(pred_radian, gt_radian, mse_loss, is_valid)
+ loss_rot = vector_loss(pred_rot, gt_rot, mse_loss, is_valid)
+ loss_transl_l = vector_loss(
+ pred["mano.cam_t.wp.l"] - pred["mano.cam_t.wp.r"],
+ gt["mano.cam_t.wp.l"] - gt["mano.cam_t.wp.r"],
+ mse_loss,
+ right_valid * left_valid,
+ )
+ loss_transl_o = vector_loss(
+ pred["object.cam_t.wp"] - pred["mano.cam_t.wp.r"],
+ gt["object.cam_t.wp"] - gt["mano.cam_t.wp.r"],
+ mse_loss,
+ right_valid * is_valid,
+ )
+
+ loss_cam_t_r = vector_loss(
+ pred["mano.cam_t.wp.r"],
+ gt["mano.cam_t.wp.r"],
+ mse_loss,
+ right_valid,
+ )
+ loss_cam_t_l = vector_loss(
+ pred["mano.cam_t.wp.l"],
+ gt["mano.cam_t.wp.l"],
+ mse_loss,
+ left_valid,
+ )
+ loss_cam_t_o = vector_loss(
+ pred["object.cam_t.wp"], gt["object.cam_t.wp"], mse_loss, is_valid
+ )
+
+ loss_cam_t_r += vector_loss(
+ pred["mano.cam_t.wp.init.r"],
+ gt["mano.cam_t.wp.r"],
+ mse_loss,
+ right_valid,
+ )
+ loss_cam_t_l += vector_loss(
+ pred["mano.cam_t.wp.init.l"],
+ gt["mano.cam_t.wp.l"],
+ mse_loss,
+ left_valid,
+ )
+ loss_cam_t_o += vector_loss(
+ pred["object.cam_t.wp.init"],
+ gt["object.cam_t.wp"],
+ mse_loss,
+ is_valid,
+ )
+ cd_ro, cd_lo = compute_contact_devi_loss(pred, gt)
+
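+ # each entry maps a loss name to (loss tensor, weight); the weighted sum is presumably taken by the trainer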
+ loss_dict = {
+ "loss/mano/cam_t/r": (loss_cam_t_r, 1.0),
+ "loss/mano/cam_t/l": (loss_cam_t_l, 1.0),
+ "loss/object/cam_t": (loss_cam_t_o, 1.0),
+ "loss/mano/kp2d/r": (loss_keypoints_r, 5.0),
+ "loss/mano/kp3d/r": (loss_keypoints_3d_r, 5.0),
+ "loss/mano/pose/r": (loss_regr_pose_r, 10.0),
+ "loss/mano/beta/r": (loss_regr_betas_r, 0.001),
+ "loss/mano/kp2d/l": (loss_keypoints_l, 5.0),
+ "loss/mano/kp3d/l": (loss_keypoints_3d_l, 5.0),
+ "loss/mano/pose/l": (loss_regr_pose_l, 10.0),
+ "loss/cd": (cd_ro + cd_lo, 1.0),
+ "loss/mano/transl/l": (loss_transl_l, 1.0),
+ "loss/mano/beta/l": (loss_regr_betas_l, 0.001),
+ "loss/object/kp2d": (loss_keypoints_o, 1.0),
+ "loss/object/kp3d": (loss_keypoints_3d_o, 5.0),
+ "loss/object/radian": (loss_radian, 1.0),
+ "loss/object/rot": (loss_rot, 1.0),
+ "loss/object/transl": (loss_transl_o, 1.0),
+ }
+ return loss_dict
diff --git a/src/callbacks/loss/loss_arctic_sf.py b/src/callbacks/loss/loss_arctic_sf.py
new file mode 100644
index 0000000..ac45e19
--- /dev/null
+++ b/src/callbacks/loss/loss_arctic_sf.py
@@ -0,0 +1,172 @@
+import torch
+import torch.nn as nn
+from pytorch3d.transforms.rotation_conversions import axis_angle_to_matrix
+
+from src.utils.loss_modules import (
+ compute_contact_devi_loss,
+ hand_kp3d_loss,
+ joints_loss,
+ mano_loss,
+ object_kp3d_loss,
+ vector_loss,
+)
+
+l1_loss = nn.L1Loss(reduction="none")
+mse_loss = nn.MSELoss(reduction="none")
+
+
+def compute_loss(pred, gt, meta_info, args):
+ # unpacking pred and gt
+ pred_betas_r = pred["mano.beta.r"]
+ pred_rotmat_r = pred["mano.pose.r"]
+ pred_joints_r = pred["mano.j3d.cam.r"]
+ pred_projected_keypoints_2d_r = pred["mano.j2d.norm.r"]
+ pred_betas_l = pred["mano.beta.l"]
+ pred_rotmat_l = pred["mano.pose.l"]
+ pred_joints_l = pred["mano.j3d.cam.l"]
+ pred_projected_keypoints_2d_l = pred["mano.j2d.norm.l"]
+ pred_kp2d_o = pred["object.kp2d.norm"]
+ pred_kp3d_o = pred["object.kp3d.cam"]
+ pred_rot = pred["object.rot"].view(-1, 3).float()
+ pred_radian = pred["object.radian"].view(-1).float()
+
+ gt_pose_r = gt["mano.pose.r"]
+ gt_betas_r = gt["mano.beta.r"]
+ gt_joints_r = gt["mano.j3d.cam.r"]
+ gt_keypoints_2d_r = gt["mano.j2d.norm.r"]
+ gt_pose_l = gt["mano.pose.l"]
+ gt_betas_l = gt["mano.beta.l"]
+ gt_joints_l = gt["mano.j3d.cam.l"]
+ gt_keypoints_2d_l = gt["mano.j2d.norm.l"]
+ gt_kp2d_o = torch.cat((gt["object.kp2d.norm.t"], gt["object.kp2d.norm.b"]), dim=1)
+ gt_kp3d_o = gt["object.kp3d.cam"]
+ gt_rot = gt["object.rot"].view(-1, 3).float()
+ gt_radian = gt["object.radian"].view(-1).float()
+
+ is_valid = gt["is_valid"]
+ right_valid = gt["right_valid"]
+ left_valid = gt["left_valid"]
+ joints_valid_r = gt["joints_valid_r"]
+ joints_valid_l = gt["joints_valid_l"]
+
+ # reshape
+ gt_pose_r = axis_angle_to_matrix(gt_pose_r.reshape(-1, 3)).reshape(-1, 16, 3, 3)
+ gt_pose_l = axis_angle_to_matrix(gt_pose_l.reshape(-1, 3)).reshape(-1, 16, 3, 3)
+
+ # Compute loss on MANO parameters
+ loss_regr_pose_r, loss_regr_betas_r = mano_loss(
+ pred_rotmat_r,
+ pred_betas_r,
+ gt_pose_r,
+ gt_betas_r,
+ criterion=mse_loss,
+ is_valid=right_valid,
+ )
+ loss_regr_pose_l, loss_regr_betas_l = mano_loss(
+ pred_rotmat_l,
+ pred_betas_l,
+ gt_pose_l,
+ gt_betas_l,
+ criterion=mse_loss,
+ is_valid=left_valid,
+ )
+
+ # Compute 2D reprojection loss for the keypoints
+ loss_keypoints_r = joints_loss(
+ pred_projected_keypoints_2d_r,
+ gt_keypoints_2d_r,
+ criterion=mse_loss,
+ jts_valid=joints_valid_r,
+ )
+ loss_keypoints_l = joints_loss(
+ pred_projected_keypoints_2d_l,
+ gt_keypoints_2d_l,
+ criterion=mse_loss,
+ jts_valid=joints_valid_l,
+ )
+
+ loss_keypoints_o = vector_loss(
+ pred_kp2d_o, gt_kp2d_o, criterion=mse_loss, is_valid=is_valid
+ )
+
+ # Compute 3D keypoint loss
+ loss_keypoints_3d_r = hand_kp3d_loss(
+ pred_joints_r, gt_joints_r, mse_loss, joints_valid_r
+ )
+ loss_keypoints_3d_l = hand_kp3d_loss(
+ pred_joints_l, gt_joints_l, mse_loss, joints_valid_l
+ )
+ loss_keypoints_3d_o = object_kp3d_loss(pred_kp3d_o, gt_kp3d_o, mse_loss, is_valid)
+
+ loss_radian = vector_loss(pred_radian, gt_radian, mse_loss, is_valid)
+ loss_rot = vector_loss(pred_rot, gt_rot, mse_loss, is_valid)
+ loss_transl_l = vector_loss(
+ pred["mano.cam_t.wp.l"] - pred["mano.cam_t.wp.r"],
+ gt["mano.cam_t.wp.l"] - gt["mano.cam_t.wp.r"],
+ mse_loss,
+ right_valid * left_valid,
+ )
+ loss_transl_o = vector_loss(
+ pred["object.cam_t.wp"] - pred["mano.cam_t.wp.r"],
+ gt["object.cam_t.wp"] - gt["mano.cam_t.wp.r"],
+ mse_loss,
+ right_valid * is_valid,
+ )
+
+ loss_cam_t_r = vector_loss(
+ pred["mano.cam_t.wp.r"],
+ gt["mano.cam_t.wp.r"],
+ mse_loss,
+ right_valid,
+ )
+ loss_cam_t_l = vector_loss(
+ pred["mano.cam_t.wp.l"],
+ gt["mano.cam_t.wp.l"],
+ mse_loss,
+ left_valid,
+ )
+ loss_cam_t_o = vector_loss(
+ pred["object.cam_t.wp"], gt["object.cam_t.wp"], mse_loss, is_valid
+ )
+
+ loss_cam_t_r += vector_loss(
+ pred["mano.cam_t.wp.init.r"],
+ gt["mano.cam_t.wp.r"],
+ mse_loss,
+ right_valid,
+ )
+ loss_cam_t_l += vector_loss(
+ pred["mano.cam_t.wp.init.l"],
+ gt["mano.cam_t.wp.l"],
+ mse_loss,
+ left_valid,
+ )
+ loss_cam_t_o += vector_loss(
+ pred["object.cam_t.wp.init"],
+ gt["object.cam_t.wp"],
+ mse_loss,
+ is_valid,
+ )
+
+ cd_ro, cd_lo = compute_contact_devi_loss(pred, gt)
+ loss_dict = {
+ "loss/mano/cam_t/r": (loss_cam_t_r, 1.0),
+ "loss/mano/cam_t/l": (loss_cam_t_l, 1.0),
+ "loss/object/cam_t": (loss_cam_t_o, 1.0),
+ "loss/mano/kp2d/r": (loss_keypoints_r, 5.0),
+ "loss/mano/kp3d/r": (loss_keypoints_3d_r, 5.0),
+ "loss/mano/pose/r": (loss_regr_pose_r, 10.0),
+ "loss/mano/beta/r": (loss_regr_betas_r, 0.001),
+ "loss/mano/kp2d/l": (loss_keypoints_l, 5.0),
+ "loss/mano/kp3d/l": (loss_keypoints_3d_l, 5.0),
+ "loss/mano/pose/l": (loss_regr_pose_l, 10.0),
+ "loss/cd": (cd_ro + cd_lo, 1.0),
+ "loss/mano/transl/l": (loss_transl_l, 1.0),
+ "loss/mano/beta/l": (loss_regr_betas_l, 0.001),
+ "loss/object/kp2d": (loss_keypoints_o, 1.0),
+ "loss/object/kp3d": (loss_keypoints_3d_o, 5.0),
+ "loss/object/radian": (loss_radian, 1.0),
+ "loss/object/rot": (loss_rot, 1.0),
+ "loss/object/transl": (loss_transl_o, 1.0),
+ }
+ return loss_dict
diff --git a/src/callbacks/loss/loss_field.py b/src/callbacks/loss/loss_field.py
new file mode 100644
index 0000000..362d158
--- /dev/null
+++ b/src/callbacks/loss/loss_field.py
@@ -0,0 +1,52 @@
+import torch.nn as nn
+
+from common.xdict import xdict
+
+l1_loss = nn.L1Loss(reduction="none")
+mse_loss = nn.MSELoss(reduction="none")
+ce_loss = nn.CrossEntropyLoss(reduction="none")
+
+
+def dist_loss(loss_dict, pred, gt, meta_info):
+ is_valid = gt["is_valid"]
+ mask_o = meta_info["mask"]
+
+ # interfield
+ loss_ro = mse_loss(pred[f"dist.ro"], gt["dist.ro"])
+ loss_lo = mse_loss(pred[f"dist.lo"], gt["dist.lo"])
+
+ pad_olen = min(pred[f"dist.or"].shape[1], gt["dist.or"].shape[1])
+
+ loss_or = mse_loss(pred[f"dist.or"][:, :pad_olen], gt["dist.or"][:, :pad_olen])
+ loss_ol = mse_loss(pred[f"dist.ol"][:, :pad_olen], gt["dist.ol"][:, :pad_olen])
+
+ # many GT distances saturate at the 10cm cap; down-weight them (x0.1 below)
+ # rather than skipping, to prevent overfitting to the boundary value
+ bnd = 0.1 # 10cm
+ bnd_idx_ro = gt["dist.ro"] == bnd
+ bnd_idx_lo = gt["dist.lo"] == bnd
+ bnd_idx_or = gt["dist.or"][:, :pad_olen] == bnd
+ bnd_idx_ol = gt["dist.ol"][:, :pad_olen] == bnd
+
+ loss_or = loss_or * mask_o * is_valid[:, None]
+ loss_ol = loss_ol * mask_o * is_valid[:, None]
+
+ loss_ro = loss_ro * is_valid[:, None]
+ loss_lo = loss_lo * is_valid[:, None]
+
+ loss_or[bnd_idx_or] *= 0.1
+ loss_ol[bnd_idx_ol] *= 0.1
+ loss_ro[bnd_idx_ro] *= 0.1
+ loss_lo[bnd_idx_lo] *= 0.1
+
+ weight = 100.0
+ loss_dict[f"loss/dist/ro"] = (loss_ro.mean(), weight)
+ loss_dict[f"loss/dist/lo"] = (loss_lo.mean(), weight)
+ loss_dict[f"loss/dist/or"] = (loss_or.mean(), weight)
+ loss_dict[f"loss/dist/ol"] = (loss_ol.mean(), weight)
+ return loss_dict
+
+
+def compute_loss(pred, gt, meta_info, args):
+ loss_dict = xdict()
+ loss_dict = dist_loss(loss_dict, pred, gt, meta_info)
+ return loss_dict
diff --git a/src/callbacks/process/process_arctic.py b/src/callbacks/process/process_arctic.py
new file mode 100644
index 0000000..ebc0192
--- /dev/null
+++ b/src/callbacks/process/process_arctic.py
@@ -0,0 +1,151 @@
+import common.camera as camera
+import common.data_utils as data_utils
+import common.transforms as tf
+import src.callbacks.process.process_generic as generic
+
+
+def process_data(
+ models, inputs, targets, meta_info, mode, args, field_max=float("inf")
+):
+ img_res = 224
+ K = meta_info["intrinsics"]
+ gt_pose_r = targets["mano.pose.r"] # MANO pose parameters
+ gt_betas_r = targets["mano.beta.r"] # MANO beta parameters
+
+ gt_pose_l = targets["mano.pose.l"] # MANO pose parameters
+ gt_betas_l = targets["mano.beta.l"] # MANO beta parameters
+
+ gt_kp2d_b = targets["object.kp2d.norm.b"] # 2D keypoints for object base
+ gt_object_rot = targets["object.rot"].view(-1, 3)
+
+ # pose the object without translation (call it object cano space)
+ out = models["arti_head"].object_tensors.forward(
+ angles=targets["object.radian"].view(-1, 1),
+ global_orient=gt_object_rot,
+ transl=None,
+ query_names=meta_info["query_names"],
+ )
+ diameters = out["diameter"]
+ parts_idx = out["parts_ids"]
+ meta_info["part_ids"] = parts_idx
+ meta_info["diameter"] = diameters
+
+ # target keypoints of hands and objects are in camera coordinates (full-resolution image space);
+ # map all entities from camera coordinates to object cano space via the rigid transform
+ # between the object base keypoints in camera coordinates and in object cano space.
+ # Since only R, T are applied, the relative hand-object distances are preserved.
+ num_kps = out["kp3d"].shape[1] // 2
+ kp3d_b_cano = out["kp3d"][:, num_kps:]
+ R0, T0 = tf.batch_solve_rigid_tf(targets["object.kp3d.full.b"], kp3d_b_cano)
+ joints3d_r0 = tf.rigid_tf_torch_batch(targets["mano.j3d.full.r"], R0, T0)
+ joints3d_l0 = tf.rigid_tf_torch_batch(targets["mano.j3d.full.l"], R0, T0)
+
+ # pose MANO in MANO canonical space
+ gt_out_r = models["mano_r"](
+ betas=gt_betas_r,
+ hand_pose=gt_pose_r[:, 3:],
+ global_orient=gt_pose_r[:, :3],
+ transl=None,
+ )
+ gt_model_joints_r = gt_out_r.joints
+ gt_vertices_r = gt_out_r.vertices
+ gt_root_cano_r = gt_out_r.joints[:, 0]
+
+ gt_out_l = models["mano_l"](
+ betas=gt_betas_l,
+ hand_pose=gt_pose_l[:, 3:],
+ global_orient=gt_pose_l[:, :3],
+ transl=None,
+ )
+ gt_model_joints_l = gt_out_l.joints
+ gt_vertices_l = gt_out_l.vertices
+ gt_root_cano_l = gt_out_l.joints[:, 0]
+
+ # map MANO mesh to object canonical space
+ Tr0 = (joints3d_r0 - gt_model_joints_r).mean(dim=1)
+ Tl0 = (joints3d_l0 - gt_model_joints_l).mean(dim=1)
+ gt_model_joints_r = joints3d_r0
+ gt_model_joints_l = joints3d_l0
+ gt_vertices_r += Tr0[:, None, :]
+ gt_vertices_l += Tl0[:, None, :]
+
+ # now that everything is in the object canonical space
+ # find camera translation for rendering relative to the object
+
+ # unnorm 2d keypoints
+ gt_kp2d_b_cano = data_utils.unormalize_kp2d(gt_kp2d_b, img_res)
+
+ # estimate camera translation by solving 2d to 3d correspondence
+ gt_transl = camera.estimate_translation_k(
+ kp3d_b_cano,
+ gt_kp2d_b_cano,
+ meta_info["intrinsics"].cpu().numpy(),
+ use_all_joints=True,
+ pad_2d=True,
+ )
+
+ # move to camera coord
+ gt_vertices_r = gt_vertices_r + gt_transl[:, None, :]
+ gt_vertices_l = gt_vertices_l + gt_transl[:, None, :]
+ gt_model_joints_r = gt_model_joints_r + gt_transl[:, None, :]
+ gt_model_joints_l = gt_model_joints_l + gt_transl[:, None, :]
+
+ ####
+ gt_kp3d_o = out["kp3d"] + gt_transl[:, None, :]
+ gt_bbox3d_o = out["bbox3d"] + gt_transl[:, None, :]
+
+ # roots
+ gt_root_cam_patch_r = gt_model_joints_r[:, 0]
+ gt_root_cam_patch_l = gt_model_joints_l[:, 0]
+ gt_cam_t_r = gt_root_cam_patch_r - gt_root_cano_r
+ gt_cam_t_l = gt_root_cam_patch_l - gt_root_cano_l
+ gt_cam_t_o = gt_transl
+
+ targets["mano.cam_t.r"] = gt_cam_t_r
+ targets["mano.cam_t.l"] = gt_cam_t_l
+ targets["object.cam_t"] = gt_cam_t_o
+
+ avg_focal_length = (K[:, 0, 0] + K[:, 1, 1]) / 2.0
+ gt_cam_t_wp_r = camera.perspective_to_weak_perspective_torch(
+ gt_cam_t_r, avg_focal_length, img_res
+ )
+
+ gt_cam_t_wp_l = camera.perspective_to_weak_perspective_torch(
+ gt_cam_t_l, avg_focal_length, img_res
+ )
+
+ gt_cam_t_wp_o = camera.perspective_to_weak_perspective_torch(
+ gt_cam_t_o, avg_focal_length, img_res
+ )
+
+ targets["mano.cam_t.wp.r"] = gt_cam_t_wp_r
+ targets["mano.cam_t.wp.l"] = gt_cam_t_wp_l
+ targets["object.cam_t.wp"] = gt_cam_t_wp_o
+
+ # cam coord of patch
+ targets["object.cam_t.kp3d.b"] = gt_transl
+
+ targets["mano.v3d.cam.r"] = gt_vertices_r
+ targets["mano.v3d.cam.l"] = gt_vertices_l
+ targets["mano.j3d.cam.r"] = gt_model_joints_r
+ targets["mano.j3d.cam.l"] = gt_model_joints_l
+ targets["object.kp3d.cam"] = gt_kp3d_o
+ targets["object.bbox3d.cam"] = gt_bbox3d_o
+
+ out = models["arti_head"].object_tensors.forward(
+ angles=targets["object.radian"].view(-1, 1),
+ global_orient=gt_object_rot,
+ transl=None,
+ query_names=meta_info["query_names"],
+ )
+
+ # GT object vertices in camera coordinates
+ targets["object.v.cam"] = out["v"] + gt_transl[:, None, :]
+ targets["object.v_len"] = out["v_len"]
+
+ targets["object.f"] = out["f"]
+ targets["object.f_len"] = out["f_len"]
+
+ targets = generic.prepare_interfield(targets, field_max)
+
+ return inputs, targets, meta_info
diff --git a/src/callbacks/process/process_field.py b/src/callbacks/process/process_field.py
new file mode 100644
index 0000000..ba00736
--- /dev/null
+++ b/src/callbacks/process/process_field.py
@@ -0,0 +1,41 @@
+import src.callbacks.process.process_arctic as process_arctic
+import src.callbacks.process.process_generic as generic
+
+
+def process_data(models, inputs, targets, meta_info, mode, args):
+ batch_size = meta_info["intrinsics"].shape[0]
+
+ (
+ v0_r,
+ v0_l,
+ v0_o,
+ pidx,
+ v0_r_full,
+ v0_l_full,
+ v0_o_full,
+ mask,
+ cams,
+ ) = generic.prepare_templates(
+ batch_size,
+ models["mano_r"],
+ models["mano_l"],
+ models["mesh_sampler"],
+ models["arti_head"],
+ meta_info["query_names"],
+ )
+
+ meta_info["v0.r"] = v0_r
+ meta_info["v0.l"] = v0_l
+ meta_info["v0.o"] = v0_o
+ meta_info["cams0"] = cams
+ meta_info["parts_idx"] = pidx
+ meta_info["v0.r.full"] = v0_r_full
+ meta_info["v0.l.full"] = v0_l_full
+ meta_info["v0.o.full"] = v0_o_full
+ meta_info["mask"] = mask
+
+ inputs, targets, meta_info = process_arctic.process_data(
+ models, inputs, targets, meta_info, mode, args, field_max=args.max_dist
+ )
+
+ return inputs, targets, meta_info
diff --git a/src/callbacks/process/process_generic.py b/src/callbacks/process/process_generic.py
new file mode 100644
index 0000000..7cbb785
--- /dev/null
+++ b/src/callbacks/process/process_generic.py
@@ -0,0 +1,138 @@
+import torch
+
+import src.utils.interfield as inter
+
+
+def prepare_mano_template(batch_size, mano_layer, mesh_sampler, is_right):
+ root_idx = 0
+
+ # Generate T-pose template mesh
+ template_pose = torch.zeros((1, 48))
+ template_pose = template_pose.cuda()
+ template_betas = torch.zeros((1, 10)).cuda()
+ out = mano_layer(
+ betas=template_betas,
+ hand_pose=template_pose[:, 3:],
+ global_orient=template_pose[:, :3],
+ transl=None,
+ )
+ template_3d_joints = out.joints
+ template_vertices = out.vertices
+ template_vertices_sub = mesh_sampler.downsample(template_vertices, is_right)
+
+ # normalize
+ template_root = template_3d_joints[:, root_idx, :]
+ template_3d_joints = template_3d_joints - template_root[:, None, :]
+ template_vertices = template_vertices - template_root[:, None, :]
+ template_vertices_sub = template_vertices_sub - template_root[:, None, :]
+
+ # concatenate template joints and template vertices, then expand to batch size
+ ref_vertices = torch.cat([template_3d_joints, template_vertices_sub], dim=1)
+ ref_vertices = ref_vertices.expand(batch_size, -1, -1)
+
+ ref_vertices_full = torch.cat([template_3d_joints, template_vertices], dim=1)
+ ref_vertices_full = ref_vertices_full.expand(batch_size, -1, -1)
+ return ref_vertices, ref_vertices_full
+
+
+def prepare_templates(
+ batch_size,
+ mano_r,
+ mano_l,
+ mesh_sampler,
+ arti_head,
+ query_names,
+):
+ v0_r, v0_r_full = prepare_mano_template(
+ batch_size, mano_r, mesh_sampler, is_right=True
+ )
+ v0_l, v0_l_full = prepare_mano_template(
+ batch_size, mano_l, mesh_sampler, is_right=False
+ )
+ (v0_o, pidx, v0_full, mask) = prepare_object_template(
+ batch_size,
+ arti_head.object_tensors,
+ query_names,
+ )
+ CAM_R, CAM_L, CAM_O = list(range(100))[-3:]
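+ # i.e., 97, 98, 99 -> 0.97/0.98/0.99 after the division below: presumably distinct
+ # placeholder camera values for the right hand, left hand, and object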
+ cams = (
+ torch.FloatTensor([CAM_R, CAM_L, CAM_O]).view(1, 3, 1).repeat(batch_size, 1, 3)
+ / 100
+ )
+ cams = cams.to(v0_r.device)
+ return (
+ v0_r,
+ v0_l,
+ v0_o,
+ pidx,
+ v0_r_full,
+ v0_l_full,
+ v0_full,
+ mask,
+ cams,
+ )
+
+
+def prepare_object_template(batch_size, object_tensors, query_names):
+ template_angles = torch.zeros((batch_size, 1)).cuda()
+ template_rot = torch.zeros((batch_size, 3)).cuda()
+ out = object_tensors.forward(
+ angles=template_angles,
+ global_orient=template_rot,
+ transl=None,
+ query_names=query_names,
+ )
+ ref_vertices = out["v_sub"]
+ parts_idx = out["parts_ids"]
+
+ mask = out["mask"]
+
+ ref_mean = ref_vertices.mean(dim=1)[:, None, :]
+ ref_vertices -= ref_mean
+
+ v_template = out["v"]
+ return (ref_vertices, parts_idx, v_template, mask)
+
+
+def prepare_interfield(targets, max_dist):
+ dist_min = 0.0
+ dist_max = max_dist
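+ # dist.* are hand<->object closest-point distances clamped to [dist_min, dist_max];
+ # idx.* are the matching nearest-vertex indices (as computed in src.utils.interfield)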
+ dist_ro, dist_ro_idx = inter.compute_dist_mano_to_obj(
+ targets["mano.v3d.cam.r"],
+ targets["object.v.cam"],
+ targets["object.v_len"],
+ dist_min,
+ dist_max,
+ )
+ dist_lo, dist_lo_idx = inter.compute_dist_mano_to_obj(
+ targets["mano.v3d.cam.l"],
+ targets["object.v.cam"],
+ targets["object.v_len"],
+ dist_min,
+ dist_max,
+ )
+ dist_or, dist_or_idx = inter.compute_dist_obj_to_mano(
+ targets["mano.v3d.cam.r"],
+ targets["object.v.cam"],
+ targets["object.v_len"],
+ dist_min,
+ dist_max,
+ )
+ dist_ol, dist_ol_idx = inter.compute_dist_obj_to_mano(
+ targets["mano.v3d.cam.l"],
+ targets["object.v.cam"],
+ targets["object.v_len"],
+ dist_min,
+ dist_max,
+ )
+
+ targets["dist.ro"] = dist_ro
+ targets["dist.lo"] = dist_lo
+ targets["dist.or"] = dist_or
+ targets["dist.ol"] = dist_ol
+
+ targets["idx.ro"] = dist_ro_idx
+ targets["idx.lo"] = dist_lo_idx
+ targets["idx.or"] = dist_or_idx
+ targets["idx.ol"] = dist_ol_idx
+ return targets
diff --git a/src/callbacks/vis/visualize_arctic.py b/src/callbacks/vis/visualize_arctic.py
new file mode 100644
index 0000000..9d8fb56
--- /dev/null
+++ b/src/callbacks/vis/visualize_arctic.py
@@ -0,0 +1,343 @@
+import matplotlib.pyplot as plt
+import numpy as np
+import torch
+
+import common.thing as thing
+import common.transforms as tf
+import common.vis_utils as vis_utils
+from common.data_utils import denormalize_images
+from common.mesh import Mesh
+from common.rend_utils import color2material
+from common.torch_utils import unpad_vtensor
+
+mesh_color_dict = {
+ "right": [200, 200, 250],
+ "left": [100, 100, 250],
+ "object": [144, 250, 100],
+ "top": [144, 250, 100],
+ "bottom": [129, 159, 214],
+}
+
+
+def visualize_one_example(
+ images_i,
+ kp2d_proj_b_i,
+ kp2d_proj_t_i,
+ joints2d_r_i,
+ joints2d_l_i,
+ kp2d_b_i,
+ kp2d_t_i,
+ bbox2d_b_i,
+ bbox2d_t_i,
+ joints2d_proj_r_i,
+ joints2d_proj_l_i,
+ bbox2d_proj_b_i,
+ bbox2d_proj_t_i,
+ joints_valid_r,
+ joints_valid_l,
+ flag,
+):
+ # indices of clearly visible (valid) joints for each hand
+ valid_idx_r = (joints_valid_r.long() == 1).nonzero().view(-1).numpy()
+ valid_idx_l = (joints_valid_l.long() == 1).nonzero().view(-1).numpy()
+
+ fig, ax = plt.subplots(2, 2, figsize=(8, 8))
+ ax = ax.reshape(-1)
+
+ # GT 2d keypoints (good overlap as it is from perspective camera)
+ ax[0].imshow(images_i)
+ ax[0].scatter(
+ kp2d_b_i[:, 0], kp2d_b_i[:, 1], color="r"
+ ) # keypoints from bottom part of object
+ ax[0].scatter(kp2d_t_i[:, 0], kp2d_t_i[:, 1], color="b") # keypoints from top part
+
+ # right hand keypoints
+ ax[0].scatter(
+ joints2d_r_i[valid_idx_r, 0],
+ joints2d_r_i[valid_idx_r, 1],
+ color="r",
+ marker="x",
+ )
+ ax[0].scatter(
+ joints2d_l_i[valid_idx_l, 0],
+ joints2d_l_i[valid_idx_l, 1],
+ color="b",
+ marker="x",
+ )
+ ax[0].set_title(f"{flag} 2D keypoints")
+
+ # GT 2d keypoints (good overlap as it is from perspective camera)
+ ax[1].imshow(images_i)
+ vis_utils.plot_2d_bbox(bbox2d_b_i, None, "r", ax[1])
+ vis_utils.plot_2d_bbox(bbox2d_t_i, None, "b", ax[1])
+ ax[1].set_title(f"{flag} 2D bbox")
+
+ # GT 3D keypoints projected to 2D using weak perspective projection
+ # (sometimes not completely overlap because of a weak perspective camera)
+ ax[2].imshow(images_i)
+ ax[2].scatter(kp2d_proj_b_i[:, 0], kp2d_proj_b_i[:, 1], color="r")
+ ax[2].scatter(kp2d_proj_t_i[:, 0], kp2d_proj_t_i[:, 1], color="b")
+ ax[2].scatter(
+ joints2d_proj_r_i[valid_idx_r, 0],
+ joints2d_proj_r_i[valid_idx_r, 1],
+ color="r",
+ marker="x",
+ )
+ ax[2].scatter(
+ joints2d_proj_l_i[valid_idx_l, 0],
+ joints2d_proj_l_i[valid_idx_l, 1],
+ color="b",
+ marker="x",
+ )
+ ax[2].set_title(f"{flag} 3D keypoints reprojection from cam")
+
+ # GT 3D bbox projected to 2D using weak perspective projection
+ # (sometimes not completely overlap because of a weak perspective camera)
+ ax[3].imshow(images_i)
+ vis_utils.plot_2d_bbox(bbox2d_proj_b_i, None, "r", ax[3])
+ vis_utils.plot_2d_bbox(bbox2d_proj_t_i, None, "b", ax[3])
+ ax[3].set_title(f"{flag} 3D keypoints reprojection from cam")
+
+ plt.subplots_adjust(wspace=0.05, hspace=0.2)
+ fig.tight_layout()
+ plt.close()
+
+ im = vis_utils.fig2img(fig)
+ return im
+
+
+def visualize_kps(vis_dict, flag, max_examples):
+ # visualize keypoints for prediction or GT
+
+ images = (vis_dict["vis.images"].permute(0, 2, 3, 1) * 255).numpy().astype(np.uint8)
+ K = vis_dict["meta_info.intrinsics"]
+ kp2d_b = vis_dict[f"{flag}.object.kp2d.b"].numpy()
+ kp2d_t = vis_dict[f"{flag}.object.kp2d.t"].numpy()
+ bbox2d_b = vis_dict[f"{flag}.object.bbox2d.b"].numpy()
+ bbox2d_t = vis_dict[f"{flag}.object.bbox2d.t"].numpy()
+
+ joints2d_r = vis_dict[f"{flag}.mano.j2d.r"].numpy()
+ joints2d_l = vis_dict[f"{flag}.mano.j2d.l"].numpy()
+
+ kp3d_o = vis_dict[f"{flag}.object.kp3d.cam"]
+ bbox3d_o = vis_dict[f"{flag}.object.bbox3d.cam"]
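+ # object keypoints are ordered [16 top-part, 16 bottom-part]; bbox corners are [8 top, 8 bottom]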
+ kp2d_proj = tf.project2d_batch(K, kp3d_o)
+ kp2d_proj_t, kp2d_proj_b = torch.split(kp2d_proj, [16, 16], dim=1)
+ kp2d_proj_t = kp2d_proj_t.numpy()
+ kp2d_proj_b = kp2d_proj_b.numpy()
+
+ bbox2d_proj = tf.project2d_batch(K, bbox3d_o)
+ bbox2d_proj_t, bbox2d_proj_b = torch.split(bbox2d_proj, [8, 8], dim=1)
+ bbox2d_proj_t = bbox2d_proj_t.numpy()
+ bbox2d_proj_b = bbox2d_proj_b.numpy()
+
+ # project 3D joints to 2D with intrinsics K (may not fully overlap, since the
+ # camera translation comes from a weak-perspective fit)
+ joints3d_r = vis_dict[f"{flag}.mano.j3d.cam.r"]
+ joints2d_proj_r = tf.project2d_batch(K, joints3d_r).numpy()
+ joints3d_l = vis_dict[f"{flag}.mano.j3d.cam.l"]
+ joints2d_proj_l = tf.project2d_batch(K, joints3d_l).numpy()
+
+ joints_valid_r = vis_dict["targets.joints_valid_r"]
+ joints_valid_l = vis_dict["targets.joints_valid_l"]
+
+ im_list = []
+ for idx in range(min(images.shape[0], max_examples)):
+ image_id = vis_dict["vis.image_ids"][idx]
+ im = visualize_one_example(
+ images[idx],
+ kp2d_proj_b[idx],
+ kp2d_proj_t[idx],
+ joints2d_r[idx],
+ joints2d_l[idx],
+ kp2d_b[idx],
+ kp2d_t[idx],
+ bbox2d_b[idx],
+ bbox2d_t[idx],
+ joints2d_proj_r[idx],
+ joints2d_proj_l[idx],
+ bbox2d_proj_b[idx],
+ bbox2d_proj_t[idx],
+ joints_valid_r[idx],
+ joints_valid_l[idx],
+ flag,
+ )
+ im_list.append({"fig_name": f"{image_id}__kps", "im": im})
+ return im_list
+
+
+def visualize_rend(
+ renderer,
+ vertices_r,
+ vertices_l,
+ mano_faces_r,
+ mano_faces_l,
+ vertices_o,
+ faces_o,
+ r_valid,
+ l_valid,
+ K,
+ img,
+):
+ # render 3d meshes
+ mesh_r = Mesh(v=vertices_r, f=mano_faces_r)
+ mesh_l = Mesh(v=vertices_l, f=mano_faces_l)
+ mesh_o = Mesh(v=thing.thing2np(vertices_o), f=thing.thing2np(faces_o))
+
+ # render only valid meshes
+ meshes = []
+ mesh_names = []
+ if r_valid:
+ meshes.append(mesh_r)
+ mesh_names.append("right")
+
+ if l_valid:
+ meshes.append(mesh_l)
+ mesh_names.append("left")
+ meshes = meshes + [mesh_o]
+ mesh_names = mesh_names + ["object"]
+
+ materials = [color2material(mesh_color_dict[name]) for name in mesh_names]
+
+ # render in image space
+ render_img_img = renderer.render_meshes_pose(
+ cam_transl=None,
+ meshes=meshes,
+ image=img,
+ materials=materials,
+ sideview_angle=None,
+ K=K,
+ )
+ render_img_list = [render_img_img]
+
+ # render rotated meshes
+ for angle in list(np.linspace(45, 300, 3)):
+ render_img_angle = renderer.render_meshes_pose(
+ cam_transl=None,
+ meshes=meshes,
+ image=None,
+ materials=materials,
+ sideview_angle=angle,
+ K=K,
+ )
+ render_img_list.append(render_img_angle)
+
+ # cat all images
+ render_img = np.concatenate(render_img_list, axis=0)
+ return render_img
+
+
+def visualize_rends(renderer, vis_dict, max_examples):
+ # render meshes
+
+ # unpack data
+ image_ids = vis_dict["vis.image_ids"]
+ right_valid = vis_dict["targets.right_valid"].bool()
+ left_valid = vis_dict["targets.left_valid"].bool()
+ images = vis_dict["vis.images"].permute(0, 2, 3, 1).numpy()
+ gt_vertices_r_cam = vis_dict["targets.mano.v3d.cam.r"]
+ gt_vertices_l_cam = vis_dict["targets.mano.v3d.cam.l"]
+ mano_faces_r = vis_dict["meta_info.mano.faces.r"]
+ mano_faces_l = vis_dict["meta_info.mano.faces.l"]
+ pred_vertices_r_cam = vis_dict["pred.mano.v3d.cam.r"]
+ pred_vertices_l_cam = vis_dict["pred.mano.v3d.cam.l"]
+
+ # object
+ gt_obj_v_cam = unpad_vtensor(
+ vis_dict["targets.object.v.cam"], vis_dict["targets.object.v_len"]
+ ) # meter
+ pred_obj_v_cam = unpad_vtensor(
+ vis_dict["pred.object.v.cam"], vis_dict["pred.object.v_len"]
+ )
+ pred_obj_f = unpad_vtensor(vis_dict["pred.object.f"], vis_dict["pred.object.f_len"])
+ K = vis_dict["meta_info.intrinsics"]
+
+ # rendering
+ im_list = []
+ for idx in range(min(len(image_ids), max_examples)):
+ r_valid = right_valid[idx]
+ l_valid = left_valid[idx]
+ K_i = K[idx]
+ image_id = image_ids[idx]
+
+ # render gt
+ image_list = []
+ image_list.append(images[idx])
+ image_gt = visualize_rend(
+ renderer,
+ gt_vertices_r_cam[idx],
+ gt_vertices_l_cam[idx],
+ mano_faces_r,
+ mano_faces_l,
+ gt_obj_v_cam[idx],
+ pred_obj_f[idx],
+ r_valid,
+ l_valid,
+ K_i,
+ images[idx],
+ )
+ image_list.append(image_gt)
+
+ # render pred
+ image_pred = visualize_rend(
+ renderer,
+ pred_vertices_r_cam[idx],
+ pred_vertices_l_cam[idx],
+ mano_faces_r,
+ mano_faces_l,
+ pred_obj_v_cam[idx],
+ pred_obj_f[idx],
+ r_valid,
+ l_valid,
+ K_i,
+ images[idx],
+ )
+ image_list.append(image_pred)
+
+ # stack images into one
+ image_pred = vis_utils.im_list_to_plt(
+ image_list,
+ figsize=(15, 8),
+ title_list=["input image", "GT", "pred w/ pred_cam_t"],
+ )
+ im_list.append(
+ {
+ "fig_name": f"{image_id}__rend_rvalid={r_valid}, lvalid={l_valid} ",
+ "im": image_pred,
+ }
+ )
+ return im_list
+
+
+def visualize_all(vis_dict, max_examples, renderer, postfix, no_tqdm):
+ # unpack
+ image_ids = [
+ "/".join(key.split("/")[-5:]).replace(".jpg", "")
+ for key in vis_dict["meta_info.imgname"]
+ ]
+ images = denormalize_images(vis_dict["inputs.img"])
+ vis_dict.pop("inputs.img", None)
+ vis_dict["vis.images"] = images
+ vis_dict["vis.image_ids"] = image_ids
+
+ # render 3D meshes
+ im_list = visualize_rends(renderer, vis_dict, max_examples)
+
+ # visualize keypoints
+ im_list_kp_gt = visualize_kps(vis_dict, "targets", max_examples)
+ im_list_kp_pred = visualize_kps(vis_dict, "pred", max_examples)
+
+ # concat side by side pred and gt
+ for im_gt, im_pred in zip(im_list_kp_gt, im_list_kp_pred):
+ im = {
+ "fig_name": im_gt["fig_name"],
+ "im": vis_utils.concat_pil_images([im_gt["im"], im_pred["im"]]),
+ }
+ im_list.append(im)
+
+ # append the postfix to each figure name (entries are mutated in place)
+ im_list_postfix = []
+ for im in im_list:
+ im["fig_name"] += postfix
+ im_list_postfix.append(im)
+
+ return im_list
diff --git a/src/callbacks/vis/visualize_field.py b/src/callbacks/vis/visualize_field.py
new file mode 100644
index 0000000..e4e5d7d
--- /dev/null
+++ b/src/callbacks/vis/visualize_field.py
@@ -0,0 +1,270 @@
+import copy
+
+import matplotlib
+import numpy as np
+import torch
+
+import common.thing as thing
+import common.vis_utils as vis_utils
+from common.data_utils import denormalize_images
+from common.mesh import Mesh
+from common.torch_utils import unpad_vtensor
+
+cmap = matplotlib.cm.get_cmap("plasma")
+
+
+def dist2vc_hands_cnt(contact_r):
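+ # map distances in [0, 10cm] to vertex colors: 0cm -> 1 (near), >=10cm -> 0 (far),
+ # then apply the plasma colormap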
+ contact_r = torch.clamp(contact_r.clone(), 0, 0.1) / 0.1
+ contact_r = 1 - contact_r
+ vc = cmap(contact_r)
+ return vc
+
+
+def dist2vc_hands(contact_r, decision_bnd):
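+ # NOTE: decision_bnd (and norm_f below) are accepted but unused; coloring is always continuous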
+ return dist2vc_hands_cnt(contact_r)
+
+
+def dist2vc_obj(contact_t, norm_f):
+ return dist2vc_hands(contact_t, norm_f)
+
+
+def render_result(
+ renderer,
+ vertices_r,
+ vertices_l,
+ vc_r,
+ vc_l,
+ mano_faces_r,
+ mano_faces_l,
+ vertices_t,
+ vc_t,
+ faces_t,
+ r_valid,
+ l_valid,
+ K,
+ img,
+):
+ img = img.permute(1, 2, 0).cpu().numpy()
+ mesh_top = Mesh(
+ v=thing.thing2np(vertices_t),
+ f=thing.thing2np(faces_t),
+ vc=vc_t,
+ )
+
+ # render only valid meshes
+ meshes = []
+ mesh_names = []
+ if r_valid:
+ mesh_r = Mesh(
+ v=vertices_r,
+ f=mano_faces_r,
+ vc=vc_r,
+ )
+ meshes.append(mesh_r)
+ mesh_names.append("right")
+
+ if l_valid:
+ mesh_l = Mesh(v=vertices_l, f=mano_faces_l, vc=vc_l)
+
+ meshes.append(mesh_l)
+ mesh_names.append("left")
+
+ meshes = meshes + [mesh_top]
+ mesh_names = mesh_names + ["object"]
+
+ # render meshes
+ render_img_img = render_meshes(
+ renderer, meshes, mesh_names, K, img, sideview_angle=None
+ )
+
+ # render in different angles
+ render_img_angles = []
+ for angle in list(np.linspace(45, 300, 3)):
+ render_img_angle = render_meshes(
+ renderer, meshes, mesh_names, K, img=None, sideview_angle=angle
+ )
+ render_img_angles.append(render_img_angle)
+ render_img_angles = [render_img_img] + render_img_angles
+ render_img = np.concatenate(render_img_angles, axis=0)
+ return render_img
+
+
+def render_meshes(renderer, meshes, mesh_names, K, img, sideview_angle):
+ materials = None
+ rend_img = renderer.render_meshes_pose(
+ cam_transl=None,
+ meshes=meshes,
+ image=img,
+ materials=materials,
+ sideview_angle=sideview_angle,
+ K=K,
+ )
+ return rend_img
+
+
+def visualize_all(_vis_dict, max_examples, renderer, postfix, no_tqdm):
+ # unpack
+ vis_dict = copy.deepcopy(_vis_dict)
+ K = vis_dict["meta_info.intrinsics"]
+ image_ids = [
+ "/".join(key.split("/")[-5:]).replace(".jpg", "")
+ for key in vis_dict["meta_info.imgname"]
+ ]
+ images = denormalize_images(vis_dict["inputs.img"])
+ vis_dict.pop("inputs.img", None)
+ vis_dict["vis.images"] = images
+ vis_dict["vis.image_ids"] = image_ids
+
+ # unpack MANO
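+ # the field model predicts distances rather than meshes, so GT vertices are
+ # rendered for both "pred" and GT; only the distance-derived vertex colors differ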
+ pred_vertices_r_cam = vis_dict["targets.mano.v3d.cam.r"]
+ pred_vertices_l_cam = vis_dict["targets.mano.v3d.cam.l"]
+ gt_vertices_r_cam = vis_dict["targets.mano.v3d.cam.r"]
+ gt_vertices_l_cam = vis_dict["targets.mano.v3d.cam.l"]
+ mano_faces_r = vis_dict["meta_info.mano.faces.r"]
+ mano_faces_l = vis_dict["meta_info.mano.faces.l"]
+
+ # unpack object
+ gt_obj_vtop_cam = unpad_vtensor(
+ vis_dict["targets.object.v.cam"], vis_dict["targets.object.v_len"]
+ )
+ gt_obj_ftop = unpad_vtensor(
+ vis_dict["targets.object.f"], vis_dict["targets.object.f_len"]
+ )
+ pred_obj_vtop_cam = unpad_vtensor(
+ vis_dict["targets.object.v.cam"],
+ vis_dict["targets.object.v_len"],
+ )
+
+ # valid flag
+ right_valid = vis_dict["targets.right_valid"].bool()
+ left_valid = vis_dict["targets.left_valid"].bool()
+
+ # unpack dist
+ gt_dist_r = vis_dict["targets.dist.ro"]
+ gt_dist_l = vis_dict["targets.dist.lo"]
+ gt_dist_or = vis_dict["targets.dist.or"]
+ gt_dist_ol = vis_dict["targets.dist.ol"]
+
+ pred_dist_r = vis_dict["pred.dist.ro"]
+ pred_dist_l = vis_dict["pred.dist.lo"]
+ pred_dist_or = vis_dict["pred.dist.or"]
+ pred_dist_ol = vis_dict["pred.dist.ol"]
+
+ im_list = []
+ # rendering
+ for idx in range(min(len(image_ids), max_examples)):
+ r_valid = right_valid[idx]
+ l_valid = left_valid[idx]
+ K_i = K[idx]
+ image_id = image_ids[idx]
+
+ top_len = vis_dict["targets.object.v_len"][idx]
+
+ # dist to vertex color
+ max_dist = 0.10
+ gt_vc_r = dist2vc_hands(gt_dist_r[idx], max_dist)
+ gt_vc_l = dist2vc_hands(gt_dist_l[idx], max_dist)
+ gt_vc_or = dist2vc_obj(gt_dist_or[idx][:top_len], max_dist)
+ gt_vc_ol = dist2vc_obj(gt_dist_ol[idx][:top_len], max_dist)
+
+ pred_vc_r = dist2vc_hands(pred_dist_r[idx], max_dist)
+ pred_vc_l = dist2vc_hands(pred_dist_l[idx], max_dist)
+ pred_vc_or = dist2vc_obj(pred_dist_or[idx][:top_len], max_dist)
+ pred_vc_ol = dist2vc_obj(pred_dist_ol[idx][:top_len], max_dist)
+
+ # render GT
+ image_list = []
+ image_list.append(images[idx].permute(1, 2, 0).cpu().numpy())
+ # render one hand at a time
+ image_gt_r = render_result(
+ renderer,
+ gt_vertices_r_cam[idx],
+ gt_vertices_l_cam[idx],
+ gt_vc_r,
+ gt_vc_l,
+ mano_faces_r,
+ mano_faces_l,
+ gt_obj_vtop_cam[idx],
+ gt_vc_or,
+ gt_obj_ftop[idx],
+ r_valid,
+ False,
+ K_i,
+ images[idx],
+ )
+ image_gt_l = render_result(
+ renderer,
+ gt_vertices_r_cam[idx],
+ gt_vertices_l_cam[idx],
+ gt_vc_r,
+ gt_vc_l,
+ mano_faces_r,
+ mano_faces_l,
+ gt_obj_vtop_cam[idx],
+ gt_vc_ol,
+ gt_obj_ftop[idx],
+ False,
+ l_valid,
+ K_i,
+ images[idx],
+ )
+
+ # prediction
+ image_pred_r = render_result(
+ renderer,
+ pred_vertices_r_cam[idx],
+ pred_vertices_l_cam[idx],
+ pred_vc_r,
+ pred_vc_l,
+ mano_faces_r,
+ mano_faces_l,
+ pred_obj_vtop_cam[idx],
+ pred_vc_or,
+ gt_obj_ftop[idx],
+ r_valid,
+ False,
+ K_i,
+ images[idx],
+ )
+ image_pred_l = render_result(
+ renderer,
+ pred_vertices_r_cam[idx],
+ pred_vertices_l_cam[idx],
+ pred_vc_r,
+ pred_vc_l,
+ mano_faces_r,
+ mano_faces_l,
+ pred_obj_vtop_cam[idx],
+ pred_vc_ol,
+ gt_obj_ftop[idx],
+ False,
+ l_valid,
+ K_i,
+ images[idx],
+ )
+
+ image_list.append(image_pred_r)
+ image_list.append(image_gt_r)
+
+ image_list.append(image_pred_l)
+ image_list.append(image_gt_l)
+
+ image_pred = vis_utils.im_list_to_plt(
+ image_list,
+ figsize=(15, 8),
+ title_list=[
+ "input image",
+ "PRED-R",
+ "GT-R",
+ "PRED-L",
+ "GT-L",
+ ],
+ )
+ im_list.append({"fig_name": f"{image_id}", "im": image_pred})
+
+ # append the postfix to each figure name (entries are mutated in place)
+ im_list_postfix = []
+ for im in im_list:
+ im["fig_name"] += postfix
+ im_list_postfix.append(im)
+ return im_list
diff --git a/src/datasets/arctic_dataset.py b/src/datasets/arctic_dataset.py
new file mode 100644
index 0000000..0790f21
--- /dev/null
+++ b/src/datasets/arctic_dataset.py
@@ -0,0 +1,461 @@
+import json
+import os.path as op
+
+import numpy as np
+import torch
+from loguru import logger
+from torch.utils.data import Dataset
+from torchvision.transforms import Normalize
+
+import common.data_utils as data_utils
+import common.rot as rot
+import common.transforms as tf
+import src.datasets.dataset_utils as dataset_utils
+from common.data_utils import read_img
+from common.object_tensors import ObjectTensors
+from src.datasets.dataset_utils import get_valid, pad_jts2d
+
+
+class ArcticDataset(Dataset):
+ def __getitem__(self, index):
+ imgname = self.imgnames[index]
+ data = self.getitem(imgname)
+ return data
+
+ def getitem(self, imgname, load_rgb=True):
+ args = self.args
+ # LOADING START
+ speedup = args.speedup
+ sid, seq_name, view_idx, image_idx = imgname.split("/")[-4:]
+ obj_name = seq_name.split("_")[0]
+ view_idx = int(view_idx)
+
+ seq_data = self.data[f"{sid}/{seq_name}"]
+
+ data_cam = seq_data["cam_coord"]
+ data_2d = seq_data["2d"]
+ data_bbox = seq_data["bbox"]
+ data_params = seq_data["params"]
+
+ vidx = int(image_idx.split(".")[0]) - self.ioi_offset[sid]
+ vidx, is_valid, right_valid, left_valid = get_valid(
+ data_2d, data_cam, vidx, view_idx, imgname
+ )
+
+ if view_idx == 0:
+ intrx = data_params["K_ego"][vidx].copy()
+ else:
+ intrx = np.array(self.intris_mat[sid][view_idx - 1])
+
+ # hands
+ joints2d_r = pad_jts2d(data_2d["joints.right"][vidx, view_idx].copy())
+ joints3d_r = data_cam["joints.right"][vidx, view_idx].copy()
+
+ joints2d_l = pad_jts2d(data_2d["joints.left"][vidx, view_idx].copy())
+ joints3d_l = data_cam["joints.left"][vidx, view_idx].copy()
+
+ pose_r = data_params["pose_r"][vidx].copy()
+ betas_r = data_params["shape_r"][vidx].copy()
+ pose_l = data_params["pose_l"][vidx].copy()
+ betas_l = data_params["shape_l"][vidx].copy()
+
+ # distortion parameters for egocam rendering
+ dist = data_params["dist"][vidx].copy()
+ # NOTE:
+ # kp2d and kp3d are in undistorted space;
+ # thus, evaluation results are in the undistorted (non-curved) space.
+ # The dist parameters can be used for rendering in visualization.
+
+ # objects
+ bbox2d = pad_jts2d(data_2d["bbox3d"][vidx, view_idx].copy())
+ bbox3d = data_cam["bbox3d"][vidx, view_idx].copy()
+ bbox2d_t = bbox2d[:8]
+ bbox2d_b = bbox2d[8:]
+ bbox3d_t = bbox3d[:8]
+ bbox3d_b = bbox3d[8:]
+
+ kp2d = pad_jts2d(data_2d["kp3d"][vidx, view_idx].copy())
+ kp3d = data_cam["kp3d"][vidx, view_idx].copy()
+ kp2d_t = kp2d[:16]
+ kp2d_b = kp2d[16:]
+ kp3d_t = kp3d[:16]
+ kp3d_b = kp3d[16:]
+
+ obj_radian = data_params["obj_arti"][vidx].copy()
+
+ image_size = self.image_sizes[sid][view_idx]
+ image_size = {"width": image_size[0], "height": image_size[1]}
+
+ bbox = data_bbox[vidx, view_idx] # original bbox
+ is_egocam = "/0/" in imgname
+
+ # LOADING END
+
+ # SPEEDUP PROCESS
+ (
+ joints2d_r,
+ joints2d_l,
+ kp2d_b,
+ kp2d_t,
+ bbox2d_b,
+ bbox2d_t,
+ bbox,
+ ) = dataset_utils.transform_2d_for_speedup(
+ speedup,
+ is_egocam,
+ joints2d_r,
+ joints2d_l,
+ kp2d_b,
+ kp2d_t,
+ bbox2d_b,
+ bbox2d_t,
+ bbox,
+ args.ego_image_scale,
+ )
+ img_status = True
+ if load_rgb:
+ if speedup:
+ imgname = imgname.replace("/images/", "/cropped_images/")
+ imgname = imgname.replace(
+ "/arctic_data/", "/data/arctic_data/data/"
+ ).replace("/data/data/", "/data/")
+ # imgname = imgname.replace("/arctic_data/", "/data/arctic_data/")
+ cv_img, img_status = read_img(imgname, (2800, 2000, 3))
+ else:
+ norm_img = None
+
+ center = [bbox[0], bbox[1]]
+ scale = bbox[2]
+
+ # augment parameters
+ augm_dict = data_utils.augm_params(
+ self.aug_data,
+ args.flip_prob,
+ args.noise_factor,
+ args.rot_factor,
+ args.scale_factor,
+ )
+
+ use_gt_k = args.use_gt_k
+ if is_egocam:
+ # no scaling for egocam to make intrinsics consistent
+ use_gt_k = True
+ augm_dict["sc"] = 1.0
+
+ joints2d_r = data_utils.j2d_processing(
+ joints2d_r, center, scale, augm_dict, args.img_res
+ )
+ joints2d_l = data_utils.j2d_processing(
+ joints2d_l, center, scale, augm_dict, args.img_res
+ )
+ kp2d_b = data_utils.j2d_processing(
+ kp2d_b, center, scale, augm_dict, args.img_res
+ )
+ kp2d_t = data_utils.j2d_processing(
+ kp2d_t, center, scale, augm_dict, args.img_res
+ )
+ bbox2d_b = data_utils.j2d_processing(
+ bbox2d_b, center, scale, augm_dict, args.img_res
+ )
+ bbox2d_t = data_utils.j2d_processing(
+ bbox2d_t, center, scale, augm_dict, args.img_res
+ )
+ bbox2d = np.concatenate((bbox2d_t, bbox2d_b), axis=0)
+ kp2d = np.concatenate((kp2d_t, kp2d_b), axis=0)
+
+ # data augmentation: image
+ if load_rgb:
+ img = data_utils.rgb_processing(
+ self.aug_data,
+ cv_img,
+ center,
+ scale,
+ augm_dict,
+ img_res=args.img_res,
+ )
+ img = torch.from_numpy(img).float()
+ norm_img = self.normalize_img(img)
+
+ # exporting starts
+ inputs = {}
+ targets = {}
+ meta_info = {}
+ inputs["img"] = norm_img
+ meta_info["imgname"] = imgname
+ rot_r = data_cam["rot_r_cam"][vidx, view_idx]
+ rot_l = data_cam["rot_l_cam"][vidx, view_idx]
+
+ pose_r = np.concatenate((rot_r, pose_r), axis=0)
+ pose_l = np.concatenate((rot_l, pose_l), axis=0)
+
+ # hands
+ targets["mano.pose.r"] = torch.from_numpy(
+ data_utils.pose_processing(pose_r, augm_dict)
+ ).float()
+ targets["mano.pose.l"] = torch.from_numpy(
+ data_utils.pose_processing(pose_l, augm_dict)
+ ).float()
+ targets["mano.beta.r"] = torch.from_numpy(betas_r).float()
+ targets["mano.beta.l"] = torch.from_numpy(betas_l).float()
+ targets["mano.j2d.norm.r"] = torch.from_numpy(joints2d_r[:, :2]).float()
+ targets["mano.j2d.norm.l"] = torch.from_numpy(joints2d_l[:, :2]).float()
+
+ # object
+ targets["object.kp3d.full.b"] = torch.from_numpy(kp3d_b[:, :3]).float()
+ targets["object.kp2d.norm.b"] = torch.from_numpy(kp2d_b[:, :2]).float()
+ targets["object.kp3d.full.t"] = torch.from_numpy(kp3d_t[:, :3]).float()
+ targets["object.kp2d.norm.t"] = torch.from_numpy(kp2d_t[:, :2]).float()
+
+ targets["object.bbox3d.full.b"] = torch.from_numpy(bbox3d_b[:, :3]).float()
+ targets["object.bbox2d.norm.b"] = torch.from_numpy(bbox2d_b[:, :2]).float()
+ targets["object.bbox3d.full.t"] = torch.from_numpy(bbox3d_t[:, :3]).float()
+ targets["object.bbox2d.norm.t"] = torch.from_numpy(bbox2d_t[:, :2]).float()
+ targets["object.radian"] = torch.FloatTensor(np.array(obj_radian))
+
+ targets["object.kp2d.norm"] = torch.from_numpy(kp2d[:, :2]).float()
+ targets["object.bbox2d.norm"] = torch.from_numpy(bbox2d[:, :2]).float()
+
+ # compute RT from cano space to augmented space
+ # this transform match j3d processing
+ obj_idx = self.obj_names.index(obj_name)
+ meta_info["kp3d.cano"] = self.kp3d_cano[obj_idx] / 1000 # meter
+ kp3d_cano = meta_info["kp3d.cano"].numpy()
+ kp3d_target = targets["object.kp3d.full.b"][:, :3].numpy()
+
+ # rotate canonical kp3d to match original image
+ R, _ = tf.solve_rigid_tf_np(kp3d_cano, kp3d_target)
+ obj_rot = (
+ rot.batch_rot2aa(torch.from_numpy(R).float().view(1, 3, 3)).view(3).numpy()
+ )
+
+ # multiply rotation from data augmentation
+ obj_rot_aug = rot.rot_aa(obj_rot, augm_dict["rot"])
+ targets["object.rot"] = torch.FloatTensor(obj_rot_aug).view(1, 3)
+
+ # full image camera coord
+ targets["mano.j3d.full.r"] = torch.FloatTensor(joints3d_r[:, :3])
+ targets["mano.j3d.full.l"] = torch.FloatTensor(joints3d_l[:, :3])
+ targets["object.kp3d.full.b"] = torch.FloatTensor(kp3d_b[:, :3])
+
+ meta_info["query_names"] = obj_name
+ meta_info["window_size"] = torch.LongTensor(np.array([args.window_size]))
+
+ # scale and center in the original image space
+ scale_original = max([image_size["width"], image_size["height"]]) / 200.0
+ center_original = [image_size["width"] / 2.0, image_size["height"] / 2.0]
+ intrx = data_utils.get_aug_intrix(
+ intrx,
+ args.focal_length,
+ args.img_res,
+ use_gt_k,
+ center_original[0],
+ center_original[1],
+ augm_dict["sc"] * scale_original,
+ )
+
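+ # cache the intrinsics of the first egocentric sample and reuse them,
+ # so all egocentric frames share a single K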
+ if is_egocam and self.egocam_k is None:
+ self.egocam_k = intrx
+ elif is_egocam and self.egocam_k is not None:
+ intrx = self.egocam_k
+
+ meta_info["intrinsics"] = torch.FloatTensor(intrx)
+ if not is_egocam:
+ dist = dist * float("nan")
+ meta_info["dist"] = torch.FloatTensor(dist)
+ meta_info["center"] = np.array(center, dtype=np.float32)
+ meta_info["is_flipped"] = augm_dict["flip"]
+ meta_info["rot_angle"] = np.float32(augm_dict["rot"])
+ # meta_info["sample_index"] = index
+
+ # root and at least 3 joints inside image
+ targets["is_valid"] = float(is_valid)
+ targets["left_valid"] = float(left_valid) * float(is_valid)
+ targets["right_valid"] = float(right_valid) * float(is_valid)
+ targets["joints_valid_r"] = np.ones(21) * targets["right_valid"]
+ targets["joints_valid_l"] = np.ones(21) * targets["left_valid"]
+
+ return inputs, targets, meta_info
+
+ def _process_imgnames(self, seq, split):
+ imgnames = self.imgnames
+ if seq is not None:
+ imgnames = [imgname for imgname in imgnames if "/" + seq + "/" in imgname]
+ assert len(imgnames) == len(set(imgnames))
+ imgnames = dataset_utils.downsample(imgnames, split)
+ self.imgnames = imgnames
+
+ def _load_data(self, args, split, seq):
+ self.args = args
+ self.split = split
+ self.aug_data = split.endswith("train")
+ # turn off augmentation during inference on a specific sequence
+ if seq is not None:
+ self.aug_data = False
+ self.normalize_img = Normalize(mean=args.img_norm_mean, std=args.img_norm_std)
+
+ if "train" in split:
+ self.mode = "train"
+ elif "val" in split:
+ self.mode = "val"
+ elif "test" in split:
+ self.mode = "test"
+
+ short_split = split.replace("mini", "").replace("tiny", "").replace("small", "")
+ data_p = f"./data/arctic_data/data/splits/{args.setup}_{short_split}.npy"
+ logger.info(f"Loading {data_p}")
+ data = np.load(data_p, allow_pickle=True).item()
+
+ self.data = data["data_dict"]
+ self.imgnames = data["imgnames"]
+
+ with open("./data/arctic_data/data/meta/misc.json", "r") as f:
+ misc = json.load(f)
+
+ # unpack
+ subjects = list(misc.keys())
+ intris_mat = {}
+ world2cam = {}
+ image_sizes = {}
+ ioi_offset = {}
+ for subject in subjects:
+ world2cam[subject] = misc[subject]["world2cam"]
+ intris_mat[subject] = misc[subject]["intris_mat"]
+ image_sizes[subject] = misc[subject]["image_size"]
+ ioi_offset[subject] = misc[subject]["ioi_offset"]
+
+ self.world2cam = world2cam
+ self.intris_mat = intris_mat
+ self.image_sizes = image_sizes
+ self.ioi_offset = ioi_offset
+
+ object_tensors = ObjectTensors()
+ self.kp3d_cano = object_tensors.obj_tensors["kp_bottom"]
+ self.obj_names = object_tensors.obj_tensors["names"]
+ self.egocam_k = None
+
+ def __init__(self, args, split, seq=None):
+ self._load_data(args, split, seq)
+ self._process_imgnames(seq, split)
+ logger.info(
+ f"ImageDataset Loaded {self.split} split, num samples {len(self.imgnames)}"
+ )
+
+ def __len__(self):
+ return len(self.imgnames)
+
+ def getitem_eval(self, imgname, load_rgb=True):
+ args = self.args
+ # LOADING START
+ speedup = args.speedup
+ sid, seq_name, view_idx, image_idx = imgname.split("/")[-4:]
+ obj_name = seq_name.split("_")[0]
+ view_idx = int(view_idx)
+
+ seq_data = self.data[f"{sid}/{seq_name}"]
+
+ data_bbox = seq_data["bbox"]
+ data_params = seq_data["params"]
+
+ vidx = int(image_idx.split(".")[0]) - self.ioi_offset[sid]
+
+ if view_idx == 0:
+ intrx = data_params["K_ego"][vidx].copy()
+ else:
+ intrx = np.array(self.intris_mat[sid][view_idx - 1])
+
+ # distortion parameters for egocam rendering
+ dist = data_params["dist"][vidx].copy()
+
+ bbox = data_bbox[vidx, view_idx] # original bbox
+ is_egocam = "/0/" in imgname
+
+ image_size = self.image_sizes[sid][view_idx]
+ image_size = {"width": image_size[0], "height": image_size[1]}
+
+ # SPEEDUP PROCESS
+ bbox = dataset_utils.transform_bbox_for_speedup(
+ speedup,
+ is_egocam,
+ bbox,
+ args.ego_image_scale,
+ )
+ img_status = True
+ if load_rgb:
+ if speedup:
+ imgname = imgname.replace("/images/", "/cropped_images/")
+ imgname = imgname.replace("/arctic_data/", "/data/arctic_data/data/")
+ cv_img, img_status = read_img(imgname, (2800, 2000, 3))
+ else:
+ norm_img = None
+
+ center = [bbox[0], bbox[1]]
+ scale = bbox[2]
+ self.aug_data = False
+
+ # sample augmentation parameters
+ augm_dict = data_utils.augm_params(
+ self.aug_data,
+ args.flip_prob,
+ args.noise_factor,
+ args.rot_factor,
+ args.scale_factor,
+ )
+
+ use_gt_k = args.use_gt_k
+ if is_egocam:
+ # no scaling for egocam to make intrinsics consistent
+ use_gt_k = True
+ augm_dict["sc"] = 1.0
+
+ # data augmentation: image
+ if load_rgb:
+ img = data_utils.rgb_processing(
+ self.aug_data,
+ cv_img,
+ center,
+ scale,
+ augm_dict,
+ img_res=args.img_res,
+ )
+ img = torch.from_numpy(img).float()
+ norm_img = self.normalize_img(img)
+
+ # exporting starts
+ inputs = {}
+ targets = {}
+ meta_info = {}
+ inputs["img"] = norm_img
+ meta_info["imgname"] = imgname
+
+ meta_info["query_names"] = obj_name
+ meta_info["window_size"] = torch.LongTensor(np.array([args.window_size]))
+
+ # scale and center in the original image space
+ scale_original = max([image_size["width"], image_size["height"]]) / 200.0
+ center_original = [image_size["width"] / 2.0, image_size["height"] / 2.0]
+ intrx = data_utils.get_aug_intrix(
+ intrx,
+ args.focal_length,
+ args.img_res,
+ use_gt_k,
+ center_original[0],
+ center_original[1],
+ augm_dict["sc"] * scale_original,
+ )
+
+ if is_egocam and self.egocam_k is None:
+ self.egocam_k = intrx
+ elif is_egocam and self.egocam_k is not None:
+ intrx = self.egocam_k
+
+ meta_info["intrinsics"] = torch.FloatTensor(intrx)
+ if not is_egocam:
+ dist = dist * float("nan")
+
+ meta_info["dist"] = torch.FloatTensor(dist)
+ meta_info["center"] = np.array(center, dtype=np.float32)
+ meta_info["is_flipped"] = augm_dict["flip"]
+ meta_info["rot_angle"] = np.float32(augm_dict["rot"])
+ return inputs, targets, meta_info
diff --git a/src/datasets/arctic_dataset_eval.py b/src/datasets/arctic_dataset_eval.py
new file mode 100644
index 0000000..0695d1c
--- /dev/null
+++ b/src/datasets/arctic_dataset_eval.py
@@ -0,0 +1,6 @@
+from src.datasets.arctic_dataset import ArcticDataset
+
+
+class ArcticDatasetEval(ArcticDataset):
+ def getitem(self, imgname, load_rgb=True):
+ return self.getitem_eval(imgname, load_rgb=load_rgb)
diff --git a/src/datasets/dataset_utils.py b/src/datasets/dataset_utils.py
new file mode 100644
index 0000000..8e01144
--- /dev/null
+++ b/src/datasets/dataset_utils.py
@@ -0,0 +1,165 @@
+import os
+import os.path as op
+from glob import glob
+
+import numpy as np
+from loguru import logger
+
+import common.data_utils as data_utils
+from common.sys_utils import copy_repo
+
+
+def transform_bbox_for_speedup(
+ speedup,
+ is_egocam,
+ _bbox_crop,
+ ego_image_scale,
+):
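+ """Map a full-image bbox into the pre-cropped image coordinate system used when `speedup` is enabled."""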
+ bbox_crop = np.array(_bbox_crop)
+ # bbox is normalized in scale
+
+ if speedup:
+ if is_egocam:
+ bbox_crop = [num * ego_image_scale for num in bbox_crop]
+ else:
+ # change to new coord system
+ bbox_crop[0] = 500
+ bbox_crop[1] = 500
+ bbox_crop[2] = 1000 / (1.5 * 200)
+
+ # bbox is normalized in scale
+ return bbox_crop
+
+
+def transform_2d_for_speedup(
+ speedup,
+ is_egocam,
+ _joints2d_r,
+ _joints2d_l,
+ _kp2d_b,
+ _kp2d_t,
+ _bbox2d_b,
+ _bbox2d_t,
+ _bbox_crop,
+ ego_image_scale,
+):
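+ """Map 2D keypoints and the crop bbox into the pre-cropped image coordinate system when `speedup` is enabled."""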
+ joints2d_r = np.copy(_joints2d_r)
+ joints2d_l = np.copy(_joints2d_l)
+ kp2d_b = np.copy(_kp2d_b)
+ kp2d_t = np.copy(_kp2d_t)
+ bbox2d_b = np.copy(_bbox2d_b)
+ bbox2d_t = np.copy(_bbox2d_t)
+ bbox_crop = np.array(_bbox_crop)
+ # bbox is normalized in scale
+
+ if speedup:
+ if is_egocam:
+ joints2d_r[:, :2] *= ego_image_scale
+ joints2d_l[:, :2] *= ego_image_scale
+ kp2d_b[:, :2] *= ego_image_scale
+ kp2d_t[:, :2] *= ego_image_scale
+ bbox2d_b[:, :2] *= ego_image_scale
+ bbox2d_t[:, :2] *= ego_image_scale
+
+ bbox_crop = [num * ego_image_scale for num in bbox_crop]
+ else:
+ # change to new coord system
+ joints2d_r = data_utils.transform_kp2d(joints2d_r, bbox_crop)
+ joints2d_l = data_utils.transform_kp2d(joints2d_l, bbox_crop)
+ kp2d_b = data_utils.transform_kp2d(kp2d_b, bbox_crop)
+ kp2d_t = data_utils.transform_kp2d(kp2d_t, bbox_crop)
+ bbox2d_b = data_utils.transform_kp2d(bbox2d_b, bbox_crop)
+ bbox2d_t = data_utils.transform_kp2d(bbox2d_t, bbox_crop)
+
+ bbox_crop[0] = 500
+ bbox_crop[1] = 500
+ bbox_crop[2] = 1000 / (1.5 * 200)
+
+ # bbox is normalized in scale
+ return (
+ joints2d_r,
+ joints2d_l,
+ kp2d_b,
+ kp2d_t,
+ bbox2d_b,
+ bbox2d_t,
+ bbox_crop,
+ )
+
+
+def copy_repo_arctic(exp_key):
+ dst_folder = f"/is/cluster/work/fzicong/chiral_data/cache/logs/{exp_key}/repo"
+
+ if not op.exists(dst_folder):
+ logger.info("Copying repo")
+ src_files = glob("./*")
+ os.makedirs(dst_folder)
+ filter_keywords = [".ipynb", ".obj", ".pt", "run_scripts", ".sub", ".txt"]
+ copy_repo(src_files, dst_folder, filter_keywords)
+ logger.info("Done")
+
+
+def get_num_images(split, num_images):
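+ """Return how many samples a mini/tiny/small split keeps; full splits are unchanged."""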
+ if split in ["train", "val", "test"]:
+ return num_images
+
+ if split == "smalltrain":
+ return 100000
+
+ if split == "tinytrain":
+ return 12000
+
+ if split == "minitrain":
+ return 300
+
+ if split == "smallval":
+ return 12000
+
+ if split == "tinyval":
+ return 500
+
+ if split == "minival":
+ return 80
+
+ if split == "smalltest":
+ return 12000
+
+ if split == "tinytest":
+ return 6000
+
+ if split == "minitest":
+ return 200
+
+ assert False, f"Invalid split {split}"
+
+
+def pad_jts2d(jts):
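+ """Pad 2D joints of shape (J, 2) with a confidence column of ones, giving (J, 3)."""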
+ num_jts = jts.shape[0]
+ jts_pad = np.ones((num_jts, 3))
+ jts_pad[:, :2] = jts
+ return jts_pad
+
+
+def get_valid(data_2d, data_cam, vidx, view_idx, imgname):
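+ """Fetch frame, right-hand, and left-hand validity flags for a given frame and view."""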
+ assert (
+ vidx < data_2d["joints.right"].shape[0]
+ ), "The requested vidx does not exist in annotation"
+ is_valid = data_cam["is_valid"][vidx, view_idx]
+ right_valid = data_cam["right_valid"][vidx, view_idx]
+ left_valid = data_cam["left_valid"][vidx, view_idx]
+ return vidx, is_valid, right_valid, left_valid
+
+
+def downsample(fnames, split):
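+ """Deterministically subsample filenames for mini/tiny/small splits (fixed seed); full splits pass through."""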
+ if "small" not in split and "mini" not in split and "tiny" not in split:
+ return fnames
+ import random
+
+ random.seed(1)
+ assert (
+ random.randint(0, 100) == 17
+ ), "Same seed but different results; Subsampling might be different."
+
+ num_samples = get_num_images(split, len(fnames))
+ curr_keys = random.sample(fnames, num_samples)
+ return curr_keys
diff --git a/src/datasets/tempo_dataset.py b/src/datasets/tempo_dataset.py
new file mode 100644
index 0000000..a74aef3
--- /dev/null
+++ b/src/datasets/tempo_dataset.py
@@ -0,0 +1,105 @@
+import os.path as op
+
+import numpy as np
+import torch
+from loguru import logger
+from torch.utils.data import Dataset
+
+import common.ld_utils as ld_utils
+import src.datasets.dataset_utils as dataset_utils
+from src.datasets.arctic_dataset import ArcticDataset
+
+
+class TempoDataset(ArcticDataset):
+ def _load_data(self, args, split):
+ data_p = f"./data/arctic_data/data/feat/{args.img_feat_version}/{args.setup}_{split}.pt"
+ logger.info(f"Loading: {data_p}")
+ data = torch.load(data_p)
+ imgnames = data["imgnames"]
+ vecs_list = data["feat_vec"]
+ assert len(imgnames) == len(vecs_list)
+ vec_dict = {}
+ for imgname, vec in zip(imgnames, vecs_list):
+ key = "/".join(imgname.split("/")[-4:])
+ vec_dict[key] = vec
+ self.vec_dict = vec_dict
+
+ assert len(imgnames) == len(vec_dict.keys())
+ self.aug_data = False
+ self.window_size = args.window_size
+
+ def __init__(self, args, split, seq=None):
+ Dataset.__init__(self)
+ super()._load_data(args, split, seq)
+ self._load_data(args, split)
+
+ imgnames = list(self.vec_dict.keys())
+ imgnames = dataset_utils.downsample(imgnames, split)
+
+ self.imgnames = imgnames
+ logger.info(
+ f"TempoDataset Loaded {self.split} split, num samples {len(imgnames)}"
+ )
+
+ def __getitem__(self, index):
+ imgname = self.imgnames[index]
+ img_idx = int(op.basename(imgname).split(".")[0])
+ ind = (
+ np.arange(self.window_size) - (self.window_size - 1) / 2 + img_idx
+ ).astype(np.int64)
+ num_frames = self.data["/".join(imgname.split("/")[:2])]["params"][
+ "rot_r"
+ ].shape[0]
+ ind = np.clip(
+ ind, 10, num_frames - 10 - 1
+ ) # skip first and last 10 frames as they are not useful
+ imgnames = [op.join(op.dirname(imgname), "%05d.jpg" % (idx)) for idx in ind]
+
+ targets_list = []
+ meta_list = []
+ img_feats = []
+ inputs_list = []
+ load_rgb = self.args.method in ["tempo_ft"]
+ for imgname in imgnames:
+ img_folder = "./data/arctic_data/data/images/"
+ inputs, targets, meta_info = self.getitem(
+ op.join(img_folder, imgname), load_rgb=load_rgb
+ )
+ if load_rgb:
+ inputs_list.append(inputs)
+ else:
+ img_feats.append(self.vec_dict[imgname].type(torch.FloatTensor))
+ targets_list.append(targets)
+ meta_list.append(meta_info)
+
+ if load_rgb:
+ inputs_list = ld_utils.stack_dl(
+ ld_utils.ld2dl(inputs_list), dim=0, verbose=False
+ )
+ inputs = {"img": inputs_list["img"]}
+ else:
+ img_feats = torch.stack(img_feats, dim=0)
+ inputs = {"img_feat": img_feats}
+
+ targets_list = ld_utils.stack_dl(
+ ld_utils.ld2dl(targets_list), dim=0, verbose=False
+ )
+ meta_list = ld_utils.stack_dl(ld_utils.ld2dl(meta_list), dim=0, verbose=False)
+
+ targets_list["is_valid"] = torch.FloatTensor(np.array(targets_list["is_valid"]))
+ targets_list["left_valid"] = torch.FloatTensor(
+ np.array(targets_list["left_valid"])
+ )
+ targets_list["right_valid"] = torch.FloatTensor(
+ np.array(targets_list["right_valid"])
+ )
+ targets_list["joints_valid_r"] = torch.FloatTensor(
+ np.array(targets_list["joints_valid_r"])
+ )
+ targets_list["joints_valid_l"] = torch.FloatTensor(
+ np.array(targets_list["joints_valid_l"])
+ )
+ meta_list["center"] = torch.FloatTensor(np.array(meta_list["center"]))
+ meta_list["is_flipped"] = torch.FloatTensor(np.array(meta_list["is_flipped"]))
+ meta_list["rot_angle"] = torch.FloatTensor(np.array(meta_list["rot_angle"]))
+ return inputs, targets_list, meta_list
diff --git a/src/datasets/tempo_inference_dataset.py b/src/datasets/tempo_inference_dataset.py
new file mode 100644
index 0000000..c40bc79
--- /dev/null
+++ b/src/datasets/tempo_inference_dataset.py
@@ -0,0 +1,146 @@
+import os.path as op
+
+import numpy as np
+import torch
+from loguru import logger
+from torch.utils.data import Dataset
+
+import common.ld_utils as ld_utils
+import src.datasets.dataset_utils as dataset_utils
+from src.datasets.arctic_dataset import ArcticDataset
+
+
+def create_windows(imgnames, window_size):
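+ """Group imgnames by (subject, seq, view) and chunk each group into non-overlapping windows; the last window is padded by repeating its final frame."""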
+ def chunks(lst, n):
+ """Yield successive n-sized chunks from lst."""
+ my_chunks = [lst[i : i + n] for i in range(0, len(lst), n)]
+ if len(my_chunks[-1]) == n:
+ return my_chunks
+ last_chunk = my_chunks[-1]
+ last_element = last_chunk[-1]
+ last_chunk_pad = [last_element for _ in range(n)]
+ for idx, element in enumerate(last_chunk):
+ last_chunk_pad[idx] = element
+ my_chunks[-1] = last_chunk_pad
+ return my_chunks
+
+ img_seq_dict = {}
+ for imgname in imgnames:
+ sid, seq_name, view_idx, _ = imgname.split("/")[-4:]
+ seq_name = "/".join([sid, seq_name, view_idx])
+ if seq_name not in img_seq_dict.keys():
+ img_seq_dict[seq_name] = []
+ img_seq_dict[seq_name].append(imgname)
+
+ windows = []
+ for seq_name in img_seq_dict.keys():
+ windows.append(chunks(sorted(img_seq_dict[seq_name]), window_size))
+
+ windows = sum(windows, [])
+ return windows
+
+
+class TempoInferenceDataset(ArcticDataset):
+ def _load_data(self, args, split):
+ # load image features
+ data_p = f"./data/arctic_data/data/feat/{args.img_feat_version}/{args.setup}_{split}.pt"
+ assert op.exists(
+ data_p
+ ), f"Not found {data_p}; NOTE: only use ArcticDataset for single-frame model to evaluate and extract."
+ logger.info(f"Loading {data_p}")
+ data = torch.load(data_p)
+ imgnames = data["imgnames"]
+ vecs_list = data["feat_vec"]
+ vec_dict = {}
+ for imgname, vec in zip(imgnames, vecs_list):
+ key = "/".join(imgname.split("/")[-4:])
+ vec_dict[key] = vec
+ self.vec_dict = vec_dict
+ assert len(imgnames) == len(vec_dict.keys())
+
+ # all imgnames for this split
+ # override the original self.imgnames
+ imgnames = [
+ imgname.replace("/data/arctic_data/", "/arctic_data/")
+ for imgname in imgnames
+ ]
+ self.imgnames = imgnames
+ self.aug_data = False
+ self.window_size = args.window_size
+
+ def _process_imgnames(self, seq, split):
+ imgnames = self.imgnames
+ if seq is not None:
+ imgnames = [imgname for imgname in imgnames if "/" + seq + "/" in imgname]
+ assert len(imgnames) == len(set(imgnames))
+ self.imgnames = imgnames
+
+ def __init__(self, args, split, seq=None):
+ Dataset.__init__(self)
+ super()._load_data(args, split, seq)
+ self._load_data(args, split)
+ self._process_imgnames(seq, split)
+
+ # split imgnames into window_size chunks
+ # no overlapping frames between chunks
+ windows = create_windows(self.imgnames, self.window_size)
+ windows = dataset_utils.downsample(windows, split)
+
+ self.windows = windows
+ num_imgnames = len(sum(self.windows, []))
+ logger.info(
+ f"TempoInferDataset Loaded {self.split} split, num samples {num_imgnames}"
+ )
+
+ def __getitem__(self, index):
+ imgnames = self.windows[index]
+ inputs_list = []
+ targets_list = []
+ meta_list = []
+ img_feats = []
+ load_rgb = not self.args.eval # test.py does not load RGB
+ for imgname in imgnames:
+ short_imgname = "/".join(imgname.split("/")[-4:])
+ # during training we load RGB for visualization;
+ # handling the no-RGB case everywhere adds complexity,
+ # so we load both RGB images and precomputed features
+ inputs, targets, meta_info = self.getitem(imgname, load_rgb=load_rgb)
+ img_feats.append(self.vec_dict[short_imgname])
+ inputs_list.append(inputs)
+ targets_list.append(targets)
+ meta_list.append(meta_info)
+
+ if load_rgb:
+ inputs_list = ld_utils.stack_dl(
+ ld_utils.ld2dl(inputs_list), dim=0, verbose=False
+ )
+ else:
+ inputs_list = {}
+ targets_list = ld_utils.stack_dl(
+ ld_utils.ld2dl(targets_list), dim=0, verbose=False
+ )
+ meta_list = ld_utils.stack_dl(ld_utils.ld2dl(meta_list), dim=0, verbose=False)
+ img_feats = torch.stack(img_feats, dim=0).float()
+
+ inputs_list["img_feat"] = img_feats
+ targets_list["is_valid"] = torch.FloatTensor(np.array(targets_list["is_valid"]))
+ targets_list["left_valid"] = torch.FloatTensor(
+ np.array(targets_list["left_valid"])
+ )
+ targets_list["right_valid"] = torch.FloatTensor(
+ np.array(targets_list["right_valid"])
+ )
+ targets_list["joints_valid_r"] = torch.FloatTensor(
+ np.array(targets_list["joints_valid_r"])
+ )
+ targets_list["joints_valid_l"] = torch.FloatTensor(
+ np.array(targets_list["joints_valid_l"])
+ )
+ meta_list["center"] = torch.FloatTensor(np.array(meta_list["center"]))
+ meta_list["is_flipped"] = torch.FloatTensor(np.array(meta_list["is_flipped"]))
+ meta_list["rot_angle"] = torch.FloatTensor(np.array(meta_list["rot_angle"]))
+ return inputs_list, targets_list, meta_list
+
+ def __len__(self):
+ return len(self.windows)
diff --git a/src/datasets/tempo_inference_dataset_eval.py b/src/datasets/tempo_inference_dataset_eval.py
new file mode 100644
index 0000000..696b35e
--- /dev/null
+++ b/src/datasets/tempo_inference_dataset_eval.py
@@ -0,0 +1,44 @@
+import numpy as np
+import torch
+
+import common.ld_utils as ld_utils
+from src.datasets.tempo_inference_dataset import TempoInferenceDataset
+
+
+class TempoInferenceDatasetEval(TempoInferenceDataset):
+ def __getitem__(self, index):
+ imgnames = self.windows[index]
+ inputs_list = []
+ targets_list = []
+ meta_list = []
+ img_feats = []
+ load_rgb = False # evaluation uses precomputed features only
+ for imgname in imgnames:
+ short_imgname = "/".join(imgname.split("/")[-4:])
+ inputs, targets, meta_info = self.getitem_eval(imgname, load_rgb=load_rgb)
+ img_feats.append(self.vec_dict[short_imgname])
+ inputs_list.append(inputs)
+ targets_list.append(targets)
+ meta_list.append(meta_info)
+
+ if load_rgb:
+ inputs_list = ld_utils.stack_dl(
+ ld_utils.ld2dl(inputs_list), dim=0, verbose=False
+ )
+ else:
+ inputs_list = {}
+ targets_list = ld_utils.stack_dl(
+ ld_utils.ld2dl(targets_list), dim=0, verbose=False
+ )
+ meta_list = ld_utils.stack_dl(ld_utils.ld2dl(meta_list), dim=0, verbose=False)
+ img_feats = torch.stack(img_feats, dim=0).float()
+
+ inputs_list["img_feat"] = img_feats
+ meta_list["center"] = torch.FloatTensor(np.array(meta_list["center"]))
+ meta_list["is_flipped"] = torch.FloatTensor(np.array(meta_list["is_flipped"]))
+ meta_list["rot_angle"] = torch.FloatTensor(np.array(meta_list["rot_angle"]))
+ return inputs_list, targets_list, meta_list
diff --git a/src/extraction/interface.py b/src/extraction/interface.py
new file mode 100644
index 0000000..6411aff
--- /dev/null
+++ b/src/extraction/interface.py
@@ -0,0 +1,289 @@
+import os
+import os.path as op
+
+import numpy as np
+import torch
+from loguru import logger
+from PIL import Image
+from pytorch3d.transforms import matrix_to_axis_angle
+from torch.utils.data import DataLoader
+from tqdm import tqdm
+
+import common.ld_utils as ld_utils
+import common.thing as thing
+from common.data_utils import denormalize_images
+from common.xdict import xdict
+from src.callbacks.process.process_generic import prepare_interfield
+from src.datasets.arctic_dataset import ArcticDataset
+
+
+def prepare_data(full_seq_name, exp_key, data_keys, layers, device, task, eval_p):
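+ """Load dumped GT/prediction tensors for one sequence, run forward kinematics, and return a merged xdict on CPU."""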
+ sid = full_seq_name[:3]
+ view = full_seq_name[-1]
+ folder_p = op.join(eval_p, full_seq_name)
+ if "pose" in task and "submit_" in task:
+ gt_folder = op.join("./eval_server_gt_pose", full_seq_name)
+ elif "field" in task and "submit_" in task:
+ gt_folder = op.join("./eval_server_gt_field", full_seq_name)
+ else:
+ gt_folder = folder_p
+ logger.info(f"Reading keys: {data_keys}")
+ batch = read_keys(gt_folder, folder_p, keys=data_keys, verbose=False)
+ batch = xdict(batch)
+ logger.info("Done")
+
+ # trim the extra frames at the end:
+ # predictions were padded to a multiple of `window_size`
+ num_gts = len(batch["meta_info.imgname"])
+ for key in data_keys:
+ if "pred." in key or "targets." in key:
+ batch.overwrite(key, batch[key][:num_gts])
+
+ # sanity check: all frames belong to the requested sequence
+ imgnames = batch["meta_info.imgname"]
+ for imgname in imgnames:
+ curr_sid, curr_seq, curr_view, _ = imgname.split("/")[-4:]
+ assert full_seq_name[4:-2] == curr_seq
+ assert curr_sid == sid
+ assert curr_view == view
+
+ if "pose" in task:
+ batch.overwrite(
+ "pred.mano.pose.r", matrix_to_axis_angle(batch["pred.mano.pose.r"])
+ )
+ batch.overwrite(
+ "pred.mano.pose.l", matrix_to_axis_angle(batch["pred.mano.pose.l"])
+ )
+
+ logger.info("forward params")
+ batch = fk_params_batch(batch, layers, device, flag="pred")
+ batch = fk_params_batch(batch, layers, device, flag="targets")
+ logger.info("Done")
+
+ meta_info = batch.search("meta_info.", replace_to="")
+ pred = batch.search("pred.", replace_to="")
+ targets = batch.search("targets.", replace_to="")
+ meta_info.overwrite("part_ids", targets["object.parts_ids"])
+ meta_info.overwrite("diameter", targets["object.diameter"])
+
+ logger.info("prepare interfield")
+ targets = prepare_interfield(targets, max_dist=0.1)
+ logger.info("Done")
+
+ elif "field" in task:
+ logger.info("forward params")
+ batch = fk_params_batch(batch, layers, device, flag="targets")
+ logger.info("Done")
+
+ meta_info = batch.search("meta_info.", replace_to="")
+ pred = batch.search("pred.", replace_to="")
+ targets = batch.search("targets.", replace_to="")
+ meta_info["object.v_len"] = targets["object.v_len"]
+
+ logger.info("prepare interfield")
+ targets = prepare_interfield(targets, max_dist=0.1)
+ logger.info("Done")
+
+ data = xdict()
+ data.merge(pred.prefix("pred."))
+ data.merge(targets.prefix("targets."))
+ data.merge(meta_info.prefix("meta_info."))
+ data = data.to("cpu")
+ return data
+
+
+def fk_params_batch(batch, layers, device, flag):
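+ """Forward the MANO and object layers on `flag` (pred/targets) params to obtain camera-space vertices and joints."""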
+ mano_r = layers["right"]
+ mano_l = layers["left"]
+ object_tensors = layers["object_tensors"]
+ batch = xdict(thing.thing2dev(dict(batch), device))
+ pose_r = batch[f"{flag}.mano.pose.r"].reshape(-1, 48)
+ pose_l = batch[f"{flag}.mano.pose.l"].reshape(-1, 48)
+ cam_r = batch[f"{flag}.mano.cam_t.r"].view(-1, 1, 3)
+ cam_l = batch[f"{flag}.mano.cam_t.l"].view(-1, 1, 3)
+ cam_o = batch[f"{flag}.object.cam_t"].view(-1, 1, 3)
+
+ out_r = mano_r(
+ global_orient=pose_r[:, :3].reshape(-1, 3),
+ hand_pose=pose_r[:, 3:].reshape(-1, 45),
+ betas=batch[f"{flag}.mano.beta.r"].view(-1, 10),
+ )
+
+ out_l = mano_l(
+ global_orient=pose_l[:, :3].reshape(-1, 3),
+ hand_pose=pose_l[:, 3:].reshape(-1, 45),
+ betas=batch[f"{flag}.mano.beta.l"].view(-1, 10),
+ )
+ query_names = batch["meta_info.query_names"]
+ out_o = object_tensors.forward(
+ batch[f"{flag}.object.radian"].view(-1, 1),
+ batch[f"{flag}.object.rot"].view(-1, 3),
+ None,
+ query_names,
+ )
+ v3d_r = out_r.vertices + cam_r
+ v3d_l = out_l.vertices + cam_l
+ v3d_o = out_o["v"] + cam_o
+ j3d_r = out_r.joints + cam_r
+ j3d_l = out_l.joints + cam_l
+ out = {
+ f"{flag}.mano.v3d.cam.r": v3d_r,
+ f"{flag}.mano.v3d.cam.l": v3d_l,
+ f"{flag}.mano.j3d.cam.r": j3d_r,
+ f"{flag}.mano.j3d.cam.l": j3d_l,
+ f"{flag}.object.v.cam": v3d_o,
+ f"{flag}.object.v_len": out_o["v_len"],
+ f"{flag}.object.diameter": out_o["diameter"],
+ f"{flag}.object.parts_ids": out_o["parts_ids"],
+ }
+ batch.merge(out)
+ return batch
+
+
+def read_keys(gt_folder_p, folder_p, keys, verbose=True):
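+ """Load one tensor per key: GT/meta keys from gt_folder_p, prediction keys from the preds folder; half tensors are upcast to float32."""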
+ out = {}
+
+ if verbose:
+ pbar = tqdm(keys)
+ else:
+ pbar = keys
+ for key in pbar:
+ if verbose:
+ pbar.set_description(f"Loading {key}")
+ if key == "inputs.img":
+ # skip images
+ continue
+ if "targets." in key or "meta_info." in key or "inputs." in key:
+ curr_folder_p = op.join(gt_folder_p, key.split(".")[0])
+ else:
+ curr_folder_p = op.join(folder_p, "preds")
+ data_p = op.join(curr_folder_p, key + ".pt")
+ data = torch.load(data_p)
+ if isinstance(data, (torch.HalfTensor, torch.cuda.HalfTensor)):
+ data = data.type(torch.float32)
+ out[key] = data
+ return out
+
+
+def save_results(out, out_dir):
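+ """Dump per-sequence tensors into inputs/targets/meta_info/preds subfolders and images as JPEGs; float tensors are stored as float16 to save space."""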
+ for seq_name, seq_data in out.items():
+ out_folder = op.join(out_dir, seq_name)
+ exp_key, seq_name = out_folder.split("/")[-2:]
+
+ input_p = op.join(out_folder, "inputs")
+ target_p = op.join(out_folder, "targets")
+ meta_p = op.join(out_folder, "meta_info")
+ pred_p = op.join(out_folder, "preds")
+ img_p = op.join(out_folder, "images")
+
+ logger.info(f"Dumping pose est results at {out_folder}")
+ for key, val in seq_data.items():
+ if "inputs.img" in key:
+ # save images
+ imgs = denormalize_images(val)
+ images = (imgs.permute(0, 2, 3, 1) * 255).numpy().astype(np.uint8)
+ for idx, img in tqdm(enumerate(images), total=len(images)):
+ im = Image.fromarray(img)
+ out_p = op.join(img_p, "%05d.jpg" % (idx))
+ os.makedirs(op.dirname(out_p), exist_ok=True)
+ im.save(out_p)
+ else:
+ if "inputs." in key:
+ out_p = op.join(input_p, key + ".pt")
+ elif "targets." in key:
+ out_p = op.join(target_p, key + ".pt")
+ elif "meta_info." in key:
+ out_p = op.join(meta_p, key + ".pt")
+ elif "pred." in key:
+ out_p = op.join(pred_p, key + ".pt")
+ else:
+ print(f"Skipping {key} of type {type(val)}")
+ continue # out_p is undefined for unknown keys
+
+ os.makedirs(op.dirname(out_p), exist_ok=True)
+ if isinstance(
+ val, (torch.FloatTensor, torch.cuda.FloatTensor)
+ ) and key not in ["pred.feat_vec"]:
+ # reduce storage requirement
+ val = val.type(torch.float16)
+ print(f"Saving {key} to {out_p}")
+ torch.save(val, out_p)
+
+
+def std_interface(out_list):
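+ """Concatenate per-batch outputs, sort by imgname, and split into one dict per camera view."""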
+ out_list = ld_utils.ld2dl(out_list)
+ out = ld_utils.cat_dl(out_list, 0)
+ for key, val in out.items():
+ if isinstance(val, torch.Tensor):
+ out[key] = val.squeeze()
+
+ # verify that all keys have same length
+ keys = list(out.keys())
+ key0 = keys[0]
+ for key in keys:
+ if len(out[key]) != len(out[key0]):
+ raise ValueError(
+ f"Key {key} has length {len(out[key])} while {key0} has length {len(out[key0])}"
+ )
+
+ # sort by imgname
+ imgnames = np.array(out["meta_info.imgname"])
+ num_examples = len(set(out["meta_info.imgname"]))
+ sort_idx = np.argsort(imgnames)
+ for key, val in out.items():
+ assert len(val) == len(sort_idx)
+ if isinstance(val, (torch.Tensor, np.ndarray)):
+ out[key] = val[sort_idx]
+ elif isinstance(val, (list)):
+ out[key] = [val[idx] for idx in sort_idx]
+ else:
+ print(f"Skipping {key} of type {type(out)}")
+
+ # split according to camera
+ imgnames = np.array(out["meta_info.imgname"])
+ cam_ids = []
+ all_seqs = []
+ for imgname in imgnames:
+ sid, seq_name, cam_id, frame = imgname.split("/")[-4:]
+ all_seqs.append(seq_name)
+ cam_ids.append(int(cam_id))
+
+ assert len(set(all_seqs)) == 1
+ cam_ids = np.array(cam_ids)
+ all_cams = list(set(cam_ids))
+ out_cam = {}
+ imgnames_one = imgnames.reshape(len(all_cams), -1)
+ num_examples = len(set(imgnames_one[0].tolist()))
+ for cam_id in all_cams:
+ sub_idx = np.where(cam_id == cam_ids)[0][:num_examples]
+ curr_cam_out = {}
+ for key, val in out.items():
+ if isinstance(val, (torch.Tensor, np.ndarray)):
+ curr_cam_out[key] = val[sub_idx]
+ elif isinstance(val, (list)):
+ curr_cam_out[key] = [val[idx] for idx in sub_idx]
+ else:
+ print(f"Skipping {key} of type {type(out)}")
+
+ assert len(curr_cam_out[key]) == num_examples
+ out_cam[f"{sid}_{seq_name}_{cam_id}"] = curr_cam_out
+ return out_cam
+
+
+def fetch_dataset(args, seq):
+ ds = ArcticDataset(args=args, split=args.run_on, seq=seq)
+ return ds
+
+
+def fetch_dataloader(args, seq):
+ dataset = fetch_dataset(args, seq)
+ return DataLoader(
+ dataset=dataset,
+ batch_size=args.test_batch_size,
+ shuffle=False,
+ num_workers=args.num_workers,
+ )
diff --git a/src/extraction/keys/eval_field.py b/src/extraction/keys/eval_field.py
new file mode 100644
index 0000000..ab5de3f
--- /dev/null
+++ b/src/extraction/keys/eval_field.py
@@ -0,0 +1,26 @@
+KEYS = [
+ "pred.dist.ro",
+ "pred.dist.lo",
+ "pred.dist.or",
+ "pred.dist.ol",
+ "targets.mano.pose.r",
+ "targets.mano.pose.l",
+ "targets.mano.beta.r",
+ "targets.mano.beta.l",
+ "targets.object.radian",
+ "targets.object.rot",
+ "targets.is_valid",
+ "targets.left_valid",
+ "targets.right_valid",
+ "targets.joints_valid_r",
+ "targets.joints_valid_l",
+ "targets.mano.cam_t.r",
+ "targets.mano.cam_t.l",
+ "targets.object.cam_t",
+ "meta_info.imgname",
+ "meta_info.query_names",
+ "meta_info.window_size",
+ "meta_info.center",
+ "meta_info.is_flipped",
+ "meta_info.rot_angle",
+]
diff --git a/src/extraction/keys/eval_pose.py b/src/extraction/keys/eval_pose.py
new file mode 100644
index 0000000..4a4755e
--- /dev/null
+++ b/src/extraction/keys/eval_pose.py
@@ -0,0 +1,33 @@
+KEYS = [
+ "pred.mano.cam_t.r",
+ "pred.mano.beta.r",
+ "pred.mano.pose.r",
+ "pred.mano.cam_t.l",
+ "pred.mano.beta.l",
+ "pred.mano.pose.l",
+ "pred.object.rot",
+ "pred.object.cam_t",
+ "pred.object.radian",
+ "targets.mano.pose.r",
+ "targets.mano.pose.l",
+ "targets.mano.beta.r",
+ "targets.mano.beta.l",
+ "targets.object.radian",
+ "targets.object.rot",
+ "targets.is_valid",
+ "targets.left_valid",
+ "targets.right_valid",
+ "targets.joints_valid_r",
+ "targets.joints_valid_l",
+ "targets.mano.cam_t.r",
+ "targets.mano.cam_t.l",
+ "targets.object.cam_t",
+ "targets.object.bbox3d.cam",
+ "meta_info.imgname",
+ "meta_info.query_names",
+ "meta_info.window_size",
+ "meta_info.center",
+ "meta_info.is_flipped",
+ "meta_info.rot_angle",
+ "meta_info.diameter",
+]
diff --git a/src/extraction/keys/feat_field.py b/src/extraction/keys/feat_field.py
new file mode 100644
index 0000000..9f04964
--- /dev/null
+++ b/src/extraction/keys/feat_field.py
@@ -0,0 +1,4 @@
+KEYS = [
+ "pred.feat_vec",
+ "meta_info.imgname",
+]
diff --git a/src/extraction/keys/feat_pose.py b/src/extraction/keys/feat_pose.py
new file mode 100644
index 0000000..9f04964
--- /dev/null
+++ b/src/extraction/keys/feat_pose.py
@@ -0,0 +1,4 @@
+KEYS = [
+ "pred.feat_vec",
+ "meta_info.imgname",
+]
diff --git a/src/extraction/keys/submit_field.py b/src/extraction/keys/submit_field.py
new file mode 100644
index 0000000..52a6178
--- /dev/null
+++ b/src/extraction/keys/submit_field.py
@@ -0,0 +1,7 @@
+KEYS = [
+ "pred.dist.ro",
+ "pred.dist.lo",
+ "pred.dist.or",
+ "pred.dist.ol",
+ "meta_info.imgname",
+]
diff --git a/src/extraction/keys/submit_pose.py b/src/extraction/keys/submit_pose.py
new file mode 100644
index 0000000..fc52c26
--- /dev/null
+++ b/src/extraction/keys/submit_pose.py
@@ -0,0 +1,12 @@
+KEYS = [
+ "pred.mano.cam_t.r",
+ "pred.mano.beta.r",
+ "pred.mano.pose.r",
+ "pred.mano.cam_t.l",
+ "pred.mano.beta.l",
+ "pred.mano.pose.l",
+ "pred.object.rot",
+ "pred.object.cam_t",
+ "pred.object.radian",
+ "meta_info.imgname",
+]
diff --git a/src/extraction/keys/vis_field.py b/src/extraction/keys/vis_field.py
new file mode 100644
index 0000000..60832c8
--- /dev/null
+++ b/src/extraction/keys/vis_field.py
@@ -0,0 +1,21 @@
+KEYS = [
+ "inputs.img",
+ "pred.dist.lo",
+ "pred.dist.ol",
+ "pred.dist.or",
+ "pred.dist.ro",
+ "targets.is_valid",
+ "targets.left_valid",
+ "targets.right_valid",
+ "targets.mano.beta.r",
+ "targets.mano.beta.l",
+ "targets.mano.pose.r",
+ "targets.mano.pose.l",
+ "targets.mano.cam_t.r",
+ "targets.mano.cam_t.l",
+ "targets.object.radian",
+ "targets.object.rot",
+ "targets.object.cam_t",
+ "meta_info.imgname",
+ "meta_info.query_names",
+]
diff --git a/src/extraction/keys/vis_pose.py b/src/extraction/keys/vis_pose.py
new file mode 100644
index 0000000..712106d
--- /dev/null
+++ b/src/extraction/keys/vis_pose.py
@@ -0,0 +1,35 @@
+KEYS = [
+ "inputs.img",
+ "pred.mano.cam_t.r",
+ "pred.mano.beta.r",
+ "pred.mano.pose.r",
+ "pred.mano.cam_t.l",
+ "pred.mano.beta.l",
+ "pred.mano.pose.l",
+ "pred.object.rot",
+ "pred.object.cam_t",
+ "pred.object.radian",
+ "targets.mano.pose.r",
+ "targets.mano.pose.l",
+ "targets.mano.beta.r",
+ "targets.mano.beta.l",
+ "targets.object.radian",
+ "targets.object.rot",
+ "targets.is_valid",
+ "targets.left_valid",
+ "targets.right_valid",
+ "targets.joints_valid_r",
+ "targets.joints_valid_l",
+ "targets.mano.cam_t.r",
+ "targets.mano.cam_t.l",
+ "targets.object.cam_t",
+ "meta_info.imgname",
+ "meta_info.query_names",
+ "meta_info.window_size",
+ "meta_info.intrinsics",
+ "meta_info.dist",
+ "meta_info.center",
+ "meta_info.is_flipped",
+ "meta_info.rot_angle",
+ "meta_info.diameter",
+]
diff --git a/src/factory.py b/src/factory.py
new file mode 100644
index 0000000..d751883
--- /dev/null
+++ b/src/factory.py
@@ -0,0 +1,143 @@
+import torch
+from torch.utils.data import DataLoader
+
+from common.torch_utils import reset_all_seeds
+from src.datasets.arctic_dataset import ArcticDataset
+from src.datasets.arctic_dataset_eval import ArcticDatasetEval
+from src.datasets.tempo_dataset import TempoDataset
+from src.datasets.tempo_inference_dataset import TempoInferenceDataset
+from src.datasets.tempo_inference_dataset_eval import TempoInferenceDatasetEval
+
+
+def fetch_dataset_eval(args, seq=None):
+ if args.method in ["arctic_sf", "field_sf"]:
+ DATASET = ArcticDatasetEval
+ elif args.method in ["arctic_lstm", "field_lstm"]:
+ DATASET = TempoInferenceDatasetEval
+ else:
+ assert False, f"Invalid method ({args.method})"
+ split = args.run_on
+ ds = DATASET(args=args, split=split, seq=seq)
+ return ds
+
+
+def fetch_dataset_devel(args, is_train, seq=None):
+ split = args.trainsplit if is_train else args.valsplit
+ if args.method in ["arctic_sf", "field_sf"]:
+ DATASET = ArcticDataset
+ elif args.method in ["field_lstm", "arctic_lstm"]:
+ if is_train:
+ DATASET = TempoDataset
+ else:
+ DATASET = TempoInferenceDataset
+ else:
+ assert False, f"Invalid method ({args.method})"
+ if seq is not None:
+ split = args.run_on
+ ds = DATASET(args=args, split=split, seq=seq)
+ return ds
+
+
+def collate_custom_fn(data_list):
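+ """Collate window samples by concatenating along the frame dimension (each sample already carries a window dim); imgname/query_names lists are flattened."""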
+ data = data_list[0]
+ _inputs, _targets, _meta_info = data
+ out_inputs = {}
+ out_targets = {}
+ out_meta_info = {}
+
+ for key in _inputs.keys():
+ out_inputs[key] = []
+
+ for key in _targets.keys():
+ out_targets[key] = []
+
+ for key in _meta_info.keys():
+ out_meta_info[key] = []
+
+ for data in data_list:
+ inputs, targets, meta_info = data
+ for key, val in inputs.items():
+ out_inputs[key].append(val)
+
+ for key, val in targets.items():
+ out_targets[key].append(val)
+
+ for key, val in meta_info.items():
+ out_meta_info[key].append(val)
+
+ for key in _inputs.keys():
+ out_inputs[key] = torch.cat(out_inputs[key], dim=0)
+
+ for key in _targets.keys():
+ out_targets[key] = torch.cat(out_targets[key], dim=0)
+
+ for key in _meta_info.keys():
+ if key not in ["imgname", "query_names"]:
+ out_meta_info[key] = torch.cat(out_meta_info[key], dim=0)
+ else:
+ out_meta_info[key] = sum(out_meta_info[key], [])
+
+ return out_inputs, out_targets, out_meta_info
+
+
+def fetch_dataloader(args, mode, seq=None):
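+ """Build a train or val/eval dataloader; window-based datasets use the custom collate function."""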
+ if mode == "train":
+ reset_all_seeds(args.seed)
+ dataset = fetch_dataset_devel(args, is_train=True)
+ if type(dataset) is ArcticDataset: # exact type check: Tempo subclasses need the custom collate
+ collate_fn = None
+ else:
+ collate_fn = collate_custom_fn
+ return DataLoader(
+ dataset=dataset,
+ batch_size=args.batch_size,
+ num_workers=args.num_workers,
+ pin_memory=args.pin_memory,
+ shuffle=args.shuffle_train,
+ collate_fn=collate_fn,
+ )
+
+ elif mode in ["val", "eval"]:
+ if "submit_" in args.extraction_mode:
+ dataset = fetch_dataset_eval(args, seq=seq)
+ else:
+ dataset = fetch_dataset_devel(args, is_train=False, seq=seq)
+ if type(dataset) in [ArcticDataset, ArcticDatasetEval]:
+ collate_fn = None
+ else:
+ collate_fn = collate_custom_fn
+ return DataLoader(
+ dataset=dataset,
+ batch_size=args.test_batch_size,
+ shuffle=False,
+ num_workers=args.num_workers,
+ collate_fn=collate_fn,
+ )
+ else:
+ assert False, f"Invalid mode ({mode})"
+
+
+def fetch_model(args):
+ if args.method in ["arctic_sf"]:
+ from src.models.arctic_sf.wrapper import ArcticSFWrapper as Wrapper
+ elif args.method in ["arctic_lstm"]:
+ from src.models.arctic_lstm.wrapper import ArcticLSTMWrapper as Wrapper
+ elif args.method in ["field_sf"]:
+ from src.models.field_sf.wrapper import FieldSFWrapper as Wrapper
+ elif args.method in ["field_lstm"]:
+ from src.models.field_lstm.wrapper import FieldLSTMWrapper as Wrapper
+ else:
+ assert False, f"Invalid method ({args.method})"
+ model = Wrapper(args)
+ return model
diff --git a/src/mesh_loaders/arctic.py b/src/mesh_loaders/arctic.py
new file mode 100644
index 0000000..b9d6941
--- /dev/null
+++ b/src/mesh_loaders/arctic.py
@@ -0,0 +1,140 @@
+import numpy as np
+import torch
+from PIL import Image
+
+import common.viewer as viewer_utils
+from common.mesh import Mesh
+from common.viewer import ViewerData
+
+
+def construct_hand_meshes(cam_data, layers, view_idx, distort):
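+ # cam_data stores an extra view (index 9) with distortion applied to the egocentric camera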
+ if view_idx == 0 and distort:
+ view_idx = 9
+ v3d_r = cam_data["verts.right"][:, view_idx]
+ v3d_l = cam_data["verts.left"][:, view_idx]
+
+ right = {
+ "v3d": v3d_r,
+ "f3d": layers["right"].faces,
+ "vc": None,
+ "name": "right",
+ "color": "white",
+ }
+ left = {
+ "v3d": v3d_l,
+ "f3d": layers["left"].faces,
+ "vc": None,
+ "name": "left",
+ "color": "white",
+ }
+ return right, left
+
+
+def construct_object_meshes(cam_data, obj_name, layers, view_idx, distort):
+ if view_idx == 0 and distort:
+ view_idx = 9
+ v3d_o = cam_data["verts.object"][:, view_idx]
+ f3d_o = Mesh(
+ filename=f"./data/arctic_data/data/meta/object_vtemplates/{obj_name}/mesh.obj"
+ ).faces
+
+ obj = {
+ "v3d": v3d_o,
+ "f3d": f3d_o,
+ "vc": None,
+ "name": "object",
+ "color": "light-blue",
+ }
+ return obj
+
+
+def construct_smplx_meshes(cam_data, layers, view_idx, distort):
+ assert not distort, "Distortion rendering not supported for SMPL-X"
+ # Distortion rendering elsewhere follows "VR Distortion Correction Using Vertex Displacement":
+ # https://stackoverflow.com/questions/44489686/camera-lens-distortion-in-opengl
+ # However, that method creates artifacts when vertices are too close to the camera,
+ # which is why it is not supported for SMPL-X here.
+
+ if view_idx == 0 and distort:
+ view_idx = 9
+
+ v3d_s = cam_data["verts.smplx"][:, view_idx]
+
+ smplx_mesh = {
+ "v3d": v3d_s,
+ "f3d": layers["smplx"].faces,
+ "vc": None,
+ "name": "smplx",
+ "color": "rice",
+ }
+
+ return smplx_mesh
+
+
+def construct_meshes(
+ seq_p,
+ layers,
+ use_mano,
+ use_object,
+ use_smplx,
+ no_image,
+ use_distort,
+ view_idx,
+ subject_meta,
+):
+ # load
+ data = np.load(seq_p, allow_pickle=True).item()
+ cam_data = data["cam_coord"]
+ data_params = data["params"]
+ # unpack
+ subject = seq_p.split("/")[-2]
+ seq_name = seq_p.split("/")[-1].split(".")[0]
+ obj_name = seq_name.split("_")[0]
+
+ num_frames = cam_data["verts.right"].shape[0]
+
+ # camera intrinsics
+ if view_idx == 0:
+ K = torch.FloatTensor(data_params["K_ego"][0].copy())
+ else:
+ K = torch.FloatTensor(
+ np.array(subject_meta[subject]["intris_mat"][view_idx - 1])
+ )
+
+ # image names
+ vidx = np.arange(num_frames)
+ image_idx = vidx + subject_meta[subject]["ioi_offset"]
+ imgnames = [
+ f"./data/arctic_data/data/images/{subject}/{seq_name}/{view_idx}/{idx:05d}.jpg"
+ for idx in image_idx
+ ]
+
+ # construct meshes
+ vis_dict = {}
+ if use_mano:
+ right, left = construct_hand_meshes(cam_data, layers, view_idx, use_distort)
+ vis_dict["right"] = right
+ vis_dict["left"] = left
+ if use_smplx:
+ smplx_mesh = construct_smplx_meshes(cam_data, layers, view_idx, use_distort)
+ vis_dict["smplx"] = smplx_mesh
+ if use_object:
+ obj = construct_object_meshes(cam_data, obj_name, layers, view_idx, use_distort)
+ vis_dict["object"] = obj
+
+ meshes = viewer_utils.construct_viewer_meshes(
+ vis_dict, draw_edges=False, flat_shading=False
+ )
+
+ num_frames = len(imgnames)
+ Rt = np.zeros((num_frames, 3, 4))
+ Rt[:, :3, :3] = np.eye(3)
+ Rt[:, 1:3, :3] *= -1.0
+
+ im = Image.open(imgnames[0])
+ cols, rows = im.size
+ if no_image:
+ imgnames = None
+
+ data = ViewerData(Rt, K, cols, rows, imgnames)
+ return meshes, data
diff --git a/src/mesh_loaders/field.py b/src/mesh_loaders/field.py
new file mode 100644
index 0000000..d052c22
--- /dev/null
+++ b/src/mesh_loaders/field.py
@@ -0,0 +1,123 @@
+import os.path as op
+
+import matplotlib
+import numpy as np
+import torch
+
+import common.viewer as viewer_utils
+from common.body_models import build_layers, seal_mano_mesh
+from common.mesh import Mesh
+from common.xdict import xdict
+from src.extraction.interface import prepare_data
+from src.extraction.keys.vis_field import KEYS as keys
+
+
+def dist2vc(dist_r, dist_l, dist_o, ccmap):
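+ """Map distances to RGBA vertex colors and append a fixed color for each hand's sealed vertex."""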
+ vc_r, vc_l, vc_o = viewer_utils.dist2vc(dist_r, dist_l, dist_o, ccmap)
+
+ vc_r_pad = np.zeros((vc_r.shape[0], vc_r.shape[1] + 1, 4))
+ vc_l_pad = np.zeros((vc_l.shape[0], vc_l.shape[1] + 1, 4))
+
+ # sealed vertex to pre-defined color
+ vc_r_pad[:, -1, 0] = 0.4
+ vc_l_pad[:, -1, 0] = 0.4
+ vc_r_pad[:, -1, 1] = 0.2
+ vc_l_pad[:, -1, 1] = 0.2
+ vc_r_pad[:, -1, 2] = 0.3
+ vc_l_pad[:, -1, 2] = 0.3
+ vc_r_pad[:, -1, 3] = 1.0
+ vc_l_pad[:, -1, 3] = 1.0
+ vc_r_pad[:, :-1, :] = vc_r
+ vc_l_pad[:, :-1, :] = vc_l
+
+ vc_r = vc_r_pad
+ vc_l = vc_l_pad
+ return vc_r, vc_l, vc_o
+
+
+def construct_meshes(exp_folder, seq_name, flag, mode, side_angle=None, zoom_out=0.5):
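+ """Build viewer meshes for interaction-field visualization: load dumped results, color vertices by predicted/GT distances, and center everything at the object."""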
+ device = "cuda:0" if torch.cuda.is_available() else "cpu"
+ layers = build_layers(device)
+
+ exp_key = exp_folder.split("/")[1]
+ # load data
+ data = prepare_data(
+ seq_name,
+ exp_key,
+ keys,
+ layers,
+ device,
+ task="field",
+ eval_p=op.join(exp_folder, "eval"),
+ )
+
+ # load object faces
+ obj_name = seq_name.split("_")[1]
+ f3d_o = Mesh(
+ filename=f"./data/arctic_data/data/meta/object_vtemplates/{obj_name}/mesh.obj"
+ ).f
+
+ # saturate predictions wherever the GT distance is clamped at 0.1 m
+ num_frames = data["targets.dist.or"].shape[0]
+ num_verts = data["targets.dist.or"].shape[1]
+ data["pred.dist.or"][:num_frames, :num_verts][data["targets.dist.or"] == 0.1] = 0.1
+ data["pred.dist.ol"][:num_frames, :num_verts][data["targets.dist.ol"] == 0.1] = 0.1
+ data["pred.dist.ro"][:num_frames, :num_verts][data["targets.dist.ro"] == 0.1] = 0.1
+ data["pred.dist.lo"][:num_frames, :num_verts][data["targets.dist.lo"] == 0.1] = 0.1
+
+ # center verts
+ v3d_r = data[f"targets.mano.v3d.cam.r"]
+ v3d_l = data[f"targets.mano.v3d.cam.l"]
+ v3d_o = data[f"targets.object.v.cam"]
+ cam_t = data[f"targets.object.cam_t"]
+ v3d_r -= cam_t[:, None, :]
+ v3d_l -= cam_t[:, None, :]
+ v3d_o -= cam_t[:, None, :]
+
+ # seal MANO meshes
+ f3d_r = torch.LongTensor(layers["right"].faces.astype(np.int64))
+ f3d_l = torch.LongTensor(layers["left"].faces.astype(np.int64))
+ v3d_r, f3d_r = seal_mano_mesh(v3d_r, f3d_r, True)
+ v3d_l, f3d_l = seal_mano_mesh(v3d_l, f3d_l, False)
+
+ if "_l" in mode:
+ mydist_o = data[f"{flag}.dist.ol"]
+ else:
+ mydist_o = data[f"{flag}.dist.or"]
+
+ ccmap = matplotlib.cm.get_cmap("plasma")
+ vc_r, vc_l, vc_o = dist2vc(
+ data[f"{flag}.dist.ro"], data[f"{flag}.dist.lo"], mydist_o, ccmap
+ )
+
+ right = {
+ "v3d": v3d_r.numpy(),
+ "f3d": f3d_r.numpy(),
+ "vc": vc_r,
+ "name": "right",
+ "color": "none",
+ }
+ left = {
+ "v3d": v3d_l.numpy(),
+ "f3d": f3d_l.numpy(),
+ "vc": vc_l,
+ "name": "left",
+ "color": "none",
+ }
+ obj = {
+ "v3d": v3d_o.numpy(),
+ "f3d": f3d_o,
+ "vc": vc_o,
+ "name": "object",
+ "color": "none",
+ }
+ meshes = viewer_utils.construct_viewer_meshes(
+ {"right": right, "left": left, "object": obj},
+ draw_edges=False,
+ flat_shading=True,
+ )
+ data = xdict(data).to_np()
+
+ # pred_field uses GT cam_t for vis
+ data["pred.object.cam_t"] = data["targets.object.cam_t"]
+ return meshes, data
diff --git a/src/mesh_loaders/pose.py b/src/mesh_loaders/pose.py
new file mode 100644
index 0000000..d606c71
--- /dev/null
+++ b/src/mesh_loaders/pose.py
@@ -0,0 +1,86 @@
+import os.path as op
+
+import numpy as np
+import torch
+import trimesh
+
+import common.viewer as viewer_utils
+from common.body_models import build_layers, seal_mano_mesh
+from common.xdict import xdict
+from src.extraction.interface import prepare_data
+from src.extraction.keys.vis_pose import KEYS as keys
+
+
+def construct_meshes(exp_folder, seq_name, flag, side_angle=None, zoom_out=0.5):
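+ """Build viewer meshes for pose visualization: load dumped results, seal the MANO meshes, and center everything at the object root."""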
+ exp_key = exp_folder.split("/")[1]
+ device = "cuda:0" if torch.cuda.is_available() else "cpu"
+ layers = build_layers(device)
+
+ data = prepare_data(
+ seq_name,
+ exp_key,
+ keys,
+ layers,
+ device,
+ task="pose",
+ eval_p=op.join(exp_folder, "eval"),
+ )
+
+ # load object faces
+ obj_name = seq_name.split("_")[1]
+ f3d_o = trimesh.load(
+ f"./data/arctic_data/data/meta/object_vtemplates/{obj_name}/mesh.obj",
+ process=False,
+ ).faces
+
+ # center verts
+ v3d_r = data[f"{flag}.mano.v3d.cam.r"]
+ v3d_l = data[f"{flag}.mano.v3d.cam.l"]
+ v3d_o = data[f"{flag}.object.v.cam"]
+ cam_t = data[f"{flag}.object.cam_t"]
+ v3d_r -= cam_t[:, None, :]
+ v3d_l -= cam_t[:, None, :]
+ v3d_o -= cam_t[:, None, :]
+
+ # seal MANO mesh
+ f3d_r = torch.LongTensor(layers["right"].faces.astype(np.int64))
+ f3d_l = torch.LongTensor(layers["left"].faces.astype(np.int64))
+ v3d_r, f3d_r = seal_mano_mesh(v3d_r, f3d_r, True)
+ v3d_l, f3d_l = seal_mano_mesh(v3d_l, f3d_l, False)
+
+ # AIT meshes
+ hand_color = "white"
+ object_color = "light-blue"
+ right = {
+ "v3d": v3d_r.numpy(),
+ "f3d": f3d_r.numpy(),
+ "vc": None,
+ "name": "right",
+ "color": hand_color,
+ }
+ left = {
+ "v3d": v3d_l.numpy(),
+ "f3d": f3d_l.numpy(),
+ "vc": None,
+ "name": "left",
+ "color": hand_color,
+ }
+ obj = {
+ "v3d": v3d_o.numpy(),
+ "f3d": f3d_o,
+ "vc": None,
+ "name": "object",
+ "color": object_color,
+ }
+
+ meshes = viewer_utils.construct_viewer_meshes(
+ {
+ "right": right,
+ "left": left,
+ "object": obj,
+ },
+ draw_edges=False,
+ flat_shading=True,
+ )
+ data = xdict(data).to_np()
+ return meshes, data
diff --git a/src/models/__init__.py b/src/models/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/src/models/arctic_lstm/model.py b/src/models/arctic_lstm/model.py
new file mode 100644
index 0000000..342fa51
--- /dev/null
+++ b/src/models/arctic_lstm/model.py
@@ -0,0 +1,110 @@
+import torch
+import torch.nn as nn
+
+import common.ld_utils as ld_utils
+import src.callbacks.process.process_generic as generic
+from common.xdict import xdict
+from src.nets.hand_heads.hand_hmr import HandHMR
+from src.nets.hand_heads.mano_head import MANOHead
+from src.nets.obj_heads.obj_head import ArtiHead
+from src.nets.obj_heads.obj_hmr import ObjectHMR
+
+
+class ArcticLSTM(nn.Module):
+ def __init__(self, focal_length, img_res, args):
+ super().__init__()
+ self.args = args
+ feat_dim = 2048
+ self.head_r = HandHMR(feat_dim, is_rhand=True, n_iter=3)
+ self.head_l = HandHMR(feat_dim, is_rhand=False, n_iter=3)
+
+ self.head_o = ObjectHMR(feat_dim, n_iter=3)
+
+ self.mano_r = MANOHead(
+ is_rhand=True, focal_length=focal_length, img_res=img_res
+ )
+
+ self.mano_l = MANOHead(
+ is_rhand=False, focal_length=focal_length, img_res=img_res
+ )
+
+ self.arti_head = ArtiHead(focal_length=focal_length, img_res=img_res)
+ self.mode = "train"
+ self.img_res = img_res
+ self.focal_length = focal_length
+ self.feat_dim = feat_dim
+ self.lstm = nn.LSTM(
+ input_size=2048,
+ hidden_size=1024,
+ num_layers=2,
+ bidirectional=True,
+ batch_first=True,
+ )
+
+ def _fetch_img_feat(self, inputs):
+ feat_vec = inputs["img_feat"]
+ return feat_vec
+
+ def forward(self, inputs, meta_info):
+ window_size = self.args.window_size
+ query_names = meta_info["query_names"]
+ K = meta_info["intrinsics"]
+ device = K.device
+ feat_vec = self._fetch_img_feat(inputs)
+ feat_vec = feat_vec.view(-1, window_size, self.feat_dim)
+ batch_size = feat_vec.shape[0]
+
+ # bidirectional
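+ # note: initial hidden/cell states are re-drawn randomly at every forward pass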
+ h0 = torch.randn(2 * 2, batch_size, self.feat_dim // 2, device=device)
+ c0 = torch.randn(2 * 2, batch_size, self.feat_dim // 2, device=device)
+ feat_vec, (hn, cn) = self.lstm(feat_vec, (h0, c0)) # batch, seq, 2*dim
+ feat_vec = feat_vec.reshape(batch_size * window_size, self.feat_dim)
+
+ hmr_output_r = self.head_r(feat_vec, use_pool=False)
+ hmr_output_l = self.head_l(feat_vec, use_pool=False)
+ hmr_output_o = self.head_o(feat_vec, use_pool=False)
+
+ # weak perspective
+ root_r = hmr_output_r["cam_t.wp"]
+ root_l = hmr_output_l["cam_t.wp"]
+ root_o = hmr_output_o["cam_t.wp"]
+
+ mano_output_r = self.mano_r(
+ rotmat=hmr_output_r["pose"],
+ shape=hmr_output_r["shape"],
+ K=K,
+ cam=root_r,
+ )
+
+ mano_output_l = self.mano_l(
+ rotmat=hmr_output_l["pose"],
+ shape=hmr_output_l["shape"],
+ K=K,
+ cam=root_l,
+ )
+
+ # fwd mesh when in val or vis
+ arti_output = self.arti_head(
+ rot=hmr_output_o["rot"],
+ angle=hmr_output_o["radian"],
+ query_names=query_names,
+ cam=root_o,
+ K=K,
+ )
+
+ root_r_init = hmr_output_r["cam_t.wp.init"]
+ root_l_init = hmr_output_l["cam_t.wp.init"]
+ root_o_init = hmr_output_o["cam_t.wp.init"]
+ mano_output_r["cam_t.wp.init.r"] = root_r_init
+ mano_output_l["cam_t.wp.init.l"] = root_l_init
+ arti_output["cam_t.wp.init"] = root_o_init
+
+ mano_output_r = ld_utils.prefix_dict(mano_output_r, "mano.")
+ mano_output_l = ld_utils.prefix_dict(mano_output_l, "mano.")
+ arti_output = ld_utils.prefix_dict(arti_output, "object.")
+ output = xdict()
+ output.merge(mano_output_r)
+ output.merge(mano_output_l)
+ output.merge(arti_output)
+ output = generic.prepare_interfield(output, self.args.max_dist)
+ return output
diff --git a/src/models/arctic_lstm/wrapper.py b/src/models/arctic_lstm/wrapper.py
new file mode 100644
index 0000000..917834c
--- /dev/null
+++ b/src/models/arctic_lstm/wrapper.py
@@ -0,0 +1,50 @@
+import torch
+from loguru import logger
+
+import common.torch_utils as torch_utils
+from common.xdict import xdict
+from src.callbacks.loss.loss_arctic_lstm import compute_loss
+from src.callbacks.process.process_arctic import process_data
+from src.callbacks.vis.visualize_arctic import visualize_all
+from src.models.arctic_lstm.model import ArcticLSTM
+from src.models.generic.wrapper import GenericWrapper
+
+
+class ArcticLSTMWrapper(GenericWrapper):
+ def __init__(self, args):
+ super().__init__(args)
+ self.model = ArcticLSTM(
+ focal_length=args.focal_length,
+ img_res=args.img_res,
+ args=args,
+ )
+ self.process_fn = process_data
+ self.loss_fn = compute_loss
+ self.metric_dict = [
+ "cdev",
+ "mrrpe",
+ "mpjpe.ra",
+ "aae",
+ "success_rate",
+ ]
+
+ self.vis_fns = [visualize_all]
+ self.num_vis_train = 0
+ self.num_vis_val = 1
+
+ def set_training_flags(self):
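+ # warm-start the HMR heads from the single-frame checkpoint that produced the image features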
+ if not self.started_training:
+ sd_p = f"./logs/{self.args.img_feat_version}/checkpoints/last.ckpt"
+ sd = torch.load(sd_p)["state_dict"]
+ msd = xdict(sd).search("model.head")
+
+ wd = msd.search("weight")
+ bd = msd.search("bias")
+ wd.merge(bd)
+ self.load_state_dict(wd, strict=False)
+ torch_utils.toggle_parameters(self, True)
+ logger.info(f"Loaded: {sd_p}")
+ self.started_training = True
+
+ def inference(self, inputs, meta_info):
+ return super().inference_pose(inputs, meta_info)
diff --git a/src/models/arctic_sf/model.py b/src/models/arctic_sf/model.py
new file mode 100644
index 0000000..be88462
--- /dev/null
+++ b/src/models/arctic_sf/model.py
@@ -0,0 +1,96 @@
+import torch.nn as nn
+
+import common.ld_utils as ld_utils
+from common.xdict import xdict
+from src.nets.backbone.utils import get_backbone_info
+from src.nets.hand_heads.hand_hmr import HandHMR
+from src.nets.hand_heads.mano_head import MANOHead
+from src.nets.obj_heads.obj_head import ArtiHead
+from src.nets.obj_heads.obj_hmr import ObjectHMR
+
+
+class ArcticSF(nn.Module):
+ def __init__(self, backbone, focal_length, img_res, args):
+ super().__init__()
+ self.args = args
+ if backbone == "resnet50":
+ from src.nets.backbone.resnet import resnet50 as resnet
+ elif backbone == "resnet18":
+ from src.nets.backbone.resnet import resnet18 as resnet
+ else:
+ assert False
+ self.backbone = resnet(pretrained=True)
+ feat_dim = get_backbone_info(backbone)["n_output_channels"]
+ self.head_r = HandHMR(feat_dim, is_rhand=True, n_iter=3)
+ self.head_l = HandHMR(feat_dim, is_rhand=False, n_iter=3)
+
+ self.head_o = ObjectHMR(feat_dim, n_iter=3)
+
+ self.mano_r = MANOHead(
+ is_rhand=True, focal_length=focal_length, img_res=img_res
+ )
+
+ self.mano_l = MANOHead(
+ is_rhand=False, focal_length=focal_length, img_res=img_res
+ )
+
+ self.arti_head = ArtiHead(focal_length=focal_length, img_res=img_res)
+ self.mode = "train"
+ self.img_res = img_res
+ self.focal_length = focal_length
+
+ def forward(self, inputs, meta_info):
+ images = inputs["img"]
+ query_names = meta_info["query_names"]
+ K = meta_info["intrinsics"]
+ features = self.backbone(images)
+ feat_vec = features.view(features.shape[0], features.shape[1], -1).sum(dim=2)
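+ # sum-pool the spatial feature map into one vector; it is exported below for LSTM training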
+
+ hmr_output_r = self.head_r(features)
+ hmr_output_l = self.head_l(features)
+ hmr_output_o = self.head_o(features)
+
+ # weak perspective
+ root_r = hmr_output_r["cam_t.wp"]
+ root_l = hmr_output_l["cam_t.wp"]
+ root_o = hmr_output_o["cam_t.wp"]
+
+ mano_output_r = self.mano_r(
+ rotmat=hmr_output_r["pose"],
+ shape=hmr_output_r["shape"],
+ K=K,
+ cam=root_r,
+ )
+
+ mano_output_l = self.mano_l(
+ rotmat=hmr_output_l["pose"],
+ shape=hmr_output_l["shape"],
+ K=K,
+ cam=root_l,
+ )
+
+ # fwd mesh when in val or vis
+ arti_output = self.arti_head(
+ rot=hmr_output_o["rot"],
+ angle=hmr_output_o["radian"],
+ query_names=query_names,
+ cam=root_o,
+ K=K,
+ )
+
+ root_r_init = hmr_output_r["cam_t.wp.init"]
+ root_l_init = hmr_output_l["cam_t.wp.init"]
+ root_o_init = hmr_output_o["cam_t.wp.init"]
+ mano_output_r["cam_t.wp.init.r"] = root_r_init
+ mano_output_l["cam_t.wp.init.l"] = root_l_init
+ arti_output["cam_t.wp.init"] = root_o_init
+
+ mano_output_r = ld_utils.prefix_dict(mano_output_r, "mano.")
+ mano_output_l = ld_utils.prefix_dict(mano_output_l, "mano.")
+ arti_output = ld_utils.prefix_dict(arti_output, "object.")
+ output = xdict()
+ output.merge(mano_output_r)
+ output.merge(mano_output_l)
+ output.merge(arti_output)
+ output["feat_vec"] = feat_vec.cpu().detach()
+ return output
diff --git a/src/models/arctic_sf/wrapper.py b/src/models/arctic_sf/wrapper.py
new file mode 100644
index 0000000..dcdd581
--- /dev/null
+++ b/src/models/arctic_sf/wrapper.py
@@ -0,0 +1,33 @@
+from src.callbacks.loss.loss_arctic_sf import compute_loss
+from src.callbacks.process.process_arctic import process_data
+from src.callbacks.vis.visualize_arctic import visualize_all
+from src.models.arctic_sf.model import ArcticSF
+from src.models.generic.wrapper import GenericWrapper
+
+
+class ArcticSFWrapper(GenericWrapper):
+ def __init__(self, args):
+ super().__init__(args)
+ self.model = ArcticSF(
+ backbone="resnet50",
+ focal_length=args.focal_length,
+ img_res=args.img_res,
+ args=args,
+ )
+ self.process_fn = process_data
+ self.loss_fn = compute_loss
+ self.metric_dict = [
+ "cdev",
+ "mrrpe",
+ "mpjpe.ra",
+ "aae",
+ "success_rate",
+ ]
+
+ self.vis_fns = [visualize_all]
+
+ self.num_vis_train = 1
+ self.num_vis_val = 1
+
+ def inference(self, inputs, meta_info):
+ return super().inference_pose(inputs, meta_info)
diff --git a/src/models/field_lstm/model.py b/src/models/field_lstm/model.py
new file mode 100644
index 0000000..a2929a1
--- /dev/null
+++ b/src/models/field_lstm/model.py
@@ -0,0 +1,107 @@
+import torch
+import torch.nn as nn
+
+from common.xdict import xdict
+from src.models.field_sf.model import RegressHead, Upsampler
+from src.nets.backbone.utils import get_backbone_info
+from src.nets.obj_heads.obj_head import ArtiHead
+from src.nets.pointnet import PointNetfeat
+
+
+class FieldLSTM(nn.Module):
+ def __init__(self, backbone, focal_length, img_res, window_size):
+ super().__init__()
+ assert backbone in ["resnet18", "resnet50"]
+ feat_dim = get_backbone_info(backbone)["n_output_channels"]
+ self.arti_head = ArtiHead(focal_length=focal_length, img_res=img_res)
+
+ img_down_dim = 512
+ img_mid_dim = 512
+ pt_out_dim = 512
+ self.down = nn.Sequential(
+ nn.Linear(feat_dim, img_mid_dim),
+ nn.ReLU(),
+ nn.Linear(img_mid_dim, img_down_dim),
+ nn.ReLU(),
+ ) # downsize image features
+
+ pt_shallow_dim = 512
+ pt_mid_dim = 512
+ self.point_backbone = PointNetfeat(
+ input_dim=3 + img_down_dim,
+ shallow_dim=pt_shallow_dim,
+ mid_dim=pt_mid_dim,
+ out_dim=pt_out_dim,
+ )
+ pts_dim = pt_shallow_dim + pt_out_dim
+ self.dist_head_or = RegressHead(pts_dim)
+ self.dist_head_ol = RegressHead(pts_dim)
+ self.dist_head_ro = RegressHead(pts_dim)
+ self.dist_head_lo = RegressHead(pts_dim)
+ self.avgpool = nn.AdaptiveAvgPool2d(1)
+
+ self.num_v_sub = 195 # mano subsampled
+ self.num_v_o_sub = 300 * 2 # object subsampled
+ self.num_v_o = 4000 # object
+ self.upsampling_r = Upsampler(self.num_v_sub, 778)
+ self.upsampling_l = Upsampler(self.num_v_sub, 778)
+ self.upsampling_o = Upsampler(self.num_v_o_sub, self.num_v_o)
+ self.lstm = nn.LSTM(
+ input_size=2048,
+ hidden_size=1024,
+ num_layers=2,
+ bidirectional=True,
+ batch_first=True,
+ )
+
+ self.feat_dim = feat_dim
+ self.window_size = window_size
+
+ def forward(self, inputs, meta_info):
+ window_size = self.window_size
+ device = meta_info["v0.r"].device
+
+ feat_vec = inputs["img_feat"].view(-1, window_size, self.feat_dim)
+ batch_size = feat_vec.shape[0]
+
+ points_r = meta_info["v0.r"].permute(0, 2, 1)[:, :, 21:]
+ points_l = meta_info["v0.l"].permute(0, 2, 1)[:, :, 21:]
+ points_o = meta_info["v0.o"].permute(0, 2, 1)
+ points_all = torch.cat((points_r, points_l, points_o), dim=2)
+
+ # randomly initialized states for the 2-layer bidirectional LSTM: (num_layers * num_directions, batch, hidden)
+ h0 = torch.randn(2 * 2, batch_size, self.feat_dim // 2, device=device)
+ c0 = torch.randn(2 * 2, batch_size, self.feat_dim // 2, device=device)
+ feat_vec, (hn, cn) = self.lstm(feat_vec, (h0, c0)) # batch, seq, 2*dim
+ feat_vec = feat_vec.reshape(batch_size * window_size, self.feat_dim)
+
+ img_feat = self.down(feat_vec)
+ num_mano_pts = points_r.shape[2]
+ num_object_pts = points_o.shape[2]
+
+ img_feat_all = img_feat[:, :, None].repeat(
+ 1, 1, num_mano_pts * 2 + num_object_pts
+ )
+ pts_all_feat = self.point_backbone(
+ torch.cat((points_all, img_feat_all), dim=1)
+ )[0]
+ pts_r_feat, pts_l_feat, pts_o_feat = torch.split(
+ pts_all_feat, [num_mano_pts, num_mano_pts, num_object_pts], dim=2
+ )
+
+ dist_ro = self.dist_head_ro(pts_r_feat)
+ dist_lo = self.dist_head_lo(pts_l_feat)
+ dist_or = self.dist_head_or(pts_o_feat)
+ dist_ol = self.dist_head_ol(pts_o_feat)
+
+ dist_ro = self.upsampling_r(dist_ro[:, :, None])[:, :, 0]
+ dist_lo = self.upsampling_l(dist_lo[:, :, None])[:, :, 0]
+ dist_or = self.upsampling_o(dist_or[:, :, None])[:, :, 0]
+ dist_ol = self.upsampling_o(dist_ol[:, :, None])[:, :, 0]
+
+ out = xdict()
+ out["dist.ro"] = dist_ro
+ out["dist.lo"] = dist_lo
+ out["dist.or"] = dist_or
+ out["dist.ol"] = dist_ol
+ return out
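
To make the window batching in `FieldLSTM.forward` concrete, here is a self-contained shape walk-through with dummy tensors; the dimensions (window of 11 frames, 2048-d ResNet-50 features) follow the configs in this diff, and the random hidden-state initialization mirrors the code above.

```python
import torch
import torch.nn as nn

# Per-frame features arrive flattened as (B*T, D); they are viewed as
# (B, T, D) for the bidirectional LSTM, then flattened back per frame.
B, T, D = 4, 11, 2048
lstm = nn.LSTM(input_size=D, hidden_size=D // 2, num_layers=2,
               bidirectional=True, batch_first=True)

feat = torch.randn(B * T, D).view(-1, T, D)  # (B, T, D)
h0 = torch.randn(2 * 2, B, D // 2)  # (num_layers * num_directions, B, hidden)
c0 = torch.randn(2 * 2, B, D // 2)
out, _ = lstm(feat, (h0, c0))        # (B, T, 2 * (D // 2)) == (B, T, D)
out = out.reshape(B * T, D)          # back to per-frame features
print(out.shape)                     # torch.Size([44, 2048])
```
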
diff --git a/src/models/field_lstm/wrapper.py b/src/models/field_lstm/wrapper.py
new file mode 100644
index 0000000..9580989
--- /dev/null
+++ b/src/models/field_lstm/wrapper.py
@@ -0,0 +1,45 @@
+import torch
+from loguru import logger
+
+import common.torch_utils as torch_utils
+from common.xdict import xdict
+from src.callbacks.loss.loss_field import compute_loss
+from src.callbacks.process.process_field import process_data
+from src.callbacks.vis.visualize_field import visualize_all
+from src.models.field_lstm.model import FieldLSTM
+from src.models.generic.wrapper import GenericWrapper
+
+
+class FieldLSTMWrapper(GenericWrapper):
+ def __init__(self, args):
+ super().__init__(args)
+ self.model = FieldLSTM(
+ "resnet50",
+ args.focal_length,
+ args.img_res,
+ args.window_size,
+ )
+ self.process_fn = process_data
+ self.loss_fn = compute_loss
+ self.metric_dict = ["avg_err_field"]
+
+ self.vis_fns = [visualize_all]
+ self.num_vis_train = 0
+ self.num_vis_val = 1
+
+ def set_training_flags(self):
+ if not self.started_training:
+ sd_p = f"./logs/{self.args.img_feat_version}/checkpoints/last.ckpt"
+ sd = torch.load(sd_p)["state_dict"]
+ msd = xdict(sd).search("model.").rm("model.backbone")
+
+ wd = msd.search("weight")
+ bd = msd.search("bias")
+ wd.merge(bd)
+ self.load_state_dict(wd, strict=False)
+ torch_utils.toggle_parameters(self, True)
+ logger.info(f"Loaded: {sd_p}")
+ self.started_training = True
+
+ def inference(self, inputs, meta_info):
+ return super().inference_field(inputs, meta_info)
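
`set_training_flags` warm-starts the LSTM model from a single-frame checkpoint. Below is a plain-dict sketch of what the `xdict` `search`/`rm` chain above is assumed to do (the real helpers may differ in detail); the key names are illustrative.

```python
# Keep keys containing "model." but drop the (absent) backbone weights,
# then restrict to weights/biases before the strict=False load.
sd = {
    "model.backbone.conv1.weight": 0,
    "model.dist_head_ro.network.0.weight": 1,
    "model.dist_head_ro.network.0.bias": 2,
    "metric_tracker.buffer": 3,
}
msd = {k: v for k, v in sd.items() if "model." in k and "model.backbone" not in k}
wd = {k: v for k, v in msd.items() if "weight" in k or "bias" in k}
print(sorted(wd))
# ['model.dist_head_ro.network.0.bias', 'model.dist_head_ro.network.0.weight']
```
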
diff --git a/src/models/field_sf/model.py b/src/models/field_sf/model.py
new file mode 100644
index 0000000..adf6df0
--- /dev/null
+++ b/src/models/field_sf/model.py
@@ -0,0 +1,132 @@
+import torch
+import torch.nn as nn
+
+from common.xdict import xdict
+from src.nets.backbone.utils import get_backbone_info
+from src.nets.obj_heads.obj_head import ArtiHead
+from src.nets.pointnet import PointNetfeat
+
+
+class Upsampler(nn.Module):
+ def __init__(self, in_dim, out_dim):
+ super().__init__()
+ self.upsampling = torch.nn.Linear(in_dim, out_dim)
+
+ def forward(self, pred_vertices_sub):
+ temp_transpose = pred_vertices_sub.transpose(1, 2)
+ pred_vertices = self.upsampling(temp_transpose)
+ pred_vertices = pred_vertices.transpose(1, 2)
+ return pred_vertices
+
+
+class RegressHead(nn.Module):
+ def __init__(self, input_dim):
+ super().__init__()
+
+ self.network = nn.Sequential(
+ nn.Conv1d(input_dim, 512, 1),
+ nn.BatchNorm1d(512),
+ nn.ReLU(),
+ nn.Conv1d(512, 128, 1),
+ nn.BatchNorm1d(128),
+ nn.ReLU(),
+ nn.Conv1d(128, 1, 1),
+ )
+
+ def forward(self, x):
+ dist = self.network(x).permute(0, 2, 1)[:, :, 0]
+ return dist
+
+
+class FieldSF(nn.Module):
+ def __init__(self, backbone, focal_length, img_res):
+ super().__init__()
+ if backbone == "resnet18":
+ from src.nets.backbone.resnet import resnet18 as resnet
+ elif backbone == "resnet50":
+ from src.nets.backbone.resnet import resnet50 as resnet
+ else:
+ assert False, f"Unsupported backbone: {backbone}"
+ self.backbone = resnet(pretrained=True)
+ feat_dim = get_backbone_info(backbone)["n_output_channels"]
+ self.arti_head = ArtiHead(focal_length=focal_length, img_res=img_res)
+
+ img_down_dim = 512
+ img_mid_dim = 512
+ pt_out_dim = 512
+ self.down = nn.Sequential(
+ nn.Linear(feat_dim, img_mid_dim),
+ nn.ReLU(),
+ nn.Linear(img_mid_dim, img_down_dim),
+ nn.ReLU(),
+ ) # downsize image features
+
+ pt_shallow_dim = 512
+ pt_mid_dim = 512
+ self.point_backbone = PointNetfeat(
+ input_dim=3 + img_down_dim,
+ shallow_dim=pt_shallow_dim,
+ mid_dim=pt_mid_dim,
+ out_dim=pt_out_dim,
+ )
+ pts_dim = pt_shallow_dim + pt_out_dim
+ self.dist_head_or = RegressHead(pts_dim)
+ self.dist_head_ol = RegressHead(pts_dim)
+ self.dist_head_ro = RegressHead(pts_dim)
+ self.dist_head_lo = RegressHead(pts_dim)
+ self.avgpool = nn.AdaptiveAvgPool2d(1)
+
+ self.num_v_sub = 195 # mano subsampled
+ self.num_v_o_sub = 300 * 2 # object subsampled
+ self.num_v_o = 4000 # object
+ self.upsampling_r = Upsampler(self.num_v_sub, 778)
+ self.upsampling_l = Upsampler(self.num_v_sub, 778)
+ self.upsampling_o = Upsampler(self.num_v_o_sub, self.num_v_o)
+
+ def _decode(self, pts_all_feat):
+ pts_all_feat = self.point_backbone(pts_all_feat)[0]
+ pts_r_feat, pts_l_feat, pts_o_feat = torch.split(
+ pts_all_feat,
+ [self.num_mano_pts, self.num_mano_pts, self.num_object_pts],
+ dim=2,
+ )
+
+ dist_ro = self.dist_head_ro(pts_r_feat)
+ dist_lo = self.dist_head_lo(pts_l_feat)
+ dist_or = self.dist_head_or(pts_o_feat)
+ dist_ol = self.dist_head_ol(pts_o_feat)
+ return dist_ro, dist_lo, dist_or, dist_ol
+
+ def forward(self, inputs, meta_info):
+ images = inputs["img"]
+ points_r = meta_info["v0.r"].permute(0, 2, 1)[:, :, 21:]
+ points_l = meta_info["v0.l"].permute(0, 2, 1)[:, :, 21:]
+ points_o = meta_info["v0.o"].permute(0, 2, 1)
+ points_all = torch.cat((points_r, points_l, points_o), dim=2)
+
+ img_feat = self.backbone(images)
+ img_feat = self.avgpool(img_feat).view(img_feat.shape[0], -1)
+ pred_vec = img_feat.clone()
+ img_feat = self.down(img_feat)
+
+ self.num_mano_pts = points_r.shape[2]
+ self.num_object_pts = points_o.shape[2]
+
+ img_feat_all = img_feat[:, :, None].repeat(
+ 1, 1, self.num_mano_pts * 2 + self.num_object_pts
+ )
+
+ pts_all_feat = torch.cat((points_all, img_feat_all), dim=1)
+ dist_ro, dist_lo, dist_or, dist_ol = self._decode(pts_all_feat)
+ dist_ro = self.upsampling_r(dist_ro[:, :, None])[:, :, 0]
+ dist_lo = self.upsampling_l(dist_lo[:, :, None])[:, :, 0]
+ dist_or = self.upsampling_o(dist_or[:, :, None])[:, :, 0]
+ dist_ol = self.upsampling_o(dist_ol[:, :, None])[:, :, 0]
+
+ out = xdict()
+ out["dist.ro"] = dist_ro
+ out["dist.lo"] = dist_lo
+ out["dist.or"] = dist_or
+ out["dist.ol"] = dist_ol
+ out["feat_vec"] = pred_vec
+ return out
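
A quick shape check for the `Upsampler` defined above: it applies a linear layer across the vertex dimension to lift per-vertex distances from the subsampled MANO template (195 vertices) to the full one (778).

```python
import torch

from src.models.field_sf.model import Upsampler

up = Upsampler(in_dim=195, out_dim=778)
dist_sub = torch.randn(8, 195, 1)  # (batch, subsampled vertices, channels)
dist_full = up(dist_sub)           # transpose -> linear -> transpose back
print(dist_full.shape)             # torch.Size([8, 778, 1])
```
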
diff --git a/src/models/field_sf/wrapper.py b/src/models/field_sf/wrapper.py
new file mode 100644
index 0000000..8989821
--- /dev/null
+++ b/src/models/field_sf/wrapper.py
@@ -0,0 +1,21 @@
+from src.callbacks.loss.loss_field import compute_loss
+from src.callbacks.process.process_field import process_data
+from src.callbacks.vis.visualize_field import visualize_all
+from src.models.field_sf.model import FieldSF
+from src.models.generic.wrapper import GenericWrapper
+
+
+class FieldSFWrapper(GenericWrapper):
+ def __init__(self, args):
+ super().__init__(args)
+ self.model = FieldSF("resnet50", args.focal_length, args.img_res)
+ self.process_fn = process_data
+ self.loss_fn = compute_loss
+ self.metric_dict = ["avg_err_field"]
+
+ self.vis_fns = [visualize_all]
+ self.num_vis_train = 1
+ self.num_vis_val = 1
+
+ def inference(self, inputs, meta_info):
+ return super().inference_field(inputs, meta_info)
diff --git a/src/models/generic/wrapper.py b/src/models/generic/wrapper.py
new file mode 100644
index 0000000..ad1f83e
--- /dev/null
+++ b/src/models/generic/wrapper.py
@@ -0,0 +1,188 @@
+import numpy as np
+import torch
+
+import common.data_utils as data_utils
+import common.ld_utils as ld_utils
+import src.callbacks.process.process_generic as generic
+from common.abstract_pl import AbstractPL
+from common.body_models import MANODecimator, build_mano_aa
+from common.comet_utils import push_images
+from common.rend_utils import Renderer
+from common.xdict import xdict
+from src.utils.eval_modules import eval_fn_dict
+
+
+def mul_loss_dict(loss_dict):
+ for key, val in loss_dict.items():
+ loss, weight = val
+ loss_dict[key] = loss * weight
+ return loss_dict
+
+
+class GenericWrapper(AbstractPL):
+ def __init__(self, args):
+ super().__init__(
+ args,
+ push_images,
+ "loss__val",
+ float("inf"),
+ high_loss_val=float("inf"),
+ )
+ self.args = args
+ self.mano_r = build_mano_aa(is_rhand=True)
+ self.mano_l = build_mano_aa(is_rhand=False)
+ self.add_module("mano_r", self.mano_r)
+ self.add_module("mano_l", self.mano_l)
+ self.renderer = Renderer(img_res=args.img_res)
+ self.object_sampler = np.load(
+ "./data/arctic_data/data/meta/downsamplers.npy", allow_pickle=True
+ ).item()
+
+ def set_flags(self, mode):
+ self.model.mode = mode
+ if mode == "train":
+ self.train()
+ else:
+ self.eval()
+
+ def inference_pose(self, inputs, meta_info):
+ pred = self.model(inputs, meta_info)
+ mydict = xdict()
+ mydict.merge(xdict(inputs).prefix("inputs."))
+ mydict.merge(pred.prefix("pred."))
+ mydict.merge(xdict(meta_info).prefix("meta_info."))
+ mydict = mydict.detach()
+ return mydict
+
+ def inference_field(self, inputs, meta_info):
+ meta_info = xdict(meta_info)
+
+ models = {
+ "mano_r": self.mano_r,
+ "mano_l": self.mano_l,
+ "arti_head": self.model.arti_head,
+ "mesh_sampler": MANODecimator(),
+ "object_sampler": self.object_sampler,
+ }
+
+ batch_size = meta_info["intrinsics"].shape[0]
+
+ (
+ v0_r,
+ v0_l,
+ v0_o,
+ pidx,
+ v0_r_full,
+ v0_l_full,
+ v0_o_full,
+ mask,
+ cams,
+ bottom_anchor,
+ ) = generic.prepare_templates(
+ batch_size,
+ models["mano_r"],
+ models["mano_l"],
+ models["mesh_sampler"],
+ models["arti_head"],
+ meta_info["query_names"],
+ )
+
+ meta_info["v0.r"] = v0_r
+ meta_info["v0.l"] = v0_l
+ meta_info["v0.o"] = v0_o
+
+ pred = self.model(inputs, meta_info)
+ mydict = xdict()
+ mydict.merge(xdict(inputs).prefix("inputs."))
+ mydict.merge(pred.prefix("pred."))
+ mydict.merge(meta_info.prefix("meta_info."))
+ mydict = mydict.detach()
+ return mydict
+
+ def forward(self, inputs, targets, meta_info, mode):
+ models = {
+ "mano_r": self.mano_r,
+ "mano_l": self.mano_l,
+ "arti_head": self.model.arti_head,
+ "mesh_sampler": MANODecimator(),
+ "object_sampler": self.object_sampler,
+ }
+
+ self.set_flags(mode)
+ inputs = xdict(inputs)
+ targets = xdict(targets)
+ meta_info = xdict(meta_info)
+ with torch.no_grad():
+ inputs, targets, meta_info = self.process_fn(
+ models, inputs, targets, meta_info, mode, self.args
+ )
+
+ move_keys = ["object.v_len"]
+ for key in move_keys:
+ meta_info[key] = targets[key]
+ meta_info["mano.faces.r"] = self.mano_r.faces
+ meta_info["mano.faces.l"] = self.mano_l.faces
+ pred = self.model(inputs, meta_info)
+ loss_dict = self.loss_fn(
+ pred=pred, gt=targets, meta_info=meta_info, args=self.args
+ )
+ loss_dict = {k: (loss_dict[k][0].mean(), loss_dict[k][1]) for k in loss_dict}
+ loss_dict = mul_loss_dict(loss_dict)
+ loss_dict["loss"] = sum(loss_dict[k] for k in loss_dict)
+
+ # conversion for vis and eval
+ keys = list(pred.keys())
+ for key in keys:
+ # denormalize 2d keypoints
+ if "2d.norm" in key:
+ denorm_key = key.replace(".norm", "")
+ assert key in targets.keys(), f"Missing key {key} in targets"
+
+ val_pred = pred[key]
+ val_gt = targets[key]
+
+ val_denorm_pred = data_utils.unormalize_kp2d(
+ val_pred, self.args.img_res
+ )
+ val_denorm_gt = data_utils.unormalize_kp2d(val_gt, self.args.img_res)
+
+ pred[denorm_key] = val_denorm_pred
+ targets[denorm_key] = val_denorm_gt
+
+ if mode == "train":
+ return {"out_dict": (inputs, targets, meta_info, pred), "loss": loss_dict}
+
+ if mode == "vis":
+ vis_dict = xdict()
+ vis_dict.merge(inputs.prefix("inputs."))
+ vis_dict.merge(pred.prefix("pred."))
+ vis_dict.merge(targets.prefix("targets."))
+ vis_dict.merge(meta_info.prefix("meta_info."))
+ vis_dict = vis_dict.detach()
+ return vis_dict
+
+ # evaluate metrics
+ metrics_all = self.evaluate_metrics(
+ pred, targets, meta_info, self.metric_dict
+ ).to_torch()
+ out_dict = xdict()
+ out_dict["imgname"] = meta_info["imgname"]
+ out_dict.merge(ld_utils.prefix_dict(metrics_all, "metric."))
+
+ if mode == "extract":
+ mydict = xdict()
+ mydict.merge(inputs.prefix("inputs."))
+ mydict.merge(pred.prefix("pred."))
+ mydict.merge(targets.prefix("targets."))
+ mydict.merge(meta_info.prefix("meta_info."))
+ mydict = mydict.detach()
+ return mydict
+ return out_dict, loss_dict
+
+ def evaluate_metrics(self, pred, targets, meta_info, specs):
+ metric_dict = xdict()
+ for key in specs:
+ metrics = eval_fn_dict[key](pred, targets, meta_info)
+ metric_dict.merge(metrics)
+
+ return metric_dict
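
The loss bookkeeping in `GenericWrapper.forward` is easy to miss: `loss_fn` returns `{name: (per-example loss, weight)}`, which is mean-reduced, scaled by `mul_loss_dict`, and summed into the scalar logged as `loss`. A toy example (the key names here are illustrative, not the repo's actual loss names):

```python
import torch

loss_dict = {
    "loss/pose": (torch.tensor([0.2, 0.4]), 10.0),  # (per-example loss, weight)
    "loss/radian": (torch.tensor([0.1, 0.3]), 1.0),
}
loss_dict = {k: (v[0].mean(), v[1]) for k, v in loss_dict.items()}  # reduce
loss_dict = {k: l * w for k, (l, w) in loss_dict.items()}           # mul_loss_dict
loss_dict["loss"] = sum(loss_dict.values())
print(float(loss_dict["loss"]))  # 3.2 == 0.3 * 10.0 + 0.2 * 1.0
```
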
diff --git a/src/nets/backbone/__init__.py b/src/nets/backbone/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/src/nets/backbone/resnet.py b/src/nets/backbone/resnet.py
new file mode 100644
index 0000000..354d539
--- /dev/null
+++ b/src/nets/backbone/resnet.py
@@ -0,0 +1,418 @@
+import torch.nn as nn
+from torch.hub import load_state_dict_from_url
+
+__all__ = [
+ "ResNet",
+ "resnet18",
+ "resnet34",
+ "resnet50",
+ "resnet101",
+ "resnet152",
+ "resnext50_32x4d",
+ "resnext101_32x8d",
+ "wide_resnet50_2",
+ "wide_resnet101_2",
+]
+
+
+model_urls = {
+ "resnet18": "https://download.pytorch.org/models/resnet18-5c106cde.pth",
+ "resnet34": "https://download.pytorch.org/models/resnet34-333f7ec4.pth",
+ "resnet50": "https://download.pytorch.org/models/resnet50-19c8e357.pth",
+ "resnet101": "https://download.pytorch.org/models/resnet101-5d3b4d8f.pth",
+ "resnet152": "https://download.pytorch.org/models/resnet152-b121ed2d.pth",
+ "resnext50_32x4d": "https://download.pytorch.org/models/resnext50_32x4d-7cdf4587.pth",
+ "resnext101_32x8d": "https://download.pytorch.org/models/resnext101_32x8d-8ba56ff5.pth",
+ "wide_resnet50_2": "https://download.pytorch.org/models/wide_resnet50_2-95faca4d.pth",
+ "wide_resnet101_2": "https://download.pytorch.org/models/wide_resnet101_2-32ee1156.pth",
+}
+
+
+def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1):
+ """3x3 convolution with padding"""
+ return nn.Conv2d(
+ in_planes,
+ out_planes,
+ kernel_size=3,
+ stride=stride,
+ padding=dilation,
+ groups=groups,
+ bias=False,
+ dilation=dilation,
+ )
+
+
+def conv1x1(in_planes, out_planes, stride=1):
+ """1x1 convolution"""
+ return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)
+
+
+class BasicBlock(nn.Module):
+ expansion = 1
+
+ def __init__(
+ self,
+ inplanes,
+ planes,
+ stride=1,
+ downsample=None,
+ groups=1,
+ base_width=64,
+ dilation=1,
+ norm_layer=None,
+ ):
+ super(BasicBlock, self).__init__()
+ if norm_layer is None:
+ norm_layer = nn.BatchNorm2d
+ if groups != 1 or base_width != 64:
+ raise ValueError("BasicBlock only supports groups=1 and base_width=64")
+ if dilation > 1:
+ raise NotImplementedError("Dilation > 1 not supported in BasicBlock")
+ # Both self.conv1 and self.downsample layers downsample the input when stride != 1
+ self.conv1 = conv3x3(inplanes, planes, stride)
+ self.bn1 = norm_layer(planes)
+ self.relu = nn.ReLU(inplace=True)
+ self.conv2 = conv3x3(planes, planes)
+ self.bn2 = norm_layer(planes)
+ self.downsample = downsample
+ self.stride = stride
+
+ def forward(self, x):
+ identity = x
+
+ out = self.conv1(x)
+ out = self.bn1(out)
+ out = self.relu(out)
+
+ out = self.conv2(out)
+ out = self.bn2(out)
+
+ if self.downsample is not None:
+ identity = self.downsample(x)
+
+ out += identity
+ out = self.relu(out)
+
+ return out
+
+
+class Bottleneck(nn.Module):
+ # Bottleneck in torchvision places the stride for downsampling at 3x3 convolution(self.conv2)
+ # while original implementation places the stride at the first 1x1 convolution(self.conv1)
+ # according to "Deep Residual Learning for Image Recognition" (https://arxiv.org/abs/1512.03385).
+ # This variant is also known as ResNet V1.5 and improves accuracy according to
+ # https://ngc.nvidia.com/catalog/model-scripts/nvidia:resnet_50_v1_5_for_pytorch.
+
+ expansion = 4
+
+ def __init__(
+ self,
+ inplanes,
+ planes,
+ stride=1,
+ downsample=None,
+ groups=1,
+ base_width=64,
+ dilation=1,
+ norm_layer=None,
+ ):
+ super(Bottleneck, self).__init__()
+ if norm_layer is None:
+ norm_layer = nn.BatchNorm2d
+ width = int(planes * (base_width / 64.0)) * groups
+ # Both self.conv2 and self.downsample layers downsample the input when stride != 1
+ self.conv1 = conv1x1(inplanes, width)
+ self.bn1 = norm_layer(width)
+ self.conv2 = conv3x3(width, width, stride, groups, dilation)
+ self.bn2 = norm_layer(width)
+ self.conv3 = conv1x1(width, planes * self.expansion)
+ self.bn3 = norm_layer(planes * self.expansion)
+ self.relu = nn.ReLU(inplace=True)
+ self.downsample = downsample
+ self.stride = stride
+
+ def forward(self, x):
+ identity = x
+
+ out = self.conv1(x)
+ out = self.bn1(out)
+ out = self.relu(out)
+
+ out = self.conv2(out)
+ out = self.bn2(out)
+ out = self.relu(out)
+
+ out = self.conv3(out)
+ out = self.bn3(out)
+
+ if self.downsample is not None:
+ identity = self.downsample(x)
+
+ out += identity
+ out = self.relu(out)
+
+ return out
+
+
+class ResNet(nn.Module):
+ def __init__(
+ self,
+ block,
+ layers,
+ num_classes=1000,
+ zero_init_residual=False,
+ groups=1,
+ width_per_group=64,
+ replace_stride_with_dilation=None,
+ norm_layer=None,
+ ):
+ super(ResNet, self).__init__()
+ if norm_layer is None:
+ norm_layer = nn.BatchNorm2d
+ self._norm_layer = norm_layer
+
+ self.inplanes = 64
+ self.dilation = 1
+ if replace_stride_with_dilation is None:
+ # each element in the tuple indicates if we should replace
+ # the 2x2 stride with a dilated convolution instead
+ replace_stride_with_dilation = [False, False, False]
+ if len(replace_stride_with_dilation) != 3:
+ raise ValueError(
+ "replace_stride_with_dilation should be None "
+ "or a 3-element tuple, got {}".format(replace_stride_with_dilation)
+ )
+ self.groups = groups
+ self.base_width = width_per_group
+ self.conv1 = nn.Conv2d(
+ 3, self.inplanes, kernel_size=7, stride=2, padding=3, bias=False
+ )
+ self.bn1 = norm_layer(self.inplanes)
+ self.relu = nn.ReLU(inplace=True)
+ self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
+ self.layer1 = self._make_layer(block, 64, layers[0])
+ self.layer2 = self._make_layer(
+ block, 128, layers[1], stride=2, dilate=replace_stride_with_dilation[0]
+ )
+ self.layer3 = self._make_layer(
+ block, 256, layers[2], stride=2, dilate=replace_stride_with_dilation[1]
+ )
+ self.layer4 = self._make_layer(
+ block, 512, layers[3], stride=2, dilate=replace_stride_with_dilation[2]
+ )
+ # self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
+ # self.fc = nn.Linear(512 * block.expansion, num_classes)
+
+ for m in self.modules():
+ if isinstance(m, nn.Conv2d):
+ nn.init.kaiming_normal_(m.weight, mode="fan_out", nonlinearity="relu")
+ elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
+ nn.init.constant_(m.weight, 1)
+ nn.init.constant_(m.bias, 0)
+
+ # Zero-initialize the last BN in each residual branch,
+ # so that the residual branch starts with zeros, and each residual block behaves like an identity.
+ # This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677
+ if zero_init_residual:
+ for m in self.modules():
+ if isinstance(m, Bottleneck):
+ nn.init.constant_(m.bn3.weight, 0)
+ elif isinstance(m, BasicBlock):
+ nn.init.constant_(m.bn2.weight, 0)
+
+ def _make_layer(self, block, planes, blocks, stride=1, dilate=False):
+ norm_layer = self._norm_layer
+ downsample = None
+ previous_dilation = self.dilation
+ if dilate:
+ self.dilation *= stride
+ stride = 1
+ if stride != 1 or self.inplanes != planes * block.expansion:
+ downsample = nn.Sequential(
+ conv1x1(self.inplanes, planes * block.expansion, stride),
+ norm_layer(planes * block.expansion),
+ )
+
+ layers = []
+ layers.append(
+ block(
+ self.inplanes,
+ planes,
+ stride,
+ downsample,
+ self.groups,
+ self.base_width,
+ previous_dilation,
+ norm_layer,
+ )
+ )
+ self.inplanes = planes * block.expansion
+ for _ in range(1, blocks):
+ layers.append(
+ block(
+ self.inplanes,
+ planes,
+ groups=self.groups,
+ base_width=self.base_width,
+ dilation=self.dilation,
+ norm_layer=norm_layer,
+ )
+ )
+
+ return nn.Sequential(*layers)
+
+ def _forward_impl(self, x):
+ # See note [TorchScript super()]
+ x = self.conv1(x)
+ x = self.bn1(x)
+ x = self.relu(x)
+ x = self.maxpool(x)
+
+ x = self.layer1(x)
+ x = self.layer2(x)
+ x = self.layer3(x)
+ x = self.layer4(x)
+
+ # x = self.avgpool(x)
+ # x = torch.flatten(x, 1)
+ # x = self.fc(x)
+
+ return x
+
+ def forward(self, x):
+ return self._forward_impl(x)
+
+
+def _resnet(arch, block, layers, pretrained, progress, **kwargs):
+ model = ResNet(block, layers, **kwargs)
+ if pretrained:
+ state_dict = load_state_dict_from_url(model_urls[arch], progress=progress)
+ model.load_state_dict(state_dict, strict=False)
+ return model
+
+
+def resnet18(pretrained=False, progress=True, **kwargs):
+ r"""ResNet-18 model from
+ `"Deep Residual Learning for Image Recognition" `_
+
+ Args:
+ pretrained (bool): If True, returns a model pre-trained on ImageNet
+ progress (bool): If True, displays a progress bar of the download to stderr
+ """
+ return _resnet("resnet18", BasicBlock, [2, 2, 2, 2], pretrained, progress, **kwargs)
+
+
+def resnet34(pretrained=False, progress=True, **kwargs):
+ r"""ResNet-34 model from
+ `"Deep Residual Learning for Image Recognition" `_
+
+ Args:
+ pretrained (bool): If True, returns a model pre-trained on ImageNet
+ progress (bool): If True, displays a progress bar of the download to stderr
+ """
+ return _resnet("resnet34", BasicBlock, [3, 4, 6, 3], pretrained, progress, **kwargs)
+
+
+def resnet50(pretrained=False, progress=True, **kwargs):
+ r"""ResNet-50 model from
+ `"Deep Residual Learning for Image Recognition" `_
+
+ Args:
+ pretrained (bool): If True, returns a model pre-trained on ImageNet
+ progress (bool): If True, displays a progress bar of the download to stderr
+ """
+ return _resnet("resnet50", Bottleneck, [3, 4, 6, 3], pretrained, progress, **kwargs)
+
+
+def resnet101(pretrained=False, progress=True, **kwargs):
+ r"""ResNet-101 model from
+ `"Deep Residual Learning for Image Recognition" `_
+
+ Args:
+ pretrained (bool): If True, returns a model pre-trained on ImageNet
+ progress (bool): If True, displays a progress bar of the download to stderr
+ """
+ return _resnet(
+ "resnet101", Bottleneck, [3, 4, 23, 3], pretrained, progress, **kwargs
+ )
+
+
+def resnet152(pretrained=False, progress=True, **kwargs):
+ r"""ResNet-152 model from
+ `"Deep Residual Learning for Image Recognition" `_
+
+ Args:
+ pretrained (bool): If True, returns a model pre-trained on ImageNet
+ progress (bool): If True, displays a progress bar of the download to stderr
+ """
+ return _resnet(
+ "resnet152", Bottleneck, [3, 8, 36, 3], pretrained, progress, **kwargs
+ )
+
+
+def resnext50_32x4d(pretrained=False, progress=True, **kwargs):
+ r"""ResNeXt-50 32x4d model from
+ `"Aggregated Residual Transformation for Deep Neural Networks" `_
+
+ Args:
+ pretrained (bool): If True, returns a model pre-trained on ImageNet
+ progress (bool): If True, displays a progress bar of the download to stderr
+ """
+ kwargs["groups"] = 32
+ kwargs["width_per_group"] = 4
+ return _resnet(
+ "resnext50_32x4d", Bottleneck, [3, 4, 6, 3], pretrained, progress, **kwargs
+ )
+
+
+def resnext101_32x8d(pretrained=False, progress=True, **kwargs):
+ r"""ResNeXt-101 32x8d model from
+ `"Aggregated Residual Transformation for Deep Neural Networks" `_
+
+ Args:
+ pretrained (bool): If True, returns a model pre-trained on ImageNet
+ progress (bool): If True, displays a progress bar of the download to stderr
+ """
+ kwargs["groups"] = 32
+ kwargs["width_per_group"] = 8
+ return _resnet(
+ "resnext101_32x8d", Bottleneck, [3, 4, 23, 3], pretrained, progress, **kwargs
+ )
+
+
+def wide_resnet50_2(pretrained=False, progress=True, **kwargs):
+ r"""Wide ResNet-50-2 model from
+ `"Wide Residual Networks" `_
+
+ The model is the same as ResNet except for the bottleneck number of channels
+ which is twice larger in every block. The number of channels in outer 1x1
+ convolutions is the same, e.g. last block in ResNet-50 has 2048-512-2048
+ channels, and in Wide ResNet-50-2 has 2048-1024-2048.
+
+ Args:
+ pretrained (bool): If True, returns a model pre-trained on ImageNet
+ progress (bool): If True, displays a progress bar of the download to stderr
+ """
+ kwargs["width_per_group"] = 64 * 2
+ return _resnet(
+ "wide_resnet50_2", Bottleneck, [3, 4, 6, 3], pretrained, progress, **kwargs
+ )
+
+
+def wide_resnet101_2(pretrained=False, progress=True, **kwargs):
+ r"""Wide ResNet-101-2 model from
+ `"Wide Residual Networks" `_
+
+ The model is the same as ResNet except for the bottleneck number of channels
+ which is twice larger in every block. The number of channels in outer 1x1
+ convolutions is the same, e.g. last block in ResNet-50 has 2048-512-2048
+ channels, and in Wide ResNet-50-2 has 2048-1024-2048.
+
+ Args:
+ pretrained (bool): If True, returns a model pre-trained on ImageNet
+ progress (bool): If True, displays a progress bar of the download to stderr
+ """
+ kwargs["width_per_group"] = 64 * 2
+ return _resnet(
+ "wide_resnet101_2", Bottleneck, [3, 4, 23, 3], pretrained, progress, **kwargs
+ )
diff --git a/src/nets/backbone/utils.py b/src/nets/backbone/utils.py
new file mode 100644
index 0000000..b5779cd
--- /dev/null
+++ b/src/nets/backbone/utils.py
@@ -0,0 +1,20 @@
+def get_backbone_info(backbone):
+ info = {
+ "resnet18": {"n_output_channels": 512, "downsample_rate": 4},
+ "resnet34": {"n_output_channels": 512, "downsample_rate": 4},
+ "resnet50": {"n_output_channels": 2048, "downsample_rate": 4},
+ "resnet50_adf_dropout": {"n_output_channels": 2048, "downsample_rate": 4},
+ "resnet50_dropout": {"n_output_channels": 2048, "downsample_rate": 4},
+ "resnet101": {"n_output_channels": 2048, "downsample_rate": 4},
+ "resnet152": {"n_output_channels": 2048, "downsample_rate": 4},
+ "resnext50_32x4d": {"n_output_channels": 2048, "downsample_rate": 4},
+ "resnext101_32x8d": {"n_output_channels": 2048, "downsample_rate": 4},
+ "wide_resnet50_2": {"n_output_channels": 2048, "downsample_rate": 4},
+ "wide_resnet101_2": {"n_output_channels": 2048, "downsample_rate": 4},
+ "mobilenet_v2": {"n_output_channels": 1280, "downsample_rate": 4},
+ "hrnet_w32": {"n_output_channels": 480, "downsample_rate": 4},
+ "hrnet_w48": {"n_output_channels": 720, "downsample_rate": 4},
+ # 'hrnet_w64': {'n_output_channels': 2048, 'downsample_rate': 4},
+ "dla34": {"n_output_channels": 512, "downsample_rate": 4},
+ }
+ return info[backbone]
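
With the classification head commented out above, the ResNet returns the layer4 feature map rather than logits. A quick shape check tying it to `get_backbone_info` (run with `pretrained=False` to avoid the weight download):

```python
import torch

from src.nets.backbone.resnet import resnet50
from src.nets.backbone.utils import get_backbone_info

backbone = resnet50(pretrained=False)  # pretrained=True pulls ImageNet weights
feat = backbone(torch.randn(2, 3, 224, 224))
print(feat.shape)  # torch.Size([2, 2048, 7, 7]): layer4 map for a 224x224 crop
print(get_backbone_info("resnet50")["n_output_channels"])  # 2048
```
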
diff --git a/src/nets/hand_heads/hand_hmr.py b/src/nets/hand_heads/hand_hmr.py
new file mode 100644
index 0000000..ddd6877
--- /dev/null
+++ b/src/nets/hand_heads/hand_hmr.py
@@ -0,0 +1,68 @@
+import pytorch3d.transforms.rotation_conversions as rot_conv
+import torch
+import torch.nn as nn
+
+from common.xdict import xdict
+from src.nets.hmr_layer import HMRLayer
+
+
+class HandHMR(nn.Module):
+ def __init__(self, feat_dim, is_rhand, n_iter):
+ super().__init__()
+ self.is_rhand = is_rhand
+
+ hand_specs = {"pose_6d": 6 * 16, "cam_t/wp": 3, "shape": 10}
+ self.hmr_layer = HMRLayer(feat_dim, 1024, hand_specs)
+
+ self.cam_init = nn.Sequential(
+ nn.Linear(feat_dim, 512),
+ nn.ReLU(),
+ nn.Linear(512, 512),
+ nn.ReLU(),
+ nn.Linear(512, 3),
+ )
+
+ self.hand_specs = hand_specs
+ self.n_iter = n_iter
+ self.avgpool = nn.AdaptiveAvgPool2d(1)
+
+ def init_vector_dict(self, features):
+ batch_size = features.shape[0]
+ dev = features.device
+ init_pose = (
+ rot_conv.matrix_to_rotation_6d(
+ rot_conv.axis_angle_to_matrix(torch.zeros(16, 3))
+ )
+ .reshape(1, -1)
+ .repeat(batch_size, 1)
+ )
+ init_shape = torch.zeros(1, 10).repeat(batch_size, 1)
+ init_transl = self.cam_init(features)
+
+ out = {}
+ out["pose_6d"] = init_pose
+ out["shape"] = init_shape
+ out["cam_t/wp"] = init_transl
+ out = xdict(out).to(dev)
+ return out
+
+ def forward(self, features, use_pool=True):
+ batch_size = features.shape[0]
+ if use_pool:
+ feat = self.avgpool(features)
+ feat = feat.view(feat.size(0), -1)
+ else:
+ feat = features
+
+ init_vdict = self.init_vector_dict(feat)
+ init_cam_t = init_vdict["cam_t/wp"].clone()
+ pred_vdict = self.hmr_layer(feat, init_vdict, self.n_iter)
+
+ pred_rotmat = rot_conv.rotation_6d_to_matrix(
+ pred_vdict["pose_6d"].reshape(-1, 6)
+ ).view(batch_size, 16, 3, 3)
+
+ pred_vdict["pose"] = pred_rotmat
+ pred_vdict["cam_t.wp.init"] = init_cam_t
+ pred_vdict = pred_vdict.replace_keys("/", ".")
+ return pred_vdict
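
The pose initialization in `init_vector_dict` encodes the identity rotation in the 6D representation (the first two rows of the rotation matrix), tiled over the 16 MANO joints. A small sanity check, assuming pytorch3d is installed:

```python
import torch
import pytorch3d.transforms.rotation_conversions as rot_conv

aa = torch.zeros(16, 3)  # zero axis-angle == identity rotation per joint
pose_6d = rot_conv.matrix_to_rotation_6d(rot_conv.axis_angle_to_matrix(aa))
print(pose_6d[0])                    # tensor([1., 0., 0., 0., 1., 0.])
print(pose_6d.reshape(1, -1).shape)  # torch.Size([1, 96]) == 6 * 16
```
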
diff --git a/src/nets/hand_heads/mano_head.py b/src/nets/hand_heads/mano_head.py
new file mode 100644
index 0000000..305a6cf
--- /dev/null
+++ b/src/nets/hand_heads/mano_head.py
@@ -0,0 +1,62 @@
+import torch.nn as nn
+
+import common.camera as camera
+import common.data_utils as data_utils
+import common.rot as rot
+import common.transforms as tf
+from common.body_models import build_mano_aa
+from common.xdict import xdict
+
+
+class MANOHead(nn.Module):
+ def __init__(self, is_rhand, focal_length, img_res):
+ super(MANOHead, self).__init__()
+ self.mano = build_mano_aa(is_rhand)
+ self.add_module("mano", self.mano)
+ self.focal_length = focal_length
+ self.img_res = img_res
+ self.is_rhand = is_rhand
+
+ def forward(self, rotmat, shape, cam, K):
+ """
+ :param rotmat: rotation in euler angles format (N,J,3,3)
+ :param shape: smpl betas
+ :param cam: weak perspective camera
+ :param normalize_joints2d: bool, normalize joints between -1, 1 if true
+ :return: dict with keys 'vertices', 'joints3d', 'joints2d' if cam is True
+ """
+
+ rotmat_original = rotmat.clone()
+ rotmat = rot.matrix_to_axis_angle(rotmat.reshape(-1, 3, 3)).reshape(-1, 48)
+
+ mano_output = self.mano(
+ betas=shape,
+ hand_pose=rotmat[:, 3:],
+ global_orient=rotmat[:, :3],
+ )
+ output = xdict()
+
+ avg_focal_length = (K[:, 0, 0] + K[:, 1, 1]) / 2.0
+ cam_t = camera.weak_perspective_to_perspective_torch(
+ cam, focal_length=avg_focal_length, img_res=self.img_res, min_s=0.1
+ )
+
+ joints3d_cam = mano_output.joints + cam_t[:, None, :]
+ v3d_cam = mano_output.vertices + cam_t[:, None, :]
+
+ joints2d = tf.project2d_batch(K, joints3d_cam)
+ joints2d = data_utils.normalize_kp2d(joints2d, self.img_res)
+
+ output["cam_t.wp"] = cam
+ output["cam_t"] = cam_t
+ output["joints3d"] = mano_output.joints
+ output["vertices"] = mano_output.vertices
+ output["j3d.cam"] = joints3d_cam
+ output["v3d.cam"] = v3d_cam
+ output["j2d.norm"] = joints2d
+ output["beta"] = shape
+ output["pose"] = rotmat_original
+
+ postfix = ".r" if self.is_rhand else ".l"
+ output_pad = output.postfix(postfix)
+ return output_pad
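
`camera.weak_perspective_to_perspective_torch` is not shown in this diff; the sketch below assumes the common convention `cam = (s, tx, ty)` with depth recovered as `tz = 2f / (img_res * s)`, which is consistent with the `min_s` clamp argument above. Treat it as an illustration of the conversion, not the repo's exact implementation.

```python
import torch

def wp_to_perspective(cam, focal_length, img_res, min_s=0.1):
    # cam: (B, 3) weak-perspective parameters (scale, tx, ty)
    s = cam[:, 0].clamp(min=min_s)           # clamp scale for stability
    tz = 2.0 * focal_length / (img_res * s)  # recover depth from scale
    return torch.stack([cam[:, 1], cam[:, 2], tz], dim=-1)

cam = torch.tensor([[0.5, 0.1, -0.2]])
print(wp_to_perspective(cam, focal_length=torch.tensor([1000.0]), img_res=224))
# tensor([[ 0.1000, -0.2000, 17.8571]])
```
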
diff --git a/src/nets/hmr_layer.py b/src/nets/hmr_layer.py
new file mode 100644
index 0000000..0c41fe9
--- /dev/null
+++ b/src/nets/hmr_layer.py
@@ -0,0 +1,49 @@
+import torch
+import torch.nn as nn
+
+
+class HMRLayer(nn.Module):
+ def __init__(self, feat_dim, mid_dim, specs_dict):
+ super().__init__()
+
+ self.feat_dim = feat_dim
+ self.avgpool = nn.AdaptiveAvgPool2d(1)
+ self.specs_dict = specs_dict
+
+ vector_dim = sum(list(zip(*specs_dict.items()))[1])
+ hmr_dim = feat_dim + vector_dim
+
+ # construct refine
+ self.refine = nn.Sequential(
+ nn.Linear(hmr_dim, mid_dim),
+ nn.ReLU(),
+ nn.Dropout(),
+ nn.Linear(mid_dim, mid_dim),
+ nn.ReLU(),
+ nn.Dropout(),
+ )
+
+ # construct decoders
+ decoders = {}
+ for key, vec_size in specs_dict.items():
+ decoders[key] = nn.Linear(mid_dim, vec_size)
+ self.decoders = nn.ModuleDict(decoders)
+
+ self.init_weights()
+
+ def init_weights(self):
+ for key, decoder in self.decoders.items():
+ nn.init.xavier_uniform_(decoder.weight, gain=0.01)
+ self.decoders[key] = decoder
+
+ def forward(self, feat, init_vector_dict, n_iter):
+ pred_vector_dict = init_vector_dict
+ for i in range(n_iter):
+ vectors = list(zip(*pred_vector_dict.items()))[1]
+ xc = torch.cat([feat] + list(vectors), dim=1)
+ xc = self.refine(xc)
+ for key, decoder in self.decoders.items():
+ pred_vector_dict.overwrite(key, decoder(xc) + pred_vector_dict[key])
+
+ pred_vector_dict.has_invalid()
+ return pred_vector_dict
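
A plain-dict sketch of the iterative refinement that `HMRLayer.forward` implements: at each step the current parameter vectors are concatenated to the image feature, passed through the shared MLP, and each decoder predicts an additive residual (the `xdict.overwrite` calls above are assumed to be simple key updates).

```python
import torch
import torch.nn as nn

feat_dim, mid_dim = 32, 64
specs = {"rot": 3, "radian": 1}  # parameter vectors add 4 input dims total
refine = nn.Sequential(nn.Linear(feat_dim + 4, mid_dim), nn.ReLU())
decoders = {k: nn.Linear(mid_dim, d) for k, d in specs.items()}

feat = torch.randn(2, feat_dim)
params = {"rot": torch.zeros(2, 3), "radian": torch.zeros(2, 1)}
for _ in range(3):  # n_iter refinement steps
    xc = refine(torch.cat([feat] + list(params.values()), dim=1))
    params = {k: params[k] + decoders[k](xc) for k in params}  # residual update
print({k: tuple(v.shape) for k, v in params.items()})
# {'rot': (2, 3), 'radian': (2, 1)}
```
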
diff --git a/src/nets/obj_heads/obj_head.py b/src/nets/obj_heads/obj_head.py
new file mode 100644
index 0000000..8d08365
--- /dev/null
+++ b/src/nets/obj_heads/obj_head.py
@@ -0,0 +1,77 @@
+import torch.nn as nn
+
+import common.camera as camera
+import common.data_utils as data_utils
+import common.transforms as tf
+from common.object_tensors import ObjectTensors
+from common.xdict import xdict
+
+
+class ArtiHead(nn.Module):
+ def __init__(self, focal_length, img_res):
+ super().__init__()
+ self.object_tensors = ObjectTensors()
+ self.focal_length = focal_length
+ self.img_res = img_res
+
+ def forward(
+ self,
+ rot,
+ angle,
+ query_names,
+ cam,
+ K,
+ transl=None,
+ ):
+ if self.object_tensors.dev != rot.device:
+ self.object_tensors.to(rot.device)
+
+ out = self.object_tensors.forward(angle.view(-1, 1), rot, transl, query_names)
+
+ # after adding relative transl
+ bbox3d = out["bbox3d"]
+ kp3d = out["kp3d"]
+
+ # convert weak-perspective camera to perspective translation
+ avg_focal_length = (K[:, 0, 0] + K[:, 1, 1]) / 2.0
+ cam_t = camera.weak_perspective_to_perspective_torch(
+ cam, focal_length=avg_focal_length, img_res=self.img_res, min_s=0.1
+ )
+
+ # camera coord
+ bbox3d_cam = bbox3d + cam_t[:, None, :]
+ kp3d_cam = kp3d + cam_t[:, None, :]
+
+ # 2d keypoints
+ kp2d = tf.project2d_batch(K, kp3d_cam)
+ bbox2d = tf.project2d_batch(K, bbox3d_cam)
+
+ kp2d = data_utils.normalize_kp2d(kp2d, self.img_res)
+ bbox2d = data_utils.normalize_kp2d(bbox2d, self.img_res)
+ num_kps = kp2d.shape[1] // 2
+
+ output = xdict()
+ output["rot"] = rot
+ if transl is not None:
+ # relative transl
+ output["transl"] = transl # mete
+
+ output["cam_t.wp"] = cam
+ output["cam_t"] = cam_t
+ output["kp3d"] = kp3d
+ output["bbox3d"] = bbox3d
+ output["bbox3d.cam"] = bbox3d_cam
+ output["kp3d.cam"] = kp3d_cam
+ output["kp2d.norm"] = kp2d
+ output["kp2d.norm.t"] = kp2d[:, :num_kps]
+ output["kp2d.norm.b"] = kp2d[:, num_kps:]
+ output["bbox2d.norm.t"] = bbox2d[:, :8]
+ output["bbox2d.norm.b"] = bbox2d[:, 8:]
+ output["radian"] = angle
+
+ output["v.cam"] = out["v"] + cam_t[:, None, :]
+ output["v_len"] = out["v_len"]
+ output["f"] = out["f"]
+ output["f_len"] = out["f_len"]
+
+ return output
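
`tf.project2d_batch` is defined elsewhere in the repo; the sketch below assumes standard pinhole projection (apply `K`, then divide by depth), which matches how `kp3d_cam` and `bbox3d_cam` are consumed above.

```python
import torch

def project2d_batch(K, pts_cam):
    proj = torch.einsum("bij,bnj->bni", K, pts_cam)  # (B, N, 3)
    return proj[..., :2] / proj[..., 2:3]            # perspective division

K = torch.tensor([[[1000.0, 0.0, 112.0],
                   [0.0, 1000.0, 112.0],
                   [0.0, 0.0, 1.0]]])
pts = torch.tensor([[[0.05, -0.02, 0.5]]])  # one 3D point, 0.5 m deep
print(project2d_batch(K, pts))              # tensor([[[212., 72.]]])
```
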
diff --git a/src/nets/obj_heads/obj_hmr.py b/src/nets/obj_heads/obj_hmr.py
new file mode 100644
index 0000000..1d3d2ac
--- /dev/null
+++ b/src/nets/obj_heads/obj_hmr.py
@@ -0,0 +1,53 @@
+import torch
+import torch.nn as nn
+
+from common.xdict import xdict
+from src.nets.hmr_layer import HMRLayer
+
+
+class ObjectHMR(nn.Module):
+ def __init__(self, feat_dim, n_iter):
+ super().__init__()
+
+ obj_specs = {"rot": 3, "cam_t/wp": 3, "radian": 1}
+ self.hmr_layer = HMRLayer(feat_dim, 1024, obj_specs)
+
+ self.cam_init = nn.Sequential(
+ nn.Linear(feat_dim, 512),
+ nn.ReLU(),
+ nn.Linear(512, 512),
+ nn.ReLU(),
+ nn.Linear(512, 3),
+ )
+
+ self.obj_specs = obj_specs
+ self.n_iter = n_iter
+ self.avgpool = nn.AdaptiveAvgPool2d(1)
+
+ def init_vector_dict(self, features):
+ batch_size = features.shape[0]
+ dev = features.device
+ init_rot = torch.zeros(batch_size, 3)
+ init_angle = torch.zeros(batch_size, 1)
+ init_transl = self.cam_init(features)
+
+ out = {}
+ out["rot"] = init_rot
+ out["radian"] = init_angle
+ out["cam_t/wp"] = init_transl
+ out = xdict(out).to(dev)
+ return out
+
+ def forward(self, features, use_pool=True):
+ if use_pool:
+ feat = self.avgpool(features)
+ feat = feat.view(feat.size(0), -1)
+ else:
+ feat = features
+
+ init_vdict = self.init_vector_dict(feat)
+ init_cam_t = init_vdict["cam_t/wp"].clone()
+ pred_vdict = self.hmr_layer(feat, init_vdict, self.n_iter)
+ pred_vdict["cam_t.wp.init"] = init_cam_t
+ pred_vdict = pred_vdict.replace_keys("/", ".")
+ return pred_vdict
diff --git a/src/nets/pointnet.py b/src/nets/pointnet.py
new file mode 100644
index 0000000..802f921
--- /dev/null
+++ b/src/nets/pointnet.py
@@ -0,0 +1,137 @@
+from __future__ import print_function
+
+import numpy as np
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+import torch.nn.parallel
+import torch.utils.data
+from torch.autograd import Variable
+
+"""
+Source: https://github.com/fxia22/pointnet.pytorch/blob/f0c2430b0b1529e3f76fb5d6cd6ca14be763d975/pointnet/model.py
+"""
+
+
+class STN3d(nn.Module):
+ def __init__(self):
+ super(STN3d, self).__init__()
+ self.conv1 = torch.nn.Conv1d(3, 64, 1)
+ self.conv2 = torch.nn.Conv1d(64, 128, 1)
+ self.conv3 = torch.nn.Conv1d(128, 1024, 1)
+ self.fc1 = nn.Linear(1024, 512)
+ self.fc2 = nn.Linear(512, 256)
+ self.fc3 = nn.Linear(256, 9)
+ self.relu = nn.ReLU()
+
+ self.bn1 = nn.BatchNorm1d(64)
+ self.bn2 = nn.BatchNorm1d(128)
+ self.bn3 = nn.BatchNorm1d(1024)
+ self.bn4 = nn.BatchNorm1d(512)
+ self.bn5 = nn.BatchNorm1d(256)
+
+ def forward(self, x):
+ batchsize = x.size()[0]
+ x = F.relu(self.bn1(self.conv1(x)))
+ x = F.relu(self.bn2(self.conv2(x)))
+ x = F.relu(self.bn3(self.conv3(x)))
+ x = torch.max(x, 2, keepdim=True)[0]
+ x = x.view(-1, 1024)
+
+ x = F.relu(self.bn4(self.fc1(x)))
+ x = F.relu(self.bn5(self.fc2(x)))
+ x = self.fc3(x)
+
+ iden = (
+ Variable(
+ torch.from_numpy(
+ np.array([1, 0, 0, 0, 1, 0, 0, 0, 1]).astype(np.float32)
+ )
+ )
+ .view(1, 9)
+ .repeat(batchsize, 1)
+ )
+ if x.is_cuda:
+ iden = iden.cuda()
+ x = x + iden
+ x = x.view(-1, 3, 3)
+ return x
+
+
+class STNkd(nn.Module):
+ def __init__(self, k=64):
+ super(STNkd, self).__init__()
+ self.conv1 = torch.nn.Conv1d(k, 64, 1)
+ self.conv2 = torch.nn.Conv1d(64, 128, 1)
+ self.conv3 = torch.nn.Conv1d(128, 1024, 1)
+ self.fc1 = nn.Linear(1024, 512)
+ self.fc2 = nn.Linear(512, 256)
+ self.fc3 = nn.Linear(256, k * k)
+ self.relu = nn.ReLU()
+
+ self.bn1 = nn.BatchNorm1d(64)
+ self.bn2 = nn.BatchNorm1d(128)
+ self.bn3 = nn.BatchNorm1d(1024)
+ self.bn4 = nn.BatchNorm1d(512)
+ self.bn5 = nn.BatchNorm1d(256)
+
+ self.k = k
+
+ def forward(self, x):
+ batchsize = x.size()[0]
+ x = F.relu(self.bn1(self.conv1(x)))
+ x = F.relu(self.bn2(self.conv2(x)))
+ x = F.relu(self.bn3(self.conv3(x)))
+ x = torch.max(x, 2, keepdim=True)[0]
+ x = x.view(-1, 1024)
+
+ x = F.relu(self.bn4(self.fc1(x)))
+ x = F.relu(self.bn5(self.fc2(x)))
+ x = self.fc3(x)
+
+ iden = (
+ Variable(torch.from_numpy(np.eye(self.k).flatten().astype(np.float32)))
+ .view(1, self.k * self.k)
+ .repeat(batchsize, 1)
+ )
+ if x.is_cuda:
+ iden = iden.cuda()
+ x = x + iden
+ x = x.view(-1, self.k, self.k)
+ return x
+
+
+class PointNetfeat(nn.Module):
+ def __init__(self, input_dim, shallow_dim, mid_dim, out_dim, global_feat=False):
+ super(PointNetfeat, self).__init__()
+ self.shallow_layer = nn.Sequential(
+ nn.Conv1d(input_dim, shallow_dim, 1), nn.BatchNorm1d(shallow_dim)
+ )
+
+ self.base_layer = nn.Sequential(
+ nn.Conv1d(shallow_dim, mid_dim, 1),
+ nn.BatchNorm1d(mid_dim),
+ nn.ReLU(),
+ nn.Conv1d(mid_dim, out_dim, 1),
+ nn.BatchNorm1d(out_dim),
+ )
+
+ self.global_feat = global_feat
+ self.out_dim = out_dim
+
+ def forward(self, x):
+ n_pts = x.size()[2]
+ x = self.shallow_layer(x)
+ pointfeat = x
+
+ x = self.base_layer(x)
+ x = torch.max(x, 2, keepdim=True)[0]
+ x = x.view(-1, self.out_dim)
+
+ trans_feat = None
+ trans = None
+ if self.global_feat:
+ return x, trans, trans_feat
+ else:
+ x = x.view(-1, self.out_dim, 1).repeat(1, 1, n_pts)
+ return torch.cat([x, pointfeat], 1), trans, trans_feat
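
As configured in the field models (`global_feat=False`), `PointNetfeat` returns per-point features concatenated with the broadcast global max-pooled feature, so the channel dimension is `shallow_dim + out_dim`:

```python
import torch

from src.nets.pointnet import PointNetfeat

net = PointNetfeat(input_dim=3 + 512, shallow_dim=512, mid_dim=512, out_dim=512)
pts = torch.randn(2, 3 + 512, 100)  # (batch, xyz + image feature, points)
feat, trans, trans_feat = net(pts)
print(feat.shape)  # torch.Size([2, 1024, 100]): 512 shallow + 512 global
```
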
diff --git a/src/parsers/configs/arctic_lstm.py b/src/parsers/configs/arctic_lstm.py
new file mode 100644
index 0000000..cf6bb64
--- /dev/null
+++ b/src/parsers/configs/arctic_lstm.py
@@ -0,0 +1,11 @@
+from src.parsers.configs.generic import DEFAULT_ARGS_ALLO, DEFAULT_ARGS_EGO
+
+DEFAULT_ARGS_EGO["batch_size"] = 64
+DEFAULT_ARGS_EGO["test_batch_size"] = 64
+DEFAULT_ARGS_EGO["img_feat_version"] = "28bf3642f"
+DEFAULT_ARGS_EGO["num_epoch"] = 80
+
+DEFAULT_ARGS_ALLO["batch_size"] = 64
+DEFAULT_ARGS_ALLO["test_batch_size"] = 64
+DEFAULT_ARGS_ALLO["img_feat_version"] = "3558f1342"
+DEFAULT_ARGS_ALLO["num_epoch"] = 10
diff --git a/src/parsers/configs/arctic_sf.py b/src/parsers/configs/arctic_sf.py
new file mode 100644
index 0000000..9c570c6
--- /dev/null
+++ b/src/parsers/configs/arctic_sf.py
@@ -0,0 +1,4 @@
+from src.parsers.configs.generic import DEFAULT_ARGS_ALLO, DEFAULT_ARGS_EGO
+
+DEFAULT_ARGS_EGO["img_feat_version"] = "" # should use ArcticDataset
+DEFAULT_ARGS_ALLO["img_feat_version"] = "" # should use ArcticDataset
diff --git a/src/parsers/configs/field_lstm.py b/src/parsers/configs/field_lstm.py
new file mode 100644
index 0000000..3a05b7b
--- /dev/null
+++ b/src/parsers/configs/field_lstm.py
@@ -0,0 +1,11 @@
+from src.parsers.configs.generic import DEFAULT_ARGS_ALLO, DEFAULT_ARGS_EGO
+
+DEFAULT_ARGS_EGO["batch_size"] = 32
+DEFAULT_ARGS_EGO["test_batch_size"] = 32
+DEFAULT_ARGS_EGO["img_feat_version"] = "58e200d16"
+DEFAULT_ARGS_EGO["num_epoch"] = 50
+
+DEFAULT_ARGS_ALLO["batch_size"] = 32
+DEFAULT_ARGS_ALLO["test_batch_size"] = 32
+DEFAULT_ARGS_ALLO["img_feat_version"] = "1f9ac0b15"
+DEFAULT_ARGS_ALLO["num_epoch"] = 6
diff --git a/src/parsers/configs/field_sf.py b/src/parsers/configs/field_sf.py
new file mode 100644
index 0000000..9c570c6
--- /dev/null
+++ b/src/parsers/configs/field_sf.py
@@ -0,0 +1,4 @@
+from src.parsers.configs.generic import DEFAULT_ARGS_ALLO, DEFAULT_ARGS_EGO
+
+DEFAULT_ARGS_EGO["img_feat_version"] = "" # should use ArcticDataset
+DEFAULT_ARGS_ALLO["img_feat_version"] = "" # should use ArcticDataset
diff --git a/src/parsers/configs/generic.py b/src/parsers/configs/generic.py
new file mode 100644
index 0000000..a7cef31
--- /dev/null
+++ b/src/parsers/configs/generic.py
@@ -0,0 +1,69 @@
+DEFAULT_ARGS_EGO = {
+ "run_on": "",
+ "trainsplit": "train",
+ "valsplit": "tinyval",
+ "setup": "p2a",
+ "method": "arctic",
+ "log_every": 50,
+ "eval_every_epoch": 5,
+ "lr_dec_epoch": [],
+ "num_epoch": 100,
+ "lr": 1e-5,
+ "lr_dec_factor": 10,
+ "lr_decay": 0.1,
+ "num_exp": 1,
+ "exp_key": "",
+ "batch_size": 64,
+ "test_batch_size": 128,
+ "temp_loader": False,
+ "window_size": 11,
+ "num_workers": 16,
+ "img_feat_version": "",
+ "eval_on": "",
+ "acc_grad": 1,
+ "load_from": "",
+ "load_ckpt": "",
+ "infer_ckpt": "",
+ "resume_ckpt": "",
+ "gpu_ids": [0],
+ "agent_id": 0,
+ "cluster_node": "",
+ "bid": 21,
+ "gpu_arch": "ampere",
+ "gpu_min_mem": 20000,
+ "extraction_mode": "",
+}
+DEFAULT_ARGS_ALLO = {
+ "run_on": "",
+ "trainsplit": "train",
+ "valsplit": "tinyval",
+ "setup": "p1a",
+ "method": "arctic",
+ "log_every": 50,
+ "eval_every_epoch": 1,
+ "lr_dec_epoch": [],
+ "num_epoch": 20,
+ "lr": 1e-5,
+ "lr_dec_factor": 10,
+ "lr_decay": 0.1,
+ "num_exp": 1,
+ "exp_key": "",
+ "batch_size": 64,
+ "test_batch_size": 128,
+ "window_size": 11,
+ "num_workers": 16,
+ "img_feat_version": "",
+ "eval_on": "",
+ "acc_grad": 1,
+ "load_from": "",
+ "load_ckpt": "",
+ "infer_ckpt": "",
+ "resume_ckpt": "",
+ "gpu_ids": [0],
+ "agent_id": 0,
+ "cluster_node": "",
+ "bid": 21,
+ "gpu_arch": "ampere",
+ "gpu_min_mem": 20000,
+ "extraction_mode": "",
+}
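
`DEFAULT_ARGS_EGO` and `DEFAULT_ARGS_ALLO` differ only in `setup`, `eval_every_epoch`, `num_epoch`, and the ego-only `temp_loader` flag. If the duplication ever becomes a maintenance burden, one dict can be derived from the other (a sketch, not a required refactor):

```python
DEFAULT_ARGS_ALLO = {
    **DEFAULT_ARGS_EGO,
    "setup": "p1a",
    "eval_every_epoch": 1,
    "num_epoch": 20,
}
DEFAULT_ARGS_ALLO.pop("temp_loader")  # only the ego defaults carry this flag
```
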
diff --git a/src/parsers/generic_parser.py b/src/parsers/generic_parser.py
new file mode 100644
index 0000000..ca9571f
--- /dev/null
+++ b/src/parsers/generic_parser.py
@@ -0,0 +1,87 @@
+def add_generic_args(parser):
+ """
+ Generic options that are non-specific to a project.
+ """
+ parser.add_argument("--agent_id", type=int, default=None)
+ parser.add_argument(
+ "--load_from", type=str, default=None, help="Load weights from InterHand format"
+ )
+ parser.add_argument(
+ "--load_ckpt", type=str, default=None, help="Load checkpoints from PL format"
+ )
+ parser.add_argument(
+ "--infer_ckpt", type=str, default=None, help="This is for the interface"
+ )
+ parser.add_argument(
+ "--resume_ckpt",
+ type=str,
+ default=None,
+ help="Resume training from checkpoint and keep logging in the same comet exp",
+ )
+ parser.add_argument(
+ "-f",
+ "--fast",
+ dest="fast_dev_run",
+ help="single batch for development",
+ action="store_true",
+ )
+ parser.add_argument(
+ "--trainsplit",
+ type=str,
+ default=None,
+ choices=[None, "train", "smalltrain", "minitrain", "tinytrain"],
+ help="Amount to subsample training set.",
+ )
+ parser.add_argument(
+ "--valsplit",
+ type=str,
+ default=None,
+ choices=[None, "val", "smallval", "tinyval", "minival"],
+ help="Amount to subsample validation set.",
+ )
+ parser.add_argument(
+ "--run_on",
+ type=str,
+ default=None,
+ help="split for extraction",
+ )
+ parser.add_argument("--setup", type=str, default=None)
+
+ parser.add_argument("--log_every", type=int, default=None, help="log every k steps")
+ parser.add_argument(
+ "--eval_every_epoch", type=int, default=None, help="Eval every k epochs"
+ )
+ parser.add_argument(
+ "--lr_dec_epoch",
+ type=int,
+ nargs="+",
+ default=None,
+ help="Learning rate decay epoch.",
+ )
+ parser.add_argument("--num_epoch", type=int, default=None)
+ parser.add_argument("--lr", type=float, default=None)
+ parser.add_argument(
+ "--lr_dec_factor", type=int, default=None, help="Learning rate decay factor"
+ )
+ parser.add_argument(
+ "--lr_decay", type=float, default=None, help="Learning rate decay factor"
+ )
+ parser.add_argument("--num_exp", type=int, default=None)
+ parser.add_argument("--acc_grad", type=int, default=None)
+ parser.add_argument("--batch_size", type=int, default=None)
+ parser.add_argument("--test_batch_size", type=int, default=None)
+ parser.add_argument("--num_workers", type=int, default=None)
+ parser.add_argument(
+ "--eval_on",
+ type=str,
+ default=None,
+ choices=[None, "val", "test", "minival", "minitest"],
+ help="Test mode set to eval on",
+ )
+
+ parser.add_argument("--mute", help="No logging", action="store_true")
+ parser.add_argument("--no_vis", help="Stop visualization", action="store_true")
+ parser.add_argument("--cluster", action="store_true")
+ parser.add_argument("--cluster_node", type=str, default=None)
+ parser.add_argument("--bid", type=int, default=None, help="log every k steps")
+ return parser
diff --git a/src/parsers/parser.py b/src/parsers/parser.py
new file mode 100644
index 0000000..92f4a8a
--- /dev/null
+++ b/src/parsers/parser.py
@@ -0,0 +1,73 @@
+import argparse
+
+from easydict import EasyDict
+
+from common.args_utils import set_default_params
+from src.parsers.generic_parser import add_generic_args
+
+
+def construct_args():
+ parser = argparse.ArgumentParser()
+ parser.add_argument(
+ "--method",
+ type=str,
+ default=None,
+ choices=[None, "arctic_sf", "arctic_lstm", "field_sf", "field_lstm"],
+ )
+ parser.add_argument("--exp_key", type=str, default=None)
+ parser.add_argument("--extraction_mode", type=str, default=None)
+ parser.add_argument("--img_feat_version", type=str, default=None)
+ parser.add_argument("--window_size", type=int, default=None)
+ parser.add_argument("--eval", action="store_true")
+ parser = add_generic_args(parser)
+ args = EasyDict(vars(parser.parse_args()))
+
+ if args.method in ["arctic_sf"]:
+ import src.parsers.configs.arctic_sf as config
+ elif args.method in ["arctic_lstm"]:
+ import src.parsers.configs.arctic_lstm as config
+ elif args.method in ["field_sf"]:
+ import src.parsers.configs.field_sf as config
+ elif args.method in ["field_lstm"]:
+ import src.parsers.configs.field_lstm as config
+ else:
+ assert False, f"Unknown method: {args.method}"
+
+ default_args = (
+ config.DEFAULT_ARGS_EGO if args.setup in ["p2"] else config.DEFAULT_ARGS_ALLO
+ )
+ args = set_default_params(args, default_args)
+
+ args.focal_length = 1000.0
+ args.img_res = 224
+ args.rot_factor = 30.0
+ args.noise_factor = 0.4
+ args.scale_factor = 0.25
+ args.flip_prob = 0.0
+ args.img_norm_mean = [0.485, 0.456, 0.406]
+ args.img_norm_std = [0.229, 0.224, 0.225]
+ args.pin_memory = True
+ args.shuffle_train = True
+ args.seed = 1
+ args.grad_clip = 150.0
+ args.use_gt_k = False # use weak perspective camera or the actual intrinsics
+ args.speedup = True # load cropped images for faster training
+ # args.speedup = False # uncomment this to load full images instead
+ args.max_dist = 0.10 # distance range the model predicts on
+ args.ego_image_scale = 0.3
+
+ if args.method in ["field_sf", "field_lstm"]:
+ args.project = "interfield"
+ else:
+ args.project = "arctic"
+ args.interface_p = None
+
+ if args.fast_dev_run:
+ args.num_workers = 0
+ args.batch_size = 8
+ args.trainsplit = "minitrain"
+ args.valsplit = "minival"
+ args.log_every = 5
+ args.window_size = 3
+
+ return args
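
`set_default_params` is imported from `common.args_utils`, which is outside this diff; the sketch below shows its assumed contract: CLI flags left at `None` fall back to the method/setup-specific defaults, while explicitly passed flags win.

```python
def set_default_params(args, defaults):
    # Assumed behavior: fill in None entries from the defaults dict.
    for key, val in defaults.items():
        if args.get(key) is None:
            args[key] = val
    return args

args = {"lr": None, "batch_size": 16}
print(set_default_params(args, {"lr": 1e-5, "batch_size": 64}))
# {'lr': 1e-05, 'batch_size': 16}  (the explicit batch_size is kept)
```
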
diff --git a/src/utils/const.py b/src/utils/const.py
new file mode 100644
index 0000000..7668361
--- /dev/null
+++ b/src/utils/const.py
@@ -0,0 +1,6 @@
+import common.comet_utils as comet_utils
+from src.parsers.parser import construct_args
+
+args = construct_args()
+experiment, args = comet_utils.init_experiment(args)
+comet_utils.save_args(args, save_keys=["comet_key"])
diff --git a/src/utils/eval_modules.py b/src/utils/eval_modules.py
new file mode 100644
index 0000000..44fa8fe
--- /dev/null
+++ b/src/utils/eval_modules.py
@@ -0,0 +1,464 @@
+import copy
+import warnings
+
+import numpy as np
+import torch
+
+import common.metrics as metrics
+from common.torch_utils import unpad_vtensor
+
+warnings.filterwarnings("ignore")
+
+import common.torch_utils as torch_utils
+from common.xdict import xdict
+from src.utils.loss_modules import contact_deviation
+from src.utils.mdev import eval_motion_deviation
+
+
+def compute_avg_err(gt_dist, pred_dist, is_valid):
+ assert len(gt_dist) == len(pred_dist)
+ diff_list = []
+ for gt, pred, valid in zip(gt_dist, pred_dist, is_valid):
+ if valid:
+ diff = torch.abs(gt - pred).mean()
+ else:
+ diff = torch.tensor(float("nan"))
+ diff_list.append(diff)
+ diff_list = torch.stack(diff_list).view(-1)
+ assert len(diff_list) == len(gt_dist)
+ return diff_list
+
+
+def eval_field_errors(_pred, _targets, _meta_info):
+ pred = copy.deepcopy(_pred).to("cpu")
+ targets = copy.deepcopy(_targets).to("cpu")
+ meta_info = copy.deepcopy(_meta_info).to("cpu")
+
+ targets.overwrite(
+ "dist.or", unpad_vtensor(targets["dist.or"], meta_info["object.v_len"])
+ )
+ targets.overwrite(
+ "dist.ol", unpad_vtensor(targets["dist.ol"], meta_info["object.v_len"])
+ )
+ pred.overwrite("dist.or", unpad_vtensor(pred["dist.or"], meta_info["object.v_len"]))
+ pred.overwrite("dist.ol", unpad_vtensor(pred["dist.ol"], meta_info["object.v_len"]))
+
+ keys = ["dist.ro", "dist.lo", "dist.or", "dist.ol"]
+ is_valid = _targets["is_valid"].bool().tolist()
+
+ # hand validity is not used here: if a hand is out of frame, the model should still predict a larger distance
+ metric_dict = xdict(
+ {
+ key.replace("dist.", "avg/"): compute_avg_err(
+ targets[key], pred[key], is_valid
+ )
+ for key in keys
+ }
+ )
+
+ avg_ho_all = torch.stack((metric_dict["avg/ro"], metric_dict["avg/lo"]), dim=1)
+ avg_oh_all = torch.stack((metric_dict["avg/or"], metric_dict["avg/ol"]), dim=1)
+
+ avg_ho_all = torch_utils.nanmean(avg_ho_all, dim=1)
+ avg_oh_all = torch_utils.nanmean(avg_oh_all, dim=1)
+
+ metric_dict["avg/ho"] = avg_ho_all
+ metric_dict["avg/oh"] = avg_oh_all
+ metric_dict.pop("avg/ro", None)
+ metric_dict.pop("avg/lo", None)
+ metric_dict.pop("avg/or", None)
+ metric_dict.pop("avg/ol", None)
+ metric_dict = metric_dict.mul(1000.0).to_np()
+ return metric_dict
+
+
+def eval_degree(pred, targets, meta_info):
+ is_valid = targets["is_valid"]
+
+ # only evaluate on sequences with articulation
+ invalid_idx = (1.0 - is_valid).long().nonzero().view(-1).cpu()
+
+ pred_radian = pred["object.radian"].view(-1) # radian
+ gt_radian = targets["object.radian"].view(-1) # radian
+ arti_err = metrics.compute_arti_deg_error(pred_radian, gt_radian)
+
+ # flag down sequences without articulation
+ arti_err[invalid_idx] = float("nan")
+
+ metric_dict = {}
+ metric_dict["aae"] = arti_err
+ return metric_dict
+
+
+def eval_mpjpe_ra(pred, targets, meta_info):
+ joints3d_cam_r_gt = targets["mano.j3d.cam.r"]
+ joints3d_cam_l_gt = targets["mano.j3d.cam.l"]
+ joints3d_cam_r_pred = pred["mano.j3d.cam.r"]
+ joints3d_cam_l_pred = pred["mano.j3d.cam.l"]
+ is_valid = targets["is_valid"]
+ left_valid = targets["left_valid"] * is_valid
+ right_valid = targets["right_valid"] * is_valid
+ num_examples = len(joints3d_cam_r_gt)
+
+ joints3d_cam_r_gt_ra = joints3d_cam_r_gt - joints3d_cam_r_gt[:, :1, :]
+ joints3d_cam_l_gt_ra = joints3d_cam_l_gt - joints3d_cam_l_gt[:, :1, :]
+ joints3d_cam_r_pred_ra = joints3d_cam_r_pred - joints3d_cam_r_pred[:, :1, :]
+ joints3d_cam_l_pred_ra = joints3d_cam_l_pred - joints3d_cam_l_pred[:, :1, :]
+ mpjpe_ra_r = metrics.compute_joint3d_error(
+ joints3d_cam_r_gt_ra, joints3d_cam_r_pred_ra, right_valid
+ )
+ mpjpe_ra_l = metrics.compute_joint3d_error(
+ joints3d_cam_l_gt_ra, joints3d_cam_l_pred_ra, left_valid
+ )
+
+ mpjpe_ra_r = mpjpe_ra_r.mean(axis=1)
+ mpjpe_ra_l = mpjpe_ra_l.mean(axis=1)
+
+ # average over the two hands
+ mpjpe_ra_h = torch.FloatTensor(np.stack((mpjpe_ra_r, mpjpe_ra_l), axis=1))
+ mpjpe_ra_h = torch_utils.nanmean(mpjpe_ra_h, dim=1)
+
+ metric_dict = xdict()
+ # metric_dict["mpjpe/ra/r"] = mpjpe_ra_r
+ # metric_dict["mpjpe/ra/l"] = mpjpe_ra_l
+ metric_dict["mpjpe/ra/h"] = mpjpe_ra_h
+ metric_dict = metric_dict.mul(1000.0).to_np()
+
+ # assert len(metric_dict["mpjpe/ra/r"]) == num_examples
+ # assert len(metric_dict["mpjpe/ra/l"]) == num_examples
+ assert len(metric_dict["mpjpe/ra/h"]) == num_examples
+ return metric_dict
+
+
+def eval_mrrpe(pred, targets, meta_info):
+ joints3d_cam_r_gt = targets["mano.j3d.cam.r"]
+ joints3d_cam_l_gt = targets["mano.j3d.cam.l"]
+ joints3d_cam_r_pred = pred["mano.j3d.cam.r"]
+ joints3d_cam_l_pred = pred["mano.j3d.cam.l"]
+ v3d_cam_gt = unpad_vtensor(targets["object.v.cam"], targets["object.v_len"])
+ v3d_cam_pred = unpad_vtensor(pred["object.v.cam"], targets["object.v_len"])
+
+ bottom_idx = meta_info["part_ids"] == 2
+ bottom_idx = [bidx.nonzero().view(-1) for bidx in bottom_idx]
+ v3d_root_gt = [
+ v3d_gt[bidx].mean(dim=0) for v3d_gt, bidx in zip(v3d_cam_gt, bottom_idx)
+ ]
+ v3d_root_pred = [
+ v3d_pred[bidx].mean(dim=0) for v3d_pred, bidx in zip(v3d_cam_pred, bottom_idx)
+ ]
+
+ is_valid = targets["is_valid"]
+ left_valid = targets["left_valid"] * is_valid
+ right_valid = targets["right_valid"] * is_valid
+
+ root_r_gt = joints3d_cam_r_gt[:, 0]
+ root_l_gt = joints3d_cam_l_gt[:, 0]
+ root_r_pred = joints3d_cam_r_pred[:, 0]
+ root_l_pred = joints3d_cam_l_pred[:, 0]
+ v3d_root_gt = torch.stack(v3d_root_gt, dim=0)
+ v3d_root_pred = torch.stack(v3d_root_pred, dim=0)
+
+ mrrpe_rl = metrics.compute_mrrpe(
+ root_r_gt, root_l_gt, root_r_pred, root_l_pred, left_valid * right_valid
+ )
+ mrrpe_ro = metrics.compute_mrrpe(
+ root_r_gt, v3d_root_gt, root_r_pred, v3d_root_pred, right_valid * is_valid
+ )
+ metric_dict = xdict()
+ metric_dict["mrrpe/r/l"] = mrrpe_rl
+ metric_dict["mrrpe/r/o"] = mrrpe_ro
+ metric_dict = metric_dict.mul(1000.0).to_np()
+ return metric_dict
+
+
+def eval_v2v_success(pred, targets, meta_info):
+ is_valid = targets["is_valid"]
+
+ v3d_cam_gt = unpad_vtensor(targets["object.v.cam"], targets["object.v_len"])
+ v3d_cam_pred = unpad_vtensor(pred["object.v.cam"], targets["object.v_len"])
+
+ bottom_idx = meta_info["part_ids"] == 2
+ bottom_idx = [bidx.nonzero().view(-1) for bidx in bottom_idx]
+ v3d_root_gt = [
+ v3d_gt[bidx].mean(dim=0) for v3d_gt, bidx in zip(v3d_cam_gt, bottom_idx)
+ ]
+ v3d_root_pred = [
+ v3d_pred[bidx].mean(dim=0) for v3d_pred, bidx in zip(v3d_cam_pred, bottom_idx)
+ ]
+
+ v3d_cam_gt_ra = [
+ v3d_gt - root[None, :] for v3d_gt, root in zip(v3d_cam_gt, v3d_root_gt)
+ ]
+
+ v3d_cam_pred_ra = [
+ v3d_pred - root[None, :] for v3d_pred, root in zip(v3d_cam_pred, v3d_root_pred)
+ ]
+
+ v2v_ra = metrics.compute_v2v_dist_no_reduce(
+ v3d_cam_gt_ra, v3d_cam_pred_ra, is_valid
+ )
+
+ diameters = meta_info["diameter"].cpu().numpy()
+
+    alphas = [0.05]  # 5% of object diameter; 0.03 and 0.1 are alternatives
+ metric_dict = xdict()
+ for alpha in alphas:
+ v2v_rate_ra_list = []
+ for _v2v_ra, _diameter, _is_valid in zip(v2v_ra, diameters, is_valid):
+ if bool(_is_valid):
+ v2v_rate_ra = (_v2v_ra < _diameter * alpha).astype(np.float32)
+ success = v2v_rate_ra.sum()
+ v2v_rate_ra = success / v2v_rate_ra.shape[0]
+ v2v_rate_ra_list.append(v2v_rate_ra)
+ else:
+ v2v_rate_ra_list.append(float("nan"))
+ # percentage
+ metric_dict[f"success_rate/{alpha:.2f}"] = np.array(v2v_rate_ra_list)
+ metric_dict = metric_dict.mul(100.0).to_np()
+ return metric_dict
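+
+# Worked example for the success metric (a sketch): with alpha = 0.05 and an
+# object of diameter 0.2m, a vertex counts as "successful" if its root-aligned
+# error is below 0.01m; each frame's score is the fraction of successful
+# vertices, scaled to a percentage by mul(100.0) above.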
+
+
+def eval_contact_deviation(pred, targets, meta_info):
+ cd_ro = contact_deviation(
+ pred["object.v.cam"],
+ pred["mano.v3d.cam.r"],
+ targets["dist.ro"],
+ targets["idx.ro"],
+ targets["is_valid"],
+ targets["right_valid"],
+ )
+
+ cd_lo = contact_deviation(
+ pred["object.v.cam"],
+ pred["mano.v3d.cam.l"],
+ targets["dist.lo"],
+ targets["idx.lo"],
+ targets["is_valid"],
+ targets["left_valid"],
+ )
+ cd_ho = torch.stack((cd_ro, cd_lo), dim=1)
+ cd_ho = torch_utils.nanmean(cd_ho, dim=1)
+
+ metric_dict = xdict()
+ # metric_dict["cdev/ro"] = cd_ro
+ # metric_dict["cdev/lo"] = cd_lo
+ metric_dict["cdev/ho"] = cd_ho
+ metric_dict = metric_dict.mul(1000) # mm
+ return metric_dict
+
+
+def compute_error_accel(joints_gt, joints_pred, fps=30.0):
+ """
+ Computes acceleration error:
+ First apply a center difference filter [1, -2, 1] along the seq
+ Then divided by the stencil with h^2 where h =1/fps (second)
+ Note that for each frame that is not visible, three entries in the
+ acceleration error should be zero'd out.
+ Args:
+ joints_gt (Nx14x3).
+ joints_pred (Nx14x3).
+ vis (N).
+ Returns:
+ error_accel (N-2).
+
+ Modified from: https://github.com/mkocabas/VIBE/blob/master/lib/utils/eval_utils.py#L22
+ Note: VIBE does not divide by the stencil h^2, so their results are not in mm instead of m/s^2
+ """
+
+ h = 1 / fps # stencil width
+
+ # (N-2)x14x3
+ # m/s^2
+ accel_gt = (joints_gt[:-2] - 2 * joints_gt[1:-1] + joints_gt[2:]) / (h**2)
+ accel_pred = (joints_pred[:-2] - 2 * joints_pred[1:-1] + joints_pred[2:]) / (h**2)
+ normed = torch.norm(accel_pred - accel_gt, dim=2)
+ acc_err = torch.mean(normed, dim=1)
+ return acc_err
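+
+# Worked example (a sketch, assuming fps=30): under constant acceleration
+# a = 9.8 m/s^2 along x, positions p_t = 0.5 * a * (t*h)^2 have a central
+# difference of exactly a * h^2, so dividing by h^2 recovers a:
+#
+#   h = 1.0 / 30.0
+#   t = torch.arange(5, dtype=torch.float32) * h
+#   gt = torch.zeros(5, 14, 3)
+#   gt[:, :, 0] = 0.5 * 9.8 * t**2
+#   pred = torch.zeros_like(gt)
+#   compute_error_accel(gt, pred)  # shape (3,), each entry ~9.8 m/s^2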
+
+
+def eval_acc_pose(pred, targets, meta_info):
+ gt_vo = targets["object.v.cam"]
+ gt_vr = targets["mano.v3d.cam.r"]
+ gt_vl = targets["mano.v3d.cam.l"]
+
+ pred_vo = pred["object.v.cam"]
+ pred_vr = pred["mano.v3d.cam.r"]
+ pred_vl = pred["mano.v3d.cam.l"]
+
+ num_frames = gt_vo.shape[0]
+
+ # hand roots
+ pred_root_r = pred["mano.j3d.cam.r"][:, :1]
+ pred_root_l = pred["mano.j3d.cam.l"][:, :1]
+ gt_root_r = targets["mano.j3d.cam.r"][:, :1]
+ gt_root_l = targets["mano.j3d.cam.l"][:, :1]
+
+ # object roots
+ parts_ids = targets["object.parts_ids"]
+ bottom_idx = parts_ids[0] == 2
+ gt_root_o = gt_vo[:, bottom_idx].mean(dim=1)[:, None, :]
+ pred_root_o = pred_vo[:, bottom_idx].mean(dim=1)[:, None, :]
+
+ # root relative (num_frames, num_verts, 3)
+ gt_vr_ra = gt_vr - gt_root_r
+ gt_vl_ra = gt_vl - gt_root_l
+ gt_vo_ra = gt_vo - gt_root_o
+
+ # root relative (num_frames, num_verts, 3)
+ pred_vr_ra = pred_vr - pred_root_r
+ pred_vl_ra = pred_vl - pred_root_l
+ pred_vo_ra = pred_vo - pred_root_o
+
+ # m/s^2
+ acc_r = compute_error_accel(gt_vr_ra, pred_vr_ra)
+ acc_l = compute_error_accel(gt_vl_ra, pred_vl_ra)
+ acc_o = compute_error_accel(gt_vo_ra, pred_vo_ra)
+
+ is_valid = targets["is_valid"]
+ left_valid = targets["left_valid"] * is_valid
+ right_valid = targets["right_valid"] * is_valid
+
+ is_valid = is_valid.cpu().numpy()
+ left_valid = left_valid.cpu().numpy()
+ right_valid = right_valid.cpu().numpy()
+
+ # acc of time step t is valid if {t-1, t, t+1} are valid
+ acc_valid_r = (
+ np.convolve(right_valid, np.ones(3), mode="valid").astype(np.int64) == 3
+ )
+ acc_valid_l = (
+ np.convolve(left_valid, np.ones(3), mode="valid").astype(np.int64) == 3
+ )
+ acc_valid_o = np.convolve(is_valid, np.ones(3), mode="valid").astype(np.int64) == 3
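+    # e.g. right_valid = [1, 1, 1, 0, 1] -> window sums [3, 2, 2], so only the
+    # first acceleration entry (frames {0, 1, 2}) is valid; this matches the
+    # 3-frame stencil used in compute_error_accel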
+
+ # set invalid acc to nan
+ acc_r[~acc_valid_r] = float("nan")
+ acc_l[~acc_valid_l] = float("nan")
+ acc_o[~acc_valid_o] = float("nan")
+
+ # average by hands
+ acc_h = torch.stack((acc_r, acc_l), dim=1)
+ acc_h = torch_utils.nanmean(acc_h, dim=1)
+
+ # pad nan to start and end of tensor
+ acc_r = torch.cat(
+ (torch.tensor([float("nan")]), acc_r, torch.tensor([float("nan")]))
+ )
+ acc_l = torch.cat(
+ (torch.tensor([float("nan")]), acc_l, torch.tensor([float("nan")]))
+ )
+    acc_h = torch.cat(
+        (torch.tensor([float("nan")]), acc_h, torch.tensor([float("nan")]))
+    )
+    acc_o = torch.cat(
+        (torch.tensor([float("nan")]), acc_o, torch.tensor([float("nan")]))
+    )
+
+ metric_dict = xdict()
+ # metric_dict["acc/r"] = acc_r
+ # metric_dict["acc/l"] = acc_l
+ metric_dict["acc/h"] = acc_h
+ metric_dict["acc/o"] = acc_o
+ metric_dict = metric_dict.to_np() # m/s^2
+
+ # assert metric_dict["acc/r"].shape[0] == num_frames
+ # assert metric_dict["acc/l"].shape[0] == num_frames
+ assert metric_dict["acc/h"].shape[0] == num_frames
+ return metric_dict
+
+
+def eval_acc_field(pred, targets, meta_info):
+ is_valid = targets["is_valid"]
+ right_valid = targets["right_valid"] * is_valid
+ left_valid = targets["left_valid"] * is_valid
+ num_frames = is_valid.shape[0]
+
+ targets_dist_lo = targets["dist.lo"][:, :, None].clone()
+ targets_dist_ro = targets["dist.ro"][:, :, None].clone()
+ targets_dist_ol = targets["dist.ol"][:, :, None].clone()
+ targets_dist_or = targets["dist.or"][:, :, None].clone()
+ num_verts = targets_dist_ol.shape[1]
+ assert targets_dist_or.shape[1] == num_verts
+
+ pred_dist_lo = pred["dist.lo"][:, :, None].clone()
+ pred_dist_ro = pred["dist.ro"][:, :, None].clone()
+ pred_dist_ol = pred["dist.ol"][:, :num_verts, None].clone()
+ pred_dist_or = pred["dist.or"][:, :num_verts, None].clone()
+
+ acc_lo = compute_error_accel(targets_dist_lo, pred_dist_lo)
+ acc_ro = compute_error_accel(targets_dist_ro, pred_dist_ro)
+ acc_ol = compute_error_accel(targets_dist_ol, pred_dist_ol)
+ acc_or = compute_error_accel(targets_dist_or, pred_dist_or)
+
+ is_valid = is_valid.cpu().numpy()
+ left_valid = left_valid.cpu().numpy()
+ right_valid = right_valid.cpu().numpy()
+
+ # acc is valid if {t-1, t, t+1} are valid for numerical differentiation
+ acc_valid_r = (
+ np.convolve(right_valid, np.ones(3), mode="valid").astype(np.int64) == 3
+ )
+ acc_valid_l = (
+ np.convolve(left_valid, np.ones(3), mode="valid").astype(np.int64) == 3
+ )
+ acc_valid_o = np.convolve(is_valid, np.ones(3), mode="valid").astype(np.int64) == 3
+
+ acc_ro[~acc_valid_r] = float("nan")
+ acc_lo[~acc_valid_l] = float("nan")
+ acc_or[~acc_valid_o] = float("nan")
+ acc_ol[~acc_valid_o] = float("nan")
+
+ acc_ho = torch.stack((acc_ro, acc_lo), dim=1)
+ acc_oh = torch.stack((acc_or, acc_ol), dim=1)
+ acc_ho = torch_utils.nanmean(acc_ho, dim=1)
+ acc_oh = torch_utils.nanmean(acc_oh, dim=1)
+
+ # pad nan
+ acc_ro = torch.cat(
+ (torch.tensor([float("nan")]), acc_ro, torch.tensor([float("nan")]))
+ )
+ acc_lo = torch.cat(
+ (torch.tensor([float("nan")]), acc_lo, torch.tensor([float("nan")]))
+ )
+ acc_or = torch.cat(
+ (torch.tensor([float("nan")]), acc_or, torch.tensor([float("nan")]))
+ )
+ acc_ol = torch.cat(
+ (torch.tensor([float("nan")]), acc_ol, torch.tensor([float("nan")]))
+ )
+ acc_oh = torch.cat(
+ (torch.tensor([float("nan")]), acc_oh, torch.tensor([float("nan")]))
+ )
+ acc_ho = torch.cat(
+ (torch.tensor([float("nan")]), acc_ho, torch.tensor([float("nan")]))
+ )
+
+ metric_dict = xdict()
+ # metric_dict["acc/ro"] = acc_ro
+ # metric_dict["acc/lo"] = acc_lo
+ # metric_dict["acc/or"] = acc_or
+ # metric_dict["acc/ol"] = acc_ol
+ metric_dict["acc/oh"] = acc_oh
+ metric_dict["acc/ho"] = acc_ho
+
+ # assert metric_dict["acc/ro"].shape[0] == num_frames
+ # assert metric_dict["acc/lo"].shape[0] == num_frames
+ # assert metric_dict["acc/or"].shape[0] == num_frames
+ # assert metric_dict["acc/ol"].shape[0] == num_frames
+ assert metric_dict["acc/oh"].shape[0] == num_frames
+ assert metric_dict["acc/ho"].shape[0] == num_frames
+ return metric_dict
+
+
+eval_fn_dict = {
+ "aae": eval_degree,
+ "mpjpe.ra": eval_mpjpe_ra,
+ "mrrpe": eval_mrrpe,
+ "success_rate": eval_v2v_success,
+ "avg_err_field": eval_field_errors,
+ "cdev": eval_contact_deviation,
+ "mdev": eval_motion_deviation,
+ "acc_err_pose": eval_acc_pose,
+ "acc_err_field": eval_acc_field,
+}
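+
+# Dispatch sketch (hypothetical driver; run_eval and metric_names are
+# assumptions, not part of this file): each configured metric name maps to one
+# eval function that returns per-frame arrays keyed by metric name.
+#
+#   def run_eval(metric_names, pred, targets, meta_info):
+#       out = xdict()
+#       for name in metric_names:
+#           out.update(eval_fn_dict[name](pred, targets, meta_info))
+#       return out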
diff --git a/src/utils/interfield.py b/src/utils/interfield.py
new file mode 100644
index 0000000..3ceff7c
--- /dev/null
+++ b/src/utils/interfield.py
@@ -0,0 +1,27 @@
+import torch
+from pytorch3d.ops import knn_points
+
+
+def compute_dist_mano_to_obj(batch_mano_v, batch_v, batch_v_len, dist_min, dist_max):
+ knn_dists, knn_idx, _ = knn_points(
+ batch_mano_v, batch_v, None, batch_v_len, K=1, return_nn=True
+ )
+ knn_dists = knn_dists.sqrt()[:, :, 0]
+
+ knn_dists = torch.clamp(knn_dists, dist_min, dist_max)
+ return knn_dists, knn_idx[:, :, 0]
+
+
+def compute_dist_obj_to_mano(batch_mano_v, batch_v, batch_v_len, dist_min, dist_max):
+ knn_dists, knn_idx, _ = knn_points(
+ batch_v, batch_mano_v, batch_v_len, None, K=1, return_nn=True
+ )
+
+ knn_dists = knn_dists.sqrt()
+ knn_dists = torch.clamp(knn_dists, dist_min, dist_max)
+ return knn_dists[:, :, 0], knn_idx[:, :, 0]
+
+
+def dist2contact(dist, contact_bnd):
+ contact = (dist < contact_bnd).long()
+ return contact
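+
+# Usage sketch (assumed shapes, not part of the module): distances are in
+# meters; batch_v may be zero-padded, with batch_v_len giving the true vertex
+# count per example so knn_points can ignore the padding.
+#
+#   mano_v = torch.rand(2, 778, 3)      # (B, 778, 3) MANO vertices
+#   obj_v = torch.rand(2, 4000, 3)      # (B, V_max, 3) padded object vertices
+#   v_len = torch.tensor([4000, 3200])  # (B,) valid counts per example
+#   dist, idx = compute_dist_mano_to_obj(mano_v, obj_v, v_len, 0.0, 0.1)
+#   contact = dist2contact(dist, 3e-3)  # 1 where a hand vertex is within 3mm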
diff --git a/src/utils/loss_modules.py b/src/utils/loss_modules.py
new file mode 100644
index 0000000..75ea041
--- /dev/null
+++ b/src/utils/loss_modules.py
@@ -0,0 +1,118 @@
+import torch
+import torch.nn as nn
+
+import common.torch_utils as torch_utils
+from common.torch_utils import nanmean
+
+l1_loss = nn.L1Loss(reduction="none")
+mse_loss = nn.MSELoss(reduction="none")
+
+
+def subtract_root_batch(joints: torch.Tensor, root_idx: int):
+ assert len(joints.shape) == 3
+ assert joints.shape[2] == 3
+ joints_ra = joints.clone()
+ root = joints_ra[:, root_idx : root_idx + 1].clone()
+ joints_ra = joints_ra - root
+ return joints_ra
+
+
+def compute_contact_devi_loss(pred, targets):
+ cd_ro = contact_deviation(
+ pred["object.v.cam"],
+ pred["mano.v3d.cam.r"],
+ targets["dist.ro"],
+ targets["idx.ro"],
+ targets["is_valid"],
+ targets["right_valid"],
+ )
+
+ cd_lo = contact_deviation(
+ pred["object.v.cam"],
+ pred["mano.v3d.cam.l"],
+ targets["dist.lo"],
+ targets["idx.lo"],
+ targets["is_valid"],
+ targets["left_valid"],
+ )
+ cd_ro = nanmean(cd_ro)
+ cd_lo = nanmean(cd_lo)
+ cd_ro = torch.nan_to_num(cd_ro)
+ cd_lo = torch.nan_to_num(cd_lo)
+ return cd_ro, cd_lo
+
+
+def contact_deviation(pred_v3d_o, pred_v3d_r, dist_ro, idx_ro, is_valid, _right_valid):
+ right_valid = _right_valid.clone() * is_valid
+ contact_dist = 3 * 1e-3 # 3mm considered in contact
+ vo_r_corres = torch.gather(pred_v3d_o, 1, idx_ro[:, :, None].repeat(1, 1, 3))
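+    # gather picks, per batch, the object vertex pre-matched to each hand
+    # vertex: vo_r_corres[b, v] = pred_v3d_o[b, idx_ro[b, v]]; the repeat
+    # broadcasts the indices over the xyz channels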
+
+ # displacement vector H->O
+ disp_ro = vo_r_corres - pred_v3d_r # batch, num_v, 3
+ invalid_ridx = (1 - right_valid).nonzero()[:, 0]
+ disp_ro[invalid_ridx] = float("nan")
+ disp_ro[dist_ro > contact_dist] = float("nan")
+ cd = (disp_ro**2).sum(dim=2).sqrt()
+    err_ro = torch_utils.nanmean(cd, axis=1)  # in meters
+ return err_ro
+
+
+def keypoint_3d_loss(pred_keypoints_3d, gt_keypoints_3d, criterion, jts_valid):
+ """
+ Compute 3D keypoint loss for the examples that 3D keypoint annotations are available.
+ The loss is weighted by the confidence.
+ """
+
+ gt_root = gt_keypoints_3d[:, :1, :]
+ gt_keypoints_3d = gt_keypoints_3d - gt_root
+ pred_root = pred_keypoints_3d[:, :1, :]
+ pred_keypoints_3d = pred_keypoints_3d - pred_root
+
+ return joints_loss(pred_keypoints_3d, gt_keypoints_3d, criterion, jts_valid)
+
+
+def object_kp3d_loss(pred_3d, gt_3d, criterion, is_valid):
+ num_kps = pred_3d.shape[1] // 2
+ pred_3d_ra = subtract_root_batch(pred_3d, root_idx=num_kps)
+ gt_3d_ra = subtract_root_batch(gt_3d, root_idx=num_kps)
+ loss_kp = vector_loss(
+ pred_3d_ra,
+ gt_3d_ra,
+ criterion=criterion,
+ is_valid=is_valid,
+ )
+ return loss_kp
+
+
+def hand_kp3d_loss(pred_3d, gt_3d, criterion, jts_valid):
+ pred_3d_ra = subtract_root_batch(pred_3d, root_idx=0)
+ gt_3d_ra = subtract_root_batch(gt_3d, root_idx=0)
+ loss_kp = keypoint_3d_loss(
+ pred_3d_ra, gt_3d_ra, criterion=criterion, jts_valid=jts_valid
+ )
+ return loss_kp
+
+
+def vector_loss(pred_vector, gt_vector, criterion, is_valid=None):
+    dist = criterion(pred_vector, gt_vector)
+    if is_valid is not None:
+        # no valid examples in this batch: return a zero loss
+        if is_valid.sum() == 0:
+            return torch.zeros(1).to(gt_vector.device)
+        valid_idx = is_valid.long().bool()
+        dist = dist[valid_idx]
+    loss = dist.mean().view(-1)
+    return loss
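+
+# e.g. with l1_loss above (reduction="none") and is_valid = [1, 0], only
+# example 0 contributes: vector_loss(pred, gt, l1_loss, valid) equals
+# l1_loss(pred[:1], gt[:1]).mean().view(-1)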
+
+
+def joints_loss(pred_vector, gt_vector, criterion, jts_valid):
+ dist = criterion(pred_vector, gt_vector)
+ if jts_valid is not None:
+ dist = dist * jts_valid[:, :, None]
+ loss = dist.mean().view(-1)
+ return loss
+
+
+def mano_loss(pred_rotmat, pred_betas, gt_rotmat, gt_betas, criterion, is_valid=None):
+ loss_regr_pose = vector_loss(pred_rotmat, gt_rotmat, criterion, is_valid)
+ loss_regr_betas = vector_loss(pred_betas, gt_betas, criterion, is_valid)
+ return loss_regr_pose, loss_regr_betas
diff --git a/src/utils/mdev.py b/src/utils/mdev.py
new file mode 100644
index 0000000..6359609
--- /dev/null
+++ b/src/utils/mdev.py
@@ -0,0 +1,192 @@
+import numpy as np
+import torch
+
+import common.torch_utils as torch_utils
+from common.xdict import xdict
+
+
+def find_windows(dist, dist_idx, vo, contact_thres, window_thres):
+ # find windows with at least `window_thres` frames in continuous contact
+ # dist: closest distance of each MANO vertex to object for every frame, (num_frames, 778)
+ # dist_idx: closest object vertex id of each MANO vertex to object for every frame, (num_frames, 778)
+ # vo: object vertices in a static frame, (num_obj_verts, 3)
+ # contact_thres: threshold for contact
+ # window_thres: threshold for window length
+
+    # return: a list of windows [m, n, i, j] (the wrapper stacks them into a (num_windows, 4) array)
+ # m: start frame
+ # n: end frame
+ # i: hand vertex id
+ # j: object vertex id
+
+    assert isinstance(dist, torch.Tensor)
+    assert isinstance(dist_idx, torch.Tensor)
+ num_frames, num_verts = dist.shape
+ contacts = (dist < contact_thres).bool()
+
+    # find MANO vertices that are in contact for at least `window_thres` frames
+    # (not necessarily consecutive at this point); this reduces the number of
+    # MANO vertices to search
+ verts_ids = (contacts.long().sum(dim=0) >= window_thres).nonzero().view(-1).tolist()
+ windows = []
+ # search for each potential MANO vertex id
+ for vidx in verts_ids:
+ window_s = None
+ window_e = None
+ prev_in_contact = False
+ # loop along time dimension
+ for fidx in range(num_frames):
+ if not prev_in_contact and contacts[fidx, vidx]:
+ # if prev not in contact, and current in contact
+ # this indicates the start of a window
+ window_s = fidx # start of window
+ prev_in_contact = True
+ elif prev_in_contact and contacts[fidx, vidx]:
+ # if prev in contact, and current in contact
+ # inside contact window
+ continue
+ elif not prev_in_contact and not contacts[fidx, vidx]:
+ # if prev not in contact, and current not in contact
+ # gaps between contact windows
+ continue
+ elif prev_in_contact and not contacts[fidx, vidx]:
+ # prev in contact, current not in contact
+ # end of contact window
+ # window found: [window_s, window_e] (inclusive)
+ window_e = fidx - 1 # end of window
+ prev_in_contact = False # reset
+ # skip len(window) < window_thres
+ if window_e - window_s + 1 < window_thres:
+ continue
+
+ # remove windows with sliding finger along object surface
+ # check max distance of object vertices matched to hand vertex i
+ # if > 3mm skip this in windows
+ j_list = dist_idx[
+ window_s : window_e + 1, vidx
+ ] # object vertex ids that are closest to hand vertex vidx within window
+ vj = vo[
+ j_list
+ ] # object vertices closest to hand vertex vidx within window
+            cdist = (
+                torch.cdist(vj, vj).cpu().numpy()
+            )  # check if they are nearby in a canonical static frame
+            win_len = cdist.shape[0]  # window length, >= window_thres
+            triu_idx = torch.triu_indices(win_len, win_len)
+            cdist[triu_idx[0, :], triu_idx[1, :]] = float(
+                "nan"
+            )  # remove the diagonal and upper triangle (duplicates)
+            mean_dist = np.nanmean(
+                cdist.reshape(-1)
+            )  # average distance between matched object vertices
+
+            # MANO vertex vidx has slid along the object surface
+ if mean_dist > contact_thres:
+ continue
+ else:
+ # find the most frequent object vertex id to match hand vertex vidx
+ jidx = int(torch.mode(j_list)[0])
+ windows.append([window_s, window_e, vidx, jidx])
+ else:
+ assert False
+
+ # verify each window has continuous contact and is the biggest
+ for window in windows:
+ line = (
+ contacts[window[0] - 1 : window[1] + 1 + 1, window[2] : window[2] + 1]
+ .long()
+ .view(-1)
+ )
+ # check if the window is the biggest
+ assert not contacts[window[0] - 1, window[2]]
+ assert not contacts[window[1] + 1, window[2]]
+
+ # Example 011110 gives line.sum() == 4 and len(line) == 6
+ assert line.sum() == len(line) - 2
+ return windows
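+
+# Toy trace (a sketch): if one hand vertex has per-frame contacts
+# [0, 1, 1, 1, 1, 0] and window_thres = 4, the scan opens at frame 1, closes
+# at frame 4, and (if the matched object vertices stay within contact_thres of
+# each other) emits a single window [1, 4, vidx, jidx], where jidx is the most
+# frequent nearest object vertex over frames 1..4.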
+
+
+def find_windows_wrapper(dist, dist_idx, vo, contact_thres, window_thres):
+ # find windows with at least `window_thres` frames in continuous contact
+ windows = np.array(find_windows(dist, dist_idx, vo[0], contact_thres, window_thres))
+ return windows
+
+
+def compute_mdev(windows, pred_vh, pred_vo, frame_valid):
+ mdev_list = []
+ for window in windows:
+ m, n, i, j = window
+ # extract hand object locations according to pairs
+ pred_stable_vh = pred_vh[m : n + 1, i]
+ pred_stable_vo = pred_vo[m : n + 1, j]
+
+ # direction of hand and object vertices in time
+ pred_delta_vh = pred_stable_vh[1:] - pred_stable_vh[:-1]
+ pred_delta_vo = pred_stable_vo[1:] - pred_stable_vo[:-1]
+
+ # difference between hand and object directions
+ pred_diff_delta = pred_delta_vh - pred_delta_vo
+
+ # a diff is valid if two consecutive frames are valid
+ valid = frame_valid[m : n + 1].clone()
+ diff_valid = valid[1:] * valid[:-1]
+ diff_valid = diff_valid.bool()
+
+ # set invalid diff to nan
+ pred_diff_delta[~diff_valid, :] = float("nan")
+
+ mdev = torch.norm(pred_diff_delta, dim=1)
+
+ # normalize by (valid) window size
+ mdev = torch_utils.nanmean(mdev, dim=0)
+ mdev_list.append(mdev)
+ return mdev_list
+
+
+def eval_motion_deviation(pred, targets, meta_info):
+ num_frames, num_verts = pred["mano.v3d.cam.r"].shape[:2]
+
+ is_valid = targets["is_valid"]
+ r_valid = targets["right_valid"] * is_valid
+ l_valid = targets["left_valid"] * is_valid
+
+ # parameters
+ contact_thres = 3e-3
+    window_thres = 15  # half a second at 30 fps
+
+ # find stable contact window btw right and object
+ # [m, n, i, j]
+ windows_r = find_windows_wrapper(
+ targets["dist.ro"],
+ targets["idx.ro"],
+ targets["object.v.cam"],
+ contact_thres,
+ window_thres,
+ )
+
+ # left hand
+ windows_l = find_windows_wrapper(
+ targets["dist.lo"],
+ targets["idx.lo"],
+ targets["object.v.cam"],
+ contact_thres,
+ window_thres,
+ )
+
+ mdev_list_r = compute_mdev(
+ windows_r, pred["mano.v3d.cam.r"], pred["object.v.cam"], r_valid
+ )
+ mdev_r = torch.stack(mdev_list_r)
+
+ mdev_list_l = compute_mdev(
+ windows_l, pred["mano.v3d.cam.l"], pred["object.v.cam"], l_valid
+ )
+ mdev_l = torch.stack(mdev_list_l)
+
+ mdev_h = torch.cat((mdev_r, mdev_l), dim=0)
+
+ metric_dict = xdict()
+ # metric_dict["mdev/r"] = mdev_r
+ # metric_dict["mdev/l"] = mdev_l
+ metric_dict["mdev/h"] = mdev_h
+ metric_dict = metric_dict.mul(1000).to_np() # mm
+
+ return metric_dict
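+
+# Output shape note (a sketch of the expected result): "mdev/h" holds one
+# deviation value per stable-contact window across both hands, i.e.
+# metric_dict["mdev/h"].shape == (num_windows_r + num_windows_l,), in mm.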