<p align="center">
  <a href="https://arxiv.org/abs/2312.09800"><strong>Paper (arXiv)</strong></a> |
  <a href="https://www.youtube.com/watch?v=rP_OuOE-O34"><strong>Video</strong></a> |
  <a href="https://drive.google.com/file/d/1-u_rW03HvtYhjPT6pm1JBhTTPWxGdVWP/view?usp=sharing"><strong>Poster</strong></a> |
  <a href="#citation"><strong>BibTeX</strong></a>
Next, install the DEVO package:
```bash
# download and unzip the Eigen source code
wget https://gitlab.com/libeigen/eigen/-/archive/3.4.0/eigen-3.4.0.zip
unzip eigen-3.4.0.zip -d thirdparty

# install DEVO
pip install .
```
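As a quick sanity check of the installation (assuming the package is importable as `devo` after `pip install .`; adjust the module name if your setup differs):
```bash
# verify that the freshly installed package can be imported (module name `devo` is an assumption)
python -c "import devo; print('DEVO import OK')"
```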

### Only for Training

*The following steps are only needed if you intend to (re)train DEVO. Please note that the training data takes about 1.1 TB of disk space (rgb: 300 GB, evs: 370 GB).*

*Otherwise, skip this section and continue with [Only for Evaluation](#only-for-evaluation).*

First, download all RGB images and depth maps of [TartanAir](https://theairlab.org/tartanair-dataset/) from the left camera (~500 GB) to `<TARTANPATH>`:
```bash
python thirdparty/tartanair_tools/download_training.py --output-dir <TARTANPATH> --rgb --depth --only-left
```

Next, generate event voxel grids using [vid2e](https://github.com/uzh-rpg/rpg_vid2e).
```bash
python scripts/convert_tartan.py --dirsfile <path to .txt file>
```
`dirsfile` expects a .txt file containing line-separated paths to the directories holding the .png images (events are generated for these images).
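One way to build this dirsfile is with GNU `find` (a sketch; it assumes every folder directly containing .png frames below `<TARTANPATH>` should be listed):
```bash
# collect every directory that directly contains .png images into dirs.txt
# (-printf '%h\n' prints the parent directory of each match; requires GNU find)
find <TARTANPATH> -type f -name "*.png" -printf '%h\n' | sort -u > dirs.txt

python scripts/convert_tartan.py --dirsfile dirs.txt
```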

### Only for Evaluation

We provide a pretrained model for our simulated event data.

```bash
# download model (~40MB)
./download_model.sh
```

#### Data Preprocessing
We evaluate DEVO on seven real-world event-based datasets ([FPV](https://fpv.ifi.uzh.ch/), [VECtor](https://star-datasets.github.io/vector/), [HKU](https://github.com/arclab-hku/Event_based_VO-VIO-SLAM?tab=readme-ov-file#data-sequence), [EDS](https://rpg.ifi.uzh.ch/eds.html), [RPG](https://rpg.ifi.uzh.ch/ECCV18_stereo_davis.html), [MVSEC](https://daniilidis-group.github.io/mvsec/), [TUM-VIE](https://cvg.cit.tum.de/data/datasets/visual-inertial-event-dataset)). We provide scripts for preprocessing them (undistortion, etc.).

Check `scripts/pp_DATASETNAME.py` to see how to preprocess each original dataset. These scripts create the necessary files for you, e.g. `rectify_map.h5`, `calib_undist.json` and `t_offset_us.txt`.
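For example, preprocessing a single dataset could look like the following (a sketch only: the script name follows the `pp_DATASETNAME.py` pattern and the argument is hypothetical, so check the respective script for its actual CLI):
```bash
# hypothetical invocation of the preprocessing script for the RPG dataset;
# the --indir flag is an assumption, see scripts/pp_rpg.py for the real arguments.
# Afterwards, the dataset directory should contain rectify_map.h5,
# calib_undist.json and t_offset_us.txt.
python scripts/pp_rpg.py --indir=<DATASETPATH>
```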

## Training
Make sure you have completed the [training steps above](#only-for-training). Your dataset directory structure should look as follows:

```
├── <TARTANPATH>
    ├── abandonedfactory
    ├── abandonedfactory_night
    ├── ...
    ├── westerndesert
```

To train DEVO with the default configuration, run:
```bash
python train.py -c="config/DEVO_base.conf" --name=<your name>
```

The log files will be written to `runs/<your name>`. Please check [`train.py`](train.py) for more options.
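To list all available training options (the `-c`/`--name` flags above suggest a standard argparse-style CLI, so `--help` should print them):
```bash
# print all supported training flags
python train.py --help
```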

## Evaluation
Make sure you have completed the [steps above](#only-for-evaluation) (downloading the pretrained model, downloading the data, and preprocessing it).

```bash
python evals/eval_evs/eval_DATASETNAME_evs.py --datapath=<DATASETPATH> --weights="DEVO.pth" --stride=1 --trials=1 --expname=<your name>
```

The qualitative and quantitative results will be written to `results/DATASETNAME/<your name>`. Check [`eval_rpg_evs.py`](evals/eval_evs/eval_rpg_evs.py) for more options.
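For example, to evaluate the pretrained model on the RPG dataset (the script is the one linked above; `<RPGPATH>` is a placeholder for your preprocessed copy of the data, and 5 trials is just an illustrative choice):
```bash
# evaluate DEVO on RPG, running 5 trials
python evals/eval_evs/eval_rpg_evs.py --datapath=<RPGPATH> --weights="DEVO.pth" --stride=1 --trials=5 --expname=rpg_baseline
```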

## News
- [x] Code and model are released.
- [x] Code for simulation is released.

## Citation
If you find our work useful, please cite our paper:

```bib
@inproceedings{klenk2023devo,
  title     = {Deep Event Visual Odometry},
  author    = {Klenk, Simon and Motzet, Marvin and Koestler, Lukas and Cremers, Daniel},
  booktitle = {International Conference on 3D Vision, 3DV 2024, Davos, Switzerland,
               March 18-21, 2024},
  pages     = {739--749},
  publisher = {{IEEE}},
  year      = {2024},
}
```