1 | 1 | A Dynamic Points Removal Benchmark in Point Cloud Maps
2 | 2 | ---
3 | 3 |
4 | | -<!-- [](https://paperswithcode.com/sota/dynamic-point-removal-on-semi-indoor?p=a-dynamic-points-removal-benchmark-in-point) -->
5 | | -[arXiv](https://arxiv.org/abs/2307.07260)
6 | | -[Bilibili](https://www.bilibili.com/video/BV1bC4y1R7h3)
7 | | -[YouTube](https://youtu.be/pCHsNKXDJQM?si=nhbAnPrbaZJEqbjx)
8 | | -[OneDrive](https://hkustconnect-my.sharepoint.com/:b:/g/personal/qzhangcb_connect_ust_hk/EQvNHf9JNEtNpyPg1kkNLNABk0v1TgGyaM_OyCEVuID4RQ?e=TdWzAq)
9 | | -
10 | | -Here is a preview of the README in the code. The task is to detect dynamic points in point cloud maps and remove them, producing cleaner maps:
| 4 | +Author: [Qingwen Zhang](http://kin-zhang.github.io) (Kin) |
11 | 5 |
12 | | -<center>
13 | | -<img src="assets/imgs/background.png" width="80%">
14 | | -</center>
| 6 | +This is **our wiki page README**; please visit our [main branch](https://github.com/KTH-RPL/DynamicMap_Benchmark) for more information about the benchmark.
15 | 7 |
16 | | -**Folder** quick view:
| 8 | +## Install |
17 | 9 |
18 | | -- `methods`: contains all the methods in the benchmark
19 | | -- `scripts/py/eval`: evaluates the result PCD against the ground truth and produces the quantitative table
20 | | -- `scripts/py/data`: pre-processes data before benchmarking. We also directly provide all the datasets we tested. We run this benchmark offline on a computer, so we extract only PCD files from custom rosbags or other data formats (KITTI, Argoverse 2)
| 10 | +If you want to try the MkDocs site locally, the only things you need are `Python` and a few Python packages. If you are worried about polluting your environment, you can use a [virtual env](https://docs.python.org/3/library/venv.html) or [anaconda](https://www.anaconda.com/).
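As a minimal sketch of the virtual-env route: these are standard Python tooling commands rather than anything specific to this repo, the environment name `mkdocs-env` is an arbitrary choice, and the `pip install` lines simply collect the packages listed in the two blocks below.

```bash
# create and activate an isolated environment (any name works; "mkdocs-env" is arbitrary)
python3 -m venv mkdocs-env
source mkdocs-env/bin/activate

# install the main theme package and the plugin packages listed below
pip install mkdocs-material
pip install mkdocs-minify-plugin mkdocs-git-revision-date-localized-plugin mkdocs-git-authors-plugin mkdocs-video
```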
21 | 11 |
22 | | -**Quick** try:
23 | 12 |
24 | | -- Teaser data on KITTI sequence 00 (only 384.8 MB) from the [Zenodo online drive](https://zenodo.org/record/10886629):
25 | | -  ```bash
26 | | -  wget https://zenodo.org/records/10886629/files/00.zip
27 | | -  unzip 00.zip -d ${data_path, e.g. /home/kin/data}
28 | | -  ```
29 | | -- Clone our repo:
30 | | -  ```bash
31 | | -  git clone --recurse-submodules https://github.com/KTH-RPL/DynamicMap_Benchmark.git
32 | | -  ```
33 | | -- Go to the methods folder, then build and run:
34 | | -  ```bash
35 | | -  cd methods/dufomap && cmake -B build -D CMAKE_CXX_COMPILER=g++-10 && cmake --build build
36 | | -  ./build/dufomap_run ${data_path, e.g. /home/kin/data/00} ${assets/config.toml}
37 | | -  ```
38 | | -
39 | | -### News:
40 | | -
41 | | -Feel free to open a pull request if you want to add more methods or datasets; contributions are welcome! We will try our best to keep the methods and datasets in this benchmark up to date. Please give us a star 🌟 and cite our work 📖 if you find this useful for your research. Thanks!
42 | | -
43 | | -- **2024/04/29** [BeautyMap](https://arxiv.org/abs/2405.07283) is accepted by RA-L'24. Updated benchmark: BeautyMap and DeFlow submodule instructions. Added the first data-driven method, [DeFlow](https://github.com/KTH-RPL/DeFlow/tree/feature/dynamicmap), to our benchmark. Feel free to check it out.
44 | | -- **2024/04/18** [DUFOMap](https://arxiv.org/abs/2403.01449) is accepted by RA-L'24. Updated benchmark: DUFOMap and dynablox submodule instructions. Two demo datasets without ground truth were added to the download link. Feel free to check them out.
45 | | -- **2024/03/08** **Statement fix** for our ITSC'23 paper: the KITTI sequence poses are also from SemanticKITTI, which used SuMa. In the DUFOMap paper, Section V-C, Table III, we present dynamic removal results for different pose sources. Check the discussion in the [DUFOMap](https://arxiv.org/abs/2403.01449) paper if you are interested.
46 | | -- **2023/06/13** The [benchmark paper](https://arxiv.org/abs/2307.07260) was accepted by ITSC 2023, releasing the methods Octomap, Octomap w GF, ERASOR, and Removert and the datasets 01, 05, av2, and semindoor.
47 | | -
48 | | ----
49 | | -
50 | | -- [ ] 2024/04/19: I will update a documentation page soon (tutorial, manual, and a new online leaderboard) and point out the commit for each paper, since there are some minor mistakes in the first version. Stay tuned!
51 | | -
52 | | -
53 | | -## Methods:
54 | | -
55 | | -Please check the [`methods`](methods) folder.
56 | | -
57 | | -Online (w/o prior map):
58 | | -- [x] DUFOMap (Ours 🚀): [RAL'24](https://arxiv.org/abs/2403.01449), [**Benchmark Instruction**](https://github.com/KTH-RPL/dufomap)
59 | | -- [x] Octomap w GF (Ours 🚀): [ITSC'23](https://arxiv.org/abs/2307.07260), [**Benchmark improvement ITSC 2023**](https://github.com/Kin-Zhang/octomap/tree/feat/benchmark)
60 | | -- [x] dynablox: [RAL'23 official link](https://github.com/ethz-asl/dynablox), [**Benchmark Adaptation**](https://github.com/Kin-Zhang/dynablox/tree/feature/benchmark)
61 | | -- [x] Octomap: [ICRA'10 & AR'13 official link](https://github.com/OctoMap/octomap_mapping), [**Benchmark implementation**](https://github.com/Kin-Zhang/octomap/tree/feat/benchmark)
62 | | -
63 | | -Learning-based (data-driven, pretrained weights provided):
64 | | -- [x] DeFlow (Ours 🚀): [ICRA'24](https://arxiv.org/abs/2401.16122), [**Benchmark Adaptation**](https://github.com/KTH-RPL/DeFlow/tree/feature/dynamicmap)
65 | | -
66 | | -Offline (needs a prior map):
67 | | -- [x] BeautyMap (Ours 🚀): [RAL'24](https://arxiv.org/abs/2405.07283), [**Official Code**](https://github.com/MKJia/BeautyMap)
68 | | -- [x] ERASOR: [RAL'21 official link](https://github.com/LimHyungTae/ERASOR), [**benchmark implementation**](https://github.com/Kin-Zhang/ERASOR/tree/feat/no_ros)
69 | | -- [x] Removert: [IROS 2020 official link](https://github.com/irapkaist/removert), [**benchmark implementation**](https://github.com/Kin-Zhang/removert)
70 | | -
71 | | -Please note that we also provide the comparison methods, slightly modified so that we can run the experiments quickly, but with no modifications to the core of each method. Please check the LICENSE of each method at its official link before using it.
72 | | -
73 | | -You will find all the methods in this benchmark under the `methods` folder, so you can easily reproduce the experiments. [Or click here to check our score screenshot directly](assets/imgs/eval_demo.png).
74 | | -<!-- And we will also directly provide [the result data](TODO) so that you don't need to run the experiments by yourself. ... Where to save this? -->
75 | | -
76 | | -Last but not least, feel free to open a pull request if you want to add more methods. Welcome!
77 | | -
78 | | -## Dataset & Scripts
79 | | -
80 | | -Download the PCD files mentioned in the paper from the [Zenodo online drive](https://zenodo.org/records/10886629), or create the unified format yourself with the [scripts we provide](scripts/README.md) for other open data or your own dataset. Please follow the LICENSE of each dataset before using it.
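As a concrete sketch, reusing the KITTI sequence 00 teaser archive from the quick try above: the archive name and the example path come from this README, while the `DATA_PATH` variable is only an illustrative convenience.

```bash
# choose a data root of your own; /home/kin/data is the example path used elsewhere in this README
export DATA_PATH=/home/kin/data

# fetch and unpack one sequence from the Zenodo record (sequence 00 is the teaser shown above)
wget https://zenodo.org/records/10886629/files/00.zip
unzip 00.zip -d ${DATA_PATH}

# inspect what was extracted
ls ${DATA_PATH}/00
```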
81 | | -
82 | | -- [x] [Semantic-Kitti, outdoor small town](https://semantic-kitti.org/dataset.html) VLP-64
83 | | -- [x] [Argoverse2.0, outdoor US cities](https://www.argoverse.org/av2.html#lidar-link) VLP-32
84 | | -- [x] [UDI-Plane] Our own dataset, collected with a VLP-16 on a small vehicle.
85 | | -- [x] [KTH-Campuse] Our [Multi-Campus Dataset](https://mcdviral.github.io/), collected with a [Leica RTC360 3D Laser Scanner](https://leica-geosystems.com/products/laser-scanners/scanners/leica-rtc360). Only 18 frames are included in the download as a demo; please check [the official website](https://mcdviral.github.io/) for more.
86 | | -- [x] [Indoor-Floor] Our own dataset, collected with a Livox Mid-360 on a quadruped robot.
87 | | -<!-- - [ ] [HKUST-Building] Our [fusionportable Dataset](https://fusionportable.github.io/dataset/fusionportable/), collected by [Leica BLK360 Imaging Laser Scanner](https://leica-geosystems.com/products/laser-scanners/scanners/blk360) -->
88 | | -<!-- - [ ] [KTH-Indoor] Our own dataset, Collected by VLP-16/Mid-70 in kobuki. -->
89 | | -
90 | | -You are welcome to contribute your own dataset with ground truth to the community through a pull request.
91 | | -
92 | | -### Evaluation
93 | | -
94 | | -First, all the methods output a cleaned map; if you only **want the cleaned map**, that is **enough**. For evaluation, however, we need to extract the ground-truth labels based on the cleaned map. Why is this needed? Because some methods downsample in their pipeline, the ground-truth labels have to be extracted from the downsampled map.
95 | | -
96 | | -Check the [dataset creation part of the README](scripts/README.md#evaluation) in the scripts folder for more information. You can also directly download the dataset through the link we provided; then there is no need to read about dataset creation, just use the data you downloaded.
| 13 | +Main package (sometimes this is all you need; check the issue section):
| 14 | +```bash |
| 15 | +pip install mkdocs-material |
| 16 | +``` |
97 | 17 |
98 | | -- Visualize the result PCD files in [CloudCompare](https://www.danielgm.net/cc/), or use the provided script to get all the evaluation benchmarks and comparison images like those in the paper with one click; check [scripts/py/eval](scripts/py/eval).
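The exact evaluation entry point and its arguments are documented in the scripts README; purely as a hypothetical sketch of the workflow, the script name `evaluate.py`, the flags, and the file names below are placeholders, not the real interface.

```bash
# Hypothetical sketch only: see scripts/README.md for the real script name and arguments.
# --gt_pcd : ground-truth labelled map (placeholder name)
# --est_pcd: cleaned map exported by one of the methods (placeholder name)
cd scripts/py/eval
python evaluate.py --gt_pcd ${DATA_PATH}/00/gt_cloud.pcd --est_pcd ${DATA_PATH}/00/output.pcd
```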
| 18 | +Plugin packages:
| 19 | +```bash |
| 20 | +pip install mkdocs-minify-plugin mkdocs-git-revision-date-localized-plugin mkdocs-git-authors-plugin mkdocs-video |
| 21 | +``` |
99 | 22 |
100 | | -All the color bars are also provided in CloudCompare; here is the [tutorial on how we make the animation video](TODO).
| 23 | +### Run |
| 24 | +```bash |
| 25 | +mkdocs serve |
| 26 | +``` |
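A couple of optional variants can be handy while editing; these are standard MkDocs commands and flags, not anything specific to this repo.

```bash
# serve on a different address/port (the default is 127.0.0.1:8000)
mkdocs serve -a 0.0.0.0:8000

# build the static site into ./site instead of serving it
mkdocs build
```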
101 | 27 |
102 | 28 | ## Acknowledgements
103 | 29 |
|