Commit 3984d1d

feat: first version of docs.
fix: typo change docs branch name in config.
1 parent 160c4ce commit 3984d1d

12,391 files changed (+25909, -0 lines)


Diff for: .github/workflows/ci.yml

+25 lines changed
```yaml
name: ci
on:
  push:
    branches:
      - mkdocs
permissions:
  contents: write
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - uses: actions/setup-python@v4
        with:
          python-version: 3.x
      - uses: actions/cache@v2
        with:
          key: ${{ github.ref }}
          path: .cache
      - run: pip install mkdocs-material
      - run: pip install mkdocs-minify-plugin mkdocs-video mkdocs-git-committers-plugin mkdocs-git-revision-date-localized-plugin # mkdocs-git-revision-date-plugin # mkdocs-git-revision-date-localized-plugin mkdocs-git-authors-plugin
      - run: mkdocs gh-deploy --force
```

Diff for: docs/data.md

+199 lines changed
# Data

In this section, we introduce the data format used in the benchmark and how to prepare data (from public datasets or collected by yourself) for it.

## Format

We save all our data as PCD files. First, a short introduction to the [PCD file format](https://pointclouds.org/documentation/tutorials/pcd_file_format.html):

The important header fields for us are `VIEWPOINT`, `POINTS`, and `DATA`:

- **VIEWPOINT** - specifies an acquisition viewpoint for the points in the dataset. This could potentially be used later for building transforms between different coordinate systems, or for aiding with features such as surface normals that need a consistent orientation.

    The viewpoint information is specified as a translation (tx ty tz) + quaternion (qw qx qy qz). The default value is:

    ```bash
    VIEWPOINT 0 0 0 1 0 0 0
    ```

- **POINTS** - specifies the number of points in the dataset.

- **DATA** - specifies the data type that the point cloud data is stored in. As of version 0.7, three data types are supported: ascii, binary, and binary_compressed. We store our data as binary for faster reading and writing.
### Example

```
# .PCD v0.7 - Point Cloud Data file format
VERSION 0.7
FIELDS x y z intensity
SIZE 4 4 4 4
TYPE F F F F
COUNT 1 1 1 1
WIDTH 125883
HEIGHT 1
VIEWPOINT -15.6504 17.981 -0.934952 0.882959 -0.0239536 -0.0058903 -0.468802
POINTS 125883
DATA binary
```

In this `004390.pcd` we have 125883 points, and the pose (sensor center) of this frame is `-15.6504 17.981 -0.934952 0.882959 -0.0239536 -0.0058903 -0.468802`. All points are already transformed to the world frame.
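If you only need the per-frame pose (for example, for ray-casting-based methods), you can read it straight from the header without a full PCD library. Below is a minimal Python sketch, not part of the benchmark scripts; the file name `004390.pcd` is just the example above.

```python
# Minimal sketch: read the frame pose from the VIEWPOINT header of a benchmark PCD.
# Field order follows the PCD spec quoted above: tx ty tz qw qx qy qz.
import numpy as np

def read_viewpoint(pcd_path: str):
    """Return (translation[3], quaternion_wxyz[4]) parsed from a PCD header."""
    with open(pcd_path, "rb") as f:
        for raw in f:
            line = raw.decode("ascii", errors="ignore").strip()
            if line.startswith("VIEWPOINT"):
                vals = [float(v) for v in line.split()[1:]]
                return np.array(vals[:3]), np.array(vals[3:])
            if line.startswith("DATA"):  # header ends here, stop before the binary blob
                break
    raise ValueError(f"No VIEWPOINT field found in {pcd_path}")

# t, q = read_viewpoint("004390.pcd")  # sensor center and orientation of this frame
```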

## Download benchmark data

We have already processed the data used in the benchmark; you can download it from the [following link](https://zenodo.org/records/10886629):

| Dataset | Description | Sensor Type | Total Frame Number | Size |
| --- | --- | --- | --- | --- |
| KITTI sequence 00 | A small town with few dynamics (including one pedestrian). | VLP-64 | 141 | 384.8 MB |
| KITTI sequence 05 | A small-town straight road with one tall vehicle; the benchmark paper's cover image is from this sequence. | VLP-64 | 321 | 864.0 MB |
| Argoverse2 | A big, crowded city with tall buildings (including cyclists, vehicles, people walking near the buildings, etc.). | 2 x VLP-32 | 575 | 1.3 GB |
| KTH campus (no gt) | Collected by us (Thien-Minh) on the KTH campus. Lots of people moving around the campus. | Leica RTC360 | 18 | 256.4 MB |
| Semi-indoor | Collected by us, running on a small 1x2 vehicle with two people walking around the platform. | VLP-16 | 960 | 620.8 MB |
| Twofloor (no gt) | Collected by us (Bowen Yang) on a quadruped robot. A two-floor structured environment with one pedestrian around. | Livox Mid-360 | 3305 | 725.1 MB |
Download command:

```bash
wget https://zenodo.org/api/records/10886629/files-archive.zip

# or download each sequence separately
wget https://zenodo.org/records/10886629/files/00.zip
wget https://zenodo.org/records/10886629/files/05.zip
wget https://zenodo.org/records/10886629/files/av2.zip
wget https://zenodo.org/records/10886629/files/kthcampus.zip
wget https://zenodo.org/records/10886629/files/semindoor.zip
wget https://zenodo.org/records/10886629/files/twofloor.zip
```
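After unzipping, you can quickly sanity-check a frame with any PCD reader. A small sketch using Open3D (an assumption: `pip install open3d`; the path below is illustrative, point it at one of the extracted `.pcd` files):

```python
# Quick visual sanity check of one downloaded frame (path is illustrative).
import open3d as o3d

pcd = o3d.io.read_point_cloud("/home/kin/data/00/004390.pcd")
print(pcd)                                   # prints the number of loaded points
o3d.visualization.draw_geometries([pcd])     # opens an interactive viewer window
```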

## Create by yourself

If you want to process more data, you can follow the instructions below.

!!! Note
    Feel free to skip this section if you only want to use the benchmark data.

### Custom Data

For our custom datasets, we normally record the point cloud with rosbag and then run a SLAM method to get the pose. If you have no clue which SLAM package to use, check the [simple_ndt_slam](https://github.com/Kin-Zhang/simple_ndt_slam) repo; the only dependency you need there is ROS. If you don't have ROS/Ubuntu, you can run it directly through `docker`.

Then export the rosbag file (which has pose/tf and point cloud topics) to the PCD format we want. After your run with [`simple_ndt_slam`](https://github.com/Kin-Zhang/simple_ndt_slam), check your result rosbag file with `rosbag info`; here is an example output:
```
➜ bags rosbag info res_semi_indoor_data.bag
path:        res_semi_indoor_data.bag
version:     2.0
duration:    1:47s (107s)
start:       Apr 28 2023 11:11:26.79 (1682673086.79)
end:         Apr 28 2023 11:13:14.35 (1682673194.35)
size:        810.8 MB
messages:    4803
compression: none [961/961 chunks]
types:       nav_msgs/Odometry       [cd5e73d190d741a2f92e81eda573aca7]
             sensor_msgs/PointCloud2 [1158d486dd51d683ce2f1be655c3c181]
             tf2_msgs/TFMessage      [94810edda583a504dfda3829e70d7eec]
topics:      /auto_odom       960 msgs : nav_msgs/Odometry
             /repub_points    960 msgs : sensor_msgs/PointCloud2
             /tf             2883 msgs : tf2_msgs/TFMessage
```

Then use the scripts provided in [`simple_ndt_slam`](https://github.com/Kin-Zhang/simple_ndt_slam) to extract the PCD data into the unified format described here:

```bash
roscore # needed since the script reads the rosbag through ROS

# the trailing 1 also saves the raw map, since some methods need it in the framework
./simple_ndt_slam/tools/build/bag2pcd_tf /home/kin/bags/res_semi_indoor_data.bag /home/kin/data/semindoor /repub_points map 1
```
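If your data does not come from ROS at all, the only requirement is to end up with one PCD per frame, points already in the world frame, and the sensor pose in the `VIEWPOINT` header. A minimal, hypothetical Python writer (not one of the benchmark scripts) could look like this:

```python
# Hedged sketch: write one benchmark-style frame (binary PCD, pose in VIEWPOINT).
# Assumes `points` are already in the world frame, shape (N, 4) = x y z intensity,
# and `pose` = (tx, ty, tz, qw, qx, qy, qz) is the sensor center of this frame.
import numpy as np

def write_frame_pcd(path: str, points: np.ndarray, pose) -> None:
    pts = np.ascontiguousarray(points, dtype=np.float32)
    header = "\n".join([
        "# .PCD v0.7 - Point Cloud Data file format",
        "VERSION 0.7",
        "FIELDS x y z intensity",
        "SIZE 4 4 4 4",
        "TYPE F F F F",
        "COUNT 1 1 1 1",
        f"WIDTH {pts.shape[0]}",
        "HEIGHT 1",
        "VIEWPOINT " + " ".join(f"{v:.6f}" for v in pose),
        f"POINTS {pts.shape[0]}",
        "DATA binary",
    ]) + "\n"
    with open(path, "wb") as f:
        f.write(header.encode("ascii"))
        f.write(pts.tobytes())  # binary PCD stores the fields point by point, in order

# write_frame_pcd("004390.pcd", frame_points_world, (-15.65, 17.98, -0.93, 0.88, -0.02, -0.01, -0.47))
```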
### KITTI Dataset

Official data format: [Download link](http://www.semantic-kitti.org/dataset.html#download)

#### Extract Point Cloud Data

Extract the SemanticKITTI dataset from the raw data. When you download the original SemanticKITTI dataset, you will get a folder like this:

```
➜ SemanticKitti tree -L 2
.
├── data_odometry_calib
│   └── dataset
│       └── sequences
├── data_odometry_labels
│   ├── dataset
│   │   └── sequences
│   └── README
├── data_odometry_velodyne
│   └── dataset
│       └── sequences
```

After downloading the official dataset, run the script as follows:

```
python3 scripts/data/extract_semkitti.py --original_path /home/kin/data/KITTI/SemanticKitti --save_data_folder /home/kin/data/DynamicMap --gt_cloud True --sequence "'00'"
```

Note!!

1. The SemanticKITTI pose file is not a ground-truth pose but the output of SuMa; more discussion on the differences can be found in [semantic-kitti-api/issues/140](https://github.com/PRBonn/semantic-kitti-api/issues/140). We also report results with different odometry pose sources in the [DUFOMap paper, Sec V-C, Table III](https://arxiv.org/pdf/2403.01449), based on [scripts/py/data/extract_diff_pose.py](py/data/extract_diff_pose.py).

2. You can get the sensor pose from the PCD `VIEWPOINT` field, so you don't need a separate pose file.
    If you use CloudCompare to view the data, drag all PCD files into the window and you will get the correct whole-map view.
    (NOTE: since the points are already transformed to the world frame, CloudCompare 2.11 shows the correct map, but 2.12+ applies the `VIEWPOINT` field again, transforming the points twice. You can comment out the transform line in the extraction script if you don't like that.)

    Example here:
    ![](../assets/imgs/kitti_01_data_demo.png)

3. View the ground truth in CloudCompare; intensity=1 means dynamic, shown as the red points in the image:

    ![](../assets/imgs/kitti_gt.png)

4. The 2024/03/27 updated version limits the range, because we found the ground-truth labels are not correct at far range, so we limit it to 50 m. You can change the range in the script.

    ![](../assets/imgs/label_des.png)

<!-- 5. Tracking seq 19: the original pose file cannot be parsed like the previous SemanticKITTI sequences, so I ran kiss_icp_pipeline to get a new pose file. Replace the original pose file with the new one. The tracking seq 19 gt label max range looks like 40 m, since at 50 m there are still some mislabeled pedestrians. -->

### Argoverse 2.0 Dataset

We manually labeled dynamic and static points in one sequence, folder name `07YOTznatmYypvQYpzviEcU3yGPsyaGg__Spring_2020`; this ground-truth PCD has to be downloaded from the benchmark download link above.

#### Download

Check this issue: https://github.com/argoverse/av2-api/issues/161

Installing s5cmd:

```bash
#!/usr/bin/env bash

export INSTALL_DIR=$HOME/.local/bin
export PATH=$PATH:$INSTALL_DIR
export S5CMD_URI=https://github.com/peak/s5cmd/releases/download/v1.4.0/s5cmd_1.4.0_$(uname | sed 's/Darwin/macOS/g')-64bit.tar.gz

mkdir -p $INSTALL_DIR
curl -sL $S5CMD_URI | tar -C $INSTALL_DIR -xvzf - s5cmd
```

Download only the val split, since the train split is too big (about 5 TB in total) and has no labels anyway:

```bash
s5cmd --no-sign-request cp 's3://argoai-argoverse/av2/lidar/val/*' /home/kin/bags/av2/val
```
#### Extract Point Cloud Data

This time there is no need for a C++ tool, since Argoverse provides its own Python API and we just need to use it; the PCD saving is handled in our utils.

Check their [python api](https://pypi.org/project/av2/) and [github](https://github.com/argoverse/av2-api):

```bash
pip install av2
```

Please check the folder paths inside the script:

```bash
python3 scripts/extract_argoverse2.py
```

Diff for: docs/evaluation.md

+54 lines changed
# Evaluation

- Create the data you need for the unified benchmark. We recommend [downloading it directly from this link](https://zenodo.org/record/8160051) so that you don't need to read the <u>Data Creation</u> section. The most important reason is that Data Creation does not include the manually labeled ground-truth files, while the link we provide does.

    The Data Creation section still helps if you want to create your own dataset for benchmarking. You are welcome to contribute your dataset to the community.

- Evaluate the performance of the methods.

- **Compare the results** and **output the visualization** automatically.

It's better to view this `md` file through the outline; there is no need to go through all of it. 😃

## Evaluation

This part outputs the quantitative table and the qualitative results automatically (scripts to be updated).

All the methods output a **clean map**, so we need to transfer the ground-truth labels onto each clean map. Why do we need this? Because some methods downsample inside their pipeline, so the ground-truth labels have to be extracted for the (possibly downsampled) map they actually output.
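Conceptually, this label transfer is a nearest-neighbor association: every ground-truth point is matched to the method's clean map within `min_dis`. A rough Python sketch of the idea only; the actual tool is the C++ `export_eval_pcd` below, and the names here are illustrative:

```python
# Illustrative sketch of the label-transfer idea behind export_eval_pcd (not the real tool).
# gt_points: (N, 3) ground-truth map coordinates; clean_points: (M, 3) a method's output map.
# min_dis: distance threshold (meters) to view two points as the same point.
import numpy as np
from scipy.spatial import cKDTree

def kept_mask(gt_points: np.ndarray, clean_points: np.ndarray, min_dis: float) -> np.ndarray:
    """Return a boolean mask over gt_points: True if the method kept that point."""
    tree = cKDTree(clean_points)
    dist, _ = tree.query(gt_points, k=1)   # nearest neighbor in the clean map
    return dist <= min_dis

# kept = kept_mask(gt_points, clean_points, 0.05)
# A good method keeps the static ground-truth points and drops the dynamic ones.
```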

### 0. Run Methods

Check the [`methods`](../methods) folder; there is a [README](../methods/README.md) that guides you through running all the methods.

Or check the shell script [`0_run_methods_all.sh`](../scripts/sh/0_run_methods_all.sh) to run them with one command:

```bash
./scripts/sh/0_run_methods_all.sh
```

### 1. Create the eval data

```bash
# Check export_eval_pcd.cpp
./export_eval_pcd [folder that contains the output pcd] [method_name_output.pcd] [min_dis to view as the same point]

# example:
./export_eval_pcd /home/kin/bags/VLP16_cone_two_people octomapfg_output.pcd 0.05
```

Or check the shell script [`1_export_eval_pcd.sh`](../scripts/sh/1_export_eval_pcd.sh) to run them with one command:

```bash
./scripts/sh/1_export_eval_pcd.sh
```

### 2. Print the score

Check the script: the only things you need to do are change the folder path to *your data folder* and select the methods you want to compare. Please open and read the [script](py/eval/evaluate_all.py) first.

```bash
python3 scripts/py/eval/evaluate_all.py
```

Here is the demo output:

![](../assets/imgs/eval_demo.png)
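If you want to sanity-check numbers outside the provided script, here is a hedged sketch of static/dynamic accuracy style scores computed from the transferred labels, following the convention above that intensity=1 marks dynamic points. The metrics actually reported are defined in `scripts/py/eval/evaluate_all.py`; treat this as an illustration only.

```python
# Hedged illustration; see scripts/py/eval/evaluate_all.py for the metrics actually reported.
# gt_labels: (N,) with 1 = dynamic, 0 = static; kept: (N,) bool, True if the method kept the point.
import numpy as np

def removal_scores(gt_labels: np.ndarray, kept: np.ndarray) -> dict:
    static = gt_labels == 0
    dynamic = gt_labels == 1
    sa = kept[static].mean()          # fraction of static points preserved
    da = (~kept[dynamic]).mean()      # fraction of dynamic points removed
    return {"SA": sa, "DA": da, "AA": np.sqrt(sa * da)}  # geometric mean of the two

# print(removal_scores(gt_labels, kept))
```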

Diff for: docs/index.md

+110 lines changed
# Introduction

**Welcome to the Dynamic Map Benchmark Wiki Page!**

You can always press `F` or use the top-right search bar to search for specific topics.

<details markdown>
<summary>CHANGELOG:</summary>

- 2024/06/25: Qingwen starts working on the wiki page.
- **2024/04/29** [BeautyMap](https://arxiv.org/abs/2405.07283) is accepted by RA-L'24. Updated benchmark: BeautyMap and DeFlow submodule instructions in the benchmark. Added the first data-driven method, [DeFlow](https://github.com/KTH-RPL/DeFlow/tree/feature/dynamicmap), to our benchmark. Feel free to check it out.
- **2024/04/18** [DUFOMap](https://arxiv.org/abs/2403.01449) is accepted by RA-L'24. Updated benchmark: DUFOMap and dynablox submodule instructions in the benchmark. Two datasets w/o gt are added to the download link for demos. Feel free to check them out.
- 2024/03/08 **Fixed statements** in our ITSC'23 paper: the KITTI sequence poses are also from SemanticKITTI, which used SuMa. In the DUFOMap paper, Section V-C, Table III, we present the dynamic removal results with different pose sources. Check the discussion in the [DUFOMap](https://arxiv.org/abs/2403.01449) paper if you are interested.
- 2023/06/13 The [benchmark paper](https://arxiv.org/abs/2307.07260) is accepted by ITSC 2023, releasing the methods (Octomap, Octomap w GF, ERASOR, Removert) and datasets (01, 05, av2, semindoor) described in the [benchmark paper](https://arxiv.org/abs/2307.07260).

</details>
## Overview

Task: detect and remove dynamic points from point cloud maps.

![](https://github.com/KTH-RPL/DynamicMap_Benchmark/blob/main/assets/imgs/background.png?raw=true)

The figure above illustrates ghost points resulting from dynamic objects in KITTI sequence 7.
The yellow points on the right represent points labeled as belonging to dynamic objects in the dataset.
These ghost points negatively affect downstream tasks and the overall point cloud quality.

## 🏘️ Good to start from here

* What kind of data format do we use?

    PCD files (pose information saved in the `VIEWPOINT` header). Read the [Data Section](data.md).

* How to evaluate the performance of a method?

    Two Python scripts. Read the [Evaluation Section](evaluation.md).

* How to run a benchmark method on my data?

    Format your data and run the method. Read [Create data](data.md) and [Run method](Method.md).

## 🎁 Methods we included

Online (w/o prior map):

- [x] DUFOMap (Ours 🚀): [RAL'24](https://arxiv.org/abs/2403.01449), [**Benchmark Instruction**](https://github.com/KTH-RPL/dufomap)
- [x] Octomap w GF (Ours 🚀): [ITSC'23](https://arxiv.org/abs/2307.07260), [**Benchmark improvement ITSC 2023**](https://github.com/Kin-Zhang/octomap/tree/feat/benchmark)
- [x] dynablox: [RAL'23 official link](https://github.com/ethz-asl/dynablox), [**Benchmark Adaptation**](https://github.com/Kin-Zhang/dynablox/tree/feature/benchmark)
- [x] Octomap: [ICRA'10 & AR'13 official link](https://github.com/OctoMap/octomap_mapping), [**Benchmark implementation**](https://github.com/Kin-Zhang/octomap/tree/feat/benchmark)

Learning-based (data-driven) (w/ pretrained weights provided):

- [x] DeFlow (Ours 🚀): [ICRA'24](https://arxiv.org/abs/2401.16122), [**Benchmark Adaptation**](https://github.com/KTH-RPL/DeFlow/tree/feature/dynamicmap)

Offline (need prior map):

- [x] BeautyMap (Ours 🚀): [RAL'24](https://arxiv.org/abs/2405.07283), [**Official Code**](https://github.com/MKJia/BeautyMap)
- [x] ERASOR: [RAL'21 official link](https://github.com/LimHyungTae/ERASOR), [**benchmark implementation**](https://github.com/Kin-Zhang/ERASOR/tree/feat/no_ros)
- [x] Removert: [IROS 2020 official link](https://github.com/irapkaist/removert), [**benchmark implementation**](https://github.com/Kin-Zhang/removert)

Please note that the comparison methods we provide are slightly modified so that we can run the experiments quickly, but their core algorithms are unchanged. Please check the LICENSE of each method in its official link before using it.

You will find all methods in this benchmark under the `methods` folder, so you can easily reproduce the experiments. [Or click here to check our score screenshot directly](assets/imgs/eval_demo.png).
<!-- And we will also directly provide [the result data](TODO) so that you don't need to run the experiments by yourself. ... Where to save this? -->

Last but not least, **feel free to open a pull request if you want to add more methods**. Welcome!

## 💖 Acknowledgements

This benchmark implementation is based on code from several repositories, as mentioned above. Thanks to these authors who kindly open-sourced their work to the community. Please see the reference section of our paper for more information.

Thanks to HKUST Ramlab's members: Bowen Yang, Lu Gan, Mingkai Tang, and Yingbing Chen, who helped collect additional datasets.

This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program ([WASP](https://wasp-sweden.org/)) funded by the Knut and Alice Wallenberg Foundation.

### Cite Our Papers

Please cite our works if you find them useful for your research:
```
@inproceedings{zhang2023benchmark,
  author={Zhang, Qingwen and Duberg, Daniel and Geng, Ruoyu and Jia, Mingkai and Wang, Lujia and Jensfelt, Patric},
  booktitle={IEEE 26th International Conference on Intelligent Transportation Systems (ITSC)},
  title={A Dynamic Points Removal Benchmark in Point Cloud Maps},
  year={2023},
  pages={608-614},
  doi={10.1109/ITSC57777.2023.10422094}
}
@article{jia2024beautymap,
  author={Jia, Mingkai and Zhang, Qingwen and Yang, Bowen and Wu, Jin and Liu, Ming and Jensfelt, Patric},
  journal={IEEE Robotics and Automation Letters},
  title={BeautyMap: Binary-Encoded Adaptable Ground Matrix for Dynamic Points Removal in Global Maps},
  year={2024},
  volume={},
  number={},
  pages={1-8},
  doi={10.1109/LRA.2024.3402625}
}
@article{daniel2024dufomap,
  author={Duberg, Daniel and Zhang, Qingwen and Jia, Mingkai and Jensfelt, Patric},
  journal={IEEE Robotics and Automation Letters},
  title={{DUFOMap}: Efficient Dynamic Awareness Mapping},
  year={2024},
  volume={9},
  number={6},
  pages={5038-5045},
  doi={10.1109/LRA.2024.3387658}
}
```
