Commit e83d04b

feat(reorganize): reorganize docs structure and add more detail about data read, visualization etc.
1 parent 8eb397a commit e83d04b

File tree: 10 files changed (+277, -287 lines)

.github/workflows/ci.yml (+1, -1)

@@ -21,5 +21,5 @@ jobs:
       key: ${{ github.ref }}
       path: .cache
   - run: pip install mkdocs-material
-  - run: pip install mkdocs-minify-plugin mkdocs-video mkdocs-git-committers-plugin mkdocs-git-revision-date-localized-plugin # mkdocs-git-revision-date-plugin # mkdocs-git-revision-date-localized-plugin mkdocs-git-authors-plugin
+  - run: pip install mkdocs-minify-plugin mkdocs-video mkdocs-git-committers-plugin-2 mkdocs-git-revision-date-localized-plugin # mkdocs-git-revision-date-plugin # mkdocs-git-revision-date-localized-plugin mkdocs-git-authors-plugin
   - run: mkdocs gh-deploy --force

README.md (+16, -90)

@@ -1,103 +1,29 @@
 A Dynamic Points Removal Benchmark in Point Cloud Maps
 ---
 
-<!-- [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/a-dynamic-points-removal-benchmark-in-point/dynamic-point-removal-on-semi-indoor)](https://paperswithcode.com/sota/dynamic-point-removal-on-semi-indoor?p=a-dynamic-points-removal-benchmark-in-point) -->
-[![arXiv](https://img.shields.io/badge/arXiv-2307.07260-b31b1b?logo=arxiv&logoColor=white)](https://arxiv.org/abs/2307.07260)
-[![video](https://img.shields.io/badge/中文-Bilibili-74b9ff?logo=bilibili&logoColor=white)](https://www.bilibili.com/video/BV1bC4y1R7h3)
-[![video](https://img.shields.io/badge/video-YouTube-FF0000?logo=youtube&logoColor=white)](https://youtu.be/pCHsNKXDJQM?si=nhbAnPrbaZJEqbjx)
-[![poster](https://img.shields.io/badge/Poster-6495ed?style=flat&logo=Shotcut&logoColor=wihte)](https://hkustconnect-my.sharepoint.com/:b:/g/personal/qzhangcb_connect_ust_hk/EQvNHf9JNEtNpyPg1kkNLNABk0v1TgGyaM_OyCEVuID4RQ?e=TdWzAq)
-
-Here is a preview of the readme in codes. Task detects dynamic points in maps and removes them, enhancing the maps:
+Author: [Qingwen Zhang](http://kin-zhang.github.io) (Kin)
 
-<center>
-<img src="assets/imgs/background.png" width="80%">
-</center>
+This is **our wiki page README**; please visit our [main branch](https://github.com/KTH-RPL/DynamicMap_Benchmark) for more information about the benchmark.
 
-**Folder** quick view:
+## Install
 
-- `methods` : contains all the methods in the benchmark
-- `scripts/py/eval`: eval the result pcd compared with ground truth, get quantitative table
-- `scripts/py/data` : pre-process data before benchmark. We also directly provided all the dataset we tested in the map. We run this benchmark offline in computer, so we will extract only pcd files from custom rosbag/other data format [KITTI, Argoverse2]
+If you want to try the MkDocs site locally, all you need is `Python` and a few Python packages. If you are worried it will break your environment, use a [virtual env](https://docs.python.org/3/library/venv.html) or [anaconda](https://www.anaconda.com/).
 
-**Quick** try:
 
-- Teaser data on KITTI sequence 00 only 384.8MB in [Zenodo online drive](https://zenodo.org/record/10886629)
-```bash
-wget https://zenodo.org/records/10886629/files/00.zip
-unzip 00.zip -d ${data_path, e.g. /home/kin/data}
-```
-- Clone our repo:
-```bash
-git clone --recurse-submodules https://github.com/KTH-RPL/DynamicMap_Benchmark.git
-```
-- Go to methods folder, build and run through
-```bash
-cd methods/dufomap && cmake -B build -D CMAKE_CXX_COMPILER=g++-10 && cmake --build build
-./build/dufomap_run ${data_path, e.g. /home/kin/data/00} ${assets/config.toml}
-```
-
-### News:
-
-Feel free to pull a request if you want to add more methods or datasets. Welcome! We will try our best to update methods and datasets in this benchmark. Please give us a star 🌟 and cite our work 📖 if you find this useful for your research. Thanks!
-
-- **2024/04/29** [BeautyMap](https://arxiv.org/abs/2405.07283) is accepted by RA-L'24. Updated benchmark: BeautyMap and DeFlow submodule instruction in the benchmark. Added the first data-driven method [DeFlow](https://github.com/KTH-RPL/DeFlow/tree/feature/dynamicmap) into our benchmark. Feel free to check.
-- **2024/04/18** [DUFOMap](https://arxiv.org/abs/2403.01449) is accepted by RA-L'24. Updated benchmark: DUFOMap and dynablox submodule instruction in the benchmark. Two datasets w/o gt for demo are added in the download link. Feel free to check.
-- **2024/03/08** **Fix statements** on our ITSC'23 paper: KITTI sequences pose are also from SemanticKITTI which used SuMa. In the DUFOMap paper Section V-C, Table III, we present the dynamic removal result on different pose sources. Check discussion in [DUFOMap](https://arxiv.org/abs/2403.01449) paper if you are interested.
-- **2023/06/13** The [benchmark paper](https://arxiv.org/abs/2307.07260) Accepted by ITSC 2023 and release five methods (Octomap, Octomap w GF, ERASOR, Removert) and three datasets (01, 05, av2, semindoor) in [benchmark paper](https://arxiv.org/abs/2307.07260).
-
----
-
-- [ ] 2024/04/19: I will update a document page soon (tutorial, manual book, and new online leaderboard), and point out the commit for each paper. Since there are some minor mistakes in the first version. Stay tune with us!
-
-
-## Methods:
-
-Please check in [`methods`](methods) folder.
-
-Online (w/o prior map):
-- [x] DUFOMap (Ours 🚀): [RAL'24](https://arxiv.org/abs/2403.01449), [**Benchmark Instruction**](https://github.com/KTH-RPL/dufomap)
-- [x] Octomap w GF (Ours 🚀): [ITSC'23](https://arxiv.org/abs/2307.07260), [**Benchmark improvement ITSC 2023**](https://github.com/Kin-Zhang/octomap/tree/feat/benchmark)
-- [x] dynablox: [RAL'23 official link](https://github.com/ethz-asl/dynablox), [**Benchmark Adaptation**](https://github.com/Kin-Zhang/dynablox/tree/feature/benchmark)
-- [x] Octomap: [ICRA'10 & AR'13 official link](https://github.com/OctoMap/octomap_mapping), [**Benchmark implementation**](https://github.com/Kin-Zhang/octomap/tree/feat/benchmark)
-
-Learning-based (data-driven) (w pretrain-weights provided):
-- [x] DeFlow (Ours 🚀): [ICRA'24](https://arxiv.org/abs/2401.16122), [**Benchmark Adaptation**](https://github.com/KTH-RPL/DeFlow/tree/feature/dynamicmap)
-
-Offline (need prior map).
-- [x] BeautyMap (Ours 🚀): [RAL'24](https://arxiv.org/abs/2405.07283), [**Official Code**](https://github.com/MKJia/BeautyMap)
-- [x] ERASOR: [RAL'21 official link](https://github.com/LimHyungTae/ERASOR), [**benchmark implementation**](https://github.com/Kin-Zhang/ERASOR/tree/feat/no_ros)
-- [x] Removert: [IROS 2020 official link](https://github.com/irapkaist/removert), [**benchmark implementation**](https://github.com/Kin-Zhang/removert)
-
-Please note that we provided the comparison methods also but modified a little bit for us to run the experiments quickly, but no modified on their methods' core. Please check the LICENSE of each method in their official link before using it.
-
-You will find all methods in this benchmark under `methods` folder. So that you can easily reproduce the experiments. [Or click here to check our score screenshot directly](assets/imgs/eval_demo.png).
-<!-- And we will also directly provide [the result data](TODO) so that you don't need to run the experiments by yourself. ... Where to save this? -->
-
-Last but not least, feel free to pull request if you want to add more methods. Welcome!
-
-## Dataset & Scripts
-
-Download PCD files mentioned in paper from [Zenodo online drive](https://zenodo.org/records/10886629). Or create unified format by yourself through the [scripts we provided](scripts/README.md) for more open-data or your own dataset. Please follow the LICENSE of each dataset before using it.
-
-- [x] [Semantic-Kitti, outdoor small town](https://semantic-kitti.org/dataset.html) VLP-64
-- [x] [Argoverse2.0, outdoor US cities](https://www.argoverse.org/av2.html#lidar-link) VLP-32
-- [x] [UDI-Plane] Our own dataset, Collected by VLP-16 in a small vehicle.
-- [x] [KTH-Campuse] Our [Multi-Campus Dataset](https://mcdviral.github.io/), Collected by [Leica RTC360 3D Laser Scan](https://leica-geosystems.com/products/laser-scanners/scanners/leica-rtc360). Only 18 frames included to download for demo, please check [the official website](https://mcdviral.github.io/) for more.
-- [x] [Indoor-Floor] Our own dataset, Collected by Livox mid-360 in a quadruped robot.
-<!-- - [ ] [HKUST-Building] Our [fusionportable Dataset](https://fusionportable.github.io/dataset/fusionportable/), collected by [Leica BLK360 Imaging Laser Scanner](https://leica-geosystems.com/products/laser-scanners/scanners/blk360) -->
-<!-- - [ ] [KTH-Indoor] Our own dataset, Collected by VLP-16/Mid-70 in kobuki. -->
-
-Welcome to contribute your dataset with ground truth to the community through pull request.
-
-### Evaluation
-
-First all the methods will output the clean map, if you are only **user on map clean task,** it's **enough**. But for evaluation, we need to extract the ground truth label from gt label based on clean map. Why we need this? Since maybe some methods downsample in their pipeline, we need to extract the gt label from the downsampled map.
-
-Check [create dataset readme part](scripts/README.md#evaluation) in the scripts folder to get more information. But you can directly download the dataset through the link we provided. Then no need to read the creation; just use the data you downloaded.
+Main package (sometimes this is all a user needs; check the issue section):
+```bash
+pip install mkdocs-material
+```
 
-- Visualize the result pcd files in [CloudCompare](https://www.danielgm.net/cc/) or the script to provide, one click to get all evaluation benchmarks and comparison images like paper have check in [scripts/py/eval](scripts/py/eval).
+Plugin packages:
+```bash
+pip install mkdocs-minify-plugin mkdocs-git-revision-date-localized-plugin mkdocs-git-authors-plugin mkdocs-video
+```
 
-- All color bar also provided in CloudCompare, here is [tutorial how we make the animation video](TODO).
+### Run
+```bash
+mkdocs serve
+```
 
 
 ## Acknowledgements
 

docs/data.md → docs/data/creation.md (+6, -66)

@@ -1,73 +1,13 @@
-# Data
+# Data Creation
 
-In this section, we will introduce the data format we use in the benchmark, and how to prepare the data (public datasets or collected by ourselves) for the benchmark.
-
-## Format
-
-We saved all our data into PCD files, first let me introduce the [PCD file format](https://pointclouds.org/documentation/tutorials/pcd_file_format.html):
-
-The important two for us are `VIEWPOINT`, `POINTS` and `DATA`:
-
-- **VIEWPOINT** - specifies an acquisition viewpoint for the points in the dataset. This could potentially be later on used for building transforms between different coordinate systems, or for aiding with features such as surface normals, that need a consistent orientation.
-
-The viewpoint information is specified as a translation (tx ty tz) + quaternion (qw qx qy qz). The default value is:
-
-```bash
-VIEWPOINT 0 0 0 1 0 0 0
-```
-
-- **POINTS** - specifies the number of points in the dataset.
-
-- **DATA** - specifies the data type that the point cloud data is stored in. As of version 0.7, three data types are supported: ascii, binary, and binary_compressed. We saved as binary for faster reading and writing.
-
-### Example
-
-```
-# .PCD v0.7 - Point Cloud Data file format
-VERSION 0.7
-FIELDS x y z intensity
-SIZE 4 4 4 4
-TYPE F F F F
-COUNT 1 1 1 1
-WIDTH 125883
-HEIGHT 1
-VIEWPOINT -15.6504 17.981 -0.934952 0.882959 -0.0239536 -0.0058903 -0.468802
-POINTS 125883
-DATA binary
-```
-
-In this `004390.pcd` we have 125883 points, and the pose (sensor center) of this frame is: `-15.6504 17.981 -0.934952 0.882959 -0.0239536 -0.0058903 -0.468802`. All points are already transformed to the world frame.
-
-## Download benchmark data
-
-We already processed the data in the benchmark, you can download the data from the [following links](https://zenodo.org/records/10886629):
-
-
-| Dataset | Description | Sensor Type | Total Frame Number | Size |
-| --- | --- | --- | --- | --- |
-| KITTI sequence 00 | in a small town with few dynamics (including one pedestrian around) | VLP-64 | 141 | 384.8 MB |
-| KITTI sequence 05 | in a small town straight way, one higher car, the benchmarking paper cover image from this sequence. | VLP-64 | 321 | 864.0 MB |
-| Argoverse2 | in a big city, crowded and tall buildings (including cyclists, vehicles, people walking near the building etc. | 2 x VLP-32 | 575 | 1.3 GB |
-| KTH campus (no gt) | Collected by us (Thien-Minh) on the KTH campus. Lots of people move around on the campus. | Leica RTC360 | 18 | 256.4 MB |
-| Semi-indoor | Collected by us, running on a small 1x2 vehicle with two people walking around the platform. | VLP-16 | 960 | 620.8 MB |
-| Twofloor (no gt) | Collected by us (Bowen Yang) in a quadruped robot. A two-floor structure environment with one pedestrian around. | Livox-mid 360 | 3305 | 725.1 MB |
-
-Download command:
-```bash
-wget https://zenodo.org/api/records/10886629/files-archive.zip
-
-# or download each sequence separately
-wget https://zenodo.org/records/10886629/files/00.zip
-wget https://zenodo.org/records/10886629/files/05.zip
-wget https://zenodo.org/records/10886629/files/av2.zip
-wget https://zenodo.org/records/10886629/files/kthcampus.zip
-wget https://zenodo.org/records/10886629/files/semindoor.zip
-wget https://zenodo.org/records/10886629/files/twofloor.zip
-```
+In this section, we demonstrate how to extract data in the expected format from public datasets (KITTI, Argoverse 2) and from data collected by ourselves (rosbag).
+<!-- I will add soo -->
+Still, I recommend downloading the benchmark data directly from the [Zenodo](https://zenodo.org/records/10886629) link without reading this section; go back to the [data download and visualization](index.md#download-benchmark-data) page.
+It is only needed for people who want to **run more data of their own**.
 
 
 ## Create by yourself
 
-If you want to process more data, you can follow the instructions below. (
+If you want to process more data, you can follow the instructions below.
 
 
 !!! Note
     Feel free to skip this section if you only want to use the benchmark data.
docs/data/index.md (new file, +122)

@@ -0,0 +1,122 @@
# Data Description

In this section, we introduce the data format we use in the benchmark, and how to visualize the data easily.
The next section, on creation, shows how to create data in this format from your own data.

## Benchmark Unified Format

We save all our data as **PCD files**. First, a brief introduction to the [PCD file format](https://pointclouds.org/documentation/tutorials/pcd_file_format.html):

The three important fields for us are `VIEWPOINT`, `POINTS`, and `DATA`:

- **VIEWPOINT** - specifies an acquisition viewpoint for the points in the dataset. This could potentially be used later for building transforms between different coordinate systems, or for aiding with features such as surface normals that need a consistent orientation.

    The viewpoint information is specified as a translation (tx ty tz) + quaternion (qw qx qy qz). The default value is:

    ```bash
    VIEWPOINT 0 0 0 1 0 0 0
    ```

- **POINTS** - specifies the number of points in the dataset.

- **DATA** - specifies the data type that the point cloud data is stored in. As of version 0.7, three data types are supported: ascii, binary, and binary_compressed. We save ours as binary for faster reading and writing.

### A Header Example

Here is an example header, taken from `004390.pcd` in KITTI sequence 00:

```
# .PCD v0.7 - Point Cloud Data file format
VERSION 0.7
FIELDS x y z intensity
SIZE 4 4 4 4
TYPE F F F F
COUNT 1 1 1 1
WIDTH 125883
HEIGHT 1
VIEWPOINT -15.6504 17.981 -0.934952 0.882959 -0.0239536 -0.0058903 -0.468802
POINTS 125883
DATA binary
```

This `004390.pcd` has 125883 points, and the pose (sensor center) of this frame is `-15.6504 17.981 -0.934952 0.882959 -0.0239536 -0.0058903 -0.468802`.

Again, all points in the data frames are ==already transformed to the world frame==, and VIEWPOINT is the sensor pose.

### How to read PCD files

In C++, we usually use the PCL library to read PCD files; here is a simple example:

```cpp
#include <pcl/io/pcd_io.h>  // for pcl::io::loadPCDFile
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

pcl::PointCloud<pcl::PointXYZI>::Ptr pcd(new pcl::PointCloud<pcl::PointXYZI>);
pcl::io::loadPCDFile<pcl::PointXYZI>("data/00/004390.pcd", *pcd);
```

In Python, we have a simple script to read PCD files in [the benchmark code](https://github.com/KTH-RPL/DynamicMap_Benchmark/blob/main/scripts/py/utils/pcdpy3.py), or from [my gist](https://gist.github.com/Kin-Zhang/bd6475bdfa0ebde56ab5c060054d5185); you don't need to read the script in detail, just use it directly.

```python
import pcdpy3  # the script we provide
pcd_data = pcdpy3.PointCloud.from_path('data/00/004390.pcd')
pc = pcd_data.np_data[:,:3]  # shape (N, 3); N: number of points, 3: x y z
# if the header has an intensity or rgb field, you can get it by:
# pc_intensity = pcd_data.np_data[:,3]   # shape (N,)
# pc_rgb = pcd_data.np_data[:,3:6]       # shape (N, 3)
```
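
Since the PCD header is plain ASCII text even when `DATA` is binary, you can also recover the sensor pose yourself. Below is a minimal sketch (ours, not part of the benchmark scripts) that parses the `VIEWPOINT` line and transforms the world-frame points back into the sensor frame; note the quaternion order is (qw qx qy qz), as described above, and `pc` is the (N, 3) array read with `pcdpy3` in the previous snippet.

```python
import numpy as np

def read_viewpoint(pcd_path):
    """Parse (tx ty tz) and (qw qx qy qz) from the ASCII header of a PCD file."""
    with open(pcd_path, 'rb') as f:
        for _ in range(20):  # the header is only ~11 text lines
            line = f.readline().decode('ascii', errors='ignore').strip()
            if line.startswith('VIEWPOINT'):
                vals = [float(v) for v in line.split()[1:]]
                return np.array(vals[:3]), np.array(vals[3:])
            if line.startswith('DATA'):
                break
    raise ValueError(f'no VIEWPOINT line found in {pcd_path}')

def quat_to_rot(q):
    """Rotation matrix from a quaternion in (qw, qx, qy, qz) order."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

t, q = read_viewpoint('data/00/004390.pcd')
R = quat_to_rot(q)
# pc is the (N, 3) world-frame array from the pcdpy3 snippet above;
# undo the sensor pose to get the points in the sensor frame:
pc_sensor = (pc - t) @ R  # row-vector form of R.T @ (p - t) per point
```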

## Download benchmark data

We have already processed the benchmark data; you can download it from the [following links](https://zenodo.org/records/10886629):

| Dataset | Description | Sensor Type | Total Frame Number | Size |
| --- | --- | --- | --- | --- |
| KITTI sequence 00 | In a small town with few dynamics (including one pedestrian walking around). | VLP-64 | 141 | 384.8 MB |
| KITTI sequence 05 | On a straight road in a small town, with one taller vehicle; the benchmark paper's cover image is from this sequence. | VLP-64 | 321 | 864.0 MB |
| Argoverse2 | In a big, crowded city with tall buildings (including cyclists, vehicles, people walking near the buildings, etc.). | 2 x VLP-32 | 575 | 1.3 GB |
| KTH campus (no gt) | Collected by us (Thien-Minh) on the KTH campus. Lots of people moving around the campus. | Leica RTC360 | 18 | 256.4 MB |
| Semi-indoor | Collected by us, running a small 1x2 vehicle with two people walking around the platform. | VLP-16 | 960 | 620.8 MB |
| Twofloor (no gt) | Collected by us (Bowen Yang) with a quadruped robot. A two-floor structured environment with one pedestrian around. | Livox-mid 360 | 3305 | 725.1 MB |

Download command:
```bash
wget https://zenodo.org/api/records/10886629/files-archive.zip

# or download each sequence separately
wget https://zenodo.org/records/10886629/files/00.zip
wget https://zenodo.org/records/10886629/files/05.zip
wget https://zenodo.org/records/10886629/files/av2.zip
wget https://zenodo.org/records/10886629/files/kthcampus.zip
wget https://zenodo.org/records/10886629/files/semindoor.zip
wget https://zenodo.org/records/10886629/files/twofloor.zip
```
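
After unzipping, a quick sanity check against the frame counts in the table above can catch incomplete downloads. A small sketch, assuming each sequence is unzipped into its own folder of per-frame `.pcd` files under a local `data_root` (a hypothetical path; adjust the glob if the actual layout differs):

```python
from pathlib import Path

data_root = Path('/home/kin/data')  # wherever you unzipped the sequences
expected = {'00': 141, '05': 321, 'av2': 575,
            'kthcampus': 18, 'semindoor': 960, 'twofloor': 3305}

for seq, n_frames in expected.items():
    n_found = len(list((data_root / seq).rglob('*.pcd')))
    status = 'ok' if n_found >= n_frames else 'incomplete?'
    print(f'{seq}: {n_found} pcd files (expected {n_frames} frames) -> {status}')
```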

## Visualize the data

We provide a simple script to visualize the data in the benchmark; you can find it at [scripts/py/data/play_data.py](https://github.com/KTH-RPL/DynamicMap_Benchmark/blob/main/scripts/py/data/play_data.py). You may want to download the data and the requirements first:

```bash
cd scripts/py

# download the data
wget https://zenodo.org/records/10886629/files/twofloor.zip

# https://github.com/KTH-RPL/DynamicMap_Benchmark/blob/main/scripts/py/requirements.txt
pip install -r requirements.txt
```

Run it:
```bash
python data/play_data.py --data_folder /home/kin/data/twofloor --speed 1 # speed 1 for normal speed, 2 for 2x speed
```

A window will pop up showing the point cloud data; use the mouse to rotate, zoom in/out, and move the view. The terminal prints help information for starting/stopping the playback.

<center>
![type:video](https://github.com/user-attachments/assets/158040bd-02ab-4fd4-ab93-2dcacabf342a)
</center>

The axes shown mark the sensor frame. The video plays in the sensor frame, so you can see the sensor move around in the video.
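
If you only want a quick look at a single frame without the playback script, here is a minimal sketch using Open3D (we assume it is installed, e.g. via `pip install open3d`; `play_data.py` itself may use a different visualization stack, and the file path below is hypothetical):

```python
import open3d as o3d

# load one world-frame PCD (adjust the path to your local data layout)
pcd = o3d.io.read_point_cloud('/home/kin/data/twofloor/000001.pcd')
print(pcd)  # e.g. "PointCloud with N points."

# coordinate axes at the world origin for reference
axes = o3d.geometry.TriangleMesh.create_coordinate_frame(size=2.0)
o3d.visualization.draw_geometries([pcd, axes])
```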
