<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- The above 3 meta tags *must* come first in the head; any other head content must come *after* these tags -->
<meta name="description" content="Mobile mapping results">
<meta name="author" content="BEAM">
<link rel="icon" href="../../favicon.ico">
<title>Beam Robotics | Maps</title>
<!-- Bootstrap core CSS -->
<link href="libs/bootstrap/css/bootstrap.min.css" rel="stylesheet">
<!-- Custom styles for this template -->
<link href="potree.css" rel="stylesheet">
<!-- Google Analytics - Attached to Nick's gmail -->
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-139157422-1"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-139157422-1');
</script>
</head>
<body>
<script src="content.js"></script>
<div class="container">
<div class="jumbotron">
<h2>Beam Robotics | Map Viewer</h2>
<br><br>
<h3>Welcome to our online map viewer! Here you will find interactive maps of
previous scans performed by Beam, as well as some of our open source datasets.
<br><br>
</h3>
<hr>
<h4>Platforms</h4>
<h3>
<br><br>
We have four scanning platforms:
<br><br>
</h3>
<div id="results_container">
<div id="ig_handle_container">
<h3><strong>4. Inspector Gadget Handle (IG Handle) :</strong> IG Handle is designed both as a handheld
mobile scanning platform and to be mounted on a ground robot for autonomous scanning.
<!-- TODO: add all sensors here -->
We have released data collected from the IG Handle <a href="dataset.html"><button>HERE</button></a>.
</h3>
<br><br>
</div>
<div id="ig_container">
<h3><strong>1. Inspector Gadget (IG) :</strong> IG is focused primarily on inspection. For localization,
IG relies on a horizontal VLP16 lidar, an IMU, and a fisheye SLAM camera. For mapping and inspection,
IG uses the upper sensor system, which includes a vertical VLP16 lidar, one high-resolution camera
for detecting defects, one fisheye camera for map colourization, and one infrared camera for
detecting subsurface defects or temperature gradients.
</h3>
<br><br>
</div>
<div id="roben_container">
<h3><strong>2. Robot Eng (RobEn) :</strong> RobEn is focused on rapid mapping and full coverage.
For localization, RobEn has a horizontal VLP16 lidar, an IMU, and a Ladybug5+ spherical camera. For
mapping, RobEn uses a vertically mounted rotating VLP16, which achieves full 360-degree coverage once
per second, and the spherical Ladybug5+ camera, which provides full coverage at a high rate.
</h3>
<br><br>
</div>
<div id="faro_container">
<h3><strong>3. Faro Focus :</strong> We also have access to a more
traditional tripod-based laser scanner
(often referred to as a Terrestrial Laser Scanner). This is the current
go-to solution for most land-based laser scanning. These scans are very
time-consuming, and manual data processing is required after collection
to generate the final maps.
</h3>
<br><br>
</div>
</div>
<div id="results_container">
<div id="ig_handle_container">
<img src="images/IGHandle.jpg" alt="IGHandle" /><br />
</div>
<div id="ig_container">
<img src="images/InspectorGadget.jpg" alt="IG" /><br />
</div>
<div id="roben_container">
<img src="images/RobEn.jpg" alt="RobEn" /><br />
</div>
<div id="faro_container">
<img src="images/FaroFocus.jpeg" alt="Faro" /><br />
</div>
</div>
<br><br>
<hr>
<h4>Map Results</h4>
<br><br>
<div id="results_container">
<div id="ig_container">
<h1>IG Maps</h1>
<script>includeIGResults();</script>
</div>
<div id="roben_container">
<h1>RobEn Maps</h1>
<script>includeRobenResults();</script>
</div>
<div id="faro_container">
<h1>Faro Maps</h1>
<script>includeFaroResults();</script>
</div>
</div>
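<!--
  For reference, a minimal sketch of how one of the content.js helpers might be implemented.
  This is an assumption for illustration only: the function name includeIGResults matches the
  call above, but the map list and URL shown here are hypothetical placeholders, not the real
  entries in content.js. Because each helper is invoked from an inline script inside its
  container div, document.write inserts the generated markup at that exact position in the page.

  function includeIGResults() {
    // Hypothetical list of IG maps; the real list lives in content.js.
    var maps = [
      { name: "Example IG map", url: "example_ig_map.html" }
    ];
    var html = "<ul>";
    for (var i = 0; i < maps.length; i++) {
      html += '<li><a href="' + maps[i].url + '">' + maps[i].name + '</a></li>';
    }
    html += "</ul>";
    document.write(html); // writes the list where the inline script runs
  }
-->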
<br><br>
<hr>
<h4>Mapping with the Autonomoose</h4>
<h3>
<br><br>
We also have some maps generated using data from the University of Waterloo's
autonomous vehicle platform, the <strong><a href="https://www.autonomoose.net/">Autonomous</a></strong>.
This work was done in
collaboration with the Waterloo Autonomous Vehicles
(<strong><a href="http://wavelab.uwaterloo.ca/">WAVE</a></strong>)
lab. Specifically, we led research on filtering snowfall from lidar scans
for autonomous vehicles.
We developed a custom de-noising filter to remove snow in real time without
removing important environmental features. This work was published and presented
at the Computer and Robot Vision (CRV) conference in 2018 (see
<strong><a href="https://ieeexplore.ieee.org/abstract/document/8575761">paper</a></strong>).
<br><br>
</h3>
<div id="results_container">
<div id="ig_container">
<script>includeMooseResults1();</script>
</div>
<div id="roben_container">
<script>includeMooseResults2();</script>
</div>
<div id="faro_container">
<script>includeMooseImage();</script>
</div>
</div>
<!--<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js"></script>-->
<script src="libs/bootstrap/js/bootstrap.min.js"></script>
</body>
</html>