Commit 29f8654

Author: Sean M. Bryan
Commit message: Implemented Docs Folder
1 parent beff95c commit 29f8654

110 files changed, +668 −2 lines


_config.yml (+4 −1)

 theme: jekyll-theme-minimal
+
+plugins:
+  - jekyll-sitemap
8 files renamed without changes.
# Adaptive Monte Carlo Localization

## What is a particle filter?

A particle filter is initialized with a very large number of particles spanning the entire state space. As additional measurements arrive, the filter predicts and updates its belief, which gives the robot a multi-modal posterior distribution. This is a major difference from a Kalman filter, which approximates the posterior as a Gaussian. Over multiple iterations, the particles converge to a unique region of the state space.
5+
6+
![Particle Filter in Action over Progressive Time Steps](assets/AdaptiveMonteCarloLocalization-65e37.png)
7+
8+
**Figure 1:** Particle Filter in Action over Progressive Time Steps
9+
10+
The steps followed in a Particle Filter are:
11+
1. **Re-sampling:** Draw with replacement a random sample from the sample set according to the (discrete) distribution defined through the importance weights. This sample can be seen as an instance of the belief.
12+
13+
2. **Sampling:** Use previous belief and the control information to sample 􀀀from the distribution which describes the dynamics of the system. The current belief now represents the density given by the product of distribution and an instance of the previous belief. This density is the proposal distribution used in the next step.
14+
15+
3. **Importance sampling:** Weight the sample by the importance weight, the likelihood of the sample X given the measurement Z.
16+
17+
Each iteration of these three steps generates a sample drawn from the posterior belief. After n iterations, the importance weights of the samples are normalized so that they sum up to 1.
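As a concrete illustration, the three steps above can be sketched in a few lines of Python. This is a minimal 1-D toy filter, not the ROS implementation; the motion model, measurement model, and noise values are all illustrative assumptions.

```
import math
import random

def resample(particles, weights):
    """Re-sampling: draw with replacement according to the importance weights."""
    return random.choices(particles, weights=weights, k=len(particles))

def particle_filter_step(particles, control, measurement,
                         motion_noise=0.1, meas_noise=0.5):
    # Sampling: propagate each particle through the motion model.
    particles = [x + control + random.gauss(0.0, motion_noise) for x in particles]
    # Importance sampling: weight each sample by the likelihood p(z | x).
    weights = [math.exp(-0.5 * ((measurement - x) / meas_noise) ** 2)
               for x in particles]
    # Re-sampling: draw the next particle set from the weighted belief.
    return resample(particles, weights)

random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(1000)]  # uniform prior
true_pose = 2.0
for _ in range(20):
    true_pose += 0.5                        # robot moves 0.5 m per step
    z = true_pose + random.gauss(0.0, 0.2)  # noisy range measurement
    particles = particle_filter_step(particles, 0.5, z)

estimate = sum(particles) / len(particles)  # converges near true_pose
```

After a handful of steps the particle cloud collapses around the true pose, mirroring the convergence shown in Figure 1.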
For further details on this topic, [Sebastian Thrun's paper on particle filters in robotics](http://robots.stanford.edu/papers/thrun.pf-in-robotics-uai02.pdf) is a good source for a mathematical understanding of particle filters, their applications, and their drawbacks.

## What is an adaptive particle filter?

A key problem with the particle filter is maintaining a random distribution of particles throughout the state space, which gets out of hand in high-dimensional problems. For these reasons it is much better to use an adaptive particle filter, which converges much faster and is computationally far more efficient than a basic particle filter.

The key idea is to bound the error introduced by the sample-based representation of the particle filter. To derive this bound, it is assumed that the true posterior is given by a discrete, piecewise-constant distribution such as a discrete density tree or a multidimensional histogram. For such a representation, we can determine the number of samples so that the distance between the maximum likelihood estimate (MLE) based on the samples and the true posterior does not exceed a pre-specified threshold. As finally derived, the number of particles needed is proportional to the inverse of this threshold.
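For intuition, the resulting bound can be written directly in code. In this sketch, `eps` plays the role of the error threshold, `z` is the standard normal quantile for the desired confidence, and `k` is the number of histogram bins currently occupied by particles. The formula follows the KLD-sampling derivation, but treat the constants below as assumptions rather than a definitive implementation.

```
import math

def kld_sample_count(k, eps=0.05, z=2.326):
    """Approximate number of particles needed so the K-L divergence between
    the sample-based estimate and the true discrete posterior stays below
    eps with the confidence implied by z (z = 2.326 for roughly 99%)."""
    if k < 2:
        return 1
    a = 2.0 / (9.0 * (k - 1))
    n = (k - 1) / (2.0 * eps) * (1.0 - a + math.sqrt(a) * z) ** 3
    return int(math.ceil(n))
```

A belief spread across more bins (greater uncertainty) demands more particles, which is precisely the adaptive behaviour described above.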
[Dieter Fox's paper on adaptive particle filters](http://papers.nips.cc/paper/1998-kld-sampling-adaptive-particle-filters.pdf) delves much deeper into the theory and mathematics behind these concepts. It also covers the implementation and performance aspects of this technique.

## Use of Adaptive Particle Filter for Localization

To use an adaptive particle filter for localization, we start with a map of the environment. We can either set the robot to a known position, in which case we are localizing it manually, or we can start the robot with no initial estimate of its position. As the robot moves forward, we generate new samples that predict its position after each motion command. Sensor readings are incorporated by re-weighting these samples and normalizing the weights. It is generally good to add a few random, uniformly distributed samples, since they help the robot recover in cases where it has lost track of its position. Without these random samples, the robot would keep re-sampling from an incorrect distribution and would never recover. The reason it takes the filter multiple sensor readings to converge is that a map may contain ambiguities due to symmetry, which is what gives us a multi-modal posterior belief.

![Localization Process using Particle Filters](assets/AdaptiveMonteCarloLocalization-0d322.png)

[Dieter Fox's paper on Monte Carlo Localization for mobile robots](https://www.ri.cmu.edu/pub_files/pub1/fox_dieter_1999_1/fox_dieter_1999_1.pdf) gives further details on this topic and also compares this technique to many others, such as Kalman-filter-based localization, grid-based localization, and topological Markov localization.

## Configuring the ROS AMCL package

At the conceptual level, the AMCL package maintains a probability distribution over the set of all possible robot poses and updates this distribution using data from odometry and laser range-finders. Depth cameras can also be used to generate 2D laser scans via the `depthimage_to_laserscan` package, which takes in a depth stream and publishes laser scans on `sensor_msgs/LaserScan`. More details can be found on the [ROS Wiki](http://wiki.ros.org/depthimage_to_laserscan).
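As a sketch, converting a depth camera into a scan might look like the launch fragment below; the topic and frame names are assumptions that depend on your camera driver.

```
<launch>
  <node pkg="depthimage_to_laserscan" type="depthimage_to_laserscan"
        name="depthimage_to_laserscan">
    <!-- remap "image" to your camera's depth image topic -->
    <remap from="image" to="/camera/depth/image_raw"/>
    <param name="output_frame_id" value="camera_depth_frame"/>
    <param name="scan_height" value="1"/> <!-- pixel rows used for the scan -->
  </node>
</launch>
```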
The package also requires a predefined map of the environment against which to compare observed sensor values. At the implementation level, the AMCL package represents the probability distribution using a particle filter. The filter is "adaptive" because it dynamically adjusts the number of particles: when the robot's pose is highly uncertain, the number of particles is increased; when the pose is well determined, the number is decreased. This lets the robot trade off processing speed against localization accuracy.

Even though the AMCL package works fine out of the box, there are various parameters which can be tuned based on knowledge of the platform and sensors being used. Configuring these parameters can increase the performance and accuracy of the AMCL package and reduce the recovery rotations the robot carries out during navigation.

There are three categories of ROS parameters that can be used to configure the AMCL node: overall filter, laser model, and odometry model. The full list of these configuration parameters, along with further details about the package, can be found on the [AMCL wiki page](http://wiki.ros.org/amcl). They can be edited in the `amcl.launch` file.

Here is a sample launch file. Generally you can leave many parameters at their default values.
```
<launch>

  <arg name="use_map_topic" default="false"/>
  <arg name="scan_topic" default="scan"/>

  <node pkg="amcl" type="amcl" name="amcl">
    <param name="use_map_topic" value="$(arg use_map_topic)"/>
    <!-- Publish scans from best pose at a max of 10 Hz -->
    <param name="odom_model_type" value="diff"/>
    <param name="odom_alpha5" value="0.1"/>
    <param name="gui_publish_rate" value="10.0"/>
    <param name="laser_max_beams" value="60"/>
    <param name="laser_max_range" value="12.0"/>
    <param name="min_particles" value="500"/>
    <param name="max_particles" value="2000"/>
    <param name="kld_err" value="0.05"/>
    <param name="kld_z" value="0.99"/>
    <param name="odom_alpha1" value="0.2"/>
    <param name="odom_alpha2" value="0.2"/>
    <!-- translation std dev, m -->
    <param name="odom_alpha3" value="0.2"/>
    <param name="odom_alpha4" value="0.2"/>
    <param name="laser_z_hit" value="0.5"/>
    <param name="laser_z_short" value="0.05"/>
    <param name="laser_z_max" value="0.05"/>
    <param name="laser_z_rand" value="0.5"/>
    <param name="laser_sigma_hit" value="0.2"/>
    <param name="laser_lambda_short" value="0.1"/>
    <param name="laser_model_type" value="likelihood_field"/>
    <!-- <param name="laser_model_type" value="beam"/> -->
    <param name="laser_likelihood_max_dist" value="2.0"/>
    <param name="update_min_d" value="0.25"/>
    <param name="update_min_a" value="0.2"/>
    <param name="odom_frame_id" value="odom"/>
    <param name="resample_interval" value="1"/>
    <!-- Increase tolerance because the computer can get quite busy -->
    <param name="transform_tolerance" value="1.0"/>
    <param name="recovery_alpha_slow" value="0.0"/>
    <param name="recovery_alpha_fast" value="0.0"/>
    <remap from="scan" to="$(arg scan_topic)"/>
  </node>
</launch>
```
The best way to tune these parameters is to record a ROS bag file with odometry and laser scan data, and play it back while tuning AMCL and visualizing the output in RViz. This makes it possible to track performance against a fixed data set as changes are made.
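Concretely, that record-and-replay workflow might look like the commands below; the topic names are assumptions that depend on your robot's configuration.

```
$ rosbag record -O amcl_tuning.bag /scan /odom /tf
$ rosparam set use_sim_time true
$ rosbag play --clock amcl_tuning.bag
```

Drive the robot around while recording; then, with AMCL and RViz running, replay the bag with simulated time so AMCL processes the same data after every parameter change.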
# Oculus Prime Navigation

There are multiple techniques for carrying out waypoint navigation on the Oculus Prime platform using the navigation stack.
1. **"Pure" ROS:** To set a single goal the pure ROS way, bypassing the oculusprime browser UI altogether, first run:
```
$ roslaunch oculusprime globalpath_follow.launch
```
Or try:
```
$ roslaunch oculusprime remote_nav.launch
```
- The `globalpath_follow` launch file sets up the basic nodes necessary to have Oculus Prime go where ROS Navigation wants it to go, and launches the rest of the navigation stack. Once it is running, you can [set the initial position and goals graphically using RViz](http://wiki.ros.org/oculusprime_ros/navigation_rviz_tutorial).
2. **Via the command line:** You can also send coordinates via the command line (after you set the initial position using the web browser or the RViz map). Enter a command similar to:
```
$ rostopic pub /move_base_simple/goal geometry_msgs/PoseStamped \
'{ header: { frame_id: "map" }, pose: { position: { x: 4.797, y: 2.962, z: 0 }, orientation: { x: 0, y: 0, z: 0.999961751128, w: -0.00874621528223 } } }'
```
- Change the coordinates shown to match your goal. An easy way to find goal coordinates is to set the goal on the map with the mouse and, while the robot is driving there, enter:
```
$ rostopic echo /move_base/current_goal
```
- An example of doing this from a Python ROS node, starting with simpler coordinates (x, y, th), can be found [here](https://gist.github.com/xaxxontech/6cbfefd38208b9f8b153).
3. **Waypoints:** If you want to choose from a list of waypoints instead, you can use the functionality built into the oculusprime server and do it via oculusprime commands:
- First read the waypoints:
```
state rosmapwaypoints
```
- This should return a long comma-separated string with no spaces, using the format `name,x,y,th,` for each waypoint, similar to `<telnet> <state> waypointA,4.387,-0.858,-0.3218,waypointB,2.081,2.739,-1.5103`.
- Change the coordinates of the particular waypoint you want within the string, then send the whole string back with the `savewaypoints` command, e.g.:
Advanced -> telnet text command -> enter command:
```
savewaypoints waypointA,5.456,-2.345,-0.3218,waypointB,2.081,2.739,-1.5103
```
- Then drive to the new coordinates by sending:
```
gotowaypoint waypointA
```

docs/Intelligence/ROCON.md

# ROCON Multi-master Framework

ROCON stands for Robotics in Concert. It is a multi-master framework provided by ROS, offering easy solutions for multi-robot/device/tablet platforms.

Multimaster is primarily a means of connecting autonomously capable ROS subsystems. Everything could of course be done under a single master, but the key point is for subsystems to be autonomously independent. This is important for wirelessly connected robots that cannot always guarantee their connection to the network.
## The Appable Robot

The appable robot is a complete framework intended to simplify:

- Software installation
- Launching
- Retasking
- Connectivity (pairing or multimaster modes)
- Writing portable software

It also provides useful means of interacting with the robot over the public interface via two different modes:

1. **Pairing Mode:** 1-1 human-robot configuration and interaction.
2. **Concert Mode:** autonomous control of a robot through a concert solution.
## Rapps

ROS Indigo introduced the concept of rapps for setting up a multi-master system using ROCON. Rapps are meta-packages providing configuration, launching rules, and install dependencies. They are essentially launchers with no code.

- The specifications and parameters for rapps can be found in the [Rapp Specifications](http://cmumrsdproject.wikispaces.com/link).
- [A guide](http://cmumrsdproject.wikispaces.com/Creating+Rapps) to creating rapps and working with them is also available.
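As a sketch, a rapp specification is a small YAML manifest. The keys below follow the Indigo-era `rocon_apps` examples, and the specific resource names are assumptions for illustration:

```
display: Talker
description: Default ROS-style talker tutorial
compatibility: rocon:/*
launch: rocon_apps/talker.launch
public_interface: rocon_apps/talker.interface
icon: rocon_apps/talker.png
```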
## The Gateway Model

The gateway model is the engine of a ROCON multimaster system. Gateways are used by the Rapp Manager and the concert-level components to coordinate the exchange of ROS topics, services, and actions between masters. The model is based on the LAN concept, where a gateway stands between your LAN and the rest of the internet, controlling both what communications are allowed through and what is sent out. Gateways for ROS systems are conceptually similar: they interpose themselves between one ROS system and others, coordinating the traffic flowing between a ROS system and remote ROS systems.

A hub acts as a shared key-value store for multiple ROS systems. Further detailed information on how topics are shared between multiple masters can be found [here](http://cmumrsdproject.wikispaces.com/The+Gateway+Model).
