
Commit 789c475

refactored code base with pick n place example
1 parent bbc53c3 commit 789c475


50 files changed: +1977 −2 lines

README.md

Lines changed: 1 addition & 1 deletion
@@ -220,7 +220,7 @@ python3 docker_frankapy_test.py
  ## Using calibration

- Please checkout [calibration/docs](calibration/docs) for documentations of hand-eye calibration, usage of this tool
+ Please checkout [robot_toolkit/docs](robot_toolkit/docs) for documentations of robot camera extrinsic calibration, usage of this tool.

  ## Test robotiq gripper

docker/workstation_computer/docker-compose.yml

Lines changed: 1 addition & 1 deletion
@@ -19,5 +19,5 @@ services:
      volumes:
        - ../../calibration:/root/git/calibration
        - ../../tests/frankapy_control_test_scripts:/root/git/tests/frankapy_control_test_scripts
-       - ../../refactor:/root/git/refactor
+       - ../../robot_toolkit:/root/git/robot_toolkit

robot_toolkit/README.md

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
Please check out [docs](docs) for documentation of hand-eye calibration and usage of this tool.
Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
import cv2

# Physical side length of the printed marker in metres (not needed to generate the image,
# but required later when estimating the marker's pose).
markerLength = 0.45

# Legacy (OpenCV < 4.7) ArUco API, kept for reference:
# aruco_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_1000)
# arucoParams = cv2.aruco.DetectorParameters_create()

# Generate a 4x4 ArUco marker with id 0 as a 3300x3300 pixel image.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_1000)
id_ = 0
img = cv2.aruco.generateImageMarker(dictionary, id_, sidePixels=3300)
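As a quick sanity check, the generated marker can be written to disk and re-detected with the new ArUco API (OpenCV >= 4.7); the output filename and border size in this sketch are arbitrary, and it is not part of the script above.

```
import cv2

# Regenerate the marker as in the script above and write it to disk for printing.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_1000)
marker_img = cv2.aruco.generateImageMarker(dictionary, 0, sidePixels=3300)
cv2.imwrite("aruco_marker_id0.png", marker_img)

# Round-trip check: pad with a white border (detection needs a quiet zone) and re-detect.
padded = cv2.copyMakeBorder(marker_img, 200, 200, 200, 200, cv2.BORDER_CONSTANT, value=255)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
corners, ids, _rejected = detector.detectMarkers(padded)
print("detected marker ids:", None if ids is None else ids.ravel())
```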
Lines changed: 31 additions & 0 deletions
@@ -0,0 +1,31 @@
only_calibration: True
n_data: 20
move_robot_automatically: True
debug_image: True
with_gripper: True
camera_in_hand: True
output_file: "robot_arm_algos/robot_camera_extrinsics.yaml"
calib_data_file_name: "robot_arm_algos/src/robot_camera_calibration/data/file.txt"
fg_optim_data_file_name: "robot_arm_algos/src/robot_camera_calibration/data/fg/"
tag_pose_collector: aruco_board #["aruco_board", "aruco_tag", "april_board", "april_tag"]
robot_pose_collector: ros_tf #["ros_tf", "zmq"]
camera: ros_camera #[ros_camera, realsense_camera]

ros_tf:
  ee_frame: "panda_end_effector"
  base_frame: "panda_link0"

aruco_board:
  dictionary: "DICT_6X6_1000"
  marker_length: 0.025
  marker_separation: 0.005
  n_rows: 5
  n_cols: 7

zmq:
  zmq_server_ip: 192.168.0.3 #same as realtime pc's IP
  zmq_server_port: 2000

ros_camera:
  image_topic_name: "/camera/color/image_raw"
  camera_info_topic: "/camera/color/camera_info"
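For reference, a config like this can be loaded in Python with PyYAML; the snippet below is only a sketch of how the fields might be consumed and is not code from the repository.

```
import yaml  # PyYAML

with open("config/robot_camera_calibration.yaml") as f:
    cfg = yaml.safe_load(f)

# The top-level switches select which implementations are used for data collection,
# and each named section carries the parameters for that choice.
print(cfg["tag_pose_collector"])             # "aruco_board"
print(cfg["robot_pose_collector"])           # "ros_tf"
print(cfg["aruco_board"]["marker_length"])   # 0.025 (metres)
print(cfg["ros_tf"]["ee_frame"])             # "panda_end_effector"
```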

robot_toolkit/docs/BACKGROUND.md

Lines changed: 64 additions & 0 deletions
@@ -0,0 +1,64 @@
## Introduction

Visual manipulation systems come in two configurations: camera attached to the gripper (eye-in-hand) or camera attached to the environment (eye-to-hand). In both configurations, most manipulation approaches require the rigid transformation between the camera's coordinate frame and the robot's coordinate frame.

Extrinsic calibration is the process of estimating this transform. It is usually done offline (as in our tool) or sometimes online. When done offline, a calibration target with known dimensions (such as [fiducial markers](https://link.springer.com/content/pdf/10.1007/s10846-020-01307-9.pdf)) is used, since it provides easy 3D-2D correspondences of detected keypoints in the image. The collected data is used to set up an optimization problem that estimates the required rigid body transform. Below, you can find further details on the camera-robot calibration problem and details specific to our tool.

## Camera robot calibration

For both the camera-in-hand and camera-to-hand configurations, the main constraint comes from the fact that the camera is <ins>**rigidly**</ins> attached to the gripper/environment and the calibration target is <ins>**rigidly**</ins> attached to the world/robot, respectively. This gives rise to the popular AX=XB formulation, where A, B, X are all homogeneous transformations: X is the transform we wish to estimate, and A and B are the transforms we measure, i.e. 1. the transform between the gripper and the robot base and 2. the transform between the calibration target and the camera. Refer [here](https://docs.opencv.org/4.5.4/d9/d0c/group__calib3d.html#gaebfc1c9f7434196a374c382abf43439b) for more details on how this problem is formulated. Different solvers/approaches exist for this problem; in OpenCV there are 5 approaches, and you can look at a comparison between them [here](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0273261).
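The measured pose pairs feed directly into OpenCV's `cv2.calibrateHandEye`, which takes the gripper-to-base and target-to-camera poses and returns the camera-to-gripper transform X. Below is a minimal, self-contained sketch that checks the solver on synthetic, noise-free data; the variable names and the synthetic-data generation are purely illustrative, not code from this tool.

```
import cv2
import numpy as np

def rand_pose(rng, t_scale=0.5):
    """Random 4x4 homogeneous transform (rotation via Rodrigues of a random axis-angle)."""
    R, _ = cv2.Rodrigues(rng.uniform(-1.0, 1.0, 3))
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, rng.uniform(-t_scale, t_scale, 3)
    return T

rng = np.random.default_rng(0)
X_true = rand_pose(rng, 0.1)      # ground-truth camera pose in the gripper frame (the unknown X)
T_base_target = rand_pose(rng)    # calibration target fixed in the robot base frame (eye-in-hand case)

R_gripper2base, t_gripper2base, R_target2cam, t_target2cam = [], [], [], []
for _ in range(20):                                      # >15 robot configurations
    T_base_gripper = rand_pose(rng)                      # measured end-effector pose
    T_cam_target = np.linalg.inv(T_base_gripper @ X_true) @ T_base_target
    R_gripper2base.append(T_base_gripper[:3, :3]); t_gripper2base.append(T_base_gripper[:3, 3])
    R_target2cam.append(T_cam_target[:3, :3]);     t_target2cam.append(T_cam_target[:3, 3])

R_est, t_est = cv2.calibrateHandEye(R_gripper2base, t_gripper2base,
                                    R_target2cam, t_target2cam,
                                    method=cv2.CALIB_HAND_EYE_TSAI)
print("rotation error:", np.linalg.norm(R_est - X_true[:3, :3]),
      "translation error:", np.linalg.norm(t_est.ravel() - X_true[:3, 3]))
```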
## Data Collection

In principle we only need a minimum of 2 motions with non-parallel rotation axes, i.e. 3 different poses, to determine the hand-eye transformation. However, our AX=XB solvers do not account for bias and variance in our measurements of the calibration target pose and the end-effector pose. So in the real world we are effectively solving $A\delta_A X = X B\delta_B$ when only AX=XB is true, where $\delta_A$ and $\delta_B$ are the errors in our measurements. The common approach to deal with this is to collect as much data as possible, usually >15 pairs of poses.

- ### Calibration Target Pose Estimation

For estimating the pose of the calibration target, we first detect the fiducials in the image using OpenCV. Since we know the 3D model of the fiducials, we know the 3D points corresponding to the detected 2D points, and therefore we can use a [P-n-P algorithm](https://docs.opencv.org/3.4/d9/d0c/group__calib3d.html#ga357634492a94efe8858d0ce1509da869) to estimate the pose of the calibration target with respect to the camera.

We can gauge the accuracy of the pose estimate by reprojecting the 3D coordinates of the fiducials' corners onto the image plane using the calibration tag's pose estimate and the pre-estimated camera intrinsics, and then computing the RMSE between the detected pixel coordinates and the reprojected coordinates. A worked sketch of this check is shown after this list.

Large reprojection errors (bad pose estimates) usually occur for the following reasons:
- the calibration tag is far away from the camera
- the calibration tag's normal vector is not parallel to the z-axis of the camera frame, i.e. the tag is not facing the camera frontally
  - this effect is worse when the calibration tag is farther from the camera; for more info refer [here](https://www.semanticscholar.org/paper/Analysis-of-Tracking-Accuracy-for-Single-Camera-Pentenrieder/70c5d9b33a978ff2d03eeaa627afaf4f6f609a1f)
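As a concrete illustration of the reprojection check described above (the corner coordinates and intrinsics below are made up, and the tool's own implementation may differ):

```
import cv2
import numpy as np

# Illustrative inputs: 3D corner coordinates of a fiducial in the board frame (metres),
# their detected 2D pixel coordinates, and pre-estimated camera intrinsics.
obj_pts = np.array([[0, 0, 0], [0.025, 0, 0], [0.025, 0.025, 0], [0, 0.025, 0]], dtype=np.float64)
img_pts = np.array([[320, 240], [380, 238], [382, 300], [318, 302]], dtype=np.float64)
K = np.array([[615.0, 0, 320.0], [0, 615.0, 240.0], [0, 0, 1]])
dist = np.zeros(5)

# Estimate the board pose with PnP, then reproject and compute the RMSE in pixels.
ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
reproj, _ = cv2.projectPoints(obj_pts, rvec, tvec, K, dist)
rmse = np.sqrt(np.mean(np.sum((reproj.reshape(-1, 2) - img_pts) ** 2, axis=1)))
print("reprojection RMSE (px):", rmse)  # our tool keeps only poses with RMSE below ~0.3 px
```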
- ### Robot Poses

Robot motion causes the calibration tag to vibrate in the eye-to-hand case; make sure data is collected only after the vibrations have subsided.
The calibration algorithm works on the assumption that the calibration target is rigidly attached to the scene or to the robot, in the eye-in-hand and eye-to-hand cases respectively. Make sure the calibration tag is fixed rigidly and does not move throughout data collection.

## Tips for data collection to get accurate calibration results

A few of these are referenced from [here](https://github.com/IFL-CAMP/easy_handeye#:~:text=can%27t%20hurt%20either.-,Tips%20for%20accuracy,-The%20following%20tips):
- Maximize the rotation between poses.
- Minimize the translation between poses.
- Make sure the AR calibration target is flat and that there are no bumps.
- Use an OpenCV board with multiple markers (e.g. an ArUco board) instead of a single tag.
- Record transform data closer to the camera/robot base.
  - By doing this, errors in rotation are not propagated as large errors in translation.
- Use the high-resolution mode of the RGB camera, especially if the calibration target is far away or small.
- Only use transforms with a low reprojection RMSE (< ~0.3 pixels); this is already implemented in our tool.
- Minimize the distance from the target to the camera of the tracking system.
- Calibrate the camera intrinsics if necessary / applicable.

## Troubleshooting
- Ensure the right end-effector is selected in the Franka Desk app, as this determines the end-effector frame (with respect to the robot base) that libfranka returns when we query the end-effector's pose.
- Ensure that printing the calibration tag hasn't changed its scale relative to the PDF: the marker edge should be 0.025 m and the marker separation 0.005 m. If you measure different values, pass them to the constructor of the calibration classes.

## References
- [OpenCV documentation](https://docs.opencv.org/4.5.4/d9/d0c/group__calib3d.html#gaebfc1c9f7434196a374c382abf43439b)
- [TUM documentation](https://campar.in.tum.de/Chair/HandEyeCalibration)

## Other similar and relevant frameworks for multicamera calibration and robot-camera calibration

- easy_handeye (uses OpenCV solvers)
  - https://github.com/IFL-CAMP/easy_handeye
- vicalib
  - https://github.com/arpg/vicalib
- MC-Calib
  - https://github.com/rameau-fr/MC-Calib/issues/4
- atom
  - https://github.com/lardemua/atom
- robot_calibration
  - https://github.com/mikeferguson/robot_calibration
Lines changed: 63 additions & 0 deletions
@@ -0,0 +1,63 @@
# Documentation and steps on how to use Kalibr for multicamera calibration

## Introduction

[Kalibr](https://github.com/ethz-asl/kalibr) is a toolbox that can be used to calibrate visual-inertial systems, i.e. cameras and IMUs. We use it primarily for calibrating multiple cameras' intrinsics and the extrinsics between the cameras; below you will find the steps to do so.

Kalibr provides a docker container that includes all the dependencies and creates the ROS workspace in which the source code is built. They have a really nice [wiki](https://github.com/ethz-asl/kalibr/wiki), which was a great reference when creating this documentation. You can think of this documentation as cherry-picked instructions from the wiki plus extra information that is not provided or is unclear there.

The high-level process of using Kalibr is:
- Clone the repository and build the docker container.
- Create a "data" folder that will be mounted into the container.
- Create a calibration tag using the tool they provide [here](https://github.com/ethz-asl/kalibr/wiki/calibration-targets).
- Collect a ROS bag of the cameras' "image_raw" topics that will be used by Kalibr to compute the intrinsics and extrinsics.
- Create a .yaml file that describes the calibration tag used (an example target file is shown in the Usage section below).
  - Both the bags and the .yaml should be present in the "data" folder.
- Run the Kalibr calibration:
  - Each topic in the ROS bag is processed to estimate the 2D keypoints of the checkerboard or AprilTag corners.
  - These keypoints are used to estimate an initial guess for the intrinsics based on the camera model you pass as an argument.
  - These keypoints are also used to estimate an initial guess of the extrinsics based on epipolar geometry.
  - The initial intrinsics and extrinsics guesses are then used to solve a batch optimization problem that minimizes the reprojection error of the detected keypoints, with the cameras' intrinsics and extrinsics as the design variables.
- You can find a more detailed description of these steps below.

## Usage

1. First clone the Kalibr repository and build the docker container they provide:

```
mkdir -p $HOME/git
cd $HOME/git
git clone https://github.com/ethz-asl/kalibr
cd $HOME/git/kalibr

docker build -t kalibr -f Dockerfile_ros1_20_04 . # change this to whatever ubuntu version you want
```
2. Run the docker container with:

```
xhost +local:root
xhost +local:docker
FOLDER=/home/ruthrash/utm/data
sudo docker run -it -e "DISPLAY" -e "QT_X11_NO_MITSHM=1" \
    -v "/tmp/.X11-unix:/tmp/.X11-unix:rw" \
    -v "$FOLDER:/data" kalibr
```
where FOLDER is the location where you store your ROS bags. **Make sure** it points to the right path on your local machine.

3. Now use the kalibr_create_target_pdf node to create a calibration target as described [here](https://github.com/ethz-asl/kalibr/wiki/calibration-targets).

- We suggest using an Aprilgrid over a checkerboard, as its points can be detected even under partial occlusion of the tags.
- We also suggest creating a calibration tag that is as large as possible, with the maximum possible AprilTag marker size and separation between markers.
- An example command would be:
```
source devel/setup.bash
rosrun kalibr kalibr_create_target_pdf --nx 3 --ny 3 --tsize 0.045 --tspace 0.2 --type apriltag
```
Follow the instructions [here](https://github.com/ethz-asl/kalibr/wiki/calibration-targets) to figure out tspace and tsize.
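For the target above, the .yaml file mentioned in the overview would look roughly as follows; the field names follow Kalibr's documented Aprilgrid format, so double-check them against the wiki:

```
target_type: 'aprilgrid'
tagCols: 3        # --nx
tagRows: 3        # --ny
tagSize: 0.045    # --tsize, tag edge length in metres
tagSpacing: 0.2   # --tspace, spacing as a fraction of tagSize
```

Data collection itself is a standard ROS bag recording of the cameras' image topics while moving the target in front of the cameras; the bag name and topic names below are placeholders:

```
rosbag record -O /data/calibration.bag /cam_1/color/image_raw /cam_2/color/image_raw /cam_3/color/image_raw
```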
4. Once you have collected the data, you can run the calibration node like so:
```
source devel/setup.bash
rosrun kalibr kalibr_calibrate_cameras --bag /data/ball_dropping/calibration.bag --target /data/3x5.yaml --models pinhole-radtan pinhole-radtan pinhole-radtan --topics /cam_1/color/image_raw /cam_2/color/image_raw /cam_3/color/image_raw
```
Here we are calibrating 3 cameras with the pinhole-radtan model. **Make sure** to modify the command for your case.

robot_toolkit/docs/USAGE.md

Lines changed: 188 additions & 0 deletions
@@ -0,0 +1,188 @@
# Documentation for using this camera-robot arm extrinsic calibration tool

## Introduction

There are two scripts available for data collection and hand-eye calibration using the Tsai algorithm from OpenCV:

1. CameraRobotCalibration.py

2. ROSCameraRobotCalibration.py

CameraRobotCalibration.py uses the pyrealsense2 SDK and OpenCV to process images from Intel cameras, and gets robot poses from a small ZMQ server in the realtime docker that reads the end-effector pose using libfranka. ROSCameraRobotCalibration.py, on the other hand, requires a ROS TF tree populated with the end-effector and robot base frames, plus image and camera_info publishers for the RGB images and the camera's intrinsics; it then uses OpenCV to estimate the pose of the calibration target. A sketch of such a TF lookup is shown below.
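For illustration, the end-effector pose can be read from the TF tree roughly as follows (frame names taken from the example config; this is a sketch, not necessarily the exact code in ROSCameraRobotCalibration.py):

```
import rospy
import tf2_ros

rospy.init_node("ee_pose_reader")
tf_buffer = tf2_ros.Buffer()
tf_listener = tf2_ros.TransformListener(tf_buffer)

# Frame names as in config/robot_camera_calibration.yaml (ros_tf section).
transform = tf_buffer.lookup_transform("panda_link0", "panda_end_effector",
                                       rospy.Time(0), rospy.Duration(5.0))
t = transform.transform.translation
q = transform.transform.rotation
print("end-effector position:", (t.x, t.y, t.z), "orientation (xyzw):", (q.x, q.y, q.z, q.w))
```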
## Dependencies

All dependencies are automatically installed when building the workstation docker environment.

## Calibration Process and Usage Instructions

### a. <u>Calibration Tag Preparation</u>
To perform the robot-camera calibration, we first have to collect calibration tag + end-effector pose data for a number of configurations (15-30). To create the calibration tag, print the PDF provided in the [calibration/calibration_board](../calibration_board) directory. Affix this tag to a **completely** flat surface (e.g. a wooden board, exam pad, dibond aluminium, etc.), making sure there are no bumps when you glue the calibration tag onto the surface.
For the camera-in-hand case, the calibration tag is fixed rigidly to the environment. Ensure the tag doesn't move or vibrate much throughout the calibration process.

For the camera-in-environment case, the calibration tag needs to be **rigidly** attached to the robot's end-effector. You can use the provided CAD files for the finger tips and gripping points in [models_4_3d_printing](../models_4_3d_printing). The [finger tips](../models_4_3d_printing/franka_custom_finger_tips.stl) are attached to the Franka end-effector, and the [gripping points](../models_4_3d_printing/finger_grasp_points.stl) or [handle plate](../models_4_3d_printing/finger_handle_plate.stl) are drilled/screwed onto the calibration tag. Now make the Franka end-effector with the custom finger tips (figure 1) grasp the calibration tag (as shown in figure 3) by the attached custom gripping points (figure 2); this ensures that the tag remains rigid with respect to the end-effector.

<img src="imgs/finger_tip.jpeg" width="120" height="80">

figure 1: custom finger tip

<img src="imgs/grasp_point.jpeg" width="120" height="80">

figure 2: grasping point

<img src="imgs/grasp_calib_tag.jpeg" width="120" height="80">

figure 3: grasping calibration tag

### b. <u>Preparation to Provide End-Effector Poses</u>

If you are using the ROS API, ensure there's a node that populates the TF tree with the robot base and end-effector frames. There should also be a node that publishes RGB images and the camera's intrinsics on the camera_info topic. An example workflow of commands is shown below.

In the realtime computer,

Bring the built docker container up:

```
sudo docker-compose -f docker/realtime_computer/docker-compose-gui.yml up
```

In the workstation computer,

Bring the built workstation docker container up:

```
xhost +local:docker

sudo docker-compose -f docker/workstation_computer/docker-compose-gui.yml up
```

Open a bash terminal inside the workstation docker container:

```
(sudo) docker exec -it workstation_computer_docker bash
```

In the workstation computer docker terminal, launch the nodes that publish the robot's TF tree containing the robot base and end-effector frames (for example, by bringing up frankapy) and run the realsense launch file:

```
cd /root/git/frankapy/

bash ./bash_scripts/start_control_pc.sh -i (realtime computer ip) -u (realtimecomputer username) -d /root/git/franka-interface -a (robot_ip) -w (workstation IP)

roslaunch realsense2_camera rs_camera.launch
```

---

If you are not using the ROS APIs and are using Realsense cameras that work with the pyrealsense SDK, first ensure the camera is connected to the workstation computer, then run the read-states server in the realtime docker.

In the realtime computer,

Bring the built docker container up:

```
sudo docker-compose -f docker/realtime_computer/docker-compose-gui.yml up
```

Open a bash terminal inside the realtime docker container:

```
(sudo) docker exec -it realtime_docker bash

cd /root/git/franka_control_suite/build
```

Run the read_states server that sends end-effector poses when requested:

```
./read_states <robot_ip> <realtime_pc_ip> <zmq_port_number>
```

### c. <u>Run Pose Data Collection + Calibration Script</u>

### In the workstation computer,

Bring the built workstation docker container up:

```
xhost +local:docker

sudo docker-compose -f docker/workstation_computer/docker-compose-gui.yml up
```

Open a bash terminal inside the workstation docker container and go to the appropriate directory:

```
(sudo) docker exec -it workstation_computer_docker bash

cd /root/git/robot_toolkit
```

Now make sure to set up your config file like the one shown [here](../config/robot_camera_calibration.yaml) and then run:

```
python3 robot_camera_calibration.py --config_file <path to your config file eg: config/robot_camera_calibration.yaml>
```

For both the non-ROS and ROS based pose data collection, move the robot to different configurations by hand and press Enter for the calibration script to record the calibration target pose and the end-effector pose. Collect 15-30 poses, then press any key other than Enter to signal the end of data collection and start the calibration.

The hand-eye calibration results for all available [methods](https://docs.opencv.org/4.5.4/d9/d0c/group__calib3d.html#gad10a5ef12ee3499a0774c7904a801b99) in OpenCV are stored in the output file specified in the config YAML.

For the camera-in-hand case, the output is the transformation of the camera frame in the end-effector's frame. For the camera-in-environment case, the output is the transformation of the camera frame in the robot's base frame. A sketch of how this output can be used is shown below.
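For example, in the camera-in-hand case the calibration result can be chained with the current end-effector pose to express a point observed by the camera in the robot's base frame. A minimal sketch with placeholder numbers (not code from this repository):

```
import numpy as np

def to_homogeneous(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and translation vector."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Calibration output: camera frame expressed in the end-effector frame (placeholder values).
T_ee_cam = to_homogeneous(np.eye(3), np.array([0.05, 0.0, 0.06]))
# Current end-effector pose in the robot base frame, e.g. queried from frankapy/TF (placeholder).
T_base_ee = to_homogeneous(np.eye(3), np.array([0.4, 0.0, 0.5]))

# A 3D point observed in the camera frame, expressed in the base frame.
p_cam = np.array([0.0, 0.0, 0.3, 1.0])   # homogeneous point 0.3 m in front of the camera
p_base = T_base_ee @ T_ee_cam @ p_cam
print("point in base frame:", p_base[:3])
```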
### d. <u>Instructions for Testing</u>

TBD
Two image files (53.8 KB and 58 KB) were also added in this commit; previews are not shown here.
