# Documentation for using this camera-robot arm extrinsic calibration tool

## Introduction

There are two scripts available for data collection and hand-eye calibration using the Tsai algorithm from OpenCV:

1. CameraRobotCalibration.py

2. ROSCameraRobotCalibration.py

CameraRobotCalibration.py uses the pyrealsense2 SDK and OpenCV to process images from Intel RealSense cameras, and gets robot poses from a small ZMQ server in the realtime docker container that reads the end-effector pose using libfranka. ROSCameraRobotCalibration.py instead requires a ROS TF tree populated with the end-effector and robot base frames, plus publishers for the RGB images and the camera's intrinsics (camera_info); it subscribes to these and uses OpenCV to estimate the pose of the calibration target.
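
Internally, both scripts hand the collected pose pairs to OpenCV's hand-eye calibration. The sketch below shows roughly what that final step looks like (this is not the scripts' actual code; the variable names are illustrative, and the lists must be filled with one rotation/translation per recorded configuration):

```python
import cv2

# Illustrative sketch only. Each list holds one entry per recorded robot configuration:
# rotations (3x3) and translations (3x1) of the end-effector in the robot base frame,
# and of the calibration target in the camera frame.
R_gripper2base, t_gripper2base = [], []
R_target2cam, t_target2cam = [], []

# ... append the poses gathered during data collection here ...

# Tsai's method; OpenCV also implements PARK, HORAUD, ANDREFF and DANIILIDIS variants.
R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
    R_gripper2base, t_gripper2base,
    R_target2cam, t_target2cam,
    method=cv2.CALIB_HAND_EYE_TSAI,
)
```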

## Dependencies

All dependencies are installed automatically when building the workstation docker environment.

## Calibration Process and Usage Instructions

### a. <u>Calibration Tag Preparation</u>

To perform the robot-camera calibration, we first have to collect calibration tag + end-effector pose data for a number of configurations (15-30). To create the calibration tag, print the PDF provided in the [calibration/calibration_board](../calibration_board) directory. Affix this tag to a **completely** flat surface (e.g. a wooden board, exam pad, or dibond aluminium sheet), ensuring there are no bumps when you glue the calibration tag onto the surface.

For the camera-in-hand case, the calibration tag is fixed rigidly to the environment. Ensure the tag does not move or vibrate throughout the calibration process.

For the camera-in-environment case, the calibration tag needs to be **rigidly** attached to the robot's end-effector. You can use the provided CAD files for the finger tips and gripping points in [models_4_3d_printing](../models_4_3d_printing). The [finger tips](../models_4_3d_printing/franka_custom_finger_tips.stl) are attached to the Franka end-effector, and the [gripping points](../models_4_3d_printing/finger_grasp_points.stl) or [handle plate](../models_4_3d_printing/finger_handle_plate.stl) are drilled/screwed onto the calibration tag. Now make the Franka end-effector with the custom finger tips (figure 1) grasp the calibration tag (as shown in figure 3) by the attached custom gripping points (figure 2); this ensures that the tag remains rigid with respect to the end-effector.

<img src="imgs/finger_tip.jpeg" width="120" height="80">

figure 1: custom finger tip

<img src="imgs/grasp_point.jpeg" width="120" height="80">

figure 2: grasping point

<img src="imgs/grasp_calib_tag.jpeg" width="120" height="80">

figure 3: grasping calibration tag

### b. <u>Preparation to Provide End-Effector Poses</u>

If you are using the ROS API, ensure there is a node that populates the TF tree with the robot base and end-effector frames. There should also be a node that publishes RGB images and the camera's intrinsics on the camera_info topic. An example workflow of commands is shown below.

On the realtime computer, bring the built realtime docker container up:

```
sudo docker-compose -f docker/realtime_computer/docker-compose-gui.yml up
```

On the workstation computer, bring the built workstation docker container up:

```
xhost +local:docker

sudo docker-compose -f docker/workstation_computer/docker-compose-gui.yml up
```

Open a bash terminal inside the workstation docker container:

```
(sudo) docker exec -it workstation_computer_docker bash
```

In the workstation docker terminal, launch the nodes that publish the robot's TF tree containing the robot base and end-effector frames (for example, by bringing up frankapy) and run the RealSense launch file:

```
cd /root/git/frankapy/

bash ./bash_scripts/start_control_pc.sh -i (realtime computer ip) -u (realtime computer username) -d /root/git/franka-interface -a (robot_ip) -w (workstation IP)

roslaunch realsense2_camera rs_camera.launch
```
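
Before starting data collection, it can help to confirm that the TF frames and the camera topics are actually being published. Below is a minimal check sketch; the frame names `panda_link0`/`panda_end_effector` and the topic `/camera/color/camera_info` are assumptions, so substitute whatever names your setup uses:

```python
#!/usr/bin/env python3
# Quick sanity check that the TF tree and camera_info topic are available.
# Frame and topic names below are assumptions; adjust them to your setup.
import rospy
import tf2_ros
from sensor_msgs.msg import CameraInfo

rospy.init_node("calibration_precheck")

buf = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(buf)

# Wait for the base -> end-effector transform to become available.
tf_msg = buf.lookup_transform("panda_link0", "panda_end_effector",
                              rospy.Time(0), rospy.Duration(5.0))
print("end-effector pose:", tf_msg.transform)

# Wait for one camera_info message to confirm the intrinsics are published.
info = rospy.wait_for_message("/camera/color/camera_info", CameraInfo, timeout=5.0)
print("camera intrinsics K:", info.K)
```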

---

If you are not using the ROS APIs and are using RealSense cameras that work with the pyrealsense2 SDK, first ensure the camera is connected to the workstation computer, then run the read_states server in the realtime docker container.

On the realtime computer, bring the built realtime docker container up:

```
sudo docker-compose -f docker/realtime_computer/docker-compose-gui.yml up
```

Open a bash terminal inside the realtime docker container:

```
(sudo) docker exec -it realtime_docker bash

cd /root/git/franka_control_suite/build
```

Run the read_states server, which sends end-effector poses when requested:

```
./read_states <robot_ip> <realtime_pc_ip> <zmq_port_number>
```
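
The calibration script on the workstation then queries this server for end-effector poses over ZMQ. Purely as an illustration of that interaction (the socket pattern and the message contents shown here are assumptions; the actual protocol is defined by franka_control_suite and the calibration script), a request/reply exchange could look like:

```python
import zmq

# Hypothetical client sketch: assumes a REQ/REP socket and a plain-text request.
# Replace the address/port with the values passed to ./read_states above.
context = zmq.Context()
socket = context.socket(zmq.REQ)
socket.connect("tcp://REALTIME_PC_IP:ZMQ_PORT")

socket.send_string("pose")   # placeholder request payload
reply = socket.recv()        # expect the current end-effector pose in the reply
print(reply)
```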

### c. <u>Run Pose Data Collection + Calibration Script</u>

On the workstation computer, bring the built workstation docker container up:

```
xhost +local:docker

sudo docker-compose -f docker/workstation_computer/docker-compose-gui.yml up
```

Open a bash terminal inside the workstation docker container and go to the appropriate directory:

```
(sudo) docker exec -it workstation_computer_docker bash

cd /root/git/robot_toolkit
```

Now make sure to set up your config file like the one shown [here](../config/robot_camera_calibration.yaml) and then run:

```
python3 robot_camera_calibration.py --config_file <path to your config file, e.g. config/robot_camera_calibration.yaml>
```

For both the ROS and non-ROS pose data collection, move the robot to different configurations by hand and press Enter each time for the calibration script to record the calibration target pose and the end-effector pose. Collect 15-30 poses, then press any key other than Enter to signal the end of data collection and start the calibration process.

The hand-eye calibration results for all available [methods](https://docs.opencv.org/4.5.4/d9/d0c/group__calib3d.html#gad10a5ef12ee3499a0774c7904a801b99) in OpenCV will be stored in the output file specified in the config YAML.

For the camera-in-hand case, the output is the transformation of the camera frame in the end-effector's frame. For the camera-in-environment case, the output is the transformation of the camera frame in the robot's base frame.
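
To sanity-check or use the camera-in-hand result, the reported transform can be chained with the robot's forward kinematics. Below is a minimal sketch; the 4x4 matrices and point are placeholders, since in practice T_ee_cam is read from the calibration output and T_base_ee from the robot state:

```python
import numpy as np

# Placeholder homogeneous transforms (4x4). In practice, T_ee_cam comes from the
# calibration output (camera frame expressed in the end-effector frame) and
# T_base_ee from the robot's forward kinematics at the current configuration.
T_ee_cam = np.eye(4)    # camera pose in the end-effector frame (calibration result)
T_base_ee = np.eye(4)   # end-effector pose in the robot base frame

# A point seen by the camera, in homogeneous coordinates.
p_cam = np.array([0.1, 0.0, 0.5, 1.0])

# Chain the transforms: camera -> end-effector -> base.
p_base = T_base_ee @ T_ee_cam @ p_cam
print(p_base[:3])
```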

### d. <u>Instructions for Testing</u>

TBD