
Commit 12f0924

Team B Unitree Go1 Pull Request (#154)
* Create unitree-go1.md
* Create docker-security.md
* Create azure-block-detection.md
* Add images
* Update navigation.yml

Co-authored-by: Nevin Valsaraj <[email protected]>
1 parent f4836ac commit 12f0924

22 files changed (+450, -0 lines)

Diff for: _data/navigation.yml

+6
```diff
@@ -45,6 +45,8 @@ wiki:
       url: /wiki/common-platforms/dji-drone-breakdown-for-technical-projects/
     - title: DJI SDK
       url: /wiki/common-platforms/dji-sdk/
+    - title: Unitree Go1
+      url: /wiki/common-platforms/unitree-go1/
     - title: Pixhawk
       url: /wiki/common-platforms/pixhawk/
     - title: Asctec Pelican UAV Setup Guide
@@ -114,6 +116,8 @@ wiki:
       url: /wiki/sensing/robotic-total-stations.md
     - title: Thermal Cameras
       url: /wiki/sensing/thermal-cameras/
+    - title: Azure Block Detection
+      url: /wiki/sensing/azure-block-detection/
     - title: DWM1001 UltraWideband Positioning System
       url: /wiki/sensing/ultrawideband-beacon-positioning.md
     - title: Actuation
@@ -283,6 +287,8 @@ wiki:
     children:
       - title: Docker
         url: /wiki/tools/docker/
+      - title: Docker Security
+        url: /wiki/tools/docker-security
       - title: Docker for PyTorch
         url: /wiki/tools/docker-for-pytorch/
       - title: Vim
```

Diff for: assets/images/privesc.png (68 KB)

Diff for: wiki/common-platforms/assets/docker_socket.png (126 KB)

Diff for: wiki/common-platforms/assets/form_factor.png (229 KB)

Diff for: wiki/common-platforms/assets/unitree_side.png

Diff for: wiki/common-platforms/assets/unitree_top.png (855 KB)

Diff for: wiki/common-platforms/assets/wired.png (190 KB)

Diff for: wiki/common-platforms/assets/wireless.png (159 KB)

Diff for: wiki/common-platforms/unitree-go1.md

+102
@@ -0,0 +1,102 @@
---
# Jekyll 'Front Matter' goes here. Most are set by default, and should NOT be
# overwritten except in special circumstances.
# You should set the date the article was last updated like this:
date: 2023-05-03 # YYYY-MM-DD
# This will be displayed at the bottom of the article
# You should set the article's title:
title: Unitree Go1 Edu
# The 'title' is automatically displayed at the top of the page
# and used in other parts of the site.
---
This article provides an overview of the Unitree Go1 Edu robot, including its features and capabilities. Unitree Robotics is a leading Chinese manufacturer that specializes in developing, producing, and selling high-performance quadruped robots. One of the company's primary advantages is that it offers quadruped platforms at a significantly lower cost than competitors such as Boston Dynamics. The company has also announced plans to release experimental humanoid platforms in the near future.

There are three versions of the Unitree Go1: Air, Pro, and Edu. The Edu model is designed for educational purposes and gives developers access to the platform. This article focuses on the capabilities of the Go1 Edu, which is a popular choice for students and researchers due to its affordability and ease of use.
## Form Factor
![Form_Factor](assets/form_factor.png)

The Unitree Go1 Edu has compact dimensions of 645 x 280 x 400 mm and weighs 12 kg. It reaches a top speed of 3.7-5 m/s and has a maximum load capacity of 10 kg, although it is recommended to keep the payload under 5 kg. By default, the robot can traverse steps up to 10 cm high, but with custom programming it can overcome larger obstacles. The Go1 Edu has 12 degrees of freedom: HAA (hip abduction/adduction), HFE (hip flexion/extension), and KFE (knee flexion/extension) joints on each of its four legs. The body/thigh joint motor design is highly adaptable to various mechanical equipment, with an instantaneous torque of 23.7 N·m, while the knee joint delivers 35.55 N·m.
## Power and Interface
![Unitree_TOP](assets/unitree_top.png)

The Unitree Go1 Edu is powered by a 6000 mAh lithium-ion battery that provides 1-2.5 hours of runtime. The robot's battery management system (BMS) closely monitors the battery status, ensuring safe and stable operation, and the batteries themselves include overcharge protection as an additional layer of safety.

The top plate of the robot exposes several ports, including USB and HDMI ports that connect to the corresponding onboard computers. The USB/HDMI pair closest to the Ethernet port, along with the Ethernet port itself, connects to the Raspberry Pi. Users can also draw 24 V, 12 A power from the top plate through an XT30 connector.
## Sensors and Processors

The Unitree Go1 Edu is equipped with a range of sensors, including five pairs of stereo fisheye cameras located at the face, chin, lower belly, right torso, and left torso, providing a 360-degree field of view. It also has three sets of ultrasonic sensors positioned in different directions to detect obstacles in its path, as well as an IMU, four foot-force sensors, and face LEDs that can be programmed to display different expressions.

Unitree also provides customization options for processors and additional sensors. The 2023 MRSD Unitree Go1, for instance, contains one Raspberry Pi CM4 (Compute Module 4), two Nvidia Jetson Nanos, and one Nvidia NX. The Raspberry Pi comes with a 32 GB SD card on which Unitree's off-the-shelf software is pre-installed.
## Network Configuration for Unitree Go1 Camera Streaming
* There are four computers inside the Unitree Go1: three Jetson Nanos and one Raspberry Pi, all connected through an internal switch.
* The Raspberry Pi's built-in network interface connects to the switch and appears as eth0.
* The Raspberry Pi also has an additional Wi-Fi card, which hosts the robot's hotspot at 192.168.12.1.
* The user laptop connects to the robot's hotspot with the static IP 192.168.12.18.
* Users can also connect to all four devices over an Ethernet cable using the static IP 192.168.123.123.
![Wired](assets/wired.png)

* Each Nano controls and processes a pair of fisheye cameras. The Unitree camera SDK provides an API that captures and rectifies the skewed fisheye camera stream and sends it out as UDP packets.
* `./bins/example_putImagetrans` sends the camera streams as UDP packets.
* `./bins/example_getimagetrans` receives the UDP packets and displays the camera streams with GStreamer (a minimal receiver sketch is shown after this list).
* You can modify the receiver program to suit your application.
* The de-fisheye API requires a direct connection to the camera, so it must run on the Jetson Nano; users cannot receive the raw camera stream and run this built-in program on their own laptops. In addition, this API is designed for an Ethernet connection and requires the third octet of the image receiver's IP address to be 123, so the user's laptop must have an IP address in the 192.168.123.x range.
* Users also need to modify the config file inside the Unitree Jetson Nano, `/UnitreecameraSDK/trans_rect_config.yaml`.
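Below is a minimal receiver sketch in Python using OpenCV's GStreamer backend. It is not part of the Unitree SDK: the UDP port (9201) and the H.264-over-RTP encoding are assumptions about the defaults used by `example_putImagetrans`, so adjust them to match your `trans_rect_config.yaml`. OpenCV must be built with GStreamer support for this to work.

```python
import cv2

# Assumed defaults: port 9201 and H.264 over RTP; verify against trans_rect_config.yaml.
pipeline = (
    "udpsrc port=9201 "
    "! application/x-rtp,media=video,encoding-name=H264 "
    "! rtph264depay ! h264parse ! avdec_h264 "
    "! videoconvert ! appsink"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("Go1 camera stream", frame)
    if cv2.waitKey(1) == 27:  # press Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```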
## Wirelessly Stream Camera Feed from Unitree Go1's Head Cameras to Desktop
In order to receive a camera stream wirelessly, you will need to modify the routing tables on the devices involved.

```console
# --------------------------- head nano ---------------------------
sudo route del -net 192.168.123.0 netmask 255.255.255.0

# the following four commands kill the camera processes
ps -aux | grep point_cloud_node | awk '{print $2}' | xargs kill -9
ps -aux | grep mqttControlNode | awk '{print $2}' | xargs kill -9
ps -aux | grep live_human_pose | awk '{print $2}' | xargs kill -9
ps -aux | grep rosnode | awk '{print $2}' | xargs kill -9

cd UnitreecameraSDK
./bins/example_putImagetrans


# --------------------------- raspberry pi ---------------------------
sudo route add -host 192.168.123.123 dev wlan1


# --------------------------- user laptop ---------------------------
# run ifconfig and find the Wi-Fi interface connected to the Go1
# (mine is wlp0s20f3)
sudo ifconfig wlp0s20f3:123 192.168.123.123 netmask 255.255.255.0
sudo route del -net 192.168.123.0 netmask 255.255.255.0

cd UnitreecameraSDK
./bins/example_getimagetrans
```

## Controlling Unitree in Simulation and Real-World Scenarios

### Introduction
Unitree Robotics provides a high-level control interface for directly controlling the real robot. Controlling the robot's movement in simulation with the same simple commands, however, is a challenge. This section gives an overview of the issues we faced and the solutions we found while controlling the Unitree Go1 in simulation and in the real world.

### Controlling the Robot in Simulation
The Gazebo simulation environment currently only subscribes to `unitree_legged_msgs::LowCmd`, which requires setting motor torques and joint angles manually. The functions that convert `unitree_legged_msgs::HighCmd` to `unitree_legged_msgs::LowCmd` are hidden in the high-level robot interface in `/raspi/Unitree/auto start/programming/programming.py`. This limitation can be worked around by exploring the MIT Champ code or by using Nvidia's Isaac Sim platform.
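As a rough illustration of what low-level control in Gazebo looks like, the sketch below publishes a single joint command. It assumes the `unitree_ros` Gazebo setup with `unitree_legged_msgs` built; the controller topic name and the `mode` value follow that convention but are assumptions, so verify them with `rostopic list` before use.

```python
#!/usr/bin/env python
import rospy
from unitree_legged_msgs.msg import MotorCmd

# Assumed topic name from the unitree_ros Gazebo convention; check rostopic list.
TOPIC = "/go1_gazebo/FL_calf_controller/command"

rospy.init_node("low_level_sketch")
pub = rospy.Publisher(TOPIC, MotorCmd, queue_size=1)

cmd = MotorCmd()
cmd.mode = 0x0A      # servo (position) mode, assumed value
cmd.q = -1.3         # target joint angle [rad]
cmd.dq = 0.0         # target joint velocity
cmd.Kp = 50.0        # position gain
cmd.Kd = 1.0         # damping gain
cmd.tau = 0.0        # feed-forward torque

rate = rospy.Rate(100)
while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()
```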
### Controlling the Robot in Real-World Scenarios
To ensure safety, carefully review the user manual and record the full action sequence of the Unitree Go1 before running it. The provided software packages, `unitree_legged_sdk` and `unitree_ros_to_real`, can be used to study the example code and to create custom packages for specific use cases. For instance, `example_walk.cpp` shows how to send high-level command (HighCmd) messages to the robot, allowing users to set start and end points so the robot can plan its route between them.
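For reference, here is a hedged sketch of commanding a forward walk through the ROS bridge from `unitree_ros_to_real`. The topic name `high_cmd` and the field values (`mode = 2` for velocity walking, `gaitType = 1` for trot) follow the Go1 SDK examples as we used them, but they should be checked against the SDK version on your robot; this is not a substitute for `example_walk.cpp`.

```python
#!/usr/bin/env python
import rospy
from unitree_legged_msgs.msg import HighCmd

# Assumed topic exposed by the unitree_ros_to_real UDP bridge; verify on your setup.
rospy.init_node("high_level_sketch")
pub = rospy.Publisher("high_cmd", HighCmd, queue_size=1)

cmd = HighCmd()
cmd.mode = 2               # velocity walking mode (per Go1 SDK examples)
cmd.gaitType = 1           # trot
cmd.velocity = [0.2, 0.0]  # 0.2 m/s forward, no lateral motion
cmd.yawSpeed = 0.0         # no turning

rate = rospy.Rate(50)
while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()
```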
## Summary
If you are considering the Unitree Go1 for your project, be aware that you will either need to be content with the default controller or implement your own state estimation and legged controller. One of the main drawbacks of commercial products like this is that the code is closed-source. When deploying your own code on the Unitree Raspberry Pi, keep an eye on memory usage and find a balance between performance and computing capability. (Note: this information is current as of May 2023.)

## References
- [Unitree Go1 Education Plus](https://www.wevolver.com/specs/unitree-robotics-go1-edu-plus)
- [Unitree vs. Boston Dynamics](https://www.generationrobots.com/blog/en/unitree-robotics-vs-boston-dynamics-the-right-robot-dog-for-me/)
- [Unitree 3D Lidar](https://www.active-robots.com/unitree-go1-air-3.html)

Diff for: wiki/sensing/assets/cropped.png (548 KB)

Diff for: wiki/sensing/assets/hsv_img.png (131 KB)

Diff for: wiki/sensing/assets/norm_img.png (331 KB)

Diff for: wiki/sensing/assets/norm_mask.png (124 KB)

Diff for: wiki/sensing/assets/norm_result.png (132 KB)

Diff for: wiki/sensing/assets/normalized.png (350 KB)

Diff for: wiki/sensing/assets/original.png (3.2 MB)

Diff for: wiki/sensing/assets/pipeline.png (314 KB)

Diff for: wiki/sensing/assets/rgb_vector.png (118 KB)

Diff for: wiki/sensing/assets/zoom1.png (31 KB)

Diff for: wiki/sensing/assets/zoom2.png (24 KB)

Diff for: wiki/sensing/azure-block-detection.md

+71
@@ -0,0 +1,71 @@
---
# Jekyll 'Front Matter' goes here. Most are set by default, and should NOT be
# overwritten except in special circumstances.
# You should set the date the article was last updated like this:
date: 2023-12-13 # YYYY-MM-DD
# This will be displayed at the bottom of the article
# You should set the article's title:
title: Azure Block Detection
# The 'title' is automatically displayed at the top of the page
# and used in other parts of the site.
---

This article presents an overview of object detection using the Azure Kinect camera without relying on learning-based methods. It was used in our Robot Autonomy project to detect Jenga blocks so the robot could attempt to assemble them.

### Detection Pipeline
To identify individual blocks and their grasping points, the perception subsystem runs through a series of five steps. First, it crops the Azure Kinect camera image to center on the workspace. Next, it applies color thresholding to filter out irrelevant objects and isolate the blocks. It then finds the contours of the blocks and filters them by area and shape. Once the blocks are identified, the perception subsystem computes the grasping points for each block. Together, these steps yield accurate block locations and their corresponding grasping points on the workstation.

![Pipeline of Block Detection](assets/pipeline.png)

### Image Cropping
The first stage of the perception subsystem crops the raw image. Raw images often contain extraneous details, such as the workspace's supporting platform or people's feet near the robot. Cropping the image to focus solely on the workspace removes a significant amount of unnecessary information, making the system more efficient and robust.

Currently, this approach uses hard-coded cropping parameters: the rows and columns to keep in the image are specified manually.

![Cropped Image](assets/cropped.png)

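A minimal sketch of this cropping step is shown below. The row/column bounds and filename are placeholders, not the values used in the project.

```python
import cv2

# Hard-coded crop bounds (placeholders); tune these to frame the workspace.
ROW_MIN, ROW_MAX = 200, 900
COL_MIN, COL_MAX = 400, 1400

img = cv2.imread("azure_rgb_frame.png")           # raw Azure Kinect RGB frame
cropped = img[ROW_MIN:ROW_MAX, COL_MIN:COL_MAX]   # keep only the workspace region
cv2.imwrite("cropped.png", cropped)
```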
### Color Segmentation
Color segmentation can be difficult in images with prominent shadows. Shadows lower RGB pixel values while direct light raises them, making it hard to distinguish colors consistently. To address this, we apply HSV (Hue, Saturation, Value) thresholding to the image.

For reliable detection of the brown Jenga blocks under varying lighting conditions, we work in the HSV color space, whose three channels are hue, saturation, and value. Thresholding these channels filters out the desired colors, whereas a fixed RGB threshold for brown is unreliable because its RGB values change with lighting.

To determine the threshold, we used color-meter software to establish the brown color range of the Jenga blocks. This range, consisting of lower and upper brown bounds, is passed to our HSV thresholding function; the resulting HSV-thresholded image is shown below.

To further refine the detection and remove background noise, we apply a mask to the HSV-thresholded image. We first create the mask by thresholding on contour area and then fill any holes within each contour to obtain a solid mask. This removes remaining noise and unwanted objects, leaving a reliable detection of the Jenga blocks.

![RGB Vector](assets/rgb_vector.png)

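The following sketch illustrates the HSV thresholding and masking described above. The brown bounds and the area threshold are illustrative placeholders rather than the tuned values from the project.

```python
import cv2
import numpy as np

# Illustrative brown range in HSV (OpenCV ranges: H 0-179, S/V 0-255).
LOWER_BROWN = np.array([5, 60, 60])
UPPER_BROWN = np.array([25, 255, 230])

cropped = cv2.imread("cropped.png")
hsv = cv2.cvtColor(cropped, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, LOWER_BROWN, UPPER_BROWN)

# Keep only sufficiently large regions and fill holes so each block is solid.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
clean = np.zeros_like(mask)
for c in contours:
    if cv2.contourArea(c) > 500:          # area threshold (tunable)
        cv2.drawContours(clean, [c], -1, 255, thickness=cv2.FILLED)

segmented = cv2.bitwise_and(cropped, cropped, mask=clean)
```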
### Block Contours

Contours play a pivotal role in object detection. In our perception system, we generate contours from the mask derived from the HSV-thresholded image, which makes them precise and consistent.

We use OpenCV's `findContours` function on the masked image. However, the resulting contours include not only the Jenga blocks but also the robot manipulator. Since we only care about the rectangular shapes corresponding to the Jenga blocks, we filter the contours by approximate block size and by how rectangular they are.

To simplify the contours and reduce the number of points, we apply OpenCV's `minAreaRect` function, which yields a four-point contour representing the corners of each block. Comparing the area of the original contour with the area of its `minAreaRect` lets us confirm, via a threshold ratio, whether the detected object is indeed a rectangle.

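A sketch of this filtering step is shown below; the area threshold and rectangularity ratio are illustrative values, not the project's tuned parameters.

```python
import cv2

# 'clean' is the filled mask produced by the color-segmentation step.
clean = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)

MIN_AREA = 1500     # rejects contours much smaller than a block (illustrative)
RECT_RATIO = 0.8    # contour area / minAreaRect area must exceed this

contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
blocks = []
for c in contours:
    area = cv2.contourArea(c)
    if area < MIN_AREA:
        continue
    rect = cv2.minAreaRect(c)              # ((cx, cy), (w, h), angle)
    box = cv2.boxPoints(rect)              # the four corner points
    rect_area = rect[1][0] * rect[1][1]
    if rect_area > 0 and area / rect_area > RECT_RATIO:
        blocks.append(box)
```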
Next, we identify the two grasp points of each block by finding its longer sides. To obtain the depth at these points, we align the depth image with the RGB image. Using the pixel coordinates and depth value, we transform each 2D pixel back to a 3D pose in the camera frame using the camera intrinsic matrix. The grasp point in the base frame is then computed through a transform-tree lookup, completing the perception cycle.

![Contours](assets/zoom1.png)

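The back-projection from an aligned RGB-D pixel to a 3D point in the camera frame is the pinhole model inverted; the sketch below uses placeholder intrinsics rather than the actual Azure Kinect calibration.

```python
import numpy as np

# Placeholder intrinsics; use the calibrated values from the Azure Kinect.
fx, fy, cx, cy = 600.0, 600.0, 640.0, 360.0

def pixel_to_camera(u, v, depth_m):
    """Back-project pixel (u, v) with depth in meters to a 3D camera-frame point."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Example: a grasp pixel at (512, 300) with 0.74 m depth.
grasp_cam = pixel_to_camera(512, 300, 0.74)
# A transform lookup (camera frame -> robot base frame) then yields the grasp
# pose in the base frame.
```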
### Image HSV Thresholding vs. Normalization

To filter out irrelevant pixels more reliably, we explored two approaches: HSV thresholding and image normalization. Besides the usual representation of each pixel as an RGB value, a pixel can also be viewed as a 3D vector in RGB space. Lighting changes the vector's magnitude but not its direction, so normalizing each vector removes the lighting effect, preserving only the direction and effectively converting RGB vectors into unit vectors.

To identify Jenga block pixels, we computed the cosine similarity between each pixel's RGB vector and the background color and masked out pixels that were too similar to the background.

Although image normalization showed promise, it proved less effective in cluttered scenes than the HSV method. The HSV method, i.e. thresholding in the HSV color space, was more reliable for detecting Jenga blocks across varying lighting conditions.

Normalized Image | HSV Image
:-------------------------:|:-------------------------:
![Norm](assets/norm_img.png) | ![HSV](assets/hsv_img.png)

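A sketch of the normalization approach is given below; the background color and similarity threshold are illustrative placeholders.

```python
import cv2
import numpy as np

BACKGROUND_BGR = np.array([180.0, 190.0, 200.0])   # illustrative background color
SIMILARITY_THRESHOLD = 0.995                       # illustrative cutoff

img = cv2.imread("cropped.png").astype(np.float64)

# Normalize each pixel's BGR vector to a unit vector (direction only).
unit = img / (np.linalg.norm(img, axis=2, keepdims=True) + 1e-8)
bg_unit = BACKGROUND_BGR / np.linalg.norm(BACKGROUND_BGR)

# Cosine similarity with the background; keep pixels that differ from it.
cos_sim = unit @ bg_unit
block_mask = (cos_sim < SIMILARITY_THRESHOLD).astype(np.uint8) * 255
```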
## References
- [MIT Jenga Robot](https://news.mit.edu/2019/robot-jenga-0130)
