---
layout: post
title: "From UI to motors"
date: 2018-09-25 21:55:00
---

How do commands given to the robot get translated into motor commands?

There are several layers to this, as you can see in
[software-architecture.html](https://github.com/tue-robotics/tue-robotics.github.io/blob/master/software-architecture.html).

When you use [Telegram](https://telegram.org/) to chat with the robot,
your text command first gets sent to Telegram's servers,
which pass the message on to our [telegram_ros](https://github.com/tue-robotics/telegram_ros) node. That node implements a [Telegram Bot](https://core.telegram.org/bots).
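
To give an impression of the bridge pattern, here is a minimal sketch of a node that republishes incoming chat text as a ROS message. The topic name `message_from_human` is an assumption for illustration; see the telegram_ros repository for the actual interface.

```python
#!/usr/bin/env python
# Sketch of a chat-to-ROS bridge: text from a chat client arrives in a
# callback and is republished as a ROS message. The topic name is an
# assumption for illustration, not necessarily what telegram_ros uses.
import rospy
from std_msgs.msg import String

rospy.init_node('chat_bridge')
pub = rospy.Publisher('message_from_human', String, queue_size=10)

def on_chat_message(text):
    """Called by the chat client (e.g. a Telegram bot) for each incoming message."""
    pub.publish(String(data=text))
```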

As an alternative to Telegram, there is also a speech-to-text engine on our robots, which takes in audio and tries to convert it to text.
On Amigo, we use [dragonfly_speech_recognition](https://github.com/tue-robotics/dragonfly_speech_recognition).
This also uses the grammar (as described below) to aid in speech recognition.
The grammar restricts what the STT engine can hear, so the result is always something the robot can at least parse.
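
As a toy illustration of that restriction (this is not the actual grammar format of grammar_parser or dragonfly_speech_recognition):

```python
# Toy context-free grammar: every sentence the STT engine may return must
# expand from COMMAND, so the recognizer can never output an unparseable sentence.
GRAMMAR = {
    "COMMAND":  [["bring", "me", "OBJECT"], ["go", "to", "LOCATION"]],
    "OBJECT":   [["a", "coke"], ["an", "apple"]],
    "LOCATION": [["the", "kitchen"], ["the", "living", "room"]],
}
# "bring me a coke" derives as COMMAND -> bring me OBJECT -> bring me a coke,
# whereas an arbitrary sentence has no derivation and is simply not recognized.
```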

The text gets published to a ROS topic and then read by the
[conversation_engine](https://github.com/tue-robotics/conversation_engine).
This interprets the command using the [grammar_parser](https://github.com/tue-robotics/grammar_parser).
The result of the parsing is an action description that gets sent to the [action_server](https://github.com/tue-robotics/action_server).
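
The exact format is defined by the action_server; as a purely hypothetical example, an action description for "bring me a coke from the kitchen" could look something like this:

```python
# Hypothetical action description; the real format used by the action_server
# may differ. Each entry names an action plus the entities it acts on.
action_description = [
    {"action": "navigate-to", "target": {"id": "kitchen"}},
    {"action": "grab",        "object": {"type": "coke"}},
    {"action": "hand-over",   "target": {"id": "operator"}},
]
```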

The action_server performs some action planning to execute the desired command, which results in either:

- A sequence of actions
- An error, the most interesting one being that the command is underspecified

In the latter case, the conversation_engine talks with the user, again via Telegram or speech, to fill in more details of the command.

Once a sequence of actions is defined, it gets executed by the action_server, which also reports on the progress of the action.

The actions themselves are implemented using the [robot_smach_states](https://github.com/tue-robotics/tue_robocup/tree/master/robot_smach_states).
This is a library of various state machines that provide, for example, Grab, NavigateTo..., Place and LearnOperator actions.
Each of those accepts a robot parameter, which is an instance of a Robot subclass from the [robot_skills](https://github.com/tue-robotics/tue_robocup/tree/master/robot_skills) package, for example Amigo, Sergio or Hero.
When the state machines execute, they call methods of RobotParts (e.g. a Base, Arm, Head, world model etc.),
which in turn tap into the ROS [Action Servers](http://wiki.ros.org/actionlib#Client-Server_Interaction), [Services](http://wiki.ros.org/ROS/Tutorials/UnderstandingServicesParams#ROS_Services) and [Topics](http://wiki.ros.org/ROS/Tutorials/UnderstandingTopics) of the robot.
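
To show the pattern, here is a minimal [SMACH](http://wiki.ros.org/smach) state machine in the same spirit; the state itself and the `robot.base.move_to` call are illustrative assumptions, not the real robot_smach_states API:

```python
# Minimal SMACH sketch of a robot_smach_states-style state machine.
import smach

class NavigateTo(smach.State):
    """Illustrative state; the real NavigateTo... states live in robot_smach_states."""
    def __init__(self, robot, target):
        smach.State.__init__(self, outcomes=['arrived', 'failed'])
        self.robot = robot
        self.target = target

    def execute(self, userdata):
        # robot.base.move_to is an assumed RobotPart method, for illustration only.
        return 'arrived' if self.robot.base.move_to(self.target) else 'failed'

def build_fetch_machine(robot):
    """Build a tiny state machine around the given Robot instance."""
    sm = smach.StateMachine(outcomes=['succeeded', 'failed'])
    with sm:
        smach.StateMachine.add('NAVIGATE', NavigateTo(robot, 'kitchen_table'),
                               transitions={'arrived': 'succeeded',
                                            'failed': 'failed'})
    return sm
```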

In short, the robot_skills provide an abstract, mostly ROS-independent interface to the actual robot.
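
As a flavour of that abstraction (a self-contained toy; the class and method names are assumptions based on the description above, not verbatim robot_skills API):

```python
# Toy illustration of the robot_skills idea: one Robot object exposes its
# parts as plain Python objects, with the ROS plumbing hidden inside them.
class Base(object):
    def move_to(self, target):
        # In the real package this would send a ROS navigation goal.
        print("Navigating to %s" % target)
        return True

class Speech(object):
    def speak(self, sentence):
        # In the real package this would call the text-to-speech interface.
        print("Saying: %s" % sentence)

class Robot(object):
    def __init__(self):
        self.base = Base()
        self.speech = Speech()

robot = Robot()  # e.g. Amigo() in the real package
robot.speech.speak("I will get you a drink")
robot.base.move_to("kitchen")
```

An instance of such a Robot is what gets passed into the state machine sketched above.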

A major component in our robots is the world model, called [ED](https://github.com/tue-robotics/ed) (Environment Descriptor).
We use this to get a symbolic view of the world. ED keeps track of objects and uses various plugins to e.g. recognize objects.
ED also provides a map for the robot to localize itself, and supports various other tasks.
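
Querying it might look roughly like this; `robot.ed.get_entities` is an assumption about the world-model interface, with `robot` a Robot instance as sketched above:

```python
# Hypothetical world-model query: ask ED for all entities of a given type
# and use their symbolic IDs and poses. Method and field names are assumptions.
entities = robot.ed.get_entities(type="coke")
for entity in entities:
    print(entity.id, entity.pose)
```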

There are several action servers on the robot, for example for navigation.
ROS robots typically use move_base, but we have a somewhat more advanced version called [cb_base_navigation](https://github.com/tue-robotics/cb_base_navigation).
The advantage of that is that we can move to an area (defined by some mathematically described constraints) rather than a single Pose.
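
For example, "anywhere within 0.7 m of a point near the table" can be expressed as an inequality over the base's x and y; the exact constraint syntax of cb_base_navigation may differ:

```python
# Hypothetical area goal: any base position satisfying this inequality is an
# acceptable endpoint, instead of one exact Pose. The string syntax is an assumption.
x, y, radius = 2.0, 1.5, 0.7
position_constraint = "(x - %.2f)^2 + (y - %.2f)^2 < %.2f^2" % (x, y, radius)
print(position_constraint)  # (x - 2.00)^2 + (y - 1.50)^2 < 0.70^2
```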

Another action server is provided by [MoveIt](https://moveit.ros.org/) to command the arms to various joint goals and Cartesian goals.
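
With MoveIt's Python interface, sending a joint goal could look like this; the group name "left_arm" and the joint values are assumptions about the robot's MoveIt configuration:

```python
# Minimal MoveIt sketch: plan and execute a joint goal for one arm.
import sys
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
group = moveit_commander.MoveGroupCommander("left_arm")  # assumed group name

# Joint goal: one target angle (radians) per joint of the group.
group.set_joint_value_target([0.0, -0.5, 0.0, 1.2, 0.0, 0.3, 0.0])
group.go(wait=True)  # plan a trajectory and execute it via the action server
```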

These planners talk with the hardware interface components, which use EtherCAT to interface with the motors and encoders: they send references and read back what the various joints actually did.

That is all published on the joint_states topic.
The robot_state_publisher then uses that to publish TF frames, so we can see where different parts of the robot are and so that relative positions can be calculated, etc.
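
For example, looking up where the left gripper is relative to the base; the frame names are assumptions, since the real names come from the robot's URDF:

```python
# Sketch: use TF to compute the pose of one robot part relative to another.
import rospy
import tf

rospy.init_node('tf_example')
listener = tf.TransformListener()
listener.waitForTransform('base_link', 'grippoint_left',
                          rospy.Time(0), rospy.Duration(4.0))
trans, rot = listener.lookupTransform('base_link', 'grippoint_left', rospy.Time(0))
print("Left gripper relative to base:", trans)
```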
