From 50919794473c33d9d403ebdca573af869d05056e Mon Sep 17 00:00:00 2001
From: Loy <LoyVanBeek@users.noreply.github.com>
Date: Tue, 25 Sep 2018 22:39:43 +0200
Subject: [PATCH 01/10] Create 2018-09-25-from-ui-to-motors.markdown

---
 _posts/2018-09-25-from-ui-to-motors.markdown | 54 ++++++++++++++++++++
 1 file changed, 54 insertions(+)
 create mode 100644 _posts/2018-09-25-from-ui-to-motors.markdown

diff --git a/_posts/2018-09-25-from-ui-to-motors.markdown b/_posts/2018-09-25-from-ui-to-motors.markdown
new file mode 100644
index 000000000000..1a50d79a5f13
--- /dev/null
+++ b/_posts/2018-09-25-from-ui-to-motors.markdown
@@ -0,0 +1,54 @@
+---
+layout: post
+title:  "From UI to motors"
+date:   2018-09-25 21:55:00
+---
+
+How do commands given to the robot get translated into motor commands? 
+
+There are several layers to this, as you can see in 
+[software-architecture.html](https://github.com/tue-robotics/tue-robotics.github.io/blob/master/software-architecture.html).
+When you use [Telegram](https://telegram.org/) to chat with the robot, 
+your text command first gets sent to the Telegram servers, 
+which pass the message on to our [telegram_ros](https://github.com/tue-robotics/telegram_ros) node. That node implements a [Telegram Bot](https://core.telegram.org/bots).
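+
+To make this concrete, here is a minimal sketch of that idea (not the actual telegram_ros implementation): a Telegram bot that forwards every chat message onto a ROS topic. It assumes the python-telegram-bot library (v12-style API, which differs per version), a bot token in an environment variable and the topic name `message`.
+
+```python
+# Minimal sketch: forward Telegram chat text onto a ROS topic.
+# Assumptions: python-telegram-bot (v12-style API), a bot token in the
+# TELEGRAM_TOKEN environment variable and the topic name 'message'.
+import os
+
+import rospy
+from std_msgs.msg import String
+from telegram.ext import Updater, MessageHandler, Filters
+
+rospy.init_node('telegram_bridge_sketch')
+pub = rospy.Publisher('message', String, queue_size=1)
+
+def on_text(update, context):
+    # Every incoming chat message becomes a std_msgs/String on the topic.
+    pub.publish(String(data=update.message.text))
+
+updater = Updater(os.environ['TELEGRAM_TOKEN'], use_context=True)
+updater.dispatcher.add_handler(MessageHandler(Filters.text, on_text))
+updater.start_polling()
+rospy.spin()
+```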
+
+Instead of Telegram, there is also a speech-to-text engine on our robots, which takes in audio and tries to convert it to text.
+On Amigo, we use [dragonfly_speech_recognition](https://github.com/tue-robotics/dragonfly_speech_recognition). 
+This also uses the grammar (as described below) to aid in speech recognition. 
+The grammar restricts what the STT can hear and thus the result is always something the robot can at least parse.
+
+The text gets published to a ROS topic and then read by the 
+[conversation_engine](https://github.com/tue-robotics/conversation_engine). 
+This interprets the command using the [grammar_parser](https://github.com/tue-robotics/grammar_parser).
+The result of the parsing is an action description that gets sent to the [action_server](https://github.com/tue-robotics/action_server). 
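+
+As a toy illustration of what parsing against a grammar gives us (this uses NLTK and a made-up grammar, not the actual grammar_parser format or API): the grammar both restricts which sentences are accepted and produces a structure that can be turned into such an action description.
+
+```python
+# Toy example: parse a command with a context-free grammar.
+# NOT the tue-robotics grammar_parser format, just the general idea.
+import nltk
+
+grammar = nltk.CFG.fromstring("""
+S   -> VP
+VP  -> V NP | V IO NP
+V   -> 'bring' | 'grab'
+IO  -> 'me'
+NP  -> DET N
+DET -> 'a' | 'the'
+N   -> 'coke' | 'apple'
+""")
+
+parser = nltk.ChartParser(grammar)
+for tree in parser.parse('bring me a coke'.split()):
+    print(tree)  # (S (VP (V bring) (IO me) (NP (DET a) (N coke))))
+```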
+
+The action_server performs some action planning to execute the desired command, which results in either:
+- A sequence of actions, or
+- An error, the most interesting one being that the command is underspecified. 
+
+In the latter case, the conversation_engine talks with the user to fill in more details of the command, again via Telegram or speech.
+
+Once a sequence of actions is defined, it gets executed by the action_server, which also reports on the progress of the action.
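+
+That exchange follows the standard ROS actionlib pattern: send a goal, receive feedback while it runs, and get a result when it finishes. A rough client-side sketch (using the stock Fibonacci tutorial action as a stand-in for the real task goal, so the message types and server name are not the actual action_server interface):
+
+```python
+# Sketch of a ROS actionlib client: goal in, feedback during execution,
+# result at the end. The Fibonacci tutorial action is a stand-in here.
+import actionlib
+import rospy
+from actionlib_tutorials.msg import FibonacciAction, FibonacciGoal
+
+rospy.init_node('action_client_sketch')
+client = actionlib.SimpleActionClient('fibonacci', FibonacciAction)
+client.wait_for_server()
+
+def on_feedback(feedback):
+    # Progress reports arrive here while the server is executing.
+    rospy.loginfo('feedback: %s', feedback.sequence)
+
+client.send_goal(FibonacciGoal(order=10), feedback_cb=on_feedback)
+client.wait_for_result()
+print(client.get_result())
+```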
+
+The actions themselves are implemented using the [robot_smach_states](https://github.com/tue-robotics/tue_robocup/tree/master/robot_smach_states).
+This is a library of various state machines that provide, for example, Grab, NavigateTo..., Place and LearnOperator actions.
+Each of those accepts a robot-parameter, which is an instance of a Robot-subclass from the [robot_skills](https://github.com/tue-robotics/tue_robocup/tree/master/robot_skills)-package, for example Amigo, Sergio and Hero.
+When the state machines execute, they call methods of RobotParts (e.g. a Base, Arm, Head, Worldmodel, etc.), 
+which in turn tap into the ROS [Action Servers](http://wiki.ros.org/actionlib#Client-Server_Interaction), [Services](http://wiki.ros.org/ROS/Tutorials/UnderstandingServicesParams#ROS_Services) and [Topics](http://wiki.ros.org/ROS/Tutorials/UnderstandingTopics) of the robot.
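+
+A minimal smach sketch of that pattern (the state below and the robot object are simplified stand-ins, not the real robot_smach_states):
+
+```python
+# Minimal sketch of the executive pattern used in robot_smach_states.
+# The NavigateTo state and the `robot` object are illustrative stand-ins.
+import smach
+
+class NavigateTo(smach.State):
+    def __init__(self, robot, target):
+        smach.State.__init__(self, outcomes=['arrived', 'failed'])
+        self.robot = robot
+        self.target = target
+
+    def execute(self, userdata):
+        # In the real library this calls into robot_skills, e.g. the base.
+        ok = self.robot.base.move_to(self.target)  # hypothetical method
+        return 'arrived' if ok else 'failed'
+
+def make_grab_machine(robot):
+    sm = smach.StateMachine(outcomes=['done', 'aborted'])
+    with sm:
+        # A real Grab machine would chain more states after NAVIGATE.
+        smach.StateMachine.add('NAVIGATE', NavigateTo(robot, 'table'),
+                               transitions={'arrived': 'done',
+                                            'failed': 'aborted'})
+    return sm
+
+# make_grab_machine(amigo).execute()  # amigo: a robot_skills Robot instance
+```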
+
+In short, the robot_skills provide an abstract, mostly ROS-independent interface to the actual robot. 
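+
+A rough, simplified sketch of that idea (the names and topics are illustrative, not the actual robot_skills classes):
+
+```python
+# Simplified sketch of the robot_skills idea: one Robot object exposing
+# parts that hide the underlying ROS clients. All names are illustrative.
+import rospy
+from geometry_msgs.msg import Twist
+
+class Base(object):
+    def __init__(self, robot_name):
+        self._ref_pub = rospy.Publisher(
+            '/' + robot_name + '/base/references', Twist, queue_size=1)
+
+    def force_drive(self, vx, vy, vth):
+        # Hide the ROS publisher behind a plain method call.
+        twist = Twist()
+        twist.linear.x, twist.linear.y, twist.angular.z = vx, vy, vth
+        self._ref_pub.publish(twist)
+
+class Robot(object):
+    def __init__(self, robot_name):
+        self.base = Base(robot_name)
+        # self.head, self.arms, self.ed, ... would be added the same way
+
+rospy.init_node('robot_skills_sketch')
+amigo = Robot('amigo')
+amigo.base.force_drive(0.1, 0, 0)  # drive slowly forward
+```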
+
+A major component in our robots is the worldmodel, called [ED](https://github.com/tue-robotics/ed) (Environment Descriptor).
+We use this to get a symbolic view on the world. ED keeps track of objects and uses various plugins to e.g. recognize objects.
+ED also prvides a maps for the robot to localize itself and variosu other tasks. 
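+
+To give a feel for what that symbolic view contains (a conceptual sketch, not ED's actual data structures or API): essentially a set of entities with an id, a type and a pose, kept up to date by the plugins.
+
+```python
+# Conceptual sketch of a symbolic world model, in the spirit of ED.
+# Not ED's actual API, just the kind of information it keeps track of.
+from collections import namedtuple
+
+Entity = namedtuple('Entity', ['id', 'etype', 'xyz'])
+
+world_model = {
+    'table1': Entity('table1', 'table', (2.0, 1.0, 0.0)),
+    'coke-1': Entity('coke-1', 'drink', (2.1, 1.1, 0.85)),
+}
+
+def entities_of_type(etype):
+    # Symbolic query, e.g. "which drinks do we know about?"
+    return [e for e in world_model.values() if e.etype == etype]
+
+print(entities_of_type('drink'))  # [Entity(id='coke-1', ...)]
+```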
+
+There are several action_servers on the robot, for example for navigation. 
+ROS robots typically use move_base but we have a bit more advanced version called [cb_base_navigation](https://github.com/tue-robotics/cb_base_navigation).
+The advantage of that is that we can move to an area (defined by some mathematically described constraints) rather than a single Pose.
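+
+A tiny conceptual example of such an area goal (not the cb_base_navigation interface): any pose that satisfies the constraint is acceptable, for instance close enough to a table to reach it but far enough away not to bump into it.
+
+```python
+# Conceptual sketch of a position constraint for navigation: any (x, y)
+# satisfying the predicate is a valid goal, instead of one exact Pose.
+# Not the cb_base_navigation API; the numbers are made up.
+import math
+
+TABLE = (2.0, 1.0)  # known from the world model
+
+def near_table(x, y, r_min=0.4, r_max=0.7):
+    # An annulus around the table: close enough to reach, not inside it.
+    d = math.hypot(x - TABLE[0], y - TABLE[1])
+    return r_min < d < r_max
+
+# The planner is free to pick the most convenient pose in that area:
+candidates = [(2.6, 1.0), (2.0, 0.4), (2.05, 1.0)]
+print([p for p in candidates if near_table(*p)])  # [(2.6, 1.0), (2.0, 0.4)]
+```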
+
+Another action-server is provided by [MoveIt](https://moveit.ros.org/) to command the arms to various joint goals and cartesian goals.
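+
+Through MoveIt's moveit_commander Python interface this looks roughly as follows (the planning group name and the goal values are assumptions for illustration):
+
+```python
+# Sketch of commanding an arm through MoveIt's Python interface.
+# The group name 'left_arm' and the goal values are assumptions.
+import sys
+
+import moveit_commander
+import rospy
+
+rospy.init_node('arm_goal_sketch')
+moveit_commander.roscpp_initialize(sys.argv)
+group = moveit_commander.MoveGroupCommander('left_arm')
+
+# Joint-space goal: one value per joint in the group.
+group.set_joint_value_target([0.0, 0.4, 0.0, 1.2, 0.0, 0.0, 0.0])
+group.go(wait=True)
+
+# Cartesian goal: let MoveIt plan to an end-effector position.
+group.set_position_target([0.5, 0.2, 0.9])  # x, y, z in the planning frame
+group.go(wait=True)
+group.stop()
+```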
+
+These planners talk with the hardware interface components, which use EtherCAT to interface with the motors and encoders to send references and read back what the various joints really did.
+
+That is all published on the joint_states-topic. 
+The robot_state_publisher then uses that to publish TF-frames so we can see where different parts of the robot are and so that relative positions can be calculated etc.
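+
+A quick way to see both ends of this is to subscribe to joint_states and look up a transform that the robot_state_publisher derives from them (the frame names below are assumptions):
+
+```python
+# Sketch: read the joint_states stream and look up a TF transform that
+# robot_state_publisher computes from it. Frame names are assumptions.
+import rospy
+import tf
+from sensor_msgs.msg import JointState
+
+rospy.init_node('introspection_sketch')
+
+def on_joint_states(msg):
+    # name[i], position[i], velocity[i] and effort[i] belong together.
+    rospy.loginfo('%s is at %.3f rad', msg.name[0], msg.position[0])
+
+rospy.Subscriber('joint_states', JointState, on_joint_states)
+
+listener = tf.TransformListener()
+listener.waitForTransform('base_link', 'grippoint_left',
+                          rospy.Time(0), rospy.Duration(5.0))
+trans, rot = listener.lookupTransform('base_link', 'grippoint_left',
+                                      rospy.Time(0))
+print('gripper relative to base: %s' % (trans,))
+rospy.spin()
+```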

From 8ce1bcb3aacddaa03b28f251696cfc7056f9e3d3 Mon Sep 17 00:00:00 2001
From: Matthijs van der Burgh <matthijs.vander.burgh@live.nl>
Date: Tue, 30 Oct 2018 18:44:09 +0100
Subject: [PATCH 02/10] Update _posts/2018-09-25-from-ui-to-motors.markdown

Co-Authored-By: LoyVanBeek <LoyVanBeek@users.noreply.github.com>
---
 _posts/2018-09-25-from-ui-to-motors.markdown | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_posts/2018-09-25-from-ui-to-motors.markdown b/_posts/2018-09-25-from-ui-to-motors.markdown
index 1a50d79a5f13..311b1ed89d19 100644
--- a/_posts/2018-09-25-from-ui-to-motors.markdown
+++ b/_posts/2018-09-25-from-ui-to-motors.markdown
@@ -39,7 +39,7 @@ which in turn tap into the ROS [Action Servers](http://wiki.ros.org/actionlib#Cl
 In short, the robot_skills provide an abstract, mostly ROS-independent interface to the actual robot. 
 
 A major component in our robots is the worldmodel, called [ED](https://github.com/tue-robotics/ed) (Environment Descriptor).
-We use this to get a symbolic view on the world. ED keeps track of objects and uses various plugins to e.g. recognize objects.
+We use this to get a symbolic representation of the world. ED keeps track of objects and uses various plugins to e.g. recognize objects.
 ED also prvides a maps for the robot to localize itself and variosu other tasks. 
 
 There are several action_servers on the robot, for example for navigation. 

From 2d04437784d68a0d40c999a70d115f0a9182dfd2 Mon Sep 17 00:00:00 2001
From: Loy van Beek <loy.vanbeek@gmail.com>
Date: Tue, 30 Oct 2018 18:54:43 +0100
Subject: [PATCH 03/10] Incorporate HMI into this story

---
 _posts/2018-09-25-from-ui-to-motors.markdown | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/_posts/2018-09-25-from-ui-to-motors.markdown b/_posts/2018-09-25-from-ui-to-motors.markdown
index 311b1ed89d19..c321b3b2005c 100644
--- a/_posts/2018-09-25-from-ui-to-motors.markdown
+++ b/_posts/2018-09-25-from-ui-to-motors.markdown
@@ -17,7 +17,11 @@ On Amigo, we use [dragonfly_speech_recognition](https://github.com/tue-robotics/
 This also uses the grammar (as described below) to aid in speech recognition. 
 The grammar restricts what the STT can hear and thus the result is always something the robot can at least parse.
 
-The text gets published to a ROS topic and then read by the 
+Because we want to be able to both talk to the robot and text with it, taking turns or even falling back from one modality to another, we use the [HMI](https://github.com/tue-robotics/hmi). 
+The Human Machine Interface provides a client that the GPSR's conversation_engine uses. This client is connected to several servers that implement some way for the robot to ask the user a question. 
+This can thus be voice (via STT)or text (via Telegram or Slack) or some mock interface for testing. 
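+
+A conceptual sketch of that client/server split (hypothetical names, not the actual hmi package API): the conversation_engine only calls one query function, and whichever backend is active answers it.
+
+```python
+# Conceptual sketch of the HMI idea: one client-side question, several
+# interchangeable backends (speech, Telegram, a mock for tests).
+# Hypothetical names, not the actual tue-robotics hmi API.
+class TelegramBackend(object):
+    def ask(self, question):
+        # Would send `question` via telegram_ros and wait for the reply.
+        raise NotImplementedError
+
+class SpeechBackend(object):
+    def ask(self, question):
+        # Would use text-to-speech plus grammar-restricted STT.
+        raise NotImplementedError
+
+class MockBackend(object):
+    def __init__(self, answers):
+        self.answers = list(answers)
+
+    def ask(self, question):
+        return self.answers.pop(0)
+
+def clarify(backend, question):
+    # The conversation_engine does not care which modality answers.
+    return backend.ask(question)
+
+print(clarify(MockBackend(['the coke']), 'Which drink do you mean?'))
+```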
+
+The text is eventually read by the 
 [conversation_engine](https://github.com/tue-robotics/conversation_engine). 
 This interprets the command using the [grammar_parser](https://github.com/tue-robotics/grammar_parser).
 The result of the parsing is an action description that gets sent to the [action_server](https://github.com/tue-robotics/action_server). 

From 64adcaa5e293c0bccd946ae792c18e1ddcec5e85 Mon Sep 17 00:00:00 2001
From: Loy van Beek <loy.vanbeek@gmail.com>
Date: Tue, 30 Oct 2018 18:56:59 +0100
Subject: [PATCH 04/10] Process review comments

---
 _posts/2018-09-25-from-ui-to-motors.markdown | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/_posts/2018-09-25-from-ui-to-motors.markdown b/_posts/2018-09-25-from-ui-to-motors.markdown
index c321b3b2005c..840187e78e91 100644
--- a/_posts/2018-09-25-from-ui-to-motors.markdown
+++ b/_posts/2018-09-25-from-ui-to-motors.markdown
@@ -40,7 +40,8 @@ Each of those accepts a robot-parameter, which is an instance of a Robot-subclas
 When the state machines execute, they call methods of RobotParts (e.g. a Base, Arm, Head, Worldmodel, etc.), 
 which in turn tap into the ROS [Action Servers](http://wiki.ros.org/actionlib#Client-Server_Interaction), [Services](http://wiki.ros.org/ROS/Tutorials/UnderstandingServicesParams#ROS_Services) and [Topics](http://wiki.ros.org/ROS/Tutorials/UnderstandingTopics) of the robot.
 
-In short, the robot_skills provide an abstract, mostly ROS-independent interface to the actual robot. 
+In short, the robot_skills provide an abstract, mostly ROS-independent interface to the actual robot.
+This way, we don't need to instantiate a lot of clients for ROS actions, services etc. in the higher layers.
 
 A major component in our robots is the worldmodel, called [ED](https://github.com/tue-robotics/ed) (Environment Descriptor).
 We use this to get a symbolic representation of the world. ED keeps track of objects and uses various plugins to e.g. recognize objects.

From bdb6da3ff5999e177af78570e15f6353d824a720 Mon Sep 17 00:00:00 2001
From: Loy van Beek <loy.vanbeek@gmail.com>
Date: Tue, 30 Oct 2018 18:58:20 +0100
Subject: [PATCH 05/10] Mention constraint based nav

---
 _posts/2018-09-25-from-ui-to-motors.markdown | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/_posts/2018-09-25-from-ui-to-motors.markdown b/_posts/2018-09-25-from-ui-to-motors.markdown
index 840187e78e91..ecb8e3de4e18 100644
--- a/_posts/2018-09-25-from-ui-to-motors.markdown
+++ b/_posts/2018-09-25-from-ui-to-motors.markdown
@@ -45,10 +45,10 @@ This way, we don't need to instantiate a lot of clients to ROS actions and servi
 
 A major component in our robots is the worldmodel, called [ED](https://github.com/tue-robotics/ed) (Environment Descriptor).
 We use this to get a symbolic representation of the world. ED keeps track of objects and uses various plugins to e.g. recognize objects.
-ED also prvides a maps for the robot to localize itself and variosu other tasks. 
+ED also prvides a maps for the robot to localize itself and various other tasks. 
 
 There are several action_servers on the robot, for example for navigation. 
-ROS robots typically use move_base but we have a bit more advanced version called [cb_base_navigation](https://github.com/tue-robotics/cb_base_navigation).
+ROS robots typically use move_base but we have a more advanced version called [cb_base_navigation](https://github.com/tue-robotics/cb_base_navigation), which does Constraint Based navigation.
 The advantage of that is that we can move to an area (defined by some mathematically described constraints) rather than a single Pose.
 
 Another action-server is provided by [MoveIt](https://moveit.ros.org/) to command the arms to various joint goals and cartesian goals.

From ee7792b91612ff8ef6a0c9b1bf2762a09fd2d461 Mon Sep 17 00:00:00 2001
From: Loy van Beek <loy.vanbeek@gmail.com>
Date: Tue, 30 Oct 2018 19:00:44 +0100
Subject: [PATCH 06/10] Link to ed_localization

---
 _posts/2018-09-25-from-ui-to-motors.markdown | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_posts/2018-09-25-from-ui-to-motors.markdown b/_posts/2018-09-25-from-ui-to-motors.markdown
index ecb8e3de4e18..0386ac8cb547 100644
--- a/_posts/2018-09-25-from-ui-to-motors.markdown
+++ b/_posts/2018-09-25-from-ui-to-motors.markdown
@@ -45,7 +45,7 @@ This way, we don't need to instantiate a lot of clients to ROS actions and servi
 
 A major component in our robots is the worldmodel, called [ED](https://github.com/tue-robotics/ed) (Environment Descriptor).
 We use this to get a symbolic representation of the world. ED keeps track of objects and uses various plugins to e.g. recognize objects.
-ED also prvides a maps for the robot to localize itself and various other tasks. 
+ED, through the [ed_localization](https://github.com/tue-robotics/ed_localization.git)-plugin, also provides a map for the robot to localize itself (over a map-topic like the standard ROS map server) and for various other tasks. 
 
 There are several action_servers on the robot, for example for navigation. 
 ROS robots typically use move_base but we have a more advanced version called [cb_base_navigation](https://github.com/tue-robotics/cb_base_navigation), which does Constraint Based navigation.

From c44d9d2bfef13bc0a4617f26c1fc42e1c79eedc5 Mon Sep 17 00:00:00 2001
From: Loy van Beek <loy.vanbeek@gmail.com>
Date: Tue, 30 Oct 2018 19:03:29 +0100
Subject: [PATCH 07/10] Elaborate a bit about the HW interface, but needs more

---
 _posts/2018-09-25-from-ui-to-motors.markdown | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/_posts/2018-09-25-from-ui-to-motors.markdown b/_posts/2018-09-25-from-ui-to-motors.markdown
index 0386ac8cb547..0cf006a22588 100644
--- a/_posts/2018-09-25-from-ui-to-motors.markdown
+++ b/_posts/2018-09-25-from-ui-to-motors.markdown
@@ -57,3 +57,5 @@ These planners talk with the hardware interface components, that use EtherCAT to
 
 That is all published on the joint_states-topic. 
 The robot_state_publisher then uses that to publish TF-frames so we can see where different parts of the robot are and so that relative positions can be calculated etc.
+
+The hardware interface components also contain various controllers and a supervisor. 

From f1fb76652c49eac0fb8e778fb48f8117eee0c6ab Mon Sep 17 00:00:00 2001
From: Loy van Beek <loy.vanbeek@gmail.com>
Date: Tue, 30 Oct 2018 19:05:15 +0100
Subject: [PATCH 08/10] Link to TF

---
 _posts/2018-09-25-from-ui-to-motors.markdown | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_posts/2018-09-25-from-ui-to-motors.markdown b/_posts/2018-09-25-from-ui-to-motors.markdown
index 0cf006a22588..0cc48aef13d5 100644
--- a/_posts/2018-09-25-from-ui-to-motors.markdown
+++ b/_posts/2018-09-25-from-ui-to-motors.markdown
@@ -56,6 +56,6 @@ Another action-server is provided by [MoveIt](https://moveit.ros.org/) to comman
 These planners talk with the hardware interface components, which use EtherCAT to interface with the motors and encoders to send references and read back what the various joints really did.
 
 That is all published on the joint_states-topic. 
-The robot_state_publisher then uses that to publish TF-frames so we can see where different parts of the robot are and so that relative positions can be calculated etc.
+The robot_state_publisher then uses that to publish [TF-frames](http://wiki.ros.org/tf) so we can see where different parts of the robot are and so that relative positions can be calculated etc.
 
 The hardware interface components also contain various controllers and a supervisor. 

From d8d5f9458e46bdd29df47995a09ae3032576ccca Mon Sep 17 00:00:00 2001
From: Matthijs van der Burgh <matthijs.vander.burgh@live.nl>
Date: Wed, 31 Oct 2018 14:09:56 +0100
Subject: [PATCH 09/10] Update _posts/2018-09-25-from-ui-to-motors.markdown

Co-Authored-By: LoyVanBeek <LoyVanBeek@users.noreply.github.com>
---
 _posts/2018-09-25-from-ui-to-motors.markdown | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_posts/2018-09-25-from-ui-to-motors.markdown b/_posts/2018-09-25-from-ui-to-motors.markdown
index 0cc48aef13d5..c7853ebe5ec7 100644
--- a/_posts/2018-09-25-from-ui-to-motors.markdown
+++ b/_posts/2018-09-25-from-ui-to-motors.markdown
@@ -19,7 +19,7 @@ The grammar restricts what the STT can hear and thus the result is always someth
 
 Because we want to be able to both talk to the robot and text with it, taking turns or even falling back from one modality to another, we use the [HMI](https://github.com/tue-robotics/hmi). 
 The Human Machine Interface provides a client that the GPSR's conversation_engine uses. This client is connected to several servers that implement some way for the robot to ask the user a question. 
-This can thus be voice (via STT)or text (via Telegram or Slack) or some mock interface for testing. 
+This can thus be voice (via STT) or text (via Telegram or Slack) or some mock interface for testing. 
 
 The text is eventually read by the 
 [conversation_engine](https://github.com/tue-robotics/conversation_engine). 

From 92e5d2e6e41df51fcd19f21bb19edf322dbc84d0 Mon Sep 17 00:00:00 2001
From: Loy <LoyVanBeek@users.noreply.github.com>
Date: Tue, 5 Feb 2019 19:30:29 +0100
Subject: [PATCH 10/10] Update 2018-09-25-from-ui-to-motors.markdown

---
 _posts/2018-09-25-from-ui-to-motors.markdown | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/_posts/2018-09-25-from-ui-to-motors.markdown b/_posts/2018-09-25-from-ui-to-motors.markdown
index c7853ebe5ec7..5bcb091bd30b 100644
--- a/_posts/2018-09-25-from-ui-to-motors.markdown
+++ b/_posts/2018-09-25-from-ui-to-motors.markdown
@@ -55,7 +55,7 @@ Another action-server is provided by [MoveIt](https://moveit.ros.org/) to comman
 
 These planners talk with the hardware interface components, which use EtherCAT to interface with the motors and encoders to send references and read back what the various joints really did.
 
-That is all published on the joint_states-topic. 
-The robot_state_publisher then uses that to publish [TF-frames](http://wiki.ros.org/tf) so we can see where different parts of the robot are and so that relative positions can be calculated etc.
+That is all published on the joint_states-topic. This lists, for each joint in the robot, what position it is in, what velocity it is moving at and what effort (in N or Nm) that requires. Together with the robot's URDF model, this is used by the [robot_state_publisher](https://wiki.ros.org/action/fullsearch/urdf/Tutorials/Using%20urdf%20with%20robot_state_publisher) to publish the full state of the robot. 
+The robot_state_publisher publishes [TF-frames](http://wiki.ros.org/tf) so we can see where the different parts of the robot are and so that relative positions between them can be calculated.
 
 The hardware interface components also contain various controllers and a supervisor.