diff --git a/_data/navigation.yml b/_data/navigation.yml index a380856f..22b85084 100644 --- a/_data/navigation.yml +++ b/_data/navigation.yml @@ -59,6 +59,8 @@ wiki: url: /wiki/common-platforms/ros/ros-intro/ - title: ROS Arduino Interface url: /wiki/common-platforms/ros/ros-arduino-interface/ + - title: ROS Node Lifecycle + url: /wiki/common-platforms/ros/ros-lifecycle/ - title: ROS Motion Server Framework url: /wiki/common-platforms/ros/ros-motion-server-framework/ - title: ROS Cost Maps @@ -79,6 +81,8 @@ wiki: url: /wiki/common-platforms/ros2-navigation-for-clearpath-husky/ - title: Hello Robot Stretch RE1 url: /wiki/common-platforms/hello-robot + - title: Kinova Arms + url: /wiki/common-platforms/kinova-setup-and-apis/ - title: Sensing url: /wiki/sensing/ children: @@ -252,6 +256,8 @@ wiki: url: /wiki/computing/setup-gpus-for-computer-vision/ - title: Ubuntu Dual Boot and Troubleshooting Guide url: /wiki/computing/troubleshooting-ubuntu-dual-boot/ + - title: Quantum Computing and Qiskit + url: /wiki/computing/quantum/ - title: Fabrication url: /wiki/fabrication/ children: @@ -321,6 +327,8 @@ wiki: url: /wiki/tools/code-editors-Introduction-to-vs-code-and-vim/ - title: Qtcreator UI development with ROS url: /wiki/tools/Qtcreator-ros/ + - title: Tutorial on Using USB Compute Sticks + url: /wiki/tools/usb-compute-sticks/ - title: Datasets url: /wiki/datasets/ children: diff --git a/wiki/common-platforms/assets/kinova_control.png b/wiki/common-platforms/assets/kinova_control.png new file mode 100644 index 00000000..f0205734 Binary files /dev/null and b/wiki/common-platforms/assets/kinova_control.png differ diff --git a/wiki/common-platforms/assets/kinova_controls.png b/wiki/common-platforms/assets/kinova_controls.png new file mode 100644 index 00000000..897b2376 Binary files /dev/null and b/wiki/common-platforms/assets/kinova_controls.png differ diff --git a/wiki/common-platforms/assets/kinova_example_setup.png 
b/wiki/common-platforms/assets/kinova_example_setup.png new file mode 100644 index 00000000..22a14f81 Binary files /dev/null and b/wiki/common-platforms/assets/kinova_example_setup.png differ diff --git a/wiki/common-platforms/assets/kinova_login.png b/wiki/common-platforms/assets/kinova_login.png new file mode 100644 index 00000000..3ded7a07 Binary files /dev/null and b/wiki/common-platforms/assets/kinova_login.png differ diff --git a/wiki/common-platforms/assets/kinova_new_user.png b/wiki/common-platforms/assets/kinova_new_user.png new file mode 100644 index 00000000..79eb4100 Binary files /dev/null and b/wiki/common-platforms/assets/kinova_new_user.png differ diff --git a/wiki/common-platforms/assets/kinova_packages.png b/wiki/common-platforms/assets/kinova_packages.png new file mode 100644 index 00000000..d084f9c8 Binary files /dev/null and b/wiki/common-platforms/assets/kinova_packages.png differ diff --git a/wiki/common-platforms/assets/kinova_rqt_example.png b/wiki/common-platforms/assets/kinova_rqt_example.png new file mode 100644 index 00000000..0c6bffa5 Binary files /dev/null and b/wiki/common-platforms/assets/kinova_rqt_example.png differ diff --git a/wiki/common-platforms/assets/kinova_rviz.png b/wiki/common-platforms/assets/kinova_rviz.png new file mode 100644 index 00000000..6be4f4eb Binary files /dev/null and b/wiki/common-platforms/assets/kinova_rviz.png differ diff --git a/wiki/common-platforms/assets/kinova_start.png b/wiki/common-platforms/assets/kinova_start.png new file mode 100644 index 00000000..b55f3910 Binary files /dev/null and b/wiki/common-platforms/assets/kinova_start.png differ diff --git a/wiki/common-platforms/assets/kinova_summary.png b/wiki/common-platforms/assets/kinova_summary.png new file mode 100644 index 00000000..50e71ea5 Binary files /dev/null and b/wiki/common-platforms/assets/kinova_summary.png differ diff --git a/wiki/common-platforms/assets/kinova_vision.png b/wiki/common-platforms/assets/kinova_vision.png new file mode 
100644 index 00000000..4ad00d35 Binary files /dev/null and b/wiki/common-platforms/assets/kinova_vision.png differ diff --git a/wiki/common-platforms/assets/life_cycle_sm.png b/wiki/common-platforms/assets/life_cycle_sm.png new file mode 100644 index 00000000..4830f35d Binary files /dev/null and b/wiki/common-platforms/assets/life_cycle_sm.png differ diff --git a/wiki/common-platforms/assets/ros_states.png b/wiki/common-platforms/assets/ros_states.png new file mode 100644 index 00000000..b45e858f Binary files /dev/null and b/wiki/common-platforms/assets/ros_states.png differ diff --git a/wiki/common-platforms/kinova-setup-and-apis.md b/wiki/common-platforms/kinova-setup-and-apis.md new file mode 100644 index 00000000..8613c6cc --- /dev/null +++ b/wiki/common-platforms/kinova-setup-and-apis.md @@ -0,0 +1,348 @@ +--- +# Jekyll 'Front Matter' goes here. Most are set by default, and should NOT be +# overwritten except in special circumstances. +# You should set the date the article was last updated like this: +date: 2024-05-05 # YYYY-MM-DD +# This will be displayed at the bottom of the article +# You should set the article's title: +title: Kinova Gen3 Arms +# The 'title' is automatically displayed at the top of the page +# and used in other parts of the site. +--- +Kinova Arms are a popular brand of robotic arms used for research. They are very precise and widely available at many labs and spaces on the CMU campus. Setting them up and using the existing APIs is not trivial, however. This guide covers some good practices when using the arms and how to install and run the Python and ROS APIs. + +## Kinova Arms + +### Introduction + +Kinova is a manufacturer of robotic arms that are to be used mostly in research. They are very precise at the cost of not being very strong or powerful. They are also popular in the CMU campus. 
The AI Maker Space at Tepper, for example, is equipped with two Kinova Gen3 6DoF arms, each fitted with a Robotiq 2F-85 gripper end-effector. One arm has been labeled with a blue band and the other with a green band. + +This guide is a summarized version of how to get the arms up and running. For more in-depth information, please consult the materials below: + +Kinova Gen3 6DoF manual: + +YouTube tutorial playlist: + +Kinova website: + +Tech summary of AIMS arms: + +|Firmware version|2\.5.2 (newest available)| +| :- | :- | +|ROS version tested|Noetic with Ubuntu 20| + + +![Kinova Gen3 6DoF Specs](assets/kinova_summary.png) + + +### Starting, using and stopping the arms + +#### Starting + +The arm should be resting as in the position below: + +![Kinova starting position](assets/kinova_start.png) + + +It is essential that the **gripper is not blocked**, as the arm will open and close the gripper upon startup. If the arm is in a position where the gripper is blocked, move the arm gently until it reaches a **stable** position where the gripper is free to open and close. + +Once the arm is in a safe position, check that the power supply is connected and turned on. **Locate the red e-stop button and keep it within reach at all times**. Unlock the robot by twisting the e-stop button in the direction of the arrow on it. The button should pop out. + +Then press the silver button on the back of the arm until a blue LED lights up, then release it. **DO NOT** hold the button for more than 10 seconds, as that will factory reset the arm. The lights will show blue and yellow during startup. **Once the gripper closes and opens and the light turns solid green, the arm is ready for use.** + +#### Using + +To check that the arm is working, grab the Xbox controller attached to it. Its main layouts are shown below. These are the default layouts the robot can work with.
I recommend taking some time to control the robot with the Xbox controller in all layouts to see how they behave. Remember to press RB when the robot is not in use to lock it, so as to avoid accidental movements. Unlock it with LB before using it again. + +![Kinova controls](assets/kinova_controls.png) + +Once you have verified the controls are working, it is time to connect to the web interface. This serves two purposes: + +1. Creating an account for you, where you will register that you are using the arm. +2. The web app is the simplest way to control the arm “semi-programmatically”, and is a good starting point for people with little to no coding experience. + +> Even though the users are separate, configurations for the arm are shared regardless of the user who established them. Therefore, DO NOT change settings such as the IP address or firmware type. Only change things that are fundamental to your project, such as camera resolution or custom poses. + +> Related to the previous point, do check that all essential configurations are correct before running your code, as they might have been changed between uses by other users. + +The address of the web interface is simply the IP address of the arm. This should be marked on the arm itself, but is repeated here for convenience. + +|Which robot?|Robot’s IP| +| :- | :- | +|Blue|192\.168.2.9| +|Green|192\.168.2.10| + +Connect the robot’s ethernet cable to your laptop. To reach the robot, your laptop must be on the same network, so change your wired connection settings to: + +|IPv4|Subnet mask| +| :- | :- | +|192\.168.2.xx|255\.255.255.0| + +where xx is a number greater than 10 (usually people set this to 11). Where exactly to change IP settings depends on your OS, but a quick Google search usually solves that. If you change your IP settings, unplug and reconnect the ethernet cable to reset the connection before using the robot.
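As a quick sanity check before fiddling with cables, the Python standard library's ```ipaddress``` module can confirm that the static address you picked is on the same /24 network as the robot. This is just an illustrative sketch using the addresses from the tables above:

```python
import ipaddress

def same_subnet(host_ip: str, robot_ip: str, netmask: str = "255.255.255.0") -> bool:
    """Return True if the host and the robot are on the same network."""
    net = ipaddress.ip_network(f"{robot_ip}/{netmask}", strict=False)
    return ipaddress.ip_address(host_ip) in net

# Laptop at 192.168.2.11 can reach the blue arm at 192.168.2.9:
print(same_subnet("192.168.2.11", "192.168.2.9"))  # True
# A laptop left on a different subnet cannot:
print(same_subnet("10.0.0.5", "192.168.2.9"))      # False
```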
+ +Connect to the web interface by going to a web browser and typing http://. You should be greeted with the login page: + +![Kinova login](assets/kinova_login.png) + +You will use the admin account only once in order to create your own account. Its credentials are: + +|Login|Password| +| :- | :- | +|admin|admin| + +Once you are logged in, it is time to create your account. Click on the three bars next to the Kinova logo on the top left corner. That will open the arm’s menu. Then click on “users”. In the users page, click on the plus sign in the bottom right corner. That will bring the new user interface: + +![Kinova new user interface](assets/kinova_new_user.png) + +Fill it as follows: + +|Field|Value| +| :- | :- | +|Username|Your andrew id.| +|Password|Whatever your heart desires| +|First/last name|Your first and last name, as registered with the university (or at least something close enough to be easily identifiable). This will be used to match your profile to an actual person | +|User type|Select “research”| +|Notes|Write a brief description as to why you are using the arms. If using for official research project or course, please include advisor/instructor name| + +Once you add your profile, log out of the admin account and into your new profile. + +To test if the web interface is working properly, click on the “pose” icon on the bottom of the screen and try to move the robot. You can also test actions, admittances, etc. + +#### Stopping +To stop the robot: + +1. Press and hold “A” on the xbox controller until the arm stops moving. This will bring the arm to the retracted position. + +2. Once the arm is retracted, hold it with one hand close to the gripper. With the other, hit the e-stop red button. This will make the arm limp. When the arm is stably resting against itself, let go. + +There, the arm is safely shut down. + +### Ways to use the arms +There is also the Kinova API and the Kinova ROS packages. 
The table below shows some pros and cons of each method. Choose the one that is the most suitable for your goals. The API and ROS API are described further down on this page. + +||Web App|API|ROS| +| :- | :- | :- | :- | +|Pros|
Easiest to use; least amount of experience required; graphical interface|Integration with Python/C++/MATLAB; native commands|Integration with ROS and its products, such as MoveIt and CVBridge| +|Cons|Very low customization ability; can only solve simple problems; low programming capabilities|Requires some setting up and coding skill|Requires proficiency with coding and with ROS; documentation is not extensive, and some reverse engineering of the ROS packages they provide is required
| + + +## Kinova Arms API + +### Introduction + +Kinova distributes an API (Kortex) to be used with the Gen3 arms. This allows you to control the arms programmatically instead of graphically (as in the web app). This means you get more control, new functionality and a lot more automation with the arms’ usage. On the other hand, setting up and using the API requires some programming knowledge. + +This guide is supposed to be a very brief tutorial on how to use the API. For more detailed information, please refer to the Kinova manual. Pages 123-148 detail the different types of robot control available (high-level and low-level). Pages 228-243 include instructions on how to use the API. The API is available in Python, C++ and MATLAB (simplified). This guide will cover Python usage for Linux. + + +Link to API’s github: + +Link to AIMS’s API tutorial: + +Python documentation: + +C++ documentation: + +### Operation + +The robot can be controlled in either high-level or low-level robot control. There are benefits to both forms of control. High-level control is easier with more protections, while low-level control offers faster commands and finer-grained control with less protection. High-level is the default on boot-up and offers the safest and most straight-forward control. This is the mode we will focus on. + +> If you’re an advanced user, please consult the manual for low-level control instructions. + +In both high-level and low-level, commands are sent through the robot base. + +In high-level control, commands are sent to the base via a single command using the Kinova.Api.Base API. These commands are processed by Kinova robot control libraries. This control runs at 40Hz. + +The robot consists of several devices: + +- Base controller +- Actuators (each actuator is a distinct device) +- Interface module +- Vision module + +The devices in the robot each implement a particular set of services, some of which are available across multiple devices. 
The methods available as part of a service on a device are accessed via remote procedure calls (RPC). + +![Packages/modules of Kinova API](assets/kinova_packages.png) + +The API implements robot control in high-level mode by interacting with the base, which then relays commands to the actuators internally using its own control libraries. + +![Architecture of Kinova API](assets/kinova_control.png) + + +In order to be able to communicate with the robot, you need to follow a couple of steps: + +1. Start a TCP connection to the robot based on its IP +2. Using the TCP connection, start a device router. This will ensure that messages for the robot will go to the right place +3. Use the device router with your credentials to start a session. This will prove to the robot that you are an authorized user +4. After the session has been created, you can access the services offered by each device. For high-level control, you will generally interact only with Base and Vision devices + +On a more technical note, communication with the robot is handled using Google Protocol Buffer. + +### Setting up + +To set up the API for Python, download the .whl file from . This tutorial and the arms have been set up for v2.6.0. Once it has been downloaded, open a terminal and navigate to the file’s location. + +> If you plan on using a virtual environment, remember to activate it before running the following commands. + +Once that is done, run: + +```shell +$ python3 -m pip install ./kortex_api-2.6.0.post3-py3-none-any.whl +``` + +And you should have the Python API installed. + + +### Scripts + +Rather than describing the API step by step, the github link from AIMS provides a set of example codes to get you up to speed. They should cover high-level functions such as: + +- Connecting +- Accessing the camera +- Image treatment using openCV +- Frame transformation +- Commanding cartesian positions +- Commanding the gripper + +The code and its comments should guide you. 
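The connection sequence described in the Operation section (transport → router → session → service client) can be sketched in plain Python. Note this is only a structural sketch: the stub classes below mimic the layering of the Kortex API but are *not* the real ```kortex_api``` classes, whose exact names and signatures should be taken from the official examples.

```python
# Stand-in classes that mirror the Kortex connection layering.
# They are NOT the real kortex_api classes; they only show call order.
class Transport:
    def connect(self, ip, port=10000):  # step 1: TCP connection (10000 is
        self.endpoint = (ip, port)      # assumed here as the TCP port)

class Router:
    def __init__(self, transport):      # step 2: route messages to devices
        self.transport = transport

class Session:
    def __init__(self, router, username, password):  # step 3: authenticate
        self.router = router
        self.credentials = (username, password)

class BaseClient:
    def __init__(self, router):         # step 4: access a device service
        self.router = router

transport = Transport()
transport.connect("192.168.2.9")        # blue arm's IP, from the table above
router = Router(transport)
session = Session(router, "your_andrew_id", "your_password")
base = BaseClient(router)               # high-level commands go through Base
```

With the real API, the same sequence ends with calling methods on the Base (and Vision) clients, and the session should be closed and the transport disconnected on exit.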
The Kortex github also offers more examples of API usage. + +The code we offer identifies a green ball using the arm’s camera, picks it up and drops it. + +> Remember to check the code parameters before running the files! + +Code list: + +connect.py: establish the connection to the robot and link to the devices that will be used. This file can be run on its own (remember to change the parameters) + +vision.py: use the camera’s info and stream to identify a green ball in the image and calculate its position globally. Can’t be used on its own as it need a connection to the robot + +arm_mover.py: can send the robot and the gripper to positions. Can’t be used on its own as it needs a connection to the robot + +detect_and_catch.py: combines all the programs together. After the opencv window opens, press ‘a’ (for ‘action’) once you are satisfied with the point the camera pinpoints as the center of the ball. + + +![Kinova arm setup for demo](assets/kinova_example_setup.png) + + +## Kinova Arms - ROS API + + +### Introduction + + +In addition to the native API, which works in Python, C++ and MATLAB, Kinova also publishes packages that aim to integrate the arm with ROS. This allows users to perform commands using ROS packages such as MoveIt. + +For ROS users, this API provides an interesting choice, as you don’t need to know the commands defined by the native API. On the other hand, it offers less documentation and is somewhat less precise. + +> This tutorial assumes that you already know and installed ROS and its tools + +Links: + +ROS API Github: + +ROS Vision API Github: + +AIMS ROS Tutorial files: + +> This tutorial was tested on Ubuntu 20 with ROS Noetic + +### Download and setup + +On your home directory, create a ROS-ready directory + +```shell +$ mkdir -p ros_ws/src +$ cd ros_ws +$ catkin init +``` + +Download the Kinova packages: + +> You need different packages for controlling the arm and for accessing vision. + +> The controller is made up of several packages. 
To see what each one does, consult the “contents” section of the API’s github. + +```shell +$ cd src +$ git clone -b noetic-devel https://github.com/Kinovarobotics/ros_kortex.git +$ git clone https://github.com/Kinovarobotics/ros_kortex_vision.git +``` + +Configure Conan and install dependencies to build the files + + +```shell +$ cd .. +$ sudo python3 -m pip install conan==1.59 +$ conan config set general.revisions_enabled=1 +$ conan profile new default --detect > /dev/null +$ conan profile update settings.compiler.libcxx=libstdc++11 default +$ rosdep install --from-paths src --ignore-src -y +``` + +Build the package to ensure there are no problems with the installation + +```shell +$ catkin build +``` + +You should get a build that might contain warnings, but no errors. + +### Running the packages + +After resourcing the workspace to detect the new files, you can test if installation was successful by running a few launch commands. + +For the arm commands and planner: + +```shell +$ roslaunch kortex_driver kortex_driver.launch ip_address:= dof:=6 gripper:=robotiq_2f_85 vision:=true start_rviz:=true username:= password:= +``` +RViz should have popped open. You can add a MoveIt node by clicking Add->Motion Planner. Use the arrows to move the arm’s goal, then “plan and execute” to see the movement both in RViz and in real life. + +![Kinova arm in Rviz](assets/kinova_rviz.png) + +The procedure to test the vision node is similar: + +```shell +$ roslaunch kinova_vision kinova_vision.launch device:= +``` + +![Kinova vision in rqt](assets/kinova_vision.png) + +### Running the example code + +As with the native API, the AIMS example has the arm tracking a green ball, then making a plan to catch it, then finally executing the plan. Only now this process is implemented through ROS nodes. 
+ +First, download the example package to your src folder + +```shell +$ cd src +$ git clone https://github.com/CMU-AI-Maker-Space/Kinova_API_Demo_ROS.git +``` + +Then go back to your workspace root, rebuild the package with “catkin build”, and resource it. + +> In case build fails because of dependencies: you can try: + +```shell +$ rosdep install --from-paths src --ignore-src -y +pip3 install numpy opencv-python +``` + +The package object_detection is made of two nodes: + +- object_tracker_node: finds the center of the ball in 3D coordinates and publishes an image with it circled (topic /tracking/image_with_tracking). Also saves the position of the ball as a ros parameter in the server (/ball_center) +- object_catcher_node: moves the arm, both to the tracking position and to catch the ball + +To run the code: + +```shell +$ roslaunch object_detection tracking_catching.launch ip:= username:= password:= +``` + +Watch the streamed image topic on rqt image viewer (rqt should have opened with the launch file). Once you are satisfied with the tracking, you can call the catching service (/my_gen3/catch_ball) from rqt as well. + +> You might need to click the refresh button on rqt to find the service + +![Kinova rqt example](assets/kinova_rqt_example.png) diff --git a/wiki/common-platforms/ros/ros-lifecycle.md b/wiki/common-platforms/ros/ros-lifecycle.md new file mode 100644 index 00000000..bcb2581d --- /dev/null +++ b/wiki/common-platforms/ros/ros-lifecycle.md @@ -0,0 +1,256 @@ +--- +# Jekyll 'Front Matter' goes here. Most are set by default, and should NOT be +# overwritten except in special circumstances. +# You should set the date the article was last updated like this: +date: 2024-11-26 # YYYY-MM-DD +# This will be displayed at the bottom of the article +# You should set the article's title: +title: ROS 2 Node Lifecycle +# The 'title' is automatically displayed at the top of the page +# and used in other parts of the site. 
+--- +## Introduction +Many robotics platforms implement state machines as part of their functionality. ROS 2 offers a convenient way of working with state machines in the form of ```managed nodes```, also called ```lifecycle nodes```. These nodes can be turned on/off, configured/unconfigured, etc. In a nutshell, lifecycle nodes can be activated or deactivated based on the current state of a robot's state machine. + +Before ROS 2, state machine implementations basically relied on ignoring nodes when they were not useful to the current state. While this is still possible in ROS 2, lifecycle nodes offer significant advantages from an efficiency standpoint: + ++ More control: you don't have to worry about topics from a node that should be ignored influencing the robot's operation. ++ Less network clutter: if a node and its topics are not contributing to operations, they are just cluttering the network. This can be very noticeable depending on the size of the data being transmitted. ++ Better debugging: you can track in which state of its operation the node failed. + +Given these advantages, it is recommended to use lifecycle nodes as the default implementation of ROS state machines. + +![An example of using node lifecycle to turn nodes on or off](../assets/ros_states.png) + +## Node Lifecycle + +The figure below summarizes the possible states and transitions for each managed node. This image was obtained from the [ROS Design Website](). + +![All states for a managed node](../assets/life_cycle_sm.png) + +There are two types of states a node can be in: + +Primary states: these are the steady states, represented in blue in the picture. A node can stay in a primary state indefinitely. These states are: + ++ ```unconfigured```: this is the state the node will be in as soon as it is instantiated. If a non-fatal error is raised during operation, the node can come back to this state. ++ ```inactive```: the node has been configured, but it is not running any process.
Beware: the node can still queue data (e.g. from subscribed topics) while in this state. It will just not process anything. ++ ```active```: this is where the node behaves as a "traditional" node, performing its operations regularly. ++ ```finalized```: this is where nodes go when they fail or are terminated. The node will still exist for debugging purposes, but cannot be re-run. For the node to vanish, a ```destroy()``` function has to be called. + +Please note that ROS offers a lot of freedom when implementing these states (even their demo strays a bit from the convention above). Try to keep your use reasonable for other developers. + +Secondary states: also known as "transition states", these serve as buffers between primary states, where the node performs some internal operation relating to a corresponding ```transition``` function. These states are: + ++ ```Configuring``` ++ ```CleaningUp``` ++ ```ShuttingDown``` ++ ```Activating``` ++ ```Deactivating``` ++ ```ErrorProcessing``` + +While almost all of these states' functionalities and their corresponding transition functions can be easily inferred from the lifecycle diagram, ```ErrorProcessing``` deserves some extra explanation. As you can see from the diagram, transition states can sometimes fail, returning to the previous primary state. This is **not** the purpose of ```ErrorProcessing```. A transition state returns to the original primary state when its function fails "logically", e.g. a check inside an if statement requires the program to have been running for 10 minutes before the node activates. ```ErrorProcessing```, on the other hand, is reached when an error is **raised**, e.g. you tried dividing something by zero. + +## Triggering Transitions + +As the diagram shows, there are transitions between the states. It is possible to see that they usually come in pairs.
For example, there is transition function ```configure()``` and there is also a ```onConfigure()``` function inside the node secondary state ```Configuring```. The nomenclature can be a bit confusing, so here is a brief explanation: + ++ ```function()```: This is the name used by the lifecycle framework to trigger transitions. When you want to tell a node to move into another state (more in a sec), this is the name you use. These names come with ROS and don't need additional programming. ++ ```onFunction()```: This is a callback function, defined inside the node, that will be actually responsible for performing the state transition. In other words, this is the function that is actually aware of what the node is. When the lifecycle manager calls ```node1 configure```, it is the function ```onConfigure()```, inside ```node1``` that will be executed. ++ ```Functioning```: This is the (transient) state the node is at while executing callback function ```onFunction()```. + +With all this in mind, changing a node state can happen in two ways: either through CLI tools or through a service call. + +### CLI Lifecycle + +For CLI commands, you can run: + +```bash +ros2 lifecycle +``` + +Start the lifecycle talker node provided with ROS: + +```bash +ros2 run lifecycle lifecycle_talker +``` + +To get the state the node is in, run + +```bash +ros2 lifecycle get /lc_talker +``` + +Which should return + +```bash +unconfigured [1] +``` + +As expected. The number in brackets is the id of the node state. This is not super relevant for node states, as these ids are not really used for commands. + +Much more interesting are the ids for transitions. If you run: + +```bash +ros2 lifecycle list /lc_talker +``` + +You should get as output: + +```bash +- configure [1] + Start: unconfigured + Goal: configuring +- shutdown [5] + Start: unconfigured + Goal: shuttingdown +``` + +These are the possible transitions from primary state ```Unconfigured```, as shown in the lifecycle diagram. 
Note the ids here, as they will be useful when discussing services. + +To change states, you should call the command ```set``` with the transition function name e.g.: + +```bash +ros2 lifecycle set /lc_talker configure +``` + +Returning to the ```lc_talker``` terminal should reveal the messages: + +```bash +[INFO] [1732664038.655707440] [lc_talker]: on_configure() is called. +[INFO] [1732664039.655992380] [lc_talker]: Lifecycle publisher is currently inactive. Messages are not published. +[WARN] [1732664039.656126348] [LifecyclePublisher]: Trying to publish message on the topic '/lifecycle_chatter', but the publisher is not activated +[INFO] [1732664040.656052111] [lc_talker]: Lifecycle publisher is currently inactive. Messages are not published. +``` + +### Services + +All these lifecycle commands are basically services. + +For example, we can make a standard service call to get the current state of the node: + +```bash +ros2 service call /lc_talker/get_state lifecycle_msgs/GetState +``` + +```bash +response: +lifecycle_msgs.srv.GetState_Response(current_state=lifecycle_msgs.msg.State(id=2, label='inactive')) +``` + +See the id field? This is where they become important. For service calls requesting state transitions, you need to know the id of the transition (not to be confused with the id of the state itself). 
To get those, you could run, for example: + +```bash +ros2 service call /lc_talker/get_available_transitions lifecycle_msgs/srv/GetAvailableTransitions +``` + +```bash +response: +lifecycle_msgs.srv.GetAvailableTransitions_Response(available_transitions=[lifecycle_msgs.msg.TransitionDescription(transition=lifecycle_msgs.msg.Transition(id=2, label='cleanup'), start_state=lifecycle_msgs.msg.State(id=2, label='inactive'), goal_state=lifecycle_msgs.msg.State(id=11, label='cleaningup')), lifecycle_msgs.msg.TransitionDescription(transition=lifecycle_msgs.msg.Transition(id=3, label='activate'), start_state=lifecycle_msgs.msg.State(id=2, label='inactive'), goal_state=lifecycle_msgs.msg.State(id=13, label='activating')), lifecycle_msgs.msg.TransitionDescription(transition=lifecycle_msgs.msg.Transition(id=6, label='shutdown'), start_state=lifecycle_msgs.msg.State(id=2, label='inactive'), goal_state=lifecycle_msgs.msg.State(id=12, label='shuttingdown'))]) +``` + +The output is a bit confusing (and better seen in RQt), but we can notice that the id for ```activate``` is 3. If we want to move to that state, a service call is also possible: + +```bash +ros2 service call /lc_talker/change_state lifecycle_msgs/ChangeState "{transition: {id: 3}}" +``` + +This service also has a ```label``` field, which is not required (but highly recommended). Back in the talker terminal: + +```bash +[INFO] [1732664498.641014385] [lc_talker]: Lifecycle publisher is active. Publishing: [Lifecycle HelloWorld #459] +``` + +When inside the code, all lifecycle changes are done through service calls. More on that below. + +## In Code + +You can find the lifecycle examples at the ROS demo [github](https://github.com/ros2/demos/tree/rolling/lifecycle/src).This guide will comment just a few key points on that code. 
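Before diving into the C++, it may help to see how small the underlying state machine actually is. The plain-Python sketch below (no ROS needed) encodes the primary states and the transitions from the diagram; the ids are intended to match those printed by the CLI and services above, but treat the table as illustrative rather than authoritative.

```python
# Transitions available from each primary state, mirroring the lifecycle
# diagram: transition label -> (assumed transition id, goal primary state).
TRANSITIONS = {
    "unconfigured": {"configure": (1, "inactive"), "shutdown": (5, "finalized")},
    "inactive":     {"cleanup": (2, "unconfigured"), "activate": (3, "active"),
                     "shutdown": (6, "finalized")},
    "active":       {"deactivate": (4, "inactive"), "shutdown": (7, "finalized")},
}

def set_state(current: str, transition: str) -> str:
    """Apply a transition, as `ros2 lifecycle set` would; raise if invalid."""
    if transition not in TRANSITIONS.get(current, {}):
        raise ValueError(f"cannot {transition} from {current}")
    _, goal = TRANSITIONS[current][transition]
    return goal

state = "unconfigured"
state = set_state(state, "configure")   # -> inactive
state = set_state(state, "activate")    # -> active
print(state)                            # active
```

Trying an invalid transition (say, ```configure``` while ```active```) raises an error, just as the real lifecycle services reject transitions that are not available from the current state.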
Right at the definition of the talker node, we see: + +```cpp +class LifecycleTalker : public rclcpp_lifecycle::LifecycleNode +``` + +Note that the node doesn't inherit from the typical ```rclcpp::Node``` class. Not all node capabilities are available for a lifecycle node (and vice-versa). + +As for the callback functions, you can see that they have special signatures and return values: + +```cpp +rclcpp_lifecycle::node_interfaces::LifecycleNodeInterface::CallbackReturn + on_configure(const rclcpp_lifecycle::State &) + { + // This callback is supposed to be used for initialization and + // configuring purposes. + // We thus initialize and configure our publishers and timers. + // The lifecycle node API does return lifecycle components such as + // lifecycle publishers. These entities obey the lifecycle and + // can comply with the current state of the node. + // As of the beta version, there is only a lifecycle publisher + // available. + pub_ = this->create_publisher<std_msgs::msg::String>("lifecycle_chatter", 10); + timer_ = this->create_wall_timer( + 1s, [this]() {return this->publish();}); + + RCLCPP_INFO(get_logger(), "on_configure() is called."); + + // We return a success and hence invoke the transition to the next + // step: "inactive". + // If we returned TRANSITION_CALLBACK_FAILURE instead, the state machine + // would stay in the "unconfigured" state. + // In case of TRANSITION_CALLBACK_ERROR or any thrown exception within + // this callback, the state machine transitions to state "errorprocessing". + return rclcpp_lifecycle::node_interfaces::LifecycleNodeInterface::CallbackReturn::SUCCESS; + } +``` + +From the code, you can also see that ```on_configure()``` (and the other callbacks) are never explicitly registered as service callbacks. The lifecycle framework takes care of that. + +The last point that should be highlighted is in ```main```.
Notice the node is not run as a regular node: + +```cpp +rclcpp::init(argc, argv); + +rclcpp::executors::SingleThreadedExecutor exe; + +std::shared_ptr<LifecycleTalker> lc_node = + std::make_shared<LifecycleTalker>("lc_talker"); + +exe.add_node(lc_node->get_node_base_interface()); + +exe.spin(); + +rclcpp::shutdown(); +``` + +Executors are beyond the scope of this document, but you can read more about them [here](https://docs.ros.org/en/foxy/Concepts/About-Executors.html). + +Finally, as kind of a side note, you can also launch and trigger lifecycle nodes from launch files, as explained in [1]: + +```python +from launch import LaunchDescription +from launch_ros.actions import LifecycleNode +from launch_ros.actions import LifecycleTransition +from lifecycle_msgs.msg import Transition + +def generate_launch_description(): +    return LaunchDescription([ +        LifecycleNode(package='package_name', executable='a_managed_node', name='node1'), +        LifecycleNode(package='package_name', executable='a_managed_node', name='node2'), +        LifecycleTransition( +            lifecycle_node_names=['node1', 'node2'], +            transition_ids=[Transition.TRANSITION_CONFIGURE, Transition.TRANSITION_ACTIVATE] +        ) +    ]) +``` + +## Summary +ROS 2 implements the node lifecycle as a feature to conveniently manage node states. This offers multiple advantages over traditional methods and is particularly helpful when implementing state machines. + +## Further Reading +- [ROS Design on Lifecycle Nodes](http://design.ros2.org/articles/node_lifecycle.html) +- [ROS Demo on Lifecycle Nodes](https://github.com/ros2/demos/blob/humble/lifecycle/README.rst) + +## References +[1] M. M. Bassa, *A Very Informal Journey Through ROS 2*. Leanpub, 2024. [Online].
Available: https://leanpub.com/averyinformaljourneythroughros2 diff --git a/wiki/computing/assets/bell_annotated.png b/wiki/computing/assets/bell_annotated.png new file mode 100644 index 00000000..77d5976b Binary files /dev/null and b/wiki/computing/assets/bell_annotated.png differ diff --git a/wiki/computing/assets/bell_measure_annotated.png b/wiki/computing/assets/bell_measure_annotated.png new file mode 100644 index 00000000..28766085 Binary files /dev/null and b/wiki/computing/assets/bell_measure_annotated.png differ diff --git a/wiki/computing/assets/bloch.png b/wiki/computing/assets/bloch.png new file mode 100644 index 00000000..dc9e7330 Binary files /dev/null and b/wiki/computing/assets/bloch.png differ diff --git a/wiki/computing/assets/bloch_qiskit.png b/wiki/computing/assets/bloch_qiskit.png new file mode 100644 index 00000000..6a5952da Binary files /dev/null and b/wiki/computing/assets/bloch_qiskit.png differ diff --git a/wiki/computing/assets/circuit_annotated.png b/wiki/computing/assets/circuit_annotated.png new file mode 100644 index 00000000..5788c563 Binary files /dev/null and b/wiki/computing/assets/circuit_annotated.png differ diff --git a/wiki/computing/assets/counts.png b/wiki/computing/assets/counts.png new file mode 100644 index 00000000..9faab1f7 Binary files /dev/null and b/wiki/computing/assets/counts.png differ diff --git a/wiki/computing/assets/superposition.png b/wiki/computing/assets/superposition.png new file mode 100644 index 00000000..ef2ecb39 Binary files /dev/null and b/wiki/computing/assets/superposition.png differ diff --git a/wiki/computing/assets/teleportation_labelled.png b/wiki/computing/assets/teleportation_labelled.png new file mode 100644 index 00000000..7cbffbfb Binary files /dev/null and b/wiki/computing/assets/teleportation_labelled.png differ diff --git a/wiki/computing/quantum.md b/wiki/computing/quantum.md new file mode 100644 index 00000000..49b6a601 --- /dev/null +++ b/wiki/computing/quantum.md @@ -0,0 +1,341 @@ 
+--- +# Jekyll 'Front Matter' goes here. Most are set by default, and should NOT be +# overwritten except in special circumstances. +# You should set the date the article was last updated like this: +date: 2024-05-05 # YYYY-MM-DD +# This will be displayed at the bottom of the article +# You should set the article's title: +title: Quantum Computing and the Qiskit Package +# The 'title' is automatically displayed at the top of the page +# and used in other parts of the site. +--- +With the undeniable rise of quantum computers, future generations of roboticists must be versed in the functioning and applications of this technology. However, very few concise guides exist that explain quantum computing in introductory terms for a technical audience. This article presents some basic principles of quantum computing together with the Python package Qiskit, developed by IBM. + +## Fundamentals of Quantum Computing - Single Qubit Systems + +While classical computers operate in terms of "bits", quantum computers are based on the concept of "qubits". The difference between the two reflects a difference between the deterministic and the probabilistic. A classical computer could, for example, read an electric voltage and associate a certain bit value with it: if the voltage is below a certain threshold, it is assigned the bit "0"; if the voltage is higher than that threshold, it gets the bit "1". Since electrical signals (among other physical entities) follow classical laws, the signal can either be 0 or 1, meaning it is binary. + +On the other hand, consider the spin of some electron, which is described by quantum mechanics. Say we assign the spin up to the state $|0\rangle$ and the spin down to the state $|1\rangle$. This way, any state $|\psi\rangle$ of the electron's spin can be expressed as in the equation below, where the normalization is required as per Born's rule. The range of all possible values of $|\psi\rangle$ is called its vector space or state space.
+ + +$$ +|\psi\rangle = \alpha |0\rangle + \beta|1\rangle \; | \; \alpha,\beta \in \mathbb{C}, \; |\alpha|^2 + |\beta|^2 = 1 +$$ + + +![Classical bit vs quantum bit](assets/superposition.png) + + +We can use the electron’s spin (or some other physical entity bound by quantum mechanics) as a means of transmitting information, therefore stepping into the world of quantum computing (QC). + +Any quantum signal whose state can be written as a superposition of two binary states is a "qubit". In our example, the spin of a single electron is a qubit. The pair $\{|0\rangle, |1\rangle\}$ represents an orthonormal basis, called the computational basis, which can be used to perform measurements. These measurements allow us to access the information stored in a qubit and transform it into a classical signal, or "cbit" (classical bit). This is generally done by assigning $|0\rangle \rightarrow 0$ and $|1\rangle \rightarrow 1$. + +Some differences in relation to classical computing already start to appear. Consider, for example, the qubit whose state is given by $|\psi\rangle = \frac{1}{\sqrt{2}}|0\rangle + \frac{1}{\sqrt{2}}|1\rangle$. If we measure this signal with respect to the computational basis, we get $Pr[|0\rangle] = Pr[|1\rangle] = \frac{1}{2}$, meaning that the same signal, when converted to a classical bit (using the equivalence previously established), will yield bit 0 50% of the time and bit 1 50% of the time. Therefore, the same signal can result in different values when measured. This property is what makes QC different from classical computing. While a classical bit can be either 0 or 1, a quantum bit exists as a whole range of superpositions, and its measured value is probabilistic. + + +A common misconception is that quantum computers are simply a more powerful version of regular computers. In truth, they are based on a completely different operating principle, which makes them better for some tasks and worse for others.
Some areas where quantum computers (will) excel are: + +- Cryptography and cryptanalysis +- Simulations +- Materials science +- Etc. + + +Back to the technical bit. We previously mentioned the computational basis $\{|0\rangle, |1\rangle\}$, but in reality any pair of states $\{|a_1\rangle, |a_2\rangle\}$ such that $\langle a_i | a_j \rangle = \delta_{ij}$, with $\delta_{ij}$ the Kronecker delta, can be used as a basis for the qubit's state space. Although we have discussed measurements against the computational basis, measurements can in fact be made against any orthonormal basis. Some special orthonormal state pairs are particularly noteworthy: + + +- Basis $\{|+\rangle, |-\rangle \}$ (also called the Hadamard basis) + - $\vert + \rangle = \frac{1}{\sqrt{2}}(\vert0\rangle + \vert1\rangle)$ + - $\vert - \rangle = \frac{1}{\sqrt{2}}(\vert0\rangle - \vert1\rangle)$ +- Basis $\{|+i\rangle, |-i\rangle \}$ + - $\vert +i \rangle = \frac{1}{\sqrt{2}}(\vert0\rangle + i\vert1\rangle)$ + - $\vert -i \rangle = \frac{1}{\sqrt{2}}(\vert0\rangle - i\vert1\rangle)$ + + +There are several ways to represent a single qubit. One of them is the vector representation, which is particularly useful when we wish to represent operators/gates as matrices. In this representation, a state is written as $|\psi\rangle = \begin{bmatrix} \alpha \\ \beta \end{bmatrix}$. Henceforth, whenever a vector notation is used, we'll assume it is with respect to the computational basis. + +Another representation of a single qubit system relates to the concepts of global phase and relative phase. Two qubit states $|\psi\rangle$ and $|\psi'\rangle$ are considered equivalent ($|\psi\rangle \sim |\psi'\rangle$) if there exists a value $\alpha \in [0, 2\pi)$ such that $|\psi\rangle = e^{i \alpha} |\psi'\rangle$. The angle $\alpha$ is called the global phase of the state.
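This equivalence is easy to check numerically: for normalized states, $|\psi\rangle$ and $|\psi'\rangle$ differ only by a global phase exactly when $|\langle \psi' | \psi \rangle| = 1$. A small NumPy sketch (plain NumPy, not Qiskit):

```python
import numpy as np

def equal_up_to_global_phase(psi, phi, tol=1e-9):
    """True if |psi> = e^{i alpha} |phi> for some real alpha (normalized states)."""
    psi = np.asarray(psi, dtype=complex)
    phi = np.asarray(phi, dtype=complex)
    # np.vdot conjugates its first argument, so this is <phi|psi>.
    return abs(abs(np.vdot(phi, psi)) - 1.0) < tol

plus = np.array([1, 1]) / np.sqrt(2)       # |+>
rotated = np.exp(1j * np.pi / 4) * plus    # e^{i pi/4} |+>, physically the same state
minus = np.array([1, -1]) / np.sqrt(2)     # |->, a genuinely different state

print(equal_up_to_global_phase(plus, rotated))  # True
print(equal_up_to_global_phase(plus, minus))    # False
```

Since no measurement can distinguish the two equivalent states, the global phase carries no physical information.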
By manipulating the global phase of the state, it is possible to see that any qubit state can be written as $|\psi\rangle = \begin{bmatrix} \cos{\frac{\theta}{2}} \\ e^{i\varphi} \sin{\frac{\theta}{2}}\end{bmatrix}$, with $\theta \in [0, \pi]$ and $\varphi \in [0, 2\pi)$. The angle $\varphi$ is called the relative phase. The angle pair $(\theta, \varphi)$ can be used to represent the qubit using the so-called "Bloch sphere", which is shown below. In the Bloch sphere, we represent the state $|\psi\rangle$ as a unit vector from the origin, with $\theta$ being its polar angle and $\varphi$ its azimuth angle. Another feature of the Bloch sphere is that it maps the special states previously described to key points on the sphere's surface. + +![Representing a state on the Bloch sphere](assets/bloch.png) + +## Single Qubit Systems in Qiskit + +From Wikipedia: + +> Qiskit is an open-source software development kit (SDK) for working with quantum computers at the level of circuits, pulses, and algorithms. It provides tools for creating and manipulating quantum programs and running them on prototype quantum devices on IBM Quantum Platform or on simulators on a local computer. It follows the circuit model for universal quantum computation, and can be used for any quantum hardware (currently supports superconducting qubits and trapped ions) that follows this model. + +As mentioned above, Qiskit offers packages that can run both on local hardware and on IBM's actual quantum computers. In order to run them on IBM's machines, you can follow the instructions here: + +For documentation on Qiskit: + +For installation: + +```shell +pip3 install qiskit +pip3 install qiskit-ibm-runtime +``` + +The code below serves as a starter on how to plot a single vector onto a Bloch sphere.
More advanced code will be shown as we progress + +```python +import numpy as np +from qiskit.visualization import plot_bloch_vector +import matplotlib.pyplot as plt + +# Define a Bloch sphere point using the angles mentioned +r = 1 +theta = np.pi/2 +phi = np.pi/3 +plot_bloch_vector([r, theta, phi], coord_type='spherical', title="Bloch Sphere Qiskit") + +plt.show() +``` + +![Representing a state on the Bloch sphere - Qiskit](assets/bloch_qiskit.png) + + +## Multiple Qubit Systems + +Suppose we now have two qubit channels, $A$ and $B$, each one containing a state: $|\psi_A\rangle_A = a_0|0\rangle_A + a_1 |1\rangle_A$ and $|\psi_B\rangle_B = b_0|0\rangle_B + b_1 |1\rangle_B$ respectively. The full state $|\psi\rangle_{AB}$ of the system is written below, where $\otimes$ represents the tensor product of the elements. + + +$$ +\begin{split} + |\psi\rangle_{AB} = |\psi_A\rangle_A \otimes |\psi_B\rangle_B = a_0b_0 |0\rangle_A \otimes |0\rangle_B \\ + +a_0b_1\vert0\rangle_A\otimes\vert1\rangle_B \\ + +a_1b_0\vert1\rangle_A\otimes\vert0\rangle_B \\ + +a_1b_1\vert1\rangle_A\otimes\vert1\rangle_B +\end{split} +$$ + +We can simplify the notation on the right hand side of the equation above by adopting the convention that $|a\rangle_A \otimes |b\rangle_B = |a\rangle_A|b\rangle_B = |ab\rangle_{AB}$. The values $\{|00\rangle_{AB}, |01\rangle_{AB}, |10\rangle_{AB}, |11\rangle_{AB} \}$ form a basis for the complete system, also called the computational basis (in analogy to the single qubit system). Using the vector notation proposed previously, we have $|\psi\rangle_{AB} = [a_0b_0,\; a_0b_1,\; a_1b_0,\; a_1b_1]^T$. + + +$$ +\begin{split} + |\psi\rangle_{AB} = a_0b_0 |00\rangle_{AB} \\ + +a_0b_1 |01\rangle_{AB} \\ + +a_1b_0 |10\rangle_{AB} \\ + +a_1b_1 |11\rangle_{AB} +\end{split} +$$ + +This notion can be extended. Each qubit is a complex vector space $V_i$ of computational basis $\{|0\rangle_i, |1\rangle_i \}$. 
For a system of $n$ qubits, the complete vector space is $V=V_{n-1} \otimes ... \otimes V_1 \otimes V_0$, with computational basis $B = \{ |0\rangle_{n-1}...|0\rangle_1|0\rangle_0,|0\rangle_{n-1}...|0\rangle_1|1\rangle_0,..., |1\rangle_{n-1}...|1\rangle_1|1\rangle_0 \}$ yielding a total of $2^n$ elements. We can both drop the subscripts and join the basis states in order to keep the notation for the computational basis more palatable $B = \{|0...00\rangle, |0...01\rangle,...,|1...11\rangle \}$. It is important, however, to remember two things: + +1. the number of qubits involved in the system, which should be clear from the context; and +2. the fact that the rightmost digit of each basis element refers to qubit 0, the second rightmost to qubit 1, and so on. + +We can compress this notation even further by doing a binary-decimal conversion on the elements of the basis. By saying that qubit $0$ is the least significant qubit and that qubit $n-1$ is the most significant, we can convert a basis element $|a_{n-1}a_{n-2},...,a_1,a_0\rangle$, with $a_i \in \{0, 1\}$, to $|k\rangle$, with $k = \sum_{i=0}^{n-1} a_i 2^i$. Using this notation, which shall persist throughout this article, the computational basis can be written as $B = \{|0\rangle, |1\rangle, ..., |2^n - 1\rangle\}$ + +Example: suppose we have a system with $n=2$ qubits, $q_0$, $q_1$. The least significant qubit is $q_0$ and the most significant qubit is $q_1$. The computational basis for the system $V=V_1\otimes V_0$ is $B = \{|00\rangle_{q_1q_0}, |01\rangle_{q_1q_0}, |10\rangle_{q_1q_0}, |11\rangle_{q_1q_0}\}$, which can be written as $B=\{|0\rangle, |1\rangle, |2\rangle, |3\rangle\}$. + +Concerning the measurements for multiple qubit systems, they work in a similar fashion to that of single qubit systems. Suppose a state $|\psi\rangle = \sum_{i=0}^{2^n-1}a_i|i\rangle$ is measured against the computational basis. The probability of measuring state $|k\rangle$ is $Pr[|k\rangle] = |a_k|^2$. 
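The Born rule just stated is easy to verify numerically. The sketch below (plain NumPy, not Qiskit) computes the measurement probabilities of the 2-qubit state $|\psi\rangle = \frac{1}{\sqrt{2}}(|0\rangle + |3\rangle)$ in the decimal basis notation:

```python
import numpy as np

# Amplitudes a_i of a 2-qubit state |psi> = sum_i a_i |i>,
# here |psi> = (|0> + |3>)/sqrt(2), i.e. (|00> + |11>)/sqrt(2).
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Born rule: Pr[|k>] = |a_k|^2
probs = np.abs(psi) ** 2  # Pr[|0>] = Pr[|3>] = 0.5, the others 0

# The probabilities over a full basis always sum to 1 (normalization)
assert np.isclose(probs.sum(), 1.0)
```

The same recipe works for any number of qubits: the state vector has $2^n$ amplitudes and the probabilities are their squared magnitudes.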
By expanding the notation $|k\rangle = |a_{n-1}...a_1a_0\rangle = |a_{n-1}\rangle_{n-1}...|a_1\rangle_1|a_0\rangle_0$, where $a_i \in \{0, 1\}$, we can see that measuring $|k\rangle$ amounts to measuring $|a_{0}\rangle$ on qubit $0$, $|a_1\rangle$ on qubit 1, and so on. From here, it is merely a question of applying the same encoding to classical bits seen in the section about single qubit systems. Measurements on different bases work in a similar manner. Assuming an orthonormal basis $B' = \{|b_0\rangle, |b_1\rangle,...,|b_{2^n-1}\rangle \}$ for the n-qubit vector space (where $n\geq1$), the probability of measuring state $|b_i\rangle$ from state $|\psi\rangle$ is $Pr[|b_i\rangle] = |\langle b_i | \psi\rangle |^2$. + +### Entangled States + +A key property that arises out of multiple-qubit systems is the existence of the so-called "entangled states". + +Even though the complex vector space for an n-qubit system was constructed by taking the tensor product of the vector spaces of the individual qubits $V=V_{n-1}\otimes...\otimes V_1 \otimes V_0$, that does not mean that every state of the system can be written as a tensor product of states in the original qubits. When $\nexists \{|\psi_{n-1}\rangle_{n-1}, ..., |\psi_{1}\rangle_{1}, |\psi_{0}\rangle_{0}\} \; | \; |\psi\rangle = |\psi_{n-1}\rangle_{n-1}\otimes ... \otimes |\psi_{1}\rangle_{1} \otimes |\psi_{0}\rangle_{0}$, it is said that the system state $|\psi\rangle$ is entangled. + +Example: Consider a system of 2 qubits. The state $|\psi_1\rangle = [1/2, 1/2, 1/2, 1/2]^T$ can be written as $|\psi_1\rangle = [1/\sqrt{2}, 1/\sqrt{2}]_1^T \otimes [1/\sqrt{2}, 1/\sqrt{2}]_0^T$, meaning it is not entangled. On the other hand, the state $|\psi_2\rangle = [1/\sqrt{2}, 0, 0, 1/\sqrt{2}]^T$ cannot be written as a tensor product over the qubits and is, therefore, entangled. + +The interest in entangled states lies in the fact that measurements on one qubit yield information about some other qubit.
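Whether a 2-qubit state is entangled can be tested mechanically: reshaping the amplitude vector into a 2×2 matrix, the state is a product state exactly when that matrix has rank 1 (a special case of the Schmidt decomposition). A NumPy sketch applied to the two example states above:

```python
import numpy as np

def is_entangled(state, tol=1e-9):
    """Check whether a 2-qubit state vector (length 4) is entangled.

    Reshape the amplitudes [a00, a01, a10, a11] into a 2x2 matrix;
    the state is a product (non-entangled) state iff that matrix has rank 1.
    """
    m = np.asarray(state, dtype=complex).reshape(2, 2)
    return np.linalg.matrix_rank(m, tol=tol) > 1

psi1 = np.array([1/2, 1/2, 1/2, 1/2])       # product state from the example
psi2 = np.array([1, 0, 0, 1]) / np.sqrt(2)  # entangled state from the example

print(is_entangled(psi1))  # False
print(is_entangled(psi2))  # True
```

The rank criterion works because a product state's amplitude matrix is an outer product of the two single-qubit amplitude vectors, and outer products always have rank 1.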
Rewriting the state $|\psi_2\rangle$ of the previous example in full state notation, we get $|\psi_2\rangle_{10} = \frac{1}{\sqrt{2}}(|0\rangle_1|0\rangle_0 + |1\rangle_1|1\rangle_0)$. If we perform a measurement in the computational basis on qubit 0 and obtain $|0\rangle$, we immediately know that a measurement on qubit 1 will yield $|0\rangle$. Likewise, a measurement on qubit 0 yielding $|1\rangle$ means we have to measure $|1\rangle$ on qubit 1, and so on. + +Some entangled states are of special interest: the so-called "Bell states". These are entangled states obtained on 2-qubit systems that represent, at the same time, the simplest and the maximal form of entanglement. The Bell states are listed below: + +- $|\Psi^{00}\rangle = \frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$ +- $|\Psi^{01}\rangle = \frac{1}{\sqrt{2}}(|01\rangle + |10\rangle)$ +- $|\Psi^{10}\rangle = \frac{1}{\sqrt{2}}(|00\rangle - |11\rangle)$ +- $|\Psi^{11}\rangle = \frac{1}{\sqrt{2}}(|01\rangle - |10\rangle)$ + +As a general formula for the Bell states, we can write $|\Psi^{ij}\rangle = \frac{1}{\sqrt{2}}(|0j\rangle + (-1)^i |1 \delta_{0j}\rangle)$, for $i,j \in \{0, 1\}$. It is also worth noting that the Bell states are orthonormal, meaning they form a proper basis for the 2-qubit state space. + + +## Operators and Quantum Gates + +Operators will be defined as transformations from the state space of a quantum system to itself (Rieffel and Polak, 2011). Not all imaginable operators are permissible, for they must satisfy the rules of quantum mechanics. Namely, the operators, once defined in their vector spaces, need to satisfy the requirements of linearity, for the principle of superposition to hold, and the preservation of the inner product, so that no contradictions arise in terms of measurement. In the equations below, $U$ is an operator and $U^\dag$ denotes the complex conjugate transpose of $U$. Operators can be represented either as matrices or as bra-ket expressions.
+ +Linearity: + +$$ +U\sum_{i=1}^k a_i |\psi_i\rangle = \sum_{i=1}^k a_i U |\psi_i\rangle +$$ + +Inner product preservation: + +$$ +\langle \phi| U^\dag U |\psi\rangle = \langle \phi|\psi\rangle +$$ + +The equations above can be satisfied for all states only if $U^\dag = U^{-1}$, meaning operators have to be unitary. + +One very important consequence of this definition is the no-cloning theorem: $\nexists U | U(|a\rangle |0\rangle) = |a\rangle |a\rangle, \; \forall |a\rangle$. What this means is that it is not possible to construct a "cloning" operator that copies a state to another state without altering the original. + +Another consequence of the restriction that operators must be unitary is that every quantum operator can be reversed by applying its conjugate transpose. + +When an operator is applied to a small number of qubits, it is usually called a gate. Some gates are particularly important, for they are used in most quantum computing applications. We can cite, for example, the Pauli gates $X$ (also called a "bit-flip"), $Y$ and $Z$, which act upon 1-qubit systems. Their names derive from the visual effect they have on the Bloch sphere: applying an $X$ gate to state $|\psi\rangle$ means rotating this state $\pi$ radians around the $x$ axis (and the same logic holds for gates $Y$ and $Z$). Moreover, given this geometrical reasoning, it is easy to verify that $\{|0\rangle, |1\rangle\}$ are eigenstates of gate $Z$, $\{|+\rangle, |-\rangle\}$ are eigenstates of gate $X$, and $\{|+i\rangle, |-i\rangle\}$ are eigenstates of gate $Y$.
+ + +- $X = \vert 1 \rangle \langle 0 \vert + \vert 0 \rangle \langle 1 \vert = \begin{bmatrix} 0 & 1\\ 1 & 0 \end{bmatrix}$ +- $Y = i\vert 1 \rangle \langle 0 \vert - i\vert 0 \rangle \langle 1 \vert = \begin{bmatrix} 0 & -i\\ i & 0 \end{bmatrix}$ +- $Z = \vert 0 \rangle \langle 0 \vert - \vert 1 \rangle \langle 1 \vert = \begin{bmatrix} 1 & 0\\ 0 & -1 \end{bmatrix}$ + + +Another fundamental 1-qubit operator is the Hadamard gate: + + +- $H = \frac{1}{\sqrt{2}}(\vert 0 \rangle \langle 0 \vert + \vert 0 \rangle \langle 1 \vert + \vert 1 \rangle \langle 0 \vert - \vert 1 \rangle \langle 1 \vert ) = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$ + +The importance of this gate is shown below, where it maps $\{|0\rangle, |1\rangle\} \rightarrow \{|+\rangle, |-\rangle\}$. For an element $|i\rangle, \; i \in \{0, 1\}$ of the 1-qubit computational basis, we can write the Hadamard transform as $H|i\rangle = \frac{1}{\sqrt{2}}(|0\rangle + (-1)^i |1\rangle)$. + + +$$ +\begin{cases} + H|0\rangle = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 1 \end{bmatrix} = |+\rangle \\ + \\ + H|1\rangle = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ -1 \end{bmatrix} = |-\rangle +\end{cases} +$$ + + +Although the Hadamard gate was built for 1-qubit systems, it can be generalized for some specific applications, such as in the equation below, where we have an n-qubit system at state $|0\rangle_{n-1} \otimes ... \otimes |0\rangle_1 \otimes |0\rangle_0 = |0\rangle^{\otimes n}$ (the exponent notation simply denotes a repeated tensor product). We can see that such an application takes the initial state, which can be obtained by setting all original qubits to state $|0\rangle$, into a homogeneous superposition of the elements of the computational basis for the n-qubit state space.
+ + +$$ +\begin{split} + H^{\otimes n}\vert 0 \rangle ^{\otimes n} & = (H \vert 0 \rangle)^{\otimes n} \\ + & = \frac{1}{\sqrt{2^n}} (\vert 0...0 \rangle + \vert 0...1 \rangle +... +\vert 1...1 \rangle) +\end{split} +$$ + + +As for multiple-qubit gates, the most important one is the $CNOT$ ("controlled-not") gate, which acts on 2-qubit systems. Its action, illustrated below, flips the state of the second qubit when the first qubit is in state $|1\rangle$ and leaves it unchanged when the first qubit is in state $|0\rangle$. For superposed states, we can write the state's decomposition over the computational basis and then apply the $CNOT$ gate accordingly, using the operator's linearity. + +- $CNOT = \vert 00 \rangle \langle 00 \vert + \vert 01 \rangle \langle 01 \vert + \vert 11 \rangle \langle 10 \vert + \vert 10 \rangle \langle 11 \vert$ + +$$ +\begin{cases} + CNOT \vert 00 \rangle = \vert 00 \rangle \\ + CNOT \vert 01 \rangle = \vert 01 \rangle \\ + CNOT \vert 10 \rangle = \vert 11 \rangle \\ + CNOT \vert 11 \rangle = \vert 10 \rangle \\ +\end{cases} +$$ + +Any operator can be made into a controlled operator, meaning it only acts upon the state if the controller state is $|1\rangle$, doing nothing otherwise. This notion can also be extended to create classically-controlled gates, which apply an operator only upon receiving an input of $1$ from a classical bit. + +Gates can be applied in sequence and in parallel to generate circuits, which form the core of QC. + + +## Quantum Circuits + +Gates can be combined into circuits. The circuits shown in this work, such as the one below, were obtained using Qiskit. The rules for understanding quantum circuits are: + +1. Information flows from left to right; +2. Qubit registers, which are represented by single lines, are numbered, and the higher the number, the more significant the qubit; +3.
Double lines represent classical bits. It is possible to have either many double lines, each one representing a bit, or a single double line representing a bit string; +4. Measurements, which are always done against the computational basis, take a 1-qubit state to a classical bit in accordance with the encoding described in the single-qubit section above. If measuring into a bit string, significance is preserved; +5. Gates are represented by squares, labeled with the corresponding operator's letter. + + +![Example quantum circuit](assets/teleportation_labelled.png) + +To further solidify the understanding of quantum systems, some examples are shown here. + +Consider the simple, 1-qubit circuit below. In it, we have $|\psi_1\rangle = |0\rangle$. After passing through the Hadamard gate, we get $|\psi_2\rangle = H|\psi_1\rangle = H|0\rangle = |+\rangle$. Measuring $|\psi_2\rangle = |+\rangle$ against the computational basis should yield $Pr[|0\rangle] = Pr[|1\rangle] = \frac{1}{2}$, meaning we should get bit 0 and bit 1 with equal probability. Qiskit allows us to run this circuit either on a classical computer (simulating the quantum computer with pseudo-random number generation, which is evidently less efficient than using a real quantum computer) or on one of IBM's quantum computers. The results of running this circuit 1024 times are shown below, and they agree with our predictions. + + +![Simple quantum circuit](assets/circuit_annotated.png) + + +![Counts of the circuit above](assets/counts.png) + + + +We can use quantum circuits for more interesting applications. One such possibility is generating Bell states using the circuit below, where $i, j \in \{0, 1\}$, meaning we can start the qubit registers at any one of the 1-qubit basis elements. This way, we get $|\psi_1\rangle = |i\rangle_1|j\rangle_0$.
After applying the Hadamard gate at qubit 1, we arrive at $|\psi_2\rangle = \frac{1}{\sqrt{2}}(|0\rangle_1 + (-1)^i|1\rangle_1)|j\rangle_0 = \frac{1}{\sqrt{2}}(|0\rangle_1|j\rangle_0 + (-1)^i|1\rangle_1|j\rangle_0)$. We can use the linearity of the $CNOT$ operator to get to the equation below. + +$$ +\begin{split} + |\psi_3\rangle & = CNOT|\psi_2\rangle \\ + & = \frac{1}{\sqrt{2}}(CNOT(|0\rangle_1|j\rangle_0) + (-1)^i CNOT(|1\rangle_1|j\rangle_0)) +\end{split} +$$ + +![Circuit for generating Bell states](assets/bell_annotated.png) + + +The $CNOT$ gate only acts if the controlling qubit is in state $|1\rangle$, meaning the equation above is identical to the one below, which matches our previous formula for the Bell states, thus confirming that the circuit indeed generates them. + + +$$ +|\psi_3\rangle = \frac{1}{\sqrt{2}}(|0\rangle_1|j\rangle_0 + (-1)^i |1\rangle_1|\delta_{0j}\rangle_0) = |\Psi^{ij}\rangle +$$ + +We can also build a circuit to solve the inverse problem of finding $i, j$ given a Bell state $|\Psi^{ij}\rangle$. This problem is called "Bell measurement". Since both the $CNOT$ and $H$ operators used in the figure above are real and symmetric, they are their own inverses. Therefore, all we need to do is run the circuit in the "other direction", as in the figure below.
+ +![Circuit for Bell measurements](assets/bell_measure_annotated.png) + + + +## An Example Circuit in Qiskit + +The circuit above, which generates Bell states, can be implemented in Qiskit using the code snippet below: + +```python +from qiskit import QuantumRegister, ClassicalRegister +from qiskit import QuantumCircuit +import matplotlib.pyplot as plt + +# Define the values of i and j, as explained +# TODO: Change these as you wish +i = 1 +j = 1 + +# A circuit is made up of registers +# Define the two quantum and two classical registers +q = QuantumRegister(2, 'q') +c = ClassicalRegister(2, 'c') + +# Build the circuit +qc = QuantumCircuit(q, c) + +# Implement the gates and the connections +# Qiskit always starts registers at |0>, so apply an X gate to switch to |1> if needed +if i == 1: + qc.x(q[1]) +if j == 1: + qc.x(q[0]) + +# Hadamard at the topmost qubit +qc.h(q[1]) +# CNOT gate at 0 controlled by 1 +qc.cx(q[1], q[0]) +# Measurements +qc.measure(q, c) + +# Plot in mpl to see the circuit +# Reverse bits for visualization: the most significant qubit goes on top +qc.draw(output='mpl', initial_state=True, reverse_bits=True) +plt.show() +``` + + +## Further Reading +Some interesting applications of quantum circuits are: + +- Teleportation protocol: transmits quantum information between users +- Deutsch–Jozsa algorithm: oracle query solving; the first algorithm proven to be exponentially faster on quantum computers +- Grover's algorithm: search in $O(\sqrt{N})$ evaluations +- Shor's algorithm: breaks several public-key cryptography schemes, such as RSA, DH and EC; probably the most important quantum algorithm + +2023 Nature article on quantum computing (the good, the bad and the ugly): + +Big challenges in quantum computing: + + +## References +E. Rieffel and W. Polak, *Quantum Computing: A Gentle Introduction*. Cambridge, MA: MIT Press, 2011. + +M. A. Nielsen and I. L. Chuang, *Quantum Computation and Quantum Information*. Cambridge: Cambridge University Press, 2019. + +N.
David Mermin, Quantum Computer Science. Cambridge University Press, 2007. \ No newline at end of file diff --git a/wiki/tools/assets/gti_diagram.png b/wiki/tools/assets/gti_diagram.png new file mode 100644 index 00000000..3e6d17a3 Binary files /dev/null and b/wiki/tools/assets/gti_diagram.png differ diff --git a/wiki/tools/assets/ncs2workflow.png b/wiki/tools/assets/ncs2workflow.png new file mode 100644 index 00000000..16c36720 Binary files /dev/null and b/wiki/tools/assets/ncs2workflow.png differ diff --git a/wiki/tools/assets/plai_demo.png b/wiki/tools/assets/plai_demo.png new file mode 100644 index 00000000..2cd24a3e Binary files /dev/null and b/wiki/tools/assets/plai_demo.png differ diff --git a/wiki/tools/assets/pose_estimation.png b/wiki/tools/assets/pose_estimation.png new file mode 100644 index 00000000..2b980c62 Binary files /dev/null and b/wiki/tools/assets/pose_estimation.png differ diff --git a/wiki/tools/assets/stick-raspi.png b/wiki/tools/assets/stick-raspi.png new file mode 100644 index 00000000..85709080 Binary files /dev/null and b/wiki/tools/assets/stick-raspi.png differ diff --git a/wiki/tools/assets/sticks.png b/wiki/tools/assets/sticks.png new file mode 100644 index 00000000..b431330d Binary files /dev/null and b/wiki/tools/assets/sticks.png differ diff --git a/wiki/tools/usb-compute-sticks.md b/wiki/tools/usb-compute-sticks.md new file mode 100644 index 00000000..df4148a3 --- /dev/null +++ b/wiki/tools/usb-compute-sticks.md @@ -0,0 +1,372 @@ +--- +# Jekyll 'Front Matter' goes here. Most are set by default, and should NOT be +# overwritten except in special circumstances. +# You should set the date the article was last updated like this: +date: 2024-04-30 # YYYY-MM-DD +# This will be displayed at the bottom of the article +# You should set the article's title: +title: Tutorial on Using USB Compute Sticks +# The 'title' is automatically displayed at the top of the page +# and used in other parts of the site. 
+--- +USB compute sticks are meant to enhance the visual processing power of your hardware setup. Since documentation on these sticks is sorely lacking online, the point of this tutorial is to show +users how to get them working in a fast and effective way. + +## Introduction +As of the time of writing, we have found at CMU: + +- 4 Intel Neural Compute Stick (NCS) 2 +- 6 Gyrfalcon Tech Plai Plug 2803 + +![NCS2 (top) and Plai Plug 2803 (bottom)](assets/sticks.png) + +In terms of usefulness, these sticks don’t add that much power to most modern computers. Your typical notebook GPU (and even CPU) will generally be able to handle more visual processing than the sticks, especially considering real-time applications. **These sticks are useful, however, when you have to do visual inference on low-power hardware, such as running object detection using a Raspberry Pi, for example. Or when the CPU/GPU of your system will already be in full demand, such as in autonomous drone flight. In these cases, the sticks can increase the speed of your program.** + +![Stick with Raspberry Pi](assets/stick-raspi.png) + +### Pros and cons of each stick +**Intel NCS2**: + ++Documentation provided by Intel + ++Can find some level of community support online + ++Ready demos available online + +-Software support has been discontinued, so an older version has to be used + +-NCS2 has also been discontinued, so no hardware updates nor new versions + +-Getting the stick to work is not trivial + +**Gyrfalcon Tech Plai Plug 2803**: + ++Honestly, I didn’t find any… but please feel free to use this stick and prove me wrong + +-Stick (and support for it) seems to have been discontinued + +-Community support pretty much non-existent + +-Newest SDK has broken install instructions, and I couldn't get it to work. Had to use an older version + +-The SDK was made for Ubuntu 16, so getting it to work in newer versions requires some hacking + +-Uses a lot of legacy software like Python 2, OpenCV 3.0, etc.


## Intel Neural Compute Stick (NCS) 2

### Introduction and functioning

The Intel Neural Compute Stick 2 is built around the Intel Movidius Myriad X VPU, which runs on-device AI applications at high performance with ultra-low power consumption. The Myriad X VPU is a power-efficient solution for edge computing that brings computer vision and AI applications to edge devices such as connected computers, drones, smart cameras, smart home devices, security systems, VR/AR headsets, and 360-degree cameras. You can read more at: .

In order to use the NCS2, you are going to need OpenVINO. OpenVINO is an Intel deep learning toolkit that runs on multiple platforms, including the stick. The idea is for it to be kind of like Java: a sort of "meta-framework" that abstracts away from PyTorch, TensorFlow, etc. And, like Java, it is a pain to understand and use. There are two versions of OpenVINO:

- OpenVINO Runtime: this is what you will use to run your neural network application for inference. It has to be configured to run with the GPU/NCS2.

- OpenVINO Development Tools: this package contains the runtime, plus several other applications such as the Model Optimizer and the open_model_zoo tools. Python developers can just install the PyPI version. C++ developers will need to first install and build the runtime version separately before using the development tools. If you are not developing any new models/demos, the Python version should work fine for you, as you can still run the C++ demos using the runtime version (you just won't be able to develop new ones).

For full instructions, please consult . For this example, we will be installing the Python development tools and OpenVINO Runtime on an Ubuntu 20 computer, using an existing model and demo to show how the process works. Please adapt as needed to suit your application.
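To make the division of labor concrete, here is a rough sketch of what an inference script looks like with OpenVINO Runtime installed (the install steps follow below). `pick_device` is a helper of our own making, not part of OpenVINO; the Runtime calls (`Core`, `read_model`, `compile_model`) are the standard OpenVINO 2022.3 Python API, and `'MYRIAD'` is the device name the NCS2 registers under:

```python
def pick_device(available, preferred=("MYRIAD", "GPU", "CPU")):
    """Return the first preferred device present in Core().available_devices."""
    for dev in preferred:
        if dev in available:
            return dev
    return "CPU"  # safe fallback

def run_inference(model_xml, input_blob, device=None):
    """Load an IR model (.xml + sibling .bin) and run one inference.

    Requires openvino==2022.3.1; imported inside the function so the
    pure-Python helper above works even without OpenVINO installed.
    """
    from openvino.runtime import Core
    core = Core()
    model = core.read_model(model_xml)
    device = device or pick_device(core.available_devices)
    compiled = core.compile_model(model, device)  # 'MYRIAD' targets the NCS2
    return compiled([input_blob])                 # mapping of output tensors
```

This is a sketch under the assumptions above, not a drop-in replacement for the official demos — but every OpenVINO application you run later in this tutorial follows essentially this read/compile/infer shape.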

> Support for the NCS2 only goes as far as OpenVINO 2022.3.1 LTS, so be sure to use the correct version when installing or when consulting documentation.

![NCS2/OpenVINO workflow](assets/ncs2workflow.png)


### Installing OpenVINO development tools in your computer

We are going to install the OpenVINO development tools on an Ubuntu 20 computer running Python 3.8. From your home directory, run the following commands:

```shell
$ python3 -m venv openvino_env
$ source openvino_env/bin/activate
$ python3 -m pip install --upgrade pip
$ pip install "openvino-dev[caffe,kaldi,mxnet,ONNX,pytorch,tensorflow2]==2022.3.1"
```

> The arguments inside the square brackets are the frameworks that OpenVINO will be able to work with (the quotes keep your shell from interpreting the brackets). If you would rather work with TensorFlow 1, use "tensorflow1" instead of "tensorflow2" in the arguments. Also, remember to activate the environment whenever you want to use the OpenVINO development tools (by running the source command above).

Once the download is complete, you can verify the installation by running:

```shell
$ mo -h
```

The help page for the Model Optimizer should come up.

You can also run:

```shell
$ omz_downloader --print_all
```

to list all the available models in the Open Model Zoo (OMZ). The OMZ is an online repository full of pre-trained neural network models that you can use in your applications. The OMZ also provides demos to showcase these models.

The demo we are going to use in this tutorial is the "3D Human Pose Estimation Python* Demo" (), since the pipeline to get it working makes use of most OpenVINO commands. This demo needs a model to run:

- human-pose-estimation-3d-0001

Create a folder where you want to store and convert your models:

```shell
$ mkdir models
$ cd models/
```

From inside the models folder, you can use the omz_downloader command to get the model from the Open Model Zoo.
Remember to change the name of the model accordingly:

```shell
$ omz_downloader --name human-pose-estimation-3d-0001 --precisions FP16 -o .
```

> The NCS2 only supports FP16 precision.

You can see all downloaded models by running:

```shell
$ find .
```

To use a model, you must have a .bin and a .xml file for it (OpenVINO's intermediate representation, or IR). As usually happens with public models (like ours), the download does not come in that format. If your model is in any other format, you will need to run the omz_converter command, as in the example below. This will generate the .xml and .bin files.

```shell
$ omz_converter --name human-pose-estimation-3d-0001 --precisions FP16 -d . -o .
```

If you do run the conversion command, make sure you have installed the framework (PyTorch, TensorFlow, etc.) used by the original model.

Now that the model has been downloaded and converted, **copy the .bin and .xml files onto the computer where you will run the application**.

### Installing OpenVINO runtime in your computer

Go to the link for OpenVINO runtime and select your system .

> It is recommended that you install from archive files.

On an Ubuntu 20 computer, for example, you should do:

```shell
$ sudo mkdir /opt/intel
$ cd Downloads
$ curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3.1/linux/l_openvino_toolkit_ubuntu20_2022.3.1.9227.cf2c7da5689_x86_64.tgz --output openvino_2022.3.1.tgz
$ tar -xf openvino_2022.3.1.tgz
$ sudo mv l_openvino_toolkit_ubuntu20_2022.3.1.9227.cf2c7da5689_x86_64 /opt/intel/openvino_2022.3.1
$ cd /opt/intel/openvino_2022.3.1
$ sudo -E ./install_dependencies/install_openvino_dependencies.sh
```

These commands download OpenVINO Runtime and install its system dependencies.
Next, you should install the Python-specific dependencies and create a link to your OpenVINO folder (replace python3.8 with your Python version if it differs):

```shell
$ cd /opt/intel/openvino_2022.3.1
$ python3 -m pip install -r ./python/python3.8/requirements.txt
$ cd /opt/intel
$ sudo ln -s openvino_2022.3.1 openvino_2022
```

Finally, you should configure the environment. We also recommend adding the command to your .bashrc so you don't forget it; otherwise, you will have to run it every time you use OpenVINO:

```shell
$ source /opt/intel/openvino_2022/setupvars.sh
$ echo "source /opt/intel/openvino_2022/setupvars.sh" >> ~/.bashrc
```

Before proceeding, you also need to configure the NCS2. Instructions can be found in this link . They are reproduced below for your convenience:

```shell
$ sudo usermod -a -G users "$(whoami)"
$ cd /opt/intel/openvino_2022/install_dependencies/
$ sudo cp 97-myriad-usbboot.rules /etc/udev/rules.d/
$ sudo udevadm control --reload-rules
$ sudo udevadm trigger
$ sudo ldconfig
```

Reboot the computer after running these commands.

After rebooting, plug the NCS2 into your machine.

To check whether the installation was successful, open a terminal window and type:

```shell
$ python3 -c "from openvino.runtime import Core; print(Core().available_devices)"
```

If everything is correct, you should see a list that includes a device called 'MYRIAD'. That is the NCS2. If the device did not show up, check the steps in to see where the process broke down.


To run a demo, we will download the open model zoo repository, which is found at .

> The date of the branch matters. Get the one shown below.

```shell
$ git clone --recurse-submodules https://github.com/openvinotoolkit/open_model_zoo.git --branch releases/2022/3
```

The link at also contains instructions on how to build the C++ demos and the demos that require native Python, reproduced below:

```shell
$ cd open_model_zoo/demos
$ python3 -m pip install --user -r ./requirements.txt
$ pip install ./common/python
$ mkdir build
$ cd build
$ cmake -DCMAKE_BUILD_TYPE=Release -DENABLE_PYTHON=ON ..
$ cmake --build .
$ export PYTHONPATH="$(pwd)/intel64/Release:$PYTHONPATH"
```

> Remember to export the PYTHONPATH whenever you open a new terminal.

Navigate to the desired demo. In our case:

```shell
$ cd ../human_pose_estimation_3d_demo/python/
```

Some demos won't support MYRIAD out of the box. Since OpenVINO is supposed to run on any platform, usually all you need to do is add MYRIAD to the list of allowed devices at the beginning of the program, like below:

```py
DEVICE_KINDS = ['CPU', 'GPU', 'HETERO', 'MYRIAD']
```

> Remember that you'll need to recompile if you change the source code of a C++ demo.

Finally, run the demo (replace <path-to-models> with the folder where you put the .xml and .bin files):

```shell
$ python3 human_pose_estimation_3d_demo.py -m <path-to-models>/human-pose-estimation-3d-0001.xml -i 0 -d MYRIAD
```

![Pose estimation using a monocular camera, running on an NCS2 stick](assets/pose_estimation.png)


## Gyrfalcon Tech Plai Plug 2803

### Introduction and functioning

Gyrfalcon Technology's (GTI) silicon chips are high-performance, low-power hardware accelerators for convolutional neural network (CNN) solutions. They contain an optimized, expandable, parallel CNN processing engine and efficient on-chip memory. The hardware accelerators can be integrated into embedded systems deployed at the edge or in servers. (From the GTI website.)

Typical use of the Plai Plug involves developing neural network models and then using the SDK to get the models to run on the chip.


![Plai Plug work diagram](assets/gti_diagram.png)

> You need a premium membership to access the model development kits (MDKs).

> The stick is ancient, so all of the software used for it is legacy. This tutorial assumes the use of Ubuntu 20. On older OSs, installation may be easier.

### Accessing the GTI website

You will need to download the GTI software from their website. Navigate to and log in with credentials. The AI Maker Space (AIMS) in Tepper has an account whose credentials you can ask for in order to work with the sticks.

> Ignore the expired certificate warning.

You will now be able to download the SDK.

### Downloading and unpacking the SDK

Click on "Software Downloads" in the left-side menu and navigate to "SDK (4.x)". Download the appropriate version (e.g. "SDK for Linux x86 (version 4.5.0.3) – 887 MB" for the 2803). After the download has finished, navigate to your home folder and run:

```shell
$ mkdir GTI
$ tar zxvf ./Downloads/GTISDK-Linux_x86_64_v4.5.0.3.tgz -C GTI --strip-components=1
```

### Setting up the dependencies

As mentioned, the stick uses legacy software, so you need to add some older package repositories to get it to work.
In a terminal, run the following command to add a repository for the OpenCV dependencies:

```shell
$ sudo add-apt-repository ppa:linuxuprising/libpng12
```

Then run:

```shell
$ sudo gedit /etc/apt/sources.list
```

and add the following lines to the end of the file:

```
# Xenial repos for old packages
deb http://dk.archive.ubuntu.com/ubuntu/ xenial main
deb http://dk.archive.ubuntu.com/ubuntu/ xenial universe
```

Finally, refresh apt and install the dependencies for the stick:

```shell
$ sudo apt update
$ sudo apt install libx11-dev libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev libjpeg-dev python-numpy python-tk libopencv-dev libpng-dev python-opencv
$ sudo apt install libcanberra-gtk-module
```

> This will install the package python-is-python2, which adds a symbolic link to your bin that makes the python command point to python2 instead of python3. This might break other applications, so be careful.


### Sourcing the stick

Plug the stick into one of your computer's USB ports.

Go into the GTI folder you created earlier (where you extracted the package) and run:

```shell
$ source SourceMe.env
```

Press y to install the EUSB rules, y to install the FTDI rules, n when prompted to install the PCIe driver (you don't need it, and the install is broken anyway), and y when asked whether you want to install gtilib.

Remember to source SourceMe.env whenever you want to use the Plai Plug stick.


### Running demos

All demos are in the Apps folder. For Python demos (example):

```shell
$ cd Apps/Python
$ python demo.py image ../Models/2803/gti_gnet1_fc1000_2803.model ../Data/Image_bmp_c1000/beach.bmp
```

For C demos (example):

```shell
$ cd Apps/Demos
$ demo image ../Models/2803/gti_gnet1_fc1000_2803.model ../Data/Image_bmp_c1000/beach.bmp
```

For more information on running demos, consult the manual in the Documents folder of the GTI package you downloaded.
+ +![Result of a demo on the Plai Plug](assets/plai_demo.png) \ No newline at end of file