|
89 | 89 | "cell_type": "markdown",
|
90 | 90 | "metadata": {},
|
91 | 91 | "source": [
|
92 |
| - "```{index} trajectory optimization```In the previous section we saw how use factor graphs for visual SLAM and structure from motion. These perception algorithms are typically run after the robot has gathered some visual information, and provide information about what happened in the past. But how can we plan for the future? \n", |
| 92 | + "```{index} trajectory optimization\n", |
| 93 | + "```\n", |
| 94 | + "In the previous section we saw how use factor graphs for visual SLAM and structure from motion. These perception algorithms are typically run after the robot has gathered some visual information, and provide information about what happened in the past. But how can we plan for the future? \n", |
93 | 95 | "\n",
|
94 | 96 | "We already saw that RRTs are a useful tool for planning in a continuous, potentially high dimensional state space. However, RRTs are not concerned with optimality. They aim for feasible paths, where sometimes feasibility means \"collision-free\" and sometimes it includes honoring the system dynamics. But if we want to achieve optimal trajectories in terms of time to goal, best use of energy, or minimum distance, we need to turn to other methods.\n",
|
95 | 97 | "\n",
|
|
118 | 120 | "cell_type": "markdown",
|
119 | 121 | "metadata": {},
|
120 | 122 | "source": [
|
121 |
| - "```{index} path, trajectory```## Optimizing for Position\n", |
| 123 | + "```{index} path, trajectory\n", |
| 124 | + "```\n", |
| 125 | + "## Optimizing for Position\n", |
122 | 126 | "\n",
|
123 | 127 | "> Position is all we need for the first step.\n",
|
124 | 128 | "\n",
|
|
178 | 182 | "cell_type": "markdown",
|
179 | 183 | "metadata": {},
|
180 | 184 | "source": [
|
181 |
| - "```{index} occupancy map, cost map```## Occupancy and Cost Maps\n", |
| 185 | + "```{index} occupancy map, cost map\n", |
| 186 | + "```\n", |
| 187 | + "## Occupancy and Cost Maps\n", |
182 | 188 | "\n",
|
183 | 189 | "> We can use maps to encode costs to minimize.\n",
|
184 | 190 | "\n",
|
|
823 | 829 | "cell_type": "markdown",
|
824 | 830 | "metadata": {},
|
825 | 831 | "source": [
|
826 |
| - "```{index} vectored thrust```## A Virtual Vectored Thrust\n", |
| 832 | + "```{index} vectored thrust\n", |
| 833 | + "```\n", |
| 834 | + "## A Virtual Vectored Thrust\n", |
827 | 835 | "\n",
|
828 | 836 | "> What we want, in theory...\n",
|
829 | 837 | "\n",
|
|
920 | 928 | "cell_type": "markdown",
|
921 | 929 | "metadata": {},
|
922 | 930 | "source": [
|
923 |
| - "```{index} feedback control```## Combining Open Loop and Feedback Control\n", |
| 931 | + "```{index} feedback control\n", |
| 932 | + "```\n", |
| 933 | + "## Combining Open Loop and Feedback Control\n", |
924 | 934 | "\n",
|
925 | 935 | "> What we want, in practice!\n",
|
926 | 936 | "\n",
|
|
980 | 990 | "cell_type": "markdown",
|
981 | 991 | "metadata": {},
|
982 | 992 | "source": [
|
983 |
| - "```{index} controller gain```We can set up a small simulation to see how this controller behaves in practice, and in particular how the controller behaves for different values of $K_x$ and $K_v$. A factor like this is called a **controller gain**, and choosing the gains optimally is a standard problem in control theory.\n", |
| 993 | + "```{index} controller gain\n", |
| 994 | + "```\n", |
| 995 | + "We can set up a small simulation to see how this controller behaves in practice, and in particular how the controller behaves for different values of $K_x$ and $K_v$. A factor like this is called a **controller gain**, and choosing the gains optimally is a standard problem in control theory.\n", |
984 | 996 | "\n",
|
985 | 997 | "Below we use the same simulation strategy as in Section 7.2, and in particular use the `Drone` class that was defined there. In the simulation below we do not worry about the rotation yet:"
|
986 | 998 | ]
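Before that, as a rough illustration of the effect of the gains, here is a minimal point-mass sketch. It is not the book's `Drone`-based simulation; the mass, time step, gains, and goal position below are assumed values chosen only to show the speed-versus-damping trade-off.

```python
# Minimal point-mass sketch (assumptions: mass, dt, gains, and goal are made up).
import numpy as np

g = np.array([0.0, 0.0, -9.81])   # gravity [m/s^2]
mass = 1.0                        # assumed mass [kg]
dt = 0.01                         # integration step [s]

def simulate(K_x, K_v, x_goal, steps=1000):
    """Euler-integrate a point mass driven by a PD thrust command."""
    x, v = np.zeros(3), np.zeros(3)
    for _ in range(steps):
        # Desired acceleration: cancel gravity, then feedback on the errors.
        a_des = -g + K_x * (x_goal - x) + K_v * (np.zeros(3) - v)
        thrust = mass * a_des         # commanded thrust vector T^n
        a = thrust / mass + g         # actual acceleration of the point mass
        v = v + a * dt
        x = x + v * dt
    return x

# Larger K_x reacts faster but overshoots unless K_v adds enough damping.
for K_x, K_v in [(2.0, 1.0), (8.0, 1.0), (8.0, 6.0)]:
    final = simulate(K_x, K_v, x_goal=np.array([1.0, 0.0, 2.0]))
    print(f"K_x={K_x}, K_v={K_v} -> final position {np.round(final, 3)}")
```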
|
|
1090 | 1102 | "cell_type": "markdown",
|
1091 | 1103 | "metadata": {},
|
1092 | 1104 | "source": [
|
1093 |
| - "```{index} proportional```The mathematical equivalent of FPV for a control *algorithm* is to rotate everything into the body frame. Taking the desired thrust vector $T^n$ and multiplying it with the transpose of the attitude $R^n_b$ (which is the inverse rotation, recall Sections 6.1 and 7.1) yields the desired thrust vector $T^b$ in the body frame:\n", |
| 1105 | + "```{index} proportional\n", |
| 1106 | + "```\n", |
| 1107 | + "The mathematical equivalent of FPV for a control *algorithm* is to rotate everything into the body frame. Taking the desired thrust vector $T^n$ and multiplying it with the transpose of the attitude $R^n_b$ (which is the inverse rotation, recall Sections 6.1 and 7.1) yields the desired thrust vector $T^b$ in the body frame:\n", |
1094 | 1108 | "\\begin{equation}\n",
|
1095 | 1109 | "T^b = (R^n_b)^T T^n.\n",
|
1096 | 1110 | "\\end{equation}\n",
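As a quick numeric sanity check of this formula, the sketch below rotates a navigation-frame thrust vector into the body frame; the attitude (a 20-degree roll) and the thrust value are assumptions, not values from the notebook.

```python
# Sanity check of T^b = (R^n_b)^T T^n with an assumed attitude.
import numpy as np

def body_frame_thrust(nRb: np.ndarray, T_n: np.ndarray) -> np.ndarray:
    """Express a navigation-frame thrust vector in the body frame."""
    return nRb.T @ T_n  # the transpose of R^n_b is its inverse

# Example attitude: a 20-degree roll about the body x-axis (assumed value).
phi = np.deg2rad(20.0)
nRb = np.array([[1.0, 0.0, 0.0],
                [0.0, np.cos(phi), -np.sin(phi)],
                [0.0, np.sin(phi),  np.cos(phi)]])

T_n = np.array([0.0, 0.0, 9.81])      # e.g., a thrust that cancels gravity
print(body_frame_thrust(nRb, T_n))    # nonzero y-component: the drone is tilted
```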
|
|
1270 | 1284 | "cell_type": "markdown",
|
1271 | 1285 | "metadata": {},
|
1272 | 1286 | "source": [
|
1273 |
| - "```{index} cascaded controller```Note that in the code we now have an outer and an inner loop. The outer loop is for the \"slow\" translational dynamics, whereas the inner loop simulates the \"fast\" attitude dynamics. Such a **cascaded controller** is a typical design choice for drone applications." |
| 1287 | + "```{index} cascaded controller\n", |
| 1288 | + "```\n", |
| 1289 | + "Note that in the code we now have an outer and an inner loop. The outer loop is for the \"slow\" translational dynamics, whereas the inner loop simulates the \"fast\" attitude dynamics. Such a **cascaded controller** is a typical design choice for drone applications." |
1274 | 1290 | ]
|
1275 | 1291 | },
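To make the cascaded structure concrete, here is a stripped-down skeleton of the nested loops. The loop rates, gains, and the stand-in attitude error are assumptions, and the full rotation dynamics are omitted; only the outer/inner nesting mirrors the design described above.

```python
# Skeleton of a cascaded controller (assumed rates and gains; not the book's code).
import numpy as np

dt_outer = 0.05   # "slow" translational loop at 20 Hz (assumed)
dt_inner = 0.005  # "fast" attitude loop at 200 Hz (assumed)
g_n = 9.81 * np.array([0.0, 0.0, 1.0])  # gravity magnitude along nav z

def outer_loop(x, v, x_goal, K_x=2.0, K_v=1.5):
    """Translational PD law: desired thrust vector in the nav frame."""
    return g_n + K_x * (x_goal - x) - K_v * v

def inner_loop(attitude_error, K_att=8.0):
    """Attitude P law: body angular-rate command from an attitude error."""
    return K_att * attitude_error

x, v = np.zeros(3), np.zeros(3)
attitude_error = np.array([0.1, -0.05, 0.0])    # made-up initial error
for _ in range(round(1.0 / dt_outer)):          # one second of simulated time
    T_n = outer_loop(x, v, x_goal=np.array([1.0, 0.0, 2.0]))
    # The inner loop runs several times per outer step, tracking the attitude
    # implied by T_n; here we simply damp a stand-in attitude error.
    for _ in range(round(dt_outer / dt_inner)):
        rate_cmd = inner_loop(attitude_error)
        attitude_error = attitude_error - rate_cmd * dt_inner
    # Crude point-mass update so the outer loop sees fresh state.
    a = T_n - g_n
    v = v + a * dt_outer
    x = x + v * dt_outer

print("position:", np.round(x, 3), " attitude error:", np.round(attitude_error, 4))
```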
|
1276 | 1292 | {
|
|