Commit c274e37

Merge pull request #107 from gtbook/frank_feb10
Chapter 6
2 parents 6dbaa09 + 24412ca commit c274e37

16 files changed: +950 -1023 lines changed

S23_sorter_sensing.ipynb

Lines changed: 1 addition & 1 deletion
@@ -217,7 +217,7 @@
 "id": "1SqwIzxjWfu6",
 "metadata": {},
 "source": [
-"As an example, in Figure [1](fig:category_prior) we define a CPT for our binary sensor example and pretty-print it. Note the rows add up to 1.0, as each row is a valid probability mass function (PMF)."
+"As an example, in Figure [1](#fig:category_prior) we define a CPT for our binary sensor example and pretty-print it. Note the rows add up to 1.0, as each row is a valid probability mass function (PMF)."
 ]
 },
 {
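
The cell touched above defines a conditional probability table (CPT) whose rows are probability mass functions. As a quick aside, a minimal numpy sketch of that row-sum property, with made-up table values rather than the notebook's gtbook code:

```python
import numpy as np

# Hypothetical CPT P(measurement | category) for a binary sensor:
# each row conditions on one category, so each row must sum to 1.0.
cpt = np.array([
    [0.9, 0.1],   # P(z | category 0)
    [0.2, 0.8],   # P(z | category 1)
])

# Every row is a valid probability mass function.
assert np.allclose(cpt.sum(axis=1), 1.0)
print(cpt)
```
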

S34_vacuum_perception.ipynb

Lines changed: 10 additions & 14 deletions
@@ -23,7 +23,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 1,
+"execution_count": null,
 "id": "E5rsQom9hatQ",
 "metadata": {
 "tags": [
@@ -40,12 +40,12 @@
 }
 ],
 "source": [
-"%pip install -U -q gtbook\n"
+"%pip install -U -q gtbook"
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 2,
+"execution_count": null,
 "id": "OFkNEfdX_CLe",
 "metadata": {
 "tags": [
@@ -74,12 +74,12 @@
 " import google.colab\n",
 "except:\n",
 " import plotly.io as pio\n",
-" pio.renderers.default = \"png\"\n"
+" pio.renderers.default = \"png\""
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 3,
+"execution_count": null,
 "id": "gIariOyXd8PE",
 "metadata": {
 "tags": [
@@ -104,7 +104,7 @@
 "N = 3\n",
 "X = VARIABLES.discrete_series(\"X\", range(1, N+1), vacuum.rooms)\n",
 "A = VARIABLES.discrete_series(\"A\", range(1, N), vacuum.action_space)\n",
-"Z = VARIABLES.discrete_series(\"Z\", range(1, N+1), vacuum.light_levels)\n"
+"Z = VARIABLES.discrete_series(\"Z\", range(1, N+1), vacuum.light_levels)"
 ]
 },
 {
@@ -458,7 +458,7 @@
 "<figcaption>An HMM for three time steps, represented as a Bayes net.</figcaption>\n",
 "</figure>\n",
 "\n",
-"Figure [3.9](#fig:unrolledHMM) shows an example of an HMM for three time steps, i.e., \n",
+"Figure [2](#fig:unrolledHMM) shows an example of an HMM for three time steps, i.e., \n",
 "$\\mathcal{X}=\\{X_1, X_2, X_3\\}$ and\n",
 "$\\mathcal{Z}=\\{Z_1, Z_2, Z_3\\}$. As discussed above, in a Bayes net\n",
 "each node is associated with a conditional distribution: the Markov\n",
@@ -750,7 +750,7 @@
 "we only represent the *hidden* variables $X_1$, $X_2$, and $X_3$, \n",
 "connected to factors that encode probabilistic information. For\n",
 "our example with three hidden states, the corresponding factor graph is\n",
-"shown in Figure [3.25](#fig:HMM-FG).\n",
+"shown in Figure [4](#fig:HMM-FG).\n",
 "It should be clear from the figure that the connectivity of a factor\n",
 "graph encodes, for each factor $\\phi_{i}$, which subset of variables\n",
 "$\\mathcal{X}_{i}$ it depends on. We write:\n",
@@ -793,9 +793,7 @@
 "In other words, the independence relationships are encoded by the edges\n",
 "$e_{ij}$ of the factor graph, with each factor $\\phi_{i}$ a function of\n",
 "*only* the variables $\\mathcal{X}_{i}$ in its adjacency set. As example, \n",
-"for the factor graph in Figure\n",
-"<a href=\"#fig:HMM-FG\" data-reference-type=\"ref\" data-reference=\"fig:HMM-FG\">2</a>\n",
-"we have: \n",
+"for the factor graph in Figure [4](#fig:HMM-FG) we have: \n",
 "\\begin{equation}\n",
 "\\begin{aligned}\n",
 "\\mathcal{X}_1 & =\\{X_1\\}\\\\\n",
@@ -1161,9 +1159,7 @@
 "Given an HMM factor graph of size $n$, the **max-product algorithm** is an $O(n)$ algorithm\n",
 "to find the MAP estimate, which is used by GTSAM under the hood.\n",
 "\n",
-"Let us use the example from Figure\n",
-"<a href=\"#fig:HMM-FG\" data-reference-type=\"ref\" data-reference=\"fig:HMM-FG\">2</a>\n",
-"to understand the main idea behind it. To find the MAP estimate for $\\mathcal{X}$ we need to\n",
+"Let us use the example from Figure [4](#fig:HMM-FG) to understand the main idea behind it. To find the MAP estimate for $\\mathcal{X}$ we need to\n",
 "*maximize* the product\n",
 "\\begin{equation}\n",
 "\\phi(X_1, X_2, X_3)=\\prod\\phi_{i}(\\mathcal{X}_{i})\n",

S44_logistics_perception.ipynb

Lines changed: 2 additions & 2 deletions
@@ -1191,7 +1191,7 @@
 "\\begin{equation}\n",
 "\\Phi(X)=\\sum_i \\phi(X_i) = \\frac{1}{2} \\sum_i \\|A_i X_i-b_i\\|^2.\n",
 "\\end{equation}\n",
-"In the continuous case we use *minimization* of the log-likelihood rather than maximization over the probabilities. The main reason is because then inference becomes a linear least squares problem."
+"In the continuous case we use *minimization* of the log-likelihood rather than maximization over the probabilities. The main reason is because then inference becomes a linear least-squares problem."
 ]
 },
 {
@@ -1320,7 +1320,7 @@
 "id": "pDcm35cDLftk",
 "metadata": {},
 "source": [
-"### Sparse Least-Squares\n",
+"### Sparse Least Squares\n",
 "\n",
 "In practice we use *sparse factorization methods* to solve for $X^*$. In particular, *sparse Cholesky* factorization can efficiently decompose the sparse Hessian $Q$ into its matrix square root $R$\n",
 "\\begin{equation}\n",

S54_diffdrive_perception.ipynb

Lines changed: 0 additions & 8 deletions
@@ -305,12 +305,10 @@
 "For example, the first pixel in the edge image has the value 2, which is calculated from the values \n",
 "$\begin{bmatrix}3 & 3 & 5\end{bmatrix}$, as highlighted below:\n",
 "\\begin{equation}\n",
-"\\begin{align}\n",
 "\\begin{bmatrix}\n",
 "3 & \\textbf{3} & \\textbf{3} & \\textbf{5} & 5 & 5 & 5 & 2 & 2 & 2 \\\\\n",
 "3 & 0 & \\textbf{2} & 2 & 0 & 0 & -3 & -3 & 0 & -2\n",
 "\\end{bmatrix}\n",
-"\\end{align}\n",
 "\\end{equation}\n",
 "The \"recipe\" to calculate the edge value is just taking a weighted sum,\n",
 "where the weights are defined by our filter:\n",
@@ -343,12 +341,10 @@
 "```\n",
 "Let us examine the input and output again:\n",
 "\\begin{equation}\n",
-"\\begin{align}\n",
 "\\begin{bmatrix}\n",
 "3 & 3 & 3 & 5 & 5 & 5 & 5 & 2 & 2 & 2 \\\\\n",
 "3 & 0 & 2 & 2 & 0 & 0 & -3 & -3 & 0 & -2\n",
 "\\end{bmatrix}\n",
-"\\end{align}\n",
 "\\end{equation}\n",
 "We already understand the first $2$. The output pixel next to it *also* has the value $2$, as you can verify using the formula. You might object to the fact that the edge seems to be \"doubly wide\", and that we could do better with the simpler filter $\begin{bmatrix}-1 & 1\end{bmatrix}$, which people also use. However, making a $1\times 3$ filter with a zero in the middle ensures that the edges do not \"shift\". The resulting simple filter is widely used and known a **Sobel filter**.\n",
 "\n",
@@ -394,15 +390,11 @@
 "\n",
 "Armed with this formula, we can now understand the edge detection above. For each output pixel $h[i,j]$, we do a pointwise multiplication of the $1 \\times 3$ filter \n",
 "\\begin{equation}\n",
-"\\begin{align}\n",
 "\\begin{pmatrix}g[0,-1] & g[0,0] & g[0,1]\\end{pmatrix} = \\begin{pmatrix}-1 & 0 & 1\\end{pmatrix}\n",
-"\\end{align}\n",
 "\\end{equation}\n",
 "with the $1 \\times 3$ window \n",
 "\\begin{equation}\n",
-"\\begin{align}\n",
 "\\begin{pmatrix}f[i,j-1] & f[i,j+0] & f[i,j+1]\\end{pmatrix}\n",
-"\\end{align}\n",
 "\\end{equation}\n",
 "in the input image $f$.\n",
 "\n",

S56_diffdrive_learning.ipynb

Lines changed: 1 addition & 1 deletion
@@ -421,7 +421,7 @@
 "source": [
 "We can then use the PyTorch training code below, which is a standard way of training any differentiable function, including our `LineGrid` class. That is because all the operations inside the `LineGrid` class are differentiable, so gradient descent will just work.\n",
 "\n",
-"Inside the training loop below, you'll find the typical sequence of operations: zeroing gradients, performing a forward pass to get predictions, computing the loss, and doing a backward pass to update the model's parameters. Try to understand the code, as this same training loop is at the core of most deep learning architectures. Now, let's take a closer look at the code itself, which is extensively documented for clarity, and listed in Figure [2](#train_gd)."
+"Inside the training loop below, you'll find the typical sequence of operations: zeroing gradients, performing a forward pass to get predictions, computing the loss, and doing a backward pass to update the model's parameters. Try to understand the code, as this same training loop is at the core of most deep learning architectures. Now, let's take a closer look at the code itself, which is extensively documented for clarity, and listed in Figure [2](#code:train_gd)."
 ]
 },
 {
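
The change above is just a figure label, but since the paragraph spells out the canonical training loop (zero gradients, forward pass, loss, backward pass, parameter update), here is a generic PyTorch sketch of that loop; the model, data, and hyperparameters are placeholders, not the notebook's `LineGrid` setup:

```python
import torch

# Placeholder data and model; any differentiable module trains the same way.
inputs = torch.randn(64, 2)
targets = torch.randn(64, 1)
model = torch.nn.Linear(2, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.MSELoss()

for epoch in range(100):
    optimizer.zero_grad()                  # zero the gradients
    predictions = model(inputs)            # forward pass
    loss = loss_fn(predictions, targets)   # compute the loss
    loss.backward()                        # backward pass: populate gradients
    optimizer.step()                       # update the model's parameters
```
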

S60_driving_intro.ipynb

Lines changed: 17 additions & 1 deletion
@@ -8,6 +8,8 @@
 "source": [
 "# Autonomous Vehicles\n",
 "\n",
+"```{index} self-driving cars\n",
+"```\n",
 "> Self-driving cars can be thought of as large-scale wheeled mobile robots that navigate in the real world based on sensor data.\n",
 "\n",
 "<img src=\"Figures6/S60-Autonomous_Vehicle_with_LIDAR_and_cameras-09.jpg\" alt=\"Splash image with steampunk autonomous car\" width=\"60%\" align=center style=\"vertical-align:middle;margin:10px 0px\">\n"
@@ -18,13 +20,27 @@
 "id": "YhpQ6vC4mBFg",
 "metadata": {},
 "source": [
+"```{index} autonomous driving\n",
+"```\n",
 "In this chapter we look at some of the basic concepts involved in autonomous driving. Needless to say, the topic of autonomous vehicles is rather large, and we only cover a small selection in this chapter. \n",
 "\n",
+"```{index} SO(2), SE(2), Ackermann steering\n",
+"```\n",
 "We begin by becoming a bit more serious about movement in the plane, first introducing the matrix group SO(2) to represent rotation in the plane, and then extending this to the matrix group SE(2), which can be used to represent both rotation and translation in the plane. We then introduce kinematics in the form of Ackermann steering, which is common in automobiles. \n",
 "\n",
+"```{index} LIDAR, Pose SLAM\n",
+"```\n",
+"```{index} pair: iterative closest points; ICP\n",
+"```\n",
+"```{index} pair: simultaneous localization and mapping; SLAM\n",
+"```\n",
 "In addition to cameras, a very popular sensor in autonomous driving is the LIDAR sensor. We develop the basic geometry of LIDAR sensors, and then present the iterative closest points (ICP) algorithm as a way to obtain relative pose measurements from successive LIDAR scans. This leads naturally to the problem of simultaneous localization and mapping or SLAM, a very popular topic in robotics. Here we cover the most basic version, *Pose SLAM*, which only needs relative pose measurements. \n",
 "\n",
-"In section 5 we look at motion primitives to do some motion planning on the road. Finally, in section 6, we discuss the basics of deep reinforcement learning."
+"```{index} motion primitives\n",
+"```\n",
+"```{index} pair: deep reinforcement learning; DRL\n",
+"```\n",
+"We then look at motion primitives to do motion planning on the road, alongside polynomial and spline-based path planning. Finally, we discuss the basics of deep reinforcement learning with an autonomous driving example."
 ]
 }
 ],
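
For the SO(2)/SE(2) index entries added above: a planar rotation is a $2\times 2$ orthogonal matrix with determinant 1, and an SE(2) element augments it with a translation inside a $3\times 3$ homogeneous transform. A small numpy illustration with an arbitrary angle and translation:

```python
import numpy as np

theta, t = np.deg2rad(30), np.array([2.0, 1.0])   # arbitrary angle and translation

# SO(2): planar rotation.
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.allclose(R.T @ R, np.eye(2)) and np.isclose(np.linalg.det(R), 1.0)

# SE(2): rotation and translation as a single 3x3 homogeneous transform.
T = np.eye(3)
T[:2, :2], T[:2, 2] = R, t

p = np.array([1.0, 0.0, 1.0])   # a point in homogeneous coordinates
print(T @ p)                    # the point rotated by 30 degrees, then translated
```
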
