diff --git a/notebooks/chapter19/Learners.ipynb b/notebooks/chapter19/Learners.ipynb
index 9997cfbcc..c6f3d1e4f 100644
--- a/notebooks/chapter19/Learners.ipynb
+++ b/notebooks/chapter19/Learners.ipynb
@@ -318,7 +318,7 @@
"\n",
"By default we use dense networks with two hidden layers, whose architecture is as follows:\n",
"\n",
- "
\n",
+ "
\n",
"\n",
"In our code, we implemented it as:"
]
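Outside the notebook's own implementation, a two-hidden-layer dense network of this shape can be sketched in plain numpy; the layer sizes below are illustrative assumptions, not the notebook's actual defaults:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def init_layer(n_in, n_out, rng):
    # Small random weights, zero biases
    return rng.standard_normal((n_in, n_out)) * 0.1, np.zeros(n_out)

rng = np.random.default_rng(0)
# input -> hidden1 -> hidden2 -> output (sizes are illustrative)
W1, b1 = init_layer(4, 8, rng)
W2, b2 = init_layer(8, 8, rng)
W3, b3 = init_layer(8, 3, rng)

def forward(x):
    h1 = relu(x @ W1 + b1)   # first hidden layer
    h2 = relu(h1 @ W2 + b2)  # second hidden layer
    return h2 @ W3 + b3      # linear output layer

x = rng.standard_normal((2, 4))  # batch of 2 samples
print(forward(x).shape)  # (2, 3)
```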
@@ -500,7 +500,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.7.2"
+ "version": "3.6.9"
}
},
"nbformat": 4,
diff --git a/notebooks/chapter19/Loss Functions and Layers.ipynb b/notebooks/chapter19/Loss Functions and Layers.ipynb
index cccad7a88..25676e899 100644
--- a/notebooks/chapter19/Loss Functions and Layers.ipynb
+++ b/notebooks/chapter19/Loss Functions and Layers.ipynb
@@ -40,7 +40,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "
"
+ "
"
]
},
{
@@ -88,7 +88,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "
"
+ "
"
]
},
{
@@ -390,7 +390,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.7.2"
+ "version": "3.6.9"
}
},
"nbformat": 4,
diff --git a/notebooks/chapter19/Optimizer and Backpropagation.ipynb b/notebooks/chapter19/Optimizer and Backpropagation.ipynb
index 6a67e36ce..5194adc7a 100644
--- a/notebooks/chapter19/Optimizer and Backpropagation.ipynb
+++ b/notebooks/chapter19/Optimizer and Backpropagation.ipynb
@@ -251,7 +251,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "
"
+ "
"
]
},
{
@@ -260,7 +260,7 @@
"source": [
"By applying an optimizer and the back-propagation algorithm together, we can update the weights of a neural network to minimize the loss function, alternating between the forward and backward passes. Here is a figure from [here](https://medium.com/datathings/neural-networks-and-backpropagation-explained-in-a-simple-way-f540a3611f5e) describing how a neural network updates its weights:\n",
"\n",
- "
"
+ "
"
]
},
{
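The forward/backward/update cycle described in that cell can be sketched on the simplest possible model, a linear layer trained with plain gradient descent (the data and learning rate below are illustrative, not taken from the notebook):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((16, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

w = np.zeros(3)
lr = 0.1
for _ in range(500):
    # forward pass: predictions and the error they incur
    pred = X @ w
    err = pred - y
    # backward pass: gradient of the mean squared error w.r.t. the weights
    grad = 2 * X.T @ err / len(X)
    # optimizer step: plain gradient descent
    w -= lr * grad

print(np.round(w, 3))  # close to [1.0, -2.0, 0.5]
```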
@@ -303,7 +303,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.7.2"
+ "version": "3.6.9"
}
},
"nbformat": 4,
diff --git a/notebooks/chapter19/RNN.ipynb b/notebooks/chapter19/RNN.ipynb
index 16d4928df..b6971b36a 100644
--- a/notebooks/chapter19/RNN.ipynb
+++ b/notebooks/chapter19/RNN.ipynb
@@ -12,7 +12,7 @@
"\n",
"Recurrent neural networks address this issue. They are networks with loops in them, allowing information to persist.\n",
"\n",
- "
"
+ "
"
]
},
{
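The loop-with-shared-weights idea can be sketched directly: the same weight matrices are reused at every time step, and the hidden state is what lets information persist across the sequence (sizes below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 5

# The same weights are applied at every time step -- this sharing is
# what "multiple copies of the same network" refers to.
W_xh = rng.standard_normal((n_in, n_hidden)) * 0.1
W_hh = rng.standard_normal((n_hidden, n_hidden)) * 0.1
b_h = np.zeros(n_hidden)

def rnn_forward(xs):
    h = np.zeros(n_hidden)  # initial hidden state
    states = []
    for x in xs:  # the "unrolled" loop over the sequence
        # h carries information forward from earlier steps
        h = np.tanh(x @ W_xh + h @ W_hh + b_h)
        states.append(h)
    return states

seq = rng.standard_normal((7, n_in))  # a sequence of 7 inputs
states = rnn_forward(seq)
print(len(states), states[-1].shape)  # 7 (5,)
```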
@@ -21,7 +21,7 @@
"source": [
"A recurrent neural network can be thought of as multiple copies of the same network, each passing a message to a successor. Consider what happens if we unroll the above loop:\n",
" \n",
- "
"
+ "
"
]
},
{
@@ -30,7 +30,7 @@
"source": [
"As demonstrated in the book, recurrent neural networks may be connected in many different ways: sequences in the input, the output, or in the most general case both.\n",
"\n",
- "
"
+ "
"
]
},
{
@@ -303,7 +303,7 @@
"\n",
"Autoencoders are an unsupervised learning technique in which we leverage neural networks for the task of representation learning. They work by compressing the input into a latent-space representation and then transforming or reconstructing the data from it. \n",
"\n",
- "
"
+ "
"
]
},
{
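The compress-then-reconstruct structure can be sketched as an untrained encoder/decoder pair; the weights and dimensions below are illustrative placeholders, not the notebook's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_latent = 8, 2  # compress 8 features into a 2-dim latent code

W_enc = rng.standard_normal((n_in, n_latent)) * 0.1
W_dec = rng.standard_normal((n_latent, n_in)) * 0.1

def encode(x):
    # compress the input into the latent-space representation
    return np.tanh(x @ W_enc)

def decode(z):
    # reconstruct the input from the latent code
    return z @ W_dec

x = rng.standard_normal(n_in)
z = encode(x)
x_hat = decode(z)
print(z.shape, x_hat.shape)  # (2,) (8,)
```

Training would then minimize the reconstruction error between `x` and `x_hat`, which is what forces the latent code to be a useful compressed representation.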
@@ -314,7 +314,7 @@
"\n",
"Autoencoders have different architectures for different kinds of data. Here we only provide a simple example of a vanilla autoencoder, meaning there is only one hidden layer in the network:\n",
"\n",
- "
\n",
+ "
\n",
"\n",
"You can view the source code by:"
]
@@ -479,7 +479,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.8"
+ "version": "3.6.9"
}
},
"nbformat": 4,
diff --git a/notebooks/chapter24/Image Edge Detection.ipynb b/notebooks/chapter24/Image Edge Detection.ipynb
index cc1672e51..6429943a1 100644
--- a/notebooks/chapter24/Image Edge Detection.ipynb
+++ b/notebooks/chapter24/Image Edge Detection.ipynb
@@ -69,7 +69,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "
"
+ "
"
]
},
{
@@ -105,7 +105,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "
\n",
+ "
\n",
"\n",
"We will use `matplotlib` to read the image as a numpy ndarray:"
]
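The convolution step at the heart of edge detection can be sketched without any image file: a synthetic step image and a Sobel kernel stand in for the real data, and none of the notebook's own helpers are assumed:

```python
import numpy as np

# Synthetic grayscale image: left half dark, right half bright,
# so the only vertical edge runs down the middle.
img = np.zeros((8, 8))
img[:, 4:] = 1.0

# Sobel kernel that responds to vertical edges
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def convolve2d(image, kernel):
    # Naive "valid" 2-D convolution (no padding)
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

edges = convolve2d(img, sobel_x)
# Only the windows that straddle the brightness step respond
print(np.argwhere(np.abs(edges) > 0)[:, 1])  # columns 2 and 3 only
```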
@@ -226,7 +226,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "
"
+ "
"
]
},
{
@@ -318,7 +318,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "
"
+ "
"
]
},
{
@@ -334,7 +334,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "
"
+ "
"
]
},
{
@@ -400,7 +400,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.7.2"
+ "version": "3.6.9"
}
},
"nbformat": 4,
diff --git a/notebooks/chapter24/Objects in Images.ipynb b/notebooks/chapter24/Objects in Images.ipynb
index 9ffe6e957..03fc92235 100644
--- a/notebooks/chapter24/Objects in Images.ipynb
+++ b/notebooks/chapter24/Objects in Images.ipynb
@@ -306,7 +306,7 @@
"source": [
"The bounding boxes drawn on the original picture are shown below:\n",
"\n",
- "
"
+ "
"
]
},
{
@@ -324,7 +324,7 @@
"\n",
"[Ross Girshick et al.](https://arxiv.org/pdf/1311.2524.pdf) proposed a method that uses selective search to extract just 2000 candidate regions from the image. The regions in the bounding boxes are then fed into a convolutional neural network to perform classification. The overall architecture can be sketched as:\n",
"\n",
- "
"
+ "
"
]
},
{
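The propose-crop-classify pipeline can be sketched schematically; the `propose_regions` and `classify_region` functions below are crude placeholders for selective search and the CNN, not the paper's actual components:

```python
import numpy as np

def propose_regions(image, n=5):
    # Placeholder for selective search: a few random boxes (x, y, w, h)
    rng = np.random.default_rng(0)
    h, w = image.shape[:2]
    xs = rng.integers(0, w - 10, n)
    ys = rng.integers(0, h - 10, n)
    return [(int(x), int(y), 10, 10) for x, y in zip(xs, ys)]

def classify_region(patch):
    # Placeholder for the CNN: call a patch "object" if it is mostly bright
    return "object" if patch.mean() > 0.5 else "background"

image = np.zeros((64, 64))
image[20:40, 20:40] = 1.0  # a single bright "object"

detections = []
for (x, y, w, h) in propose_regions(image):
    patch = image[y:y + h, x:x + w]  # crop the proposed region
    if classify_region(patch) == "object":
        detections.append((x, y, w, h))

print(detections)
```

R-CNN's contribution is precisely this decomposition: a class-agnostic proposal stage keeps the number of regions manageable (2000 instead of every possible window), and a learned classifier scores each one.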
@@ -446,7 +446,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.7.2"
+ "version": "3.6.9"
}
},
"nbformat": 4,