diff --git a/notebooks/chapter19/Optimizer and Backpropagation.ipynb b/notebooks/chapter19/Optimizer and Backpropagation.ipynb
index e1c0a4db7..6a67e36ce 100644
--- a/notebooks/chapter19/Optimizer and Backpropagation.ipynb
+++ b/notebooks/chapter19/Optimizer and Backpropagation.ipynb
@@ -10,7 +10,15 @@
     "\n",
     "## Stochastic Gradient Descent\n",
     "\n",
-    "The goal of an optimization algorithm is to nd the value of the parameter to make loss function very low. For some types of models, an optimization algorithm might find the global minimum value of loss function, but for neural network, the most efficient way to converge loss function to a local minimum is to minimize loss function according to each example.\n",
+    "The goal of an optimization algorithm is to find parameter values that make the loss function very low. For some types of models, an optimization algorithm may find the global minimum of the loss function, but for a neural network the most efficient way to converge to a local minimum is to minimize the loss on each example in turn.\n",
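+    "\n",
+    "A minimal sketch of a single per-example update step (the names `sgd_step`, `grad_loss`, `w`, and `lr` are illustrative, not part of this notebook):\n",
+    "\n",
+    "```python\n",
+    "def sgd_step(w, x, y, grad_loss, lr=0.01):\n",
+    "    # grad_loss is assumed to return dLoss/dw evaluated on the single example (x, y).\n",
+    "    return w - lr * grad_loss(w, x, y)\n",
+    "```\n",
     "\n",
-    "Gradient descent uses the following update rule to minimize loss function:"
+    "Gradient descent uses the following update rule to minimize the loss function:"
    ]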