notebooks/chapter19/Optimizer and Backpropagation.ipynb
+1 −1
@@ -10,7 +10,7 @@
 "\n",
 "## Stochastic Gradient Descent\n",
 "\n",
-"The goal of an optimization algorithm is to nd parameter values that make the loss function as low as possible. For some types of models, an optimization algorithm can find the global minimum of the loss function, but for neural networks, the most efficient way to drive the loss toward a local minimum is to minimize it one training example at a time.\n",
+"The goal of an optimization algorithm is to find parameter values that make the loss function as low as possible. For some types of models, an optimization algorithm can find the global minimum of the loss function, but for neural networks, the most efficient way to drive the loss toward a local minimum is to minimize it one training example at a time.\n",
 "\n",
 "Gradient descent uses the following update rule to minimize the loss function:"