
Commit f4b317c ("Numpy post")
1 parent 7471bd4


43 files changed: +401 −4 lines

_posts/2019-05-27-illustrated-gradient-descent.md (+2 −2)
@@ -601,12 +601,12 @@ When we start training the model, we do not know the best initial value of its p
 <div class="img-div-any-width" markdown="0">
 <image src="/images/grad/gradient-descent-random-init-detailed.png"/>
 <br />
-Steps #2 and #3 are optional. We don't really need them as we'll repeat them in the gradient descent step. I'm including them here to set up the following figure.
+Calculating MSE in this step is optional. We don't really need it as we'll repeat it in the gradient descent step. I'm including it here to set up the following figure.
 </div>
 
 
 
-One common way to visualize gradient descent is to think of it as being teleported to a mountain and trying to climb down -- with lower altitudes corresponding to less error. Think of as this, where the X-axis is the parameter value, and the Y-axis is the Error/loss at that value. Our goal is minimize the error/loss function, which in this case is Mean Square Error (there are other loss functions).
+One common way to visualize gradient descent is to think of it as being teleported to a mountain and trying to climb down -- with lower altitudes corresponding to less error. Think of as this, where the X-axis is the parameter value, and the Y-axis is the Error/loss at that value. Our goal is to minimize the error/loss function, which in this case is Mean Square Error (there are other loss functions).
 
 
 <div class="img-div-any-width" markdown="0">
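The changed paragraph describes walking downhill on the MSE surface, with the parameter value on the X-axis and the error on the Y-axis. That idea can be sketched in a few lines of numpy; this is an illustrative sketch with made-up data and names, not the post's actual code, fitting a single parameter `w` for the model `y ≈ w * x`:

```python
import numpy as np

# Hypothetical training data where the true relationship is y = 2 * x.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])

w = 0.0    # initial parameter: a random spot on the "mountain"
lr = 0.01  # learning rate: how big a step we take downhill

for _ in range(200):
    error = w * x - y                # prediction error for each example
    mse = np.mean(error ** 2)        # Mean Square Error: our "altitude"
    grad = 2 * np.mean(error * x)    # d(MSE)/dw: the slope under our feet
    w -= lr * grad                   # step in the downhill direction

print(round(w, 3))  # converges toward the true parameter, 2.0
```

Each iteration moves `w` opposite to the gradient, so the MSE shrinks until `w` settles near the bottom of the error curve.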
