diff --git a/01_pytorch_workflow.ipynb b/01_pytorch_workflow.ipynb
index 179f6ba2..f5cac863 100644
--- a/01_pytorch_workflow.ipynb
+++ b/01_pytorch_workflow.ipynb
@@ -901,7 +901,7 @@
     "| ----- | ----- | ----- | ----- |\n",
     "| 1 | Forward pass | The model goes through all of the testing data once, performing its `forward()` function calculations. | `model(x_test)` |\n",
     "| 2 | Calculate the loss | The model's outputs (predictions) are compared to the ground truth and evaluated to see how wrong they are. | `loss = loss_fn(y_pred, y_test)` | \n",
-    "| 3 | Calulate evaluation metrics (optional) | Alongside the loss value you may want to calculate other evaluation metrics such as accuracy on the test set. | Custom functions |\n",
+    "| 3 | Calculate evaluation metrics (optional) | Alongside the loss value you may want to calculate other evaluation metrics such as accuracy on the test set. | Custom functions |\n",
     "\n",
     "Notice the testing loop doesn't contain performing backpropagation (`loss.backward()`) or stepping the optimizer (`optimizer.step()`), this is because no parameters in the model are being changed during testing, they've already been calculated. For testing, we're only interested in the output of the forward pass through the model.\n",
     "\n",
@@ -2470,7 +2470,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.7.4"
+   "version": "3.12.7"
   }
  },
  "nbformat": 4,
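
For reference, the testing-loop steps the patched table describes can be sketched in a few lines of PyTorch. This is a minimal, self-contained example, not the notebook's own code; the linear model, dummy test data, and L1 loss are illustrative assumptions:

```python
# Minimal sketch of a PyTorch testing loop (placeholder model/data, not from the notebook).
import torch
from torch import nn

model = nn.Linear(in_features=1, out_features=1)  # stand-in model
loss_fn = nn.L1Loss()                             # stand-in loss function (MAE)

X_test = torch.arange(0.8, 1.0, 0.02).unsqueeze(dim=1)  # dummy test inputs
y_test = 0.7 * X_test + 0.3                             # dummy ground-truth targets

model.eval()                  # turn off training-only behaviour (e.g. dropout)
with torch.inference_mode():  # no gradient tracking needed during evaluation
    test_pred = model(X_test)                # 1. forward pass
    test_loss = loss_fn(test_pred, y_test)   # 2. calculate the loss
    # 3. (optional) calculate other evaluation metrics here, e.g. a custom accuracy function
    # note: no loss.backward() or optimizer.step(); parameters are not updated during testing

print(f"Test loss: {test_loss.item():.4f}")
```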