`pytorch_tutorial/multilayer_perceptron/README.md`
## Scope and objective
This example trains a MultiLayer Perceptron (a feedforward neural network with one hidden layer) to classify 2D data. It is designed to mimic the experience of the [TensorFlow Playground](https://playground.tensorflow.org/#activation=tanh&batchSize=5&dataset=circle&regDataset=reg-plane&learningRate=0.1&regularizationRate=0&noise=0&networkShape=3&seed=0.94779&showTestData=false&discretize=false&percTrainData=50&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false). The complete source code is available [here](test_multilayer_perceptron.py).
A PyTorch model is defined by combining elementary blocks, known as *modules*.
Here, we use the [Sequential](https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html) class as a container of [Linear](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html) layers. The model output is a scalar value squashed into the $[0,1]$ range by the [Sigmoid](https://pytorch.org/docs/stable/generated/torch.nn.Sigmoid.html) activation function.
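The squashing behavior of the output activation can be checked directly. The snippet below is a minimal standalone sketch (the input values are illustrative) showing that `nn.Sigmoid` maps arbitrary real values into $[0,1]$, with $0$ mapped to $0.5$:

```python
import torch
from torch import nn

sigmoid = nn.Sigmoid()
logits = torch.tensor([-4.0, 0.0, 4.0])
probs = sigmoid(logits)

# All outputs lie strictly between 0 and 1; sigmoid(0) = 0.5
print(probs)
```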
### Model implementation
```python
# Create a MultiLayer Perceptron with 2 inputs, a hidden layer and 1 output
model = nn.Sequential(
    # Hidden layer (3 neurons with Tanh activation, as in the linked Playground example)
    nn.Linear(2, 3),
    nn.Tanh(),
    # Output layer
    nn.Linear(3, 1),
    # Activation function for the output layer
    nn.Sigmoid(),
).to(device)

# Print model architecture
print(model)
```
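As a quick sanity check, the following self-contained sketch (the 2 → 3 → 1 layer sizes and `Tanh` activation mirror the linked Playground configuration, and the batch size is arbitrary) shows that a batch of 2D samples maps to one probability per sample:

```python
import torch
from torch import nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(
    nn.Linear(2, 3),  # hidden layer: 2 inputs -> 3 neurons
    nn.Tanh(),
    nn.Linear(3, 1),  # output layer: 3 -> 1 scalar
    nn.Sigmoid(),     # squash into [0, 1]
).to(device)

x = torch.randn(5, 2, device=device)  # batch of 5 two-dimensional samples
y = model(x)

# One output in [0, 1] per sample: shape (5, 1)
print(y.shape)
```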
### Parameter count
The total number of parameters for this model is obtained by summing the parameter counts of its layers.

> The `get_parameter_count()` utility function was defined in a [previous example](../linear_regression/README.md#parameter-count).
```python
# Compute and print parameter count
n_params = get_parameter_count(model)
print(f"Model has {n_params} trainable parameters")
```
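For reference, a function like `get_parameter_count()` can be written as a one-liner over `model.parameters()`. The version below is a plausible re-implementation, not necessarily the tutorial's exact code:

```python
from torch import nn

def get_parameter_count(model: nn.Module) -> int:
    """Return the number of trainable parameters of a model (illustrative re-implementation)."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# For a 2 -> 3 -> 1 MLP: hidden layer has 2*3 weights + 3 biases = 9,
# output layer has 3*1 weights + 1 bias = 4, for a total of 13
mlp = nn.Sequential(nn.Linear(2, 3), nn.Tanh(), nn.Linear(3, 1), nn.Sigmoid())
print(get_parameter_count(mlp))  # 13
```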