class: middle, center, title-slide
Lecture 2: Multi-layer perceptron
Prof. Gilles Louppe
[email protected]
Explain and motivate the basic constructs of neural networks.
- From linear discriminant analysis to logistic regression
- Stochastic gradient descent
- From logistic regression to the multi-layer perceptron
- Vanishing gradients and rectified networks
- Universal approximation theorem
class: middle
The Mark I Perceptron (Rosenblatt, 1960) is one of the earliest instances of a neural network.
.footnote[Credits: Frank Rosenblatt, Mark I Perceptron operators' manual, 1960.]
???
A perceptron is a signal transmission network consisting of sensory units (S units), association units (A units), and output or response units (R units). The ‘retina’ of the perceptron is an array of sensory elements (photocells). An S-unit produces a binary output depending on whether or not it is excited. A randomly selected set of retinal cells is connected to the next level of the network, the A units. As originally proposed there were extensive connections among the A units, the R units, and feedback between the R units and the A units.
In essence, an association unit is also an MCP neuron: it outputs 1 if a single specific pattern of inputs is received, and 0 for all other possible patterns of inputs. Each association unit has a certain number of inputs, selected from all the inputs to the perceptron. So the number of inputs to a particular association unit does not have to be the same as the total number of inputs to the perceptron, but it must clearly be less than or equal to that total. Each association unit's output then becomes the input to a single MCP neuron, and the output from this single MCP neuron is the output of the perceptron. So a perceptron consists of a "layer" of MCP neurons, all of which send their output to a single MCP neuron.
class: middle, center, black-slide
.grid[
.kol-1-2[.width-100[]]
.kol-1-2[
.width-100[]]
]
The Mark I Perceptron was implemented in hardware.
class: middle, center, black-slide
<iframe width="600" height="450" src="https://www.youtube.com/embed/cNxadbrN_aI" frameborder="0" allowfullscreen></iframe>

The machine could learn to classify simple images.
class: middle
The Mark I Perceptron is composed of association and response units (or "perceptrons"), each acting as a binary classifier that computes a linear combination of its inputs and applies a step function to the result.
In the modern sense, given an input $\mathbf{x} \in \mathbb{R}^p$, the unit computes $$f(\mathbf{x}) = \begin{cases} 1 &\text{if } \mathbf{w}^T \mathbf{x} + b \geq 0 \\ 0 &\text{otherwise,} \end{cases}$$ where $\mathbf{w} \in \mathbb{R}^p$ and $b \in \mathbb{R}$ are its parameters.
class: middle
The classification rule can be rewritten as $$f(\mathbf{x}) = \text{step}(\mathbf{w}^T \mathbf{x} + b),$$ where $\text{step}$ denotes the Heaviside step function.
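A minimal NumPy sketch of this unit (the weights, bias and input below are illustrative, not from the original figures):

```python
import numpy as np

def perceptron(x, w, b):
    """Linear combination of the inputs, followed by a step function."""
    return 1 if w @ x + b >= 0 else 0

# Illustrative parameters and input.
w = np.array([1.0, -2.0])
b = 0.5
x = np.array([3.0, 1.0])
print(perceptron(x, w, b))  # 1, since 1*3 - 2*1 + 0.5 = 1.5 >= 0
```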
class: middle
.grid[
.kol-3-5[.width-90[]]
.kol-2-5[
The computation of $f$ can be represented as a computational graph, where:
- white nodes correspond to inputs and outputs;
- red nodes correspond to model parameters;
- blue nodes correspond to intermediate operations. ] ]
???
Draw the NN diagram.
class: middle
In terms of tensor operations, $f$ can be expressed compactly as $f(\mathbf{x}; \mathbf{w}, b) = \text{step}(\mathbf{w}^T \mathbf{x} + b)$, grouping the element-wise products and sums into a single matrix-vector operation.
???
Ask about the intuitive meaning of the weights $\mathbf{w}$ and the bias $b$.
Consider training data $(\mathbf{x}, y)$, with
- $\mathbf{x} \in \mathbb{R}^p$,
- $y \in \{0,1\}$.
Assume that the class populations are Gaussian with the same covariance matrix $\mathbf{\Sigma}$ (homoscedasticity), such that $$p(\mathbf{x}|y) = \frac{1}{\sqrt{(2\pi)^p |\mathbf{\Sigma}|}} \exp\left(-\frac{1}{2}(\mathbf{x} - \mathbf{\mu}_y)^T \mathbf{\Sigma}^{-1} (\mathbf{x} - \mathbf{\mu}_y)\right).$$
???
Switch to blackboard.
Using Bayes' rule, we have: $$p(y=1|\mathbf{x}) = \frac{p(\mathbf{x}|y=1)\, p(y=1)}{p(\mathbf{x}|y=0)\, p(y=0) + p(\mathbf{x}|y=1)\, p(y=1)}.$$
--
count: false
It follows that with $$\sigma(x) = \frac{1}{1 + \exp(-x)},$$
we get $$p(y=1|\mathbf{x}) = \sigma\left(\log \frac{p(\mathbf{x}|y=1)}{p(\mathbf{x}|y=0)} + \log \frac{p(y=1)}{p(y=0)}\right).$$
class: middle
Therefore, under these assumptions, the posterior reduces to $$p(y=1|\mathbf{x}) = \sigma(\mathbf{w}^T \mathbf{x} + b),$$ for some $\mathbf{w}$ and $b$ that are functions of $\mathbf{\mu}_0$, $\mathbf{\mu}_1$, $\mathbf{\Sigma}$ and the class priors.
class: middle, center
class: middle
Note that the sigmoid function $\sigma(x) = \frac{1}{1 + \exp(-x)}$ looks like a soft version of the step function.

Therefore, the overall model $f(\mathbf{x}; \mathbf{w}, b) = \sigma(\mathbf{w}^T \mathbf{x} + b)$ is very similar to the perceptron, with a soft decision boundary instead of a hard one.
class: middle, center
This unit is the main primitive of all neural networks!
Same model: $$p(y=1|\mathbf{x}) = \sigma(\mathbf{w}^T \mathbf{x} + b).$$
But,
- ignore the model assumptions (Gaussian class populations, homoscedasticity);
- instead, find $\mathbf{w}, b$ that maximize the likelihood of the data.
???
Switch to blackboard.
class: middle
We have,
$$\arg\max_{\mathbf{w},b} p(\mathbf{d}|\mathbf{w},b) = \arg\min_{\mathbf{w},b} \sum_{(\mathbf{x}_i, y_i) \in \mathbf{d}} -y_i \log \hat{y}_i - (1-y_i) \log(1 - \hat{y}_i),$$
where $\hat{y}_i = \sigma(\mathbf{w}^T \mathbf{x}_i + b)$.

This loss is an instance of the cross-entropy $H(p, q) = \mathbb{E}_p[-\log q]$ between the target distribution and the predicted distribution.
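As a sketch, this negative log-likelihood can be evaluated as follows (the clipping constant `eps` is an assumed implementation detail for numerical safety):

```python
import numpy as np

def binary_cross_entropy(y, y_hat, eps=1e-12):
    """Average negative log-likelihood of Bernoulli targets y under predictions y_hat."""
    y_hat = np.clip(y_hat, eps, 1 - eps)  # avoid log(0)
    return -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

y = np.array([1.0, 0.0, 1.0, 1.0])
y_hat = np.array([0.9, 0.2, 0.7, 0.6])
print(binary_cross_entropy(y, y_hat))  # ~0.299
```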
So far we considered the logistic unit $h = \sigma(\mathbf{w}^T \mathbf{x} + b)$, where $h \in \mathbb{R}$, $\mathbf{x} \in \mathbb{R}^p$, $\mathbf{w} \in \mathbb{R}^p$ and $b \in \mathbb{R}$.

These units can be composed in parallel to form a layer with $q$ outputs: $$\mathbf{h} = \sigma(\mathbf{W}^T \mathbf{x} + \mathbf{b}),$$ where $\mathbf{W} \in \mathbb{R}^{p \times q}$, $\mathbf{b} \in \mathbb{R}^q$, and where $\sigma(\cdot)$ is applied element-wise.
.center.width-70[]
???
Draw the NN diagram.
class: middle
Similarly, layers can be composed in series, such that:
$$\begin{aligned}
\mathbf{h}_0 &= \mathbf{x} \\
\mathbf{h}_1 &= \sigma(\mathbf{W}_1^T \mathbf{h}_0 + \mathbf{b}_1) \\
... \\
\mathbf{h}_L &= \sigma(\mathbf{W}_L^T \mathbf{h}_{L-1} + \mathbf{b}_L) \\
f(\mathbf{x}; \theta) = \hat{y} &= \mathbf{h}_L
\end{aligned}$$
where $\theta$ denotes the set of model parameters $\{ \mathbf{W}_k, \mathbf{b}_k \,|\, k = 1, \ldots, L \}$.
This model is the multi-layer perceptron, also known as the fully connected feedforward network.
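As a sketch, the forward pass of this model can be written in a few lines of NumPy, assuming sigmoid activations at every layer and illustrative layer sizes:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, weights, biases):
    """h_0 = x; h_k = sigma(W_k^T h_{k-1} + b_k) for k = 1, ..., L."""
    h = x
    for W, b in zip(weights, biases):
        h = sigmoid(W.T @ h + b)
    return h

rng = np.random.default_rng(0)
sizes = [3, 5, 4, 1]  # illustrative: p=3 inputs, two hidden layers, one output
weights = [rng.normal(size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]
print(mlp_forward(rng.normal(size=3), weights, biases))
```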
???
Draw the NN diagram.
class: middle
- For binary classification, the width $q$ of the last layer $L$ is set to $1$ and the activation function is the sigmoid $\sigma(\cdot) = \frac{1}{1 + \exp(-\cdot)}$, which results in a single output $h_L \in [0,1]$ that models the probability $p(y=1|\mathbf{x})$.
- For multi-class classification, the sigmoid activation $\sigma$ in the last layer can be generalized to produce a vector $\mathbf{h}_L \in \bigtriangleup^C$ of probability estimates $p(y=i|\mathbf{x})$. This activation is the $\text{Softmax}$ function, whose $i$-th output is defined as $$\text{Softmax}(\mathbf{z})_i = \frac{\exp(z_i)}{\sum_{j=1}^C \exp(z_j)},$$ for $i = 1, \ldots, C$.
- For regression, the width $q$ of the last layer $L$ is set to the dimensionality of the output $d_\text{out}$ and the activation function is the identity $\sigma(\cdot) = \cdot$, which results in a vector $\mathbf{h}_L \in \mathbb{R}^{d_\text{out}}$.
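A sketch of the Softmax; subtracting the maximum logit is an assumed implementation detail that leaves the result unchanged but prevents overflow:

```python
import numpy as np

def softmax(z):
    """Map a vector of logits z to a probability vector on the simplex."""
    e = np.exp(z - np.max(z))  # shift-invariant, avoids overflow in exp
    return e / np.sum(e)

z = np.array([2.0, 1.0, 0.1])
p = softmax(z)
print(p, p.sum())  # three probability estimates, summing to 1
```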
???
Draw each.
class: middle, center
(demo)
class: middle
Let us consider the 1-hidden layer MLP $$f(x) = \sum_i w_i \text{ReLU}(x + b_i).$$ This model can approximate any smooth 1D function, provided enough hidden units.
class: middle
.bold[Universal approximation theorem.] (Cybenko, 1989; Hornik et al., 1991) Let $\sigma(\cdot)$ be a bounded, non-constant continuous function. Then, for any continuous function $f$ on the $p$-dimensional hypercube $[0,1]^p$ and any $\epsilon > 0$, there exist $q \in \mathbb{N}$, $v_i, b_i \in \mathbb{R}$ and $\mathbf{w}_i \in \mathbb{R}^p$ such that $$F(\mathbf{x}) = \sum_{i=1}^q v_i \sigma(\mathbf{w}_i^T \mathbf{x} + b_i)$$ satisfies $\sup_{\mathbf{x} \in [0,1]^p} |f(\mathbf{x}) - F(\mathbf{x})| < \epsilon$.
- It guarantees that even a single hidden-layer network can represent any classification problem in which the boundary is locally linear (smooth);
- It does not inform about good/bad architectures, nor how they relate to the optimization procedure.
- The universal approximation theorem generalizes to any non-polynomial (possibly unbounded) activation function, including the ReLU (Leshno, 1993).
class: middle
The parameters (e.g., $\mathbf{W}_k$ and $\mathbf{b}_k$ for each layer $k$) of an MLP $f(\mathbf{x}; \theta)$ are learned by minimizing a loss function $\mathcal{L}(\theta)$ over the training data $\mathbf{d} = \{ (\mathbf{x}_j, \mathbf{y}_j) \}$.
The loss function is derived from the likelihood:
- For classification, assuming a categorical likelihood, the loss is the cross-entropy $\mathcal{L}(\theta) = -\frac{1}{N} \sum_{(\mathbf{x}_j, \mathbf{y}_j) \in \mathbf{d}} \sum_{i=1}^C y_{ji} \log f_{i}(\mathbf{x}_j; \theta)$.
- For regression, assuming a Gaussian likelihood, the loss is the mean squared error $\mathcal{L}(\theta) = \frac{1}{N} \sum_{(\mathbf{x}_j, \mathbf{y}_j) \in \mathbf{d}} ||\mathbf{y}_j - f(\mathbf{x}_j; \theta)||^2$.
???
Switch to blackboard.
To minimize $\mathcal{L}(\theta)$, gradient descent uses local linear information to iteratively move towards a (local) minimizer.

For $\theta_0 \in \mathbb{R}^d$, a first-order approximation of the loss around $\theta_0$ is $$\hat{\mathcal{L}}(\epsilon; \theta_0) = \mathcal{L}(\theta_0) + \epsilon^T \nabla_\theta \mathcal{L}(\theta_0) + \frac{1}{2\gamma} ||\epsilon||^2.$$
???
Switch to blackboard.
class: middle
A minimizer of the approximation $\hat{\mathcal{L}}(\epsilon; \theta_0)$ satisfies $$0 = \nabla_\epsilon \hat{\mathcal{L}}(\epsilon; \theta_0) = \nabla_\theta \mathcal{L}(\theta_0) + \frac{1}{\gamma} \epsilon,$$ that is, $\epsilon = -\gamma \nabla_\theta \mathcal{L}(\theta_0)$.

Therefore, model parameters can be updated iteratively using the update rule $$\theta_{t+1} = \theta_t - \gamma \nabla_\theta \mathcal{L}(\theta_t),$$ where
- $\theta_0$ are the initial parameters of the model;
- $\gamma$ is the learning rate;
- both are critical for the convergence of the update rule.
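A sketch of this update rule on an illustrative quadratic loss (the loss, initial parameters and learning rate below are assumptions made for the example):

```python
import numpy as np

def grad_L(theta):
    """Gradient of the toy loss L(theta) = 0.5 * ||theta - 3||^2."""
    return theta - 3.0

theta = np.array([0.0])  # theta_0: initial parameters
gamma = 0.1              # learning rate
for t in range(50):
    theta = theta - gamma * grad_L(theta)  # theta_{t+1} = theta_t - gamma * grad
print(theta)  # close to the minimizer theta* = 3
```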
class: center, middle
Example 1: Convergence to a local minimum
class: center, middle
Example 2: Convergence to the global minimum
class: center, middle
Example 3: Divergence due to an overly large learning rate
class: middle
In the empirical risk minimization setup, the loss and its gradient decompose as $$\mathcal{L}(\theta) = \frac{1}{N} \sum_{(\mathbf{x}_i, y_i) \in \mathbf{d}} \ell(y_i, f(\mathbf{x}_i; \theta)), \qquad \nabla \mathcal{L}(\theta) = \frac{1}{N} \sum_{(\mathbf{x}_i, y_i) \in \mathbf{d}} \nabla \ell(y_i, f(\mathbf{x}_i; \theta)).$$ Therefore, in batch gradient descent, the complexity of one iteration grows linearly with the size $N$ of the dataset.
class: middle
Since the empirical risk is already an approximation of the expected risk, it should not be necessary to carry out the minimization with great accuracy.
Instead, stochastic gradient descent (SGD) uses as update rule:
$$\theta_{t+1} = \theta_t - \gamma \nabla \ell(y_{i(t+1)}, f(\mathbf{x}_{i(t+1)}; \theta_t))$$
- Iteration complexity is independent of $N$.
- The stochastic process $\{ \theta_t \,|\, t = 1, \ldots \}$ depends on the examples $i(t)$ picked randomly at each iteration.
--
.grid.center.italic[
.kol-1-2[.width-100[]
Batch gradient descent]
.kol-1-2[.width-100[]
Stochastic gradient descent ] ]
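A sketch of SGD on an illustrative least-squares problem, drawing one example per update (data, model and hyper-parameters are assumptions made for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 1000, 3
X = rng.normal(size=(N, p))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=N)

theta = np.zeros(p)
gamma = 0.01
for t in range(5000):
    i = rng.integers(N)                  # index i(t+1), picked uniformly at random
    grad = (X[i] @ theta - y[i]) * X[i]  # gradient of the per-example squared loss
    theta = theta - gamma * grad         # cost per iteration independent of N
print(theta)  # close to w_true
```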
class: middle
Why is stochastic gradient descent still a good idea?
- Informally, averaging the update
$$\theta_{t+1} = \theta_t - \gamma \nabla \ell(y_{i(t+1)}, f(\mathbf{x}_{i(t+1)}; \theta_t))$$
over all choices $i(t+1)$ restores batch gradient descent.
- Formally, if the gradient estimate is unbiased, that is, if
$$\begin{aligned}
\mathbb{E}_{i(t+1)}[\nabla \ell(y_{i(t+1)}, f(\mathbf{x}_{i(t+1)}; \theta_t))] &= \frac{1}{N} \sum_{(\mathbf{x}_i, y_i) \in \mathbf{d}} \nabla \ell(y_i, f(\mathbf{x}_i; \theta_t)) \\
&= \nabla \mathcal{L}(\theta_t)
\end{aligned}$$
then the formal convergence of SGD can be proved, under appropriate assumptions.
- If training is limited to a single pass over the data, then SGD directly minimizes the expected risk.
class: middle
The excess error characterizes the expected risk discrepancy between the Bayes model and the approximate empirical risk minimizer. It can be decomposed as $$\begin{aligned} &\mathbb{E}\left[ R(\tilde{f}_*^\mathbf{d}) - R(f_B) \right] \\ &= \mathbb{E}\left[ R(f_*) - R(f_B) \right] + \mathbb{E}\left[ R(f_*^\mathbf{d}) - R(f_*) \right] + \mathbb{E}\left[ R(\tilde{f}_*^\mathbf{d}) - R(f_*^\mathbf{d}) \right] \\ &= \mathcal{E}_\text{app} + \mathcal{E}_\text{est} + \mathcal{E}_\text{opt} \end{aligned}$$ where
- $\mathcal{E}_\text{app}$ is the approximation error due to the choice of a hypothesis space,
- $\mathcal{E}_\text{est}$ is the estimation error due to the empirical risk minimization principle,
- $\mathcal{E}_\text{opt}$ is the optimization error due to the approximate optimization algorithm.
class: middle
A fundamental result due to Bottou and Bousquet (2011) states that stochastic optimization algorithms (e.g., SGD) yield strong generalization performance (in terms of excess error) despite being poor optimization algorithms for minimizing the empirical risk.
To minimize $\mathcal{L}(\theta)$ with stochastic gradient descent, we need the gradient $\nabla_\theta \ell$ of the per-example loss with respect to all model parameters $\theta$.

These derivatives can be evaluated automatically from the computational graph of $\ell$.
class: middle
- In Leibniz notations, the chain rule states that $$ \begin{aligned} \frac{\partial \ell}{\partial \theta_i} &= \sum_{k \in \text{parents}(\ell)} \frac{\partial \ell}{\partial u_k} \underbrace{\frac{\partial u_k}{\partial \theta_i}}_{\text{recursive case}} \end{aligned}$$
- Since a neural network is a composition of differentiable functions, the total derivatives of the loss can be evaluated backward, by applying the chain rule recursively over its computational graph.
- The implementation of this procedure is called reverse automatic differentiation (or backpropagation in the context of neural networks).
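A minimal sketch of reverse-mode automatic differentiation on scalars; the `Var` class and its methods are illustrative, and the backward traversal assumes a chain-structured graph (a general DAG would require a reverse topological ordering):

```python
import math

class Var:
    """A node of the computational graph, for reverse-mode autodiff."""
    def __init__(self, value, parents=()):
        self.value = value      # computed during the forward pass
        self.parents = parents  # pairs (parent node, local partial derivative)
        self.grad = 0.0         # accumulated d(output) / d(self)

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

def sigmoid(x):
    s = 1.0 / (1.0 + math.exp(-x.value))
    return Var(s, [(x, s * (1.0 - s))])

def backward(out):
    """Apply the chain rule recursively, from the output back to the leaves."""
    out.grad = 1.0
    stack = [out]
    while stack:
        node = stack.pop()
        for parent, local in node.parents:
            parent.grad += node.grad * local
            stack.append(parent)

# d/dw of sigmoid(w * x) at w = 0.5, x = 2.0
w, x = Var(0.5), Var(2.0)
y = sigmoid(w * x)
backward(y)
print(y.value, w.grad)  # w.grad == x.value * sigma'(w * x) ~ 0.393
```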
class: middle
Let us consider a simplified 1-hidden layer MLP and the following loss function:
$$\begin{aligned}
f(\mathbf{x}; \mathbf{W}_1, \mathbf{W}_2) &= \sigma\left( \mathbf{W}_2^T \sigma\left( \mathbf{W}_1^T \mathbf{x} \right)\right) \\
\ell(y, \hat{y}; \mathbf{W}_1, \mathbf{W}_2) &= \text{cross\_ent}(y, \hat{y}) + \lambda \left( ||\mathbf{W}_1||_2 + ||\mathbf{W}_2||_2 \right)
\end{aligned}$$
for a regularization coefficient $\lambda > 0$.
class: middle
In the forward pass, intermediate values are all computed from inputs to outputs, which results in the annotated computational graph below:
class: middle
The partial derivatives can be computed through a backward pass, by walking through all paths from outputs to parameters in the computational graph and accumulating the terms.
For example, the derivative $\frac{\partial \ell}{\partial \mathbf{W}_1}$ accumulates contributions along two paths: one through the cross-entropy term and one through the regularization term $\lambda ||\mathbf{W}_1||_2$.
class: middle
Let us zoom in on the computation of the network output $\hat{y}$ and of its derivative with respect to $\mathbf{W}_1$.
- Forward pass: values $u_1$, $u_2$, $u_3$ and $\hat{y}$ are computed by traversing the graph from inputs to outputs given $\mathbf{x}$, $\mathbf{W}_1$ and $\mathbf{W}_2$.
- Backward pass: by the chain rule we have $$\begin{aligned} \frac{\partial \hat{y}}{\partial \mathbf{W}_1} &= \frac{\partial \hat{y}}{\partial u_3} \frac{\partial u_3}{\partial u_2} \frac{\partial u_2}{\partial u_1} \frac{\partial u_1}{\partial \mathbf{W}_1} \\ &= \frac{\partial \sigma(u_3)}{\partial u_3} \frac{\partial \mathbf{W}_2^T u_2}{\partial u_2} \frac{\partial \sigma(u_1)}{\partial u_1} \frac{\partial \mathbf{W}_1^T \mathbf{x}}{\partial \mathbf{W}_1} \end{aligned}$$ Note how evaluating the partial derivatives requires the intermediate values computed forward.
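The same forward and backward passes in NumPy, under assumed shapes ($p=3$ inputs, $q=4$ hidden units, a scalar output); note how `u2` and `y_hat`, computed forward, are reused in the backward pass:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=(3,))
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(4, 1))

# Forward pass: store the intermediate values.
u1 = W1.T @ x        # (4,)
u2 = sigmoid(u1)     # (4,)
u3 = W2.T @ u2       # (1,)
y_hat = sigmoid(u3)  # (1,)

# Backward pass: accumulate the chain rule from y_hat down to W1.
d_u3 = y_hat * (1 - y_hat)   # d y_hat / d u3, reuses the forward value y_hat
d_u2 = W2[:, 0] * d_u3       # d y_hat / d u2
d_u1 = d_u2 * u2 * (1 - u2)  # d y_hat / d u1, reuses the forward value u2
dW1 = np.outer(x, d_u1)      # d y_hat / d W1, shape (3, 4)
print(dW1)
```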
For a long time (pre-2011), training deep MLPs with many layers was very difficult due to the vanishing gradient problem.
- Small gradients slow down, and eventually block, stochastic gradient descent.
- This results in a limited capacity of learning.
.width-100[]
.caption[Normalized histograms of backpropagated gradients (Glorot and Bengio, 2010).
Gradients for layers far from the output vanish to zero. ]
class: middle
Let us consider a simplified 2-hidden layer MLP, with $x, w_1, w_2, w_3 \in \mathbb{R}$, such that $$f(x; w_1, w_2, w_3) = \sigma\left(w_3\,\sigma\left(w_2\,\sigma\left(w_1 x\right)\right)\right).$$
Under the hood, this would be evaluated as
$$\begin{aligned}
u_1 &= w_1 x \\
u_2 &= \sigma(u_1) \\
u_3 &= w_2 u_2 \\
u_4 &= \sigma(u_3) \\
u_5 &= w_3 u_4 \\
\hat{y} &= \sigma(u_5)
\end{aligned}$$
and its derivative with respect to $w_1$ as $$\frac{\partial \hat{y}}{\partial w_1} = \frac{\partial \hat{y}}{\partial u_5} \frac{\partial u_5}{\partial u_4} \frac{\partial u_4}{\partial u_3} \frac{\partial u_3}{\partial u_2} \frac{\partial u_2}{\partial u_1} \frac{\partial u_1}{\partial w_1} = \frac{\partial \sigma(u_5)}{\partial u_5}\, w_3\, \frac{\partial \sigma(u_3)}{\partial u_3}\, w_2\, \frac{\partial \sigma(u_1)}{\partial u_1}\, x.$$
class: middle
The derivative of the sigmoid activation function $\sigma$ is $$\frac{d\sigma}{dx}(x) = \sigma(x)(1 - \sigma(x)).$$

Notice that $0 \leq \frac{d\sigma}{dx}(x) \leq \frac{1}{4}$ for all $x$.
class: middle
Assume that weights $w_1, w_2, w_3$ are initialized randomly from a Gaussian with zero mean and small variance, such that with high probability $-1 \leq w_i \leq 1$.

Then, $$\frac{\partial \hat{y}}{\partial w_1} = \underbrace{\frac{\partial \sigma(u_5)}{\partial u_5}}_{\leq \frac{1}{4}} \underbrace{w_3}_{\leq 1} \underbrace{\frac{\partial \sigma(u_3)}{\partial u_3}}_{\leq \frac{1}{4}} \underbrace{w_2}_{\leq 1} \underbrace{\frac{\partial \sigma(u_1)}{\partial u_1}}_{\leq \frac{1}{4}} x.$$

This implies that the derivative $\frac{\partial \hat{y}}{\partial w_1}$ shrinks exponentially to zero as the number of layers increases.
Hence the vanishing gradient problem.
- In general, bounded activation functions (sigmoid, tanh, etc) are prone to the vanishing gradient problem.
- Note the importance of a proper initialization scheme.
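A numerical illustration of this shrinkage: chaining sigmoid units with small random weights and tracking the running product of the chain-rule factors $w_k \frac{d\sigma}{dx}(u_k)$ (the depth and weight scale are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
u, grad = 1.0, 1.0
for depth in range(1, 21):
    w = rng.normal(scale=0.5)  # small random weight
    u = w * u
    s = sigmoid(u)
    grad *= w * s * (1 - s)    # per-layer chain-rule factor, at most |w| / 4
    u = s
    if depth % 5 == 0:
        print(f"depth {depth:2d}: |running product| = {abs(grad):.2e}")
```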
Instead of the sigmoid activation function, modern neural networks use the rectified linear unit (ReLU) activation function, defined as $$\text{ReLU}(x) = \max(0, x).$$
class: middle
Note that the derivative of the ReLU function is
$$\frac{\partial }{\partial x} \text{ReLU}(x) = \begin{cases}
0 &\text{if } x \leq 0 \\
1 &\text{otherwise}
\end{cases}$$
.center[]
For $x = 0$, the derivative is undefined; in practice, it is arbitrarily set to zero.
class: middle
Therefore, with ReLU activations, each factor $\frac{\partial \sigma(u_k)}{\partial u_k}$ in the chain is either $0$ or $1$, so that on the active path $$\frac{\partial \hat{y}}{\partial w_1} = \underbrace{\frac{\partial \sigma(u_5)}{\partial u_5}}_{=1}\, w_3\, \underbrace{\frac{\partial \sigma(u_3)}{\partial u_3}}_{=1}\, w_2\, \underbrace{\frac{\partial \sigma(u_1)}{\partial u_1}}_{=1}\, x = w_3 w_2 x,$$ which does not shrink with depth.
This solves the vanishing gradient problem, even for deep networks! (provided proper initialization)
Note that:
- The ReLU unit dies when its input is negative, which might block gradient descent.
- This is actually a useful property to induce sparsity.
- This issue can also be solved using leaky ReLUs, defined as $$\text{LeakyReLU}(x) = \max(\alpha x, x)$$ for a small $\alpha \in \mathbb{R}^+$ (e.g., $\alpha = 0.1$), as sketched below.
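A short sketch of both activations (the input values are illustrative):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.1):
    # max(alpha * x, x): keeps a small slope alpha for negative inputs,
    # so the unit cannot die completely.
    return np.maximum(alpha * x, x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))        # [ 0.    0.    0.    0.5   2.  ]
print(leaky_relu(x))  # [-0.2  -0.05  0.    0.5   2.  ]
```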
class: middle
Beyond preventing vanishing gradients, the choice of the activation function $\sigma$ also shapes the family of functions that the network can represent.
.footnote[Credits: Simon J.D. Prince, 2023.]
class: middle, center
(demo)
???
Don't forget the magic trick!
class: middle
.italic[ People are now building a new kind of software by .bold[assembling networks of parameterized functional blocks] and by .bold[training them from examples using some form of gradient-based optimization]. ]
.pull-right[Yann LeCun, 2018.]
class: end-slide, center count: false
The end.