Commit ea82a79

Update readme to equations rendering
1 parent b99a731 commit ea82a79

1 file changed: +12 −5 lines changed

docs/Deep Learning/Learning rule in ANN/Learning-Rules.md

Lines changed: 12 additions & 5 deletions
@@ -28,13 +28,13 @@ Where:

 1. **Oja's Rule**: A modification of Hebbian learning that includes weight normalization:

-$$ \Delta w_{ij} = \eta(x_i x_j - \alpha y_j^2 w_{ij}) $$
+$$\Delta w_{ij} = \eta(x_i x_j - \alpha y_j^2 w_{ij})$$

 Where $y_j$ is the output of neuron $j$ and $\alpha$ is a forgetting factor.

 2. **Generalized Hebbian Algorithm (GHA)**: Extends Oja's rule to multiple outputs:

-$$ \Delta W = \eta(xy^T - \text{lower}(Wy^Ty)) $$
+$$\Delta W = \eta(xy^T - \text{lower}(Wy^Ty))$$

 Where $\text{lower}()$ denotes the lower triangular part of a matrix.
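
Not part of the commit, but for context on the two rules whose rendering this hunk fixes, here is a minimal NumPy sketch. The conventions are assumptions: outputs are computed as $y = Wx$ with one weight row per neuron, $\alpha = 1$ in Oja's rule, and the README's $\text{lower}()$ is read as the lower-triangular operator of the standard Sanger formulation of GHA.

```python
import numpy as np

rng = np.random.default_rng(0)
eta = 0.01                     # learning rate (assumed value)

x = rng.normal(size=5)         # one input sample

# Oja's rule for a single linear neuron: dw = eta * y * (x - y * w),
# a Hebbian term with a y^2-weighted decay (alpha = 1 assumed)
w = rng.normal(size=5)
y = w @ x
w += eta * y * (x - y * w)

# Generalized Hebbian Algorithm (Sanger's rule) for 3 outputs:
# dW = eta * (y x^T - tril(y y^T) W); tril() is the lower-triangular
# part that the README's lower() notation refers to
W = rng.normal(size=(3, 5))
y = W @ x
W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
```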

@@ -63,9 +63,16 @@ Where:

 1. Initialize weights randomly
 2. For each training example:
-   a. Calculate the output: $y = \mathbf{w}^T\mathbf{x}$
-   b. Update weights: $\mathbf{w}_\text{new} = \mathbf{w}_\text{old} + \eta(d - y)\mathbf{x}$
-3. Repeat step 2 until convergence or a maximum number of epochs is reached
+
+   a. Calculate the output:
+
+   $y = \mathbf{w}^T\mathbf{x}$
+
+   b. Update weights:
+
+   $$w_{new} = w_{old} + \eta(d - y)x$$
+
+4. Repeat step 2 until convergence or a maximum number of epochs is reached

 ### Comparison with Perceptron Learning
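
For reference, the Widrow-Hoff procedure spelled out in this hunk maps directly onto a short NumPy loop. The toy data, learning rate, epoch limit, and convergence threshold below are assumptions for illustration, not part of the README.

```python
import numpy as np

rng = np.random.default_rng(0)
eta, max_epochs = 0.05, 100            # assumed hyperparameters

X = rng.normal(size=(20, 3))           # toy training inputs
d = X @ np.array([1.0, -2.0, 0.5])     # targets from a known linear map

w = rng.normal(size=3)                 # 1. initialize weights randomly
for epoch in range(max_epochs):        # repeat until convergence/limit
    for x, target in zip(X, d):        # 2. for each training example:
        y = w @ x                      #    a. calculate output y = w^T x
        w += eta * (target - y) * x    #    b. w_new = w_old + eta*(d - y)*x
    if np.mean((X @ w - d) ** 2) < 1e-10:
        break                          # mean squared error has converged
```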
