1. **Oja's Rule**: A modification of Hebbian learning that includes weight normalization:
   $$ \Delta w_{ij} = \eta(x_i y_j - \alpha y_j^2 w_{ij}) $$
Where $y_j$ is the output of neuron $j$ and $\alpha$ is a forgetting factor.
2. **Generalized Hebbian Algorithm (GHA)**: Extends Oja's rule to multiple outputs:
   $$ \Delta W = \eta(yx^T - \text{lower}(yy^T)W) $$
Where $\text{lower}()$ denotes the lower triangular part of a matrix. Both updates are sketched in code below.
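
A minimal NumPy sketch of both update rules follows. The function names, learning rate `eta`, forgetting factor `alpha`, dimensions, and random data are illustrative assumptions, not values from the text:

```python
import numpy as np

def oja_update(w, x, eta=0.01, alpha=1.0):
    """One Oja's-rule step for a single output neuron.

    w: (n,) weight vector; x: (n,) input; output y = w . x.
    Implements delta_w = eta * (x * y - alpha * y**2 * w).
    """
    y = w @ x
    return w + eta * (x * y - alpha * y**2 * w)

def gha_update(W, x, eta=0.01):
    """One Generalized Hebbian Algorithm (Sanger's rule) step.

    W: (m, n) weights mapping n inputs to m outputs; y = W x.
    Implements delta_W = eta * (y x^T - lower(y y^T) W), where
    lower() keeps the lower-triangular part, diagonal included.
    """
    y = W @ x
    return W + eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

# Illustrative usage on random zero-mean inputs (assumed, not from the text):
rng = np.random.default_rng(0)
w = rng.standard_normal(5)
W = rng.standard_normal((3, 5))
for _ in range(1000):
    x = rng.standard_normal(5)
    w = oja_update(w, x)
    W = gha_update(W, x)
```

Iterated over many zero-mean inputs with a small learning rate, `w` tends toward the first principal component of the data, and the rows of `W` toward the leading principal components in order.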
1. Initialize weights randomly
2. For each training example:
   a. Calculate the output:
      $$ y = \mathbf{w}^T\mathbf{x} $$
   b. Update weights:
      $$ \mathbf{w}_{\text{new}} = \mathbf{w}_{\text{old}} + \eta(d - y)\mathbf{x} $$
3. Repeat step 2 until convergence or a maximum number of epochs is reached, as sketched below
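
Putting the three steps together, here is a minimal sketch of the training loop. The toy dataset (a noisy linear target), the learning rate, and the stopping tolerance are illustrative assumptions, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (assumed for illustration): targets from a noisy linear rule.
X = rng.standard_normal((100, 3))             # 100 examples, 3 features
d = X @ np.array([0.5, -1.0, 2.0]) + 0.01 * rng.standard_normal(100)

eta, max_epochs, tol = 0.01, 100, 1e-6
w = rng.standard_normal(3)                    # step 1: random initialization

for epoch in range(max_epochs):               # step 3: repeat over epochs
    largest_step = 0.0
    for x, target in zip(X, d):               # step 2: each training example
        y = w @ x                             #   a. calculate the output
        step = eta * (target - y) * x         #   b. weight update
        w += step
        largest_step = max(largest_step, np.abs(step).max())
    if largest_step < tol:                    # convergence: updates are tiny
        break
```

The update has the form of the delta (Widrow-Hoff) rule, and with a sufficiently small learning rate the weights approach the least-squares fit to the data.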
### Comparison with Perceptron Learning