@@ -973,119 +973,56 @@ result which illustrates the result of the Neumann Series Lemma.
``` {exercise}
:label: eig1_ex1

- Power iteration is a method for finding the greatest absolute eigenvalue of a diagonalizable matrix.
+ Power iteration is an algorithm used to compute the dominant eigenvalue of a diagonalizable $n \times n$ matrix $A$.

- The method starts with a random vector $b_0$ and repeatedly applies the matrix $A$ to it
+ The method proceeds as follows:

- $$
- b_{k+1}=\frac{A b_k}{\left\|A b_k\right\|}
- $$
+ 1. Choose an initial random $n \times 1$ vector $b_0$.
+ 2. At the $k$-th iteration, multiply $A$ by $b_k$ to obtain $Ab_k$.
+ 3. Approximate the dominant eigenvalue as $\lambda_k = \max|Ab_k|$, the largest absolute value of the entries of $Ab_k$.
+ 4. Normalize the result: $b_{k+1} = \frac{Ab_k}{\max|Ab_k|}$.
+ 5. Repeat steps 2-4 for $m$ iterations; $\lambda_k$ and $b_k$ will converge to the dominant eigenvalue and its corresponding eigenvector of $A$.

A thorough discussion of the method can be found [here](https://pythonnumericalmethods.berkeley.edu/notebooks/chapter15.02-The-Power-Method.html).

- In this exercise, first implement the power iteration method and use it to find the greatest absolute eigenvalue and its corresponding eigenvector.
+ Implement the power iteration method using the following matrix and initial vector:
+
+ $$
+ A = \begin{bmatrix}
+ 1 & 0 & 3 \\
+ 0 & 2 & 0 \\
+ 3 & 0 & 1
+ \end{bmatrix}, \quad
+ b_0 = \begin{bmatrix}
+ 1 \\
+ 1 \\
+ 1
+ \end{bmatrix}
+ $$

- Then visualize the convergence .
+ Set the number of iterations to $m = 20$ and compute the dominant eigenvalue of $A$.
```

``` {solution-start} eig1_ex1
:class: dropdown
```

- Here is one solution.
-
- We start by looking into the distance between the eigenvector approximation and the true eigenvector.
-
``` {code-cell} ipython3
- ---
- mystnb:
- figure:
- caption: Power iteration
- name: pow-dist
- ---
- # Define a matrix A
- A = np.array([[1, 0, 3],
+ def power_iteration(A=np.array([[1, 0, 3],
[0, 2, 0],
- [3, 0, 1]])
-
- num_iters = 20
-
- # Define a random starting vector b
- b = np.random.rand(A.shape[1])
-
- # Get the leading eigenvector of matrix A
- eigenvector = np.linalg.eig(A)[1][:, 0]
-
- errors = []
- res = []
-
- # Power iteration loop
- for i in range(num_iters):
- # Multiply b by A
- b = A @ b
- # Normalize b
- b = b / np.linalg.norm(b)
- # Append b to the list of eigenvector approximations
- res.append(b)
- err = np.linalg.norm(np.array(b)
- - eigenvector)
- errors.append(err)
-
- greatest_eigenvalue = np.dot(A @ b, b) / np.dot(b, b)
- print(f'The approximated greatest absolute eigenvalue is \
- {greatest_eigenvalue:.2f}')
- print('The real eigenvalue is', np.linalg.eig(A)[0])
-
- # Plot the eigenvector approximations for each iteration
- plt.figure(figsize=(10, 6))
- plt.xlabel('iterations')
- plt.ylabel('error')
- _ = plt.plot(errors)
- ```
-
- +++ {"user_expressions": [ ] }
-
- Then we can look at the trajectory of the eigenvector approximation.
-
- ``` {code-cell} ipython3
- ---
- mystnb:
- figure:
- caption: Power iteration trajectory
- name: pow-trajectory
- ---
- # Set up the figure and axis for 3D plot
- fig = plt.figure()
- ax = fig.add_subplot(111, projection='3d')
-
- # Plot the eigenvectors
- ax.scatter(eigenvector[0],
- eigenvector[1],
- eigenvector[2],
- color='r', s=80)
-
- for i, vec in enumerate(res):
- ax.scatter(vec[0], vec[1], vec[2],
- color='b',
- alpha=(i+1)/(num_iters+1),
- s=80)
-
- ax.set_xlabel('x')
- ax.set_ylabel('y')
- ax.set_zlabel('z')
- ax.tick_params(axis='both', which='major', labelsize=7)
-
- points = [plt.Line2D([0], [0], linestyle='none',
- c=i, marker='o') for i in ['r', 'b']]
- ax.legend(points, ['actual eigenvector',
- r'approximated eigenvector ($b_k$)'])
- ax.set_box_aspect(aspect=None, zoom=0.8)
-
- plt.show()
+ [3, 0, 1]]), b=np.array([1, 1, 1]), m=20):
+
+     for i in range(m):
+         # Multiply A by the current iterate
+         b = np.dot(A, b)
+         # Approximate the dominant eigenvalue by the largest entry in absolute value
+         lambda_1 = abs(b).max()
+         # Normalize b by its largest entry
+         b = b / b.max()
+
+     print('Eigenvalue:', lambda_1)
+     print('Eigenvector:', b)
+
+ power_iteration()
```
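+
+ As a quick sanity check (assuming `numpy` has already been imported as `np`, as elsewhere in this lecture), we can compare this estimate with the eigenvalues returned by `np.linalg.eig`:
+
+ ``` {code-cell} ipython3
+ # Compute all eigenvalues of A with NumPy and report the one
+ # that is largest in absolute value (the dominant eigenvalue)
+ A = np.array([[1, 0, 3],
+               [0, 2, 0],
+               [3, 0, 1]])
+ eigenvalues = np.linalg.eig(A)[0]
+ print('Dominant eigenvalue:', eigenvalues[np.argmax(np.abs(eigenvalues))])
+ ```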

- +++ {"user_expressions": [ ] }
-
``` {solution-end}
```