algorithm to allow computation of several eigenvalues simultaneously rather
than one at a time<sup id="r2">[2](#f2)</sup>. The
purpose of this project is to illustrate the use of what is now called the
Davidson-Liu algorithm in the context of a
[CIS computation](../Project%2312).

## The Basic Algorithm

from a well-chosen subspace of the full determinantal space.

Compute a representation of the Hamiltonian within the space of guess vectors,

<img src="./figures/guess-vector-hamiltonian.png" height="50">

and then diagonalize this so-called "subspace Hamiltonian",

<img src="./figures/diag-subspace-hamiltonian.png" height="50">

where *M* is the number of roots of interest. The current estimate of each of
the *M* eigenvectors we want is a linear combination of the guess vectors,
with the &alpha;<sup>k</sup> subspace eigenvectors providing the
coefficients, *viz.*

<img src="./figures/coefficients.png" height="50">

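The three steps above can be sketched in a few lines of NumPy. This is a toy illustration on an assumed random symmetric matrix, not the project's reference implementation; `np.linalg.eigh` plays the role of DSYEV here.

```python
import numpy as np

# Toy sketch (random symmetric H, not a real CIS Hamiltonian): build the
# subspace Hamiltonian G from L orthonormal guess vectors, diagonalize it,
# and form the current eigenvector estimates c^k = sum_i alpha_i^k b_i.
np.random.seed(0)
N, L, M = 20, 4, 2                     # full dimension, guess vectors, roots

A = np.random.rand(N, N)
H = 0.5 * (A + A.T) + np.diag(np.arange(N, dtype=float))  # symmetric test matrix

B = np.eye(N)[:, :L]                   # unit guess vectors b_i as columns
sigma = H @ B                          # sigma_j = H b_j
G = B.T @ sigma                        # G_ij = <b_i | sigma_j>, an L x L matrix

lam, alpha = np.linalg.eigh(G)         # lambda^k, alpha^k (eigh stands in for DSYEV)
C = B @ alpha[:, :M]                   # columns of C are the estimates c^k
```

Note that *H* only ever appears through the products *H* **b**<sub>j</sub>; this is what makes the method usable when *H* is too large to store.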
The dimension of ***G*** is typically very small (perhaps a dozen times the
number of guess vectors, *L*), so one can use a standard diagonalization
package (such as DSYEV in LAPACK) for this task. Note that the most expensive
elements must be computed "on the fly" during the computation of each

Build a set of "correction vectors",

<img src="./figures/correction-vectors.png" height="50">

where the "residual" vectors are defined as

<img src="./figures/residual-vectors.png" height="50">

and *N* is the dimension of the Hamiltonian (i.e. the number of determinants).
The inverse appearing in the definition of the correction vectors is commonly
referred to as the "preconditioner". Notice that the residual vectors are so
Return to step #2 and continue.

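Putting the steps together, the whole loop can be sketched as below. The matrix is a random, strongly diagonally dominant stand-in (not a real Hamiltonian), and the convergence test is simplified relative to a production code:

```python
import numpy as np

# Compact Davidson-Liu sketch on a small diagonally dominant test matrix.
np.random.seed(1)
N, M = 50, 2                                   # full dimension, roots wanted
A = 0.01 * np.random.rand(N, N)
H = 0.5 * (A + A.T) + np.diag(np.arange(1.0, N + 1.0))

B = np.eye(N)[:, :M]                           # step 1: unit guess vectors
for it in range(20):
    sigma = H @ B                              # sigma_j = H b_j
    G = B.T @ sigma                            # subspace Hamiltonian
    lam, alpha = np.linalg.eigh(G)             # subspace eigenproblem
    lam, alpha = lam[:M], alpha[:, :M]
    # residuals r^k = sum_i alpha_i^k (H - lambda^k) b_i
    R = sigma @ alpha - (B @ alpha) * lam
    if np.linalg.norm(R) < 1e-8:
        break
    # correction vectors via the diagonal preconditioner (lambda^k - H_II)^-1
    delta = R / (lam - np.diag(H)[:, None])
    # orthogonalize each correction against the current space and append it
    for k in range(M):
        d = delta[:, k] - B @ (B.T @ delta[:, k])
        nrm = np.linalg.norm(d)
        if nrm > 1e-6:
            B = np.column_stack([B, d / nrm])
```

For a diagonally dominant matrix like this one, the Ritz values `lam` match the lowest exact eigenvalues after only a handful of iterations.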
We will focus on the spin-adapted singlet formulation of CIS,
for which the <b>&sigma;</b> = <b>H c</b> equation was given in
[Project 12](../Project%2312):

<img src="./figures/spin-adapted-cis-eqn.png" height="50">

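This sigma build is a set of tensor contractions, which can be sketched with `np.einsum`. The arrays below are random stand-ins for the Fock matrix blocks and the MO-basis two-electron integrals &lt;pq|rs&gt; (physicists' notation is assumed); only the contraction pattern is the point:

```python
import numpy as np

# Sketch of the spin-adapted singlet CIS sigma equation:
# sigma_ia = sum_jb [ f_ab d_ij - f_ij d_ab + 2<aj|ib> - <aj|bi> ] c_j^b
np.random.seed(2)
o, v = 3, 5                                    # occupied / virtual dimensions
f_oo = np.random.rand(o, o); f_oo = 0.5 * (f_oo + f_oo.T)   # f_ij
f_vv = np.random.rand(v, v); f_vv = 0.5 * (f_vv + f_vv.T)   # f_ab
g_ajib = np.random.rand(v, o, o, v)            # stand-in for <aj|ib>
g_ajbi = np.random.rand(v, o, v, o)            # stand-in for <aj|bi>
c = np.random.rand(o, v)                       # trial vector c_j^b

sigma = (np.einsum('ab,ib->ia', f_vv, c)       # f_ab d_ij term
         - np.einsum('ij,ja->ia', f_oo, c)     # -f_ij d_ab term
         + 2.0 * np.einsum('ajib,jb->ia', g_ajib, c)
         - np.einsum('ajbi,jb->ia', g_ajbi, c))
```

Each trial vector costs one such contraction, which is why the number of guess vectors is kept small.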
## Unit Guess Vectors

What should we choose for guess vectors? As noted above, the simplest choice
is probably a set of unit vectors, one for each eigenvalue you want. But in
what position of the vector should we put the 1? For a hint, look at the
structure of the
[spin-adapted singlet CIS Hamiltonian](../Project%2312/hints/hint2.md)
for the H<sub>2</sub>O STO-3G test case and note that it is
strongly diagonally dominant. Thus, if the diagonal elements are reasonable
approximations to the true eigenvalues, and we want to compute only the lowest
dimension to something more manageable before continuing the Davidson-Liu
algorithm. A typical choice is to collapse to the current best set of guesses
using the equation given above for the current final eigenvectors:

<img src="./figures/final-eigenvectors.png" height="50">

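The collapse itself is just a matrix product. A minimal sketch with random stand-in data (assuming the guess vectors sit in the columns of `B` and the subspace eigenvectors in the columns of `alpha`):

```python
import numpy as np

# Subspace collapse sketch: when the guess space grows too large, replace
# the L accumulated b_i with the M current best estimates
# c^k = sum_i alpha_i^k b_i and continue iterating from there.
np.random.seed(3)
N, L, M = 12, 8, 2
B = np.linalg.qr(np.random.rand(N, L))[0]      # L orthonormal guess vectors
alpha = np.linalg.qr(np.random.rand(L, L))[0]  # stand-in subspace eigenvectors
B_new = B @ alpha[:, :M]                       # keep only the M best columns
```

Because the &alpha;<sup>k</sup> are orthonormal, the collapsed vectors remain orthonormal and need no re-orthogonalization.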
#### References

<b id="f1">1</b>: E.R. Davidson, "The iterative calculation of a few of the lowest eigenvalues and corresponding eigenvectors of large real-symmetric matrices," *J. Comput. Phys.* **17**, 87 (1975). [up](#r1)