# Power Iteration and its Variants

In [1]:
import numpy as np
import numpy.linalg as la
np.set_printoptions(precision=3)


Let's prepare a matrix with some random or deliberately chosen eigenvalues:

In [2]:
n = 6

if 1:
    np.random.seed(70)
    eigvecs = np.random.randn(n, n)
    eigvals = np.sort(np.random.randn(n))
    # Uncomment for near-duplicate largest-magnitude eigenvalue
    # eigvals[1] = eigvals[0] + 1e-3

    A = eigvecs.dot(np.diag(eigvals)).dot(la.inv(eigvecs))
    print(eigvals)

else:
    # Complex eigenvalues
    np.random.seed(40)
    A = np.random.randn(n, n)
    print(la.eig(A)[0])

[-2.668 -0.958 -0.33  -0.292 -0.186 -0.144]


Let's also pick an initial vector:

In [3]:
x0 = np.random.randn(n)
x0

Out[3]:
array([ 2.269,  0.664,  0.899, -0.366,  0.463,  0.08 ])

### Power iteration

In [4]:
x = x0


Now implement plain power iteration.

In [5]:
for i in range(20):
    x = A @ x
    print(x)

[-7.705 22.151  2.865 -4.648  4.043 11.651]
[ 19.254 -66.76  -10.275  13.745 -12.354 -31.851]
[-50.872 180.931  28.982 -38.003  32.878  84.804]
[ 135.725 -484.052  -78.458  103.027  -87.053 -226.183]
[-362.275 1292.214  210.269 -276.615  231.404  603.437]
[  966.69  -3447.945  -561.808   739.696  -616.452 -1609.861]
[-2579.07   9198.61   1499.54  -1974.989  1643.638  4294.662]
[  6880.334 -24539.336  -4001.041   5270.259  -4383.84  -11456.774]
[-18354.595  65463.009  10674.137 -14060.82   11693.775  30562.786]
[  48963.909 -174633.058  -28475.596   37510.904  -31194.161  -81530.956]
[-130618.864  465860.62    75963.646 -100067.48    83214.419  217496.239]
[  348445.777 -1242754.094  -202645.16    266946.529  -221986.341
-580204.159]
[-929531.949 3315234.721  540587.199 -712121.531  592181.424 1547782.299]
[ 2479667.045 -8843889.704 -1442098.59   1899693.011 -1579732.736
-4128943.082]
[-6614886.466 23592411.656  3847016.323 -5067719.196  4214175.002
11014579.211]
[ 17646208.664 -62936320.902 -10262497.484  13518907.353 -11241947.561
-29383053.282]
[-4.707e+07  1.679e+08  2.738e+07 -3.606e+07  2.999e+07  7.838e+07]
[ 1.256e+08 -4.479e+08 -7.303e+07  9.621e+07 -8.000e+07 -2.091e+08]
[-3.350e+08  1.195e+09  1.948e+08 -2.566e+08  2.134e+08  5.578e+08]
[ 8.936e+08 -3.187e+09 -5.197e+08  6.846e+08 -5.693e+08 -1.488e+09]

• What's the problem with this method?
• Does anything useful come of this?
• How do we fix it?
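Something useful does come of the blow-up: the factor by which the iterates grow from step to step approaches the magnitude of the dominant eigenvalue. A minimal sketch on a diagonal test matrix with known eigenvalues (illustrative, not the matrix above):

```python
import numpy as np
import numpy.linalg as la

# Diagonal test matrix: dominant eigenvalue is -5.
A = np.diag([-5.0, 2.0, 1.0])
x = np.ones(3)

for i in range(30):
    x_new = A @ x
    growth = la.norm(x_new) / la.norm(x)  # approximates |lambda_1|
    x = x_new

print(growth)  # close to 5
```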

### Normalized power iteration

Back to the beginning: Reset to the initial vector.

In [6]:
x = x0/la.norm(x0)


Implement normalized power iteration.

In [7]:
for i in range(10):
    x = A @ x
    nrm = la.norm(x)
    x = x/nrm
    print(x)

print(nrm)

[-0.285  0.819  0.106 -0.172  0.149  0.431]
[ 0.243 -0.842 -0.13   0.173 -0.156 -0.402]
[-0.238  0.845  0.135 -0.177  0.153  0.396]
[ 0.237 -0.845 -0.137  0.18  -0.152 -0.395]
[-0.237  0.845  0.137 -0.181  0.151  0.395]
[ 0.237 -0.845 -0.138  0.181 -0.151 -0.394]
[-0.237  0.845  0.138 -0.181  0.151  0.394]
[ 0.237 -0.845 -0.138  0.181 -0.151 -0.394]
[-0.237  0.845  0.138 -0.181  0.151  0.394]
[ 0.237 -0.845 -0.138  0.181 -0.151 -0.394]
2.667662268743778

• What do you observe about the norm?
• What about the sign?
• What is the vector $x$ now?

Extensions:

• Now try the matrix variants above.
• Suggest a better way of estimating the eigenvalue. (Hint: consider the Rayleigh quotient.)
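A better estimate than the norm is the Rayleigh quotient $x^T A x / x^T x$, which also recovers the sign of the eigenvalue. A sketch on a diagonal test matrix (illustrative, not the matrix above):

```python
import numpy as np
import numpy.linalg as la

A = np.diag([-4.0, 2.0, 1.0])  # dominant eigenvalue: -4
x = np.ones(3)

for i in range(50):
    x = A @ x
    x = x / la.norm(x)

# Rayleigh quotient: recovers magnitude *and* sign
rq = x @ A @ x / (x @ x)
print(rq)  # close to -4
```

Note that the norm-based estimate above would report 2.668 for the demo matrix, while its dominant eigenvalue is actually -2.668.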

### Inverse iteration

What if we want the smallest eigenvalue (by magnitude)?

Once again, reset to the beginning.

In [8]:
x = x0/la.norm(x0)

In [9]:
for i in range(30):
    x = la.solve(A, x)
    nrm = la.norm(x)
    x = x/nrm
    print(x)

[ 0.344 -0.55  -0.094  0.23  -0.05  -0.718]
[-0.527  0.236  0.046 -0.123 -0.066  0.803]
[ 0.599 -0.01  -0.037 -0.031  0.127 -0.789]
[-0.617 -0.128  0.043  0.173 -0.153  0.74 ]
[ 0.611  0.213 -0.052 -0.284  0.161 -0.687]
[-0.597 -0.265  0.061  0.365 -0.161  0.64 ]
[ 0.583  0.299 -0.067 -0.424  0.16  -0.601]
[-0.57  -0.322  0.072  0.465 -0.157  0.571]
[ 0.558  0.337 -0.076 -0.496  0.155 -0.547]
[-0.55  -0.348  0.078  0.517 -0.153  0.529]
[ 0.543  0.356 -0.08  -0.533  0.151 -0.515]
[-0.537 -0.362  0.082  0.545 -0.15   0.504]
[ 0.533  0.366 -0.083 -0.554  0.149 -0.496]
[-0.529 -0.369  0.084  0.561 -0.148  0.49 ]
[ 0.527  0.371 -0.084 -0.566  0.147 -0.485]
[-0.525 -0.373  0.085  0.57  -0.147  0.481]
[ 0.523  0.375 -0.085 -0.573  0.146 -0.478]
[-0.522 -0.376  0.086  0.575 -0.146  0.476]
[ 0.521  0.376 -0.086 -0.577  0.146 -0.474]
[-0.521 -0.377  0.086  0.578 -0.146  0.473]
[ 0.52   0.377 -0.086 -0.579  0.146 -0.472]
[-0.52  -0.378  0.086  0.58  -0.145  0.471]
[ 0.519  0.378 -0.086 -0.581  0.145 -0.471]
[-0.519 -0.378  0.086  0.581 -0.145  0.47 ]
[ 0.519  0.379 -0.086 -0.582  0.145 -0.47 ]
[-0.519 -0.379  0.086  0.582 -0.145  0.47 ]
[ 0.519  0.379 -0.086 -0.582  0.145 -0.469]
[-0.518 -0.379  0.086  0.582 -0.145  0.469]
[ 0.518  0.379 -0.086 -0.582  0.145 -0.469]
[-0.518 -0.379  0.086  0.583 -0.145  0.469]

• What's the computational cost per iteration?
• Can we make this method search for a specific eigenvalue?
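On cost: since $A$ does not change between iterations, it pays to factor once, at $O(n^3)$, and then do only an $O(n^2)$ back/forward substitution per step instead of a full solve. A sketch of this idea, assuming SciPy is available (the notebook itself only uses NumPy); the matrix is an illustrative one with known eigenvalues:

```python
import numpy as np
import numpy.linalg as la
from scipy.linalg import lu_factor, lu_solve

A = np.diag([5.0, 3.0, 0.5])  # smallest-magnitude eigenvalue: 0.5
x = np.ones(3)

lu_piv = lu_factor(A)        # O(n^3), done once
for i in range(30):
    x = lu_solve(lu_piv, x)  # O(n^2) per iteration
    x = x / la.norm(x)

print(x @ A @ x / (x @ x))  # close to 0.5
```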

### Shifted inverse iteration

What if we want the eigenvalue closest to a given value $\sigma$?

Once again, reset to the beginning.

In [10]:
x = x0/la.norm(x0)

In [11]:
sigma = 1
A_sigma = A-sigma*np.eye(A.shape[0])
for i in range(30):
x = la.solve(A_sigma, x)
nrm = la.norm(x)
x = x/nrm
print(x)

[ 0.011 -0.809 -0.191  0.223 -0.186 -0.474]
[-0.127  0.775  0.149 -0.232  0.16   0.531]
[ 0.192 -0.728 -0.123  0.246 -0.135 -0.582]
[-0.246  0.674  0.103 -0.261  0.108  0.628]
[ 0.295 -0.616 -0.085  0.271 -0.08  -0.669]
[-0.338  0.556  0.07  -0.277  0.052  0.702]
[ 0.376 -0.497 -0.057  0.277 -0.026 -0.728]
[-0.41   0.441  0.047 -0.273  0.002  0.749]
[ 0.44  -0.387 -0.038  0.264  0.019 -0.765]
[-0.465  0.338  0.031 -0.252 -0.039  0.777]
[ 0.488 -0.291 -0.026  0.237  0.056 -0.786]
[-0.508  0.249  0.022 -0.219 -0.071  0.792]
[ 0.525 -0.209 -0.019  0.199  0.084 -0.796]
[-0.54   0.172  0.017 -0.178 -0.095  0.798]
[ 0.553 -0.138 -0.016  0.155  0.105 -0.799]
[-0.565  0.107  0.016 -0.132 -0.114  0.799]
[ 0.575 -0.078 -0.016  0.108  0.122 -0.798]
[-0.584  0.05   0.016 -0.084 -0.129  0.796]
[ 0.591 -0.025 -0.017  0.06   0.135 -0.792]
[-0.597  0.001  0.018 -0.035 -0.14   0.789]
[ 0.603  0.021 -0.02   0.011  0.144 -0.784]
[-0.607 -0.041  0.022  0.013 -0.148  0.779]
[ 0.611  0.061 -0.024 -0.036  0.151 -0.774]
[-0.614 -0.079  0.025  0.059 -0.154  0.768]
[ 0.616  0.096 -0.027 -0.082  0.156 -0.761]
[-0.618 -0.112  0.03   0.103 -0.158  0.755]
[ 0.619  0.126 -0.032 -0.124  0.16  -0.748]
[-0.619 -0.14   0.034  0.145 -0.161  0.741]
[ 0.619  0.154 -0.036 -0.164  0.162 -0.734]
[-0.619 -0.166  0.038  0.183 -0.163  0.726]

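The shift works because the eigenvalue of $A$ closest to $\sigma$ becomes the largest-magnitude eigenvalue of $(A-\sigma I)^{-1}$, so the iteration converges to the corresponding eigenvector. A sketch on a matrix with known eigenvalues (illustrative, not the matrix above):

```python
import numpy as np
import numpy.linalg as la

A = np.diag([1.0, 2.0, 10.0])
sigma = 1.8  # target the eigenvalue closest to 1.8, i.e. 2
A_sigma = A - sigma*np.eye(3)
x = np.ones(3)

for i in range(30):
    x = la.solve(A_sigma, x)
    x = x / la.norm(x)

print(x @ A @ x / (x @ x))  # close to 2
```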

### Rayleigh quotient iteration

Can we feed an estimate of the current approximate eigenvalue back into the calculation? (Hint: Rayleigh quotient)

Reset once more.

In [12]:
x = x0/la.norm(x0)


Run this cell in-place (Ctrl-Enter) many times.

In [13]:
for i in range(10):
    sigma = np.dot(x, np.dot(A, x))/np.dot(x, x)
    x = la.solve(A-sigma*np.eye(n), x)
    x = x/la.norm(x)
    print(x)

[ 0.063 -0.792 -0.173  0.23  -0.173 -0.505]
[-0.191  0.726  0.126 -0.248  0.133  0.585]
[ 0.521 -0.254 -0.051  0.121  0.058 -0.802]
[-0.544 -0.35   0.081  0.533 -0.15   0.519]
[ 0.52   0.378 -0.086 -0.58   0.146 -0.472]
[-0.518 -0.379  0.086  0.583 -0.145  0.469]
[ 0.518  0.379 -0.086 -0.583  0.145 -0.469]
[-0.518 -0.379  0.086  0.583 -0.145  0.469]
[-0.518 -0.379  0.086  0.583 -0.145  0.469]
[ 0.518  0.379 -0.086 -0.583  0.145 -0.469]

• What's a reasonable stopping criterion?
• Computational downside of this iteration?