Power Iteration and its Variants

In [1]:
import numpy as np
import numpy.linalg as la

Let's prepare a matrix with some random or deliberately chosen eigenvalues:

In [2]:
n = 6

if 1:
    np.random.seed(70)
    eigvecs = np.random.randn(n, n)
    eigvals = np.sort(np.random.randn(n))
    # Uncomment for near-duplicate largest-magnitude eigenvalue
    # eigvals[1] = eigvals[0] + 1e-3

    A = eigvecs.dot(np.diag(eigvals)).dot(la.inv(eigvecs))
    print(eigvals)
    
else:
    # Complex eigenvalues
    np.random.seed(40)
    A = np.random.randn(n, n)
    print(la.eig(A)[0])
[-2.667651   -0.95797093 -0.33019549 -0.29151942 -0.18635343 -0.14418093]

Let's also pick an initial vector:

In [3]:
x0 = np.random.randn(n)
x0
Out[3]:
array([ 2.26930477,  0.66356156,  0.8991019 , -0.36580094,  0.46269004,
        0.079874  ])

Power iteration

In [4]:
x = x0

Now implement plain power iteration.

Run the cell below in-place (Ctrl-Enter) many times.

In [5]:
x = A @ x
x
Out[5]:
array([ -7.70532247,  22.15077824,   2.86452055,  -4.64791114,
         4.0426268 ,  11.65071267])
  • What's the problem with this method?
  • Does anything useful come of this?
  • How do we fix it?
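As a quick check on the first bullet: the unnormalized iterate grows like $|\lambda_1|^k$ (here $|\lambda_1| \approx 2.67$), so it eventually overflows in double precision. A minimal sketch, reconstructing the same seed-70 test matrix as above:

```python
import numpy as np
import numpy.linalg as la

# Same test matrix as above (seed 70).
n = 6
np.random.seed(70)
eigvecs = np.random.randn(n, n)
eigvals = np.sort(np.random.randn(n))
A = eigvecs.dot(np.diag(eigvals)).dot(la.inv(eigvecs))

# Plain power iteration, no normalization: the iterate grows like
# |lambda_1|**k, which exceeds the largest double (~1.8e308) after
# roughly 700 iterations.
x = np.random.randn(n)
for k in range(800):
    x = A @ x

print(np.all(np.isfinite(x)))  # False: the iterate has overflowed
```

(If $|\lambda_1| < 1$ instead, the iterate underflows to zero; either way the unscaled iteration is numerically unusable.)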

Normalized power iteration

Back to the beginning: Reset to the initial vector.

In [6]:
x = x0/la.norm(x0)

Implement normalized power iteration.

Run this cell in-place (Ctrl-Enter) many times.

In [28]:
x = A @ x
nrm = la.norm(x)
x = x/nrm

print(nrm)
print(x)
2.66765099566
[ 0.23686305 -0.84478709 -0.13775234  0.18146277 -0.15089938 -0.39440539]
  • What do you observe about the norm?
  • What about the sign?
  • What is the vector $x$ now?

Extensions:

  • Now try the matrix variants above.
  • Suggest a better way of estimating the eigenvalue.
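One possibility for the last bullet, sketched here: the Rayleigh quotient $x^T A x / x^T x$ recovers the dominant eigenvalue including its sign, whereas the norm of the iterate only gives its magnitude. (Same seed-70 matrix construction as above.)

```python
import numpy as np
import numpy.linalg as la

# Same test matrix as above (seed 70).
n = 6
np.random.seed(70)
eigvecs = np.random.randn(n, n)
eigvals = np.sort(np.random.randn(n))
A = eigvecs.dot(np.diag(eigvals)).dot(la.inv(eigvecs))

# Normalized power iteration.
x = np.random.randn(n)
x = x / la.norm(x)
for k in range(100):
    x = A @ x
    x = x / la.norm(x)

# Rayleigh quotient: gives the dominant eigenvalue with its sign,
# unlike la.norm(A @ x), which only gives |lambda_1|.
rq = x @ A @ x / (x @ x)
print(rq)  # close to eigvals[0] = -2.667651...
```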

What if we want the smallest eigenvalue (by magnitude)?

Once again, reset to the beginning.

In [29]:
x = x0/la.norm(x0)

Run the cell below in-place many times.

In [40]:
x = la.solve(A, x)
nrm = la.norm(x)
x = x/nrm

print(1/nrm)
print(x)
0.14190382209
[ 0.5425893   0.35585848 -0.08034459 -0.5334476   0.15108771 -0.51489077]
  • What's the computational cost per iteration?
  • Can we make this method search for a specific eigenvalue?
  • What is this method called?
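For the second bullet, a sketch: replacing $A$ by $A - \sigma I$ shifts every eigenvalue by $\sigma$, so inverse iteration on the shifted matrix converges to the eigenvalue of $A$ closest to $\sigma$. The shift $-0.9$ below is an arbitrary illustrative choice; in practice one would factor $A - \sigma I$ once (e.g. via LU) and reuse the factorization every iteration.

```python
import numpy as np
import numpy.linalg as la

# Same test matrix as above (seed 70).
n = 6
np.random.seed(70)
eigvecs = np.random.randn(n, n)
eigvals = np.sort(np.random.randn(n))
A = eigvecs.dot(np.diag(eigvals)).dot(la.inv(eigvecs))

sigma = -0.9  # arbitrary shift: targets the eigenvalue nearest -0.9
x = np.random.randn(n)
x = x / la.norm(x)
for k in range(100):
    # For efficiency, factor A - sigma*I once outside the loop and
    # reuse it; la.solve refactors on every call.
    x = la.solve(A - sigma * np.eye(n), x)
    x = x / la.norm(x)

approx = x @ A @ x / (x @ x)
print(approx)  # close to eigvals[1] = -0.957971..., the eigenvalue nearest sigma
```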

Can we feed an estimate of the current approximate eigenvalue back into the calculation? (Hint: Rayleigh quotient)

Reset once more.

In [ ]:
x = x0/la.norm(x0)

Run this cell in-place (Ctrl-Enter) many times.

In [53]:
sigma = np.dot(x, np.dot(A, x))/np.dot(x, x)
x = la.solve(A-sigma*np.eye(n), x)
x = x/la.norm(x)

print(sigma)
print(x)
-0.14418092561
[ 0.51811072  0.37911595 -0.08643954 -0.58285615  0.14511777 -0.46859378]
  • What's this method called?
  • What's a reasonable stopping criterion?
  • Computational downside of this iteration?
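Regarding the stopping criterion, one common choice is the residual $\|Ax - \sigma x\|$: it is cheap to compute and goes to zero as $(\sigma, x)$ approaches an eigenpair. A sketch under the same seed-70 setup, with an arbitrarily chosen tolerance:

```python
import numpy as np
import numpy.linalg as la

# Same test matrix as above (seed 70).
n = 6
np.random.seed(70)
eigvecs = np.random.randn(n, n)
eigvals = np.sort(np.random.randn(n))
A = eigvecs.dot(np.diag(eigvals)).dot(la.inv(eigvecs))

x = np.random.randn(n)
x = x / la.norm(x)
tol = 1e-10  # illustrative tolerance

for k in range(50):
    sigma = x @ A @ x / (x @ x)
    residual = la.norm(A @ x - sigma * x)
    if residual <= tol:
        break
    # Each step uses a different shift, so each needs a fresh
    # factorization; the shifted matrix also becomes nearly singular
    # near convergence -- the computational downside asked about above.
    x = la.solve(A - sigma * np.eye(n), x)
    x = x / la.norm(x)

print(k, sigma)  # stops once the residual drops below tol
```

Note that the convergence is fast enough that only a handful of iterations are typically needed before the residual test triggers.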