# Newton's method in $n$ dimensions

In [2]:
import numpy as np
import numpy.linalg as la

In [3]:
def f(xvec):
    x, y = xvec
    return np.array([
        x + 2*y - 2,
        x**2 + 4*y**2 - 4,
        ])

def Jf(xvec):
    x, y = xvec
    return np.array([
        [1, 2],
        [2*x, 8*y],
        ])


Pick an initial guess.

In [20]:
x = np.array([1, 2])


Now implement one step of Newton's method. Run the next cell repeatedly to iterate:

In [27]:
x = x - la.solve(Jf(x), f(x))
print(x)

[  1.50295992e-16   1.00000000e+00]
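
Rerunning the cell above advances the iteration one step at a time. As a self-contained sketch, the same update can be wrapped in a loop with a stopping criterion (the tolerance `1e-14` and the cap of 20 iterations are arbitrary choices):

```python
import numpy as np
import numpy.linalg as la

def f(xvec):
    x, y = xvec
    return np.array([
        x + 2*y - 2,
        x**2 + 4*y**2 - 4,
        ])

def Jf(xvec):
    x, y = xvec
    return np.array([
        [1, 2],
        [2*x, 8*y],
        ])

x = np.array([1.0, 2.0])
for it in range(20):            # cap the iteration count
    x = x - la.solve(Jf(x), f(x))
    if la.norm(f(x)) < 1e-14:   # stop once the residual is tiny
        break

print(it, x)
```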


Check if that's really a solution:

In [30]:
f(x)

Out[30]:
array([ 0.,  0.])
- What's the cost of one iteration?
- Is there still something like quadratic convergence?

Let's restart, keep an error history, and check. Run the update cell repeatedly to build up the history.

In [32]:
xtrue = np.array([0, 1])
errors = []
x = np.array([1, 2])

In [41]:
x = x - la.solve(Jf(x), f(x))
errors.append(la.norm(x-xtrue))
print(x)

[  1.50295992e-16   1.00000000e+00]

In [42]:
for e in errors:
    print(e)

0.931694990625
0.211748861506
0.0168589857887
0.000125221235922
7.01168369152e-09
1.50295991741e-16
1.50295991741e-16
1.50295991741e-16
1.50295991741e-16

In [43]:
for i in range(len(errors)-1):
    print(errors[i+1]/errors[i]**2)

0.243934688455
0.376001239529
0.440570178174
0.447163497456
3.05705157878
6.65353738591e+15
6.65353738591e+15
6.65353738591e+15
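
On the second question: while the iteration is still converging, the ratios $e_{k+1}/e_k^2$ hover near a constant, which is exactly quadratic convergence. The last ratios blow up only because the error has hit machine precision ($\approx 1.5\cdot 10^{-16}$), where the model no longer applies. A self-contained sketch that estimates the convergence order $p$ directly from consecutive errors:

```python
import numpy as np
import numpy.linalg as la

def f(xvec):
    x, y = xvec
    return np.array([x + 2*y - 2, x**2 + 4*y**2 - 4])

def Jf(xvec):
    x, y = xvec
    return np.array([[1, 2], [2*x, 8*y]])

xtrue = np.array([0.0, 1.0])
x = np.array([1.0, 2.0])
errors = []
for _ in range(6):
    x = x - la.solve(Jf(x), f(x))
    errors.append(la.norm(x - xtrue))

# If e_{k+1} ~ C * e_k**p, then
# p ~ log(e_{k+1}/e_k) / log(e_k/e_{k-1}).
orders = [np.log(errors[k+1]/errors[k]) / np.log(errors[k]/errors[k-1])
          for k in range(1, len(errors) - 1)]
print(orders)   # estimates tend toward 2 until roundoff takes over
```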