# Newton's method in $n$ dimensions

In [2]:
import numpy as np
import numpy.linalg as la

In [3]:
def f(xvec):
    x, y = xvec
    return np.array([
        x + 2*y - 2,
        x**2 + 4*y**2 - 4
        ])
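
This encodes the $2\times 2$ nonlinear system

$$x + 2y - 2 = 0, \qquad x^2 + 4y^2 - 4 = 0,$$

i.e. the intersection of a line with an ellipse.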

In [4]:
def Jf(xvec):
    x, y = xvec
    return np.array([
        [1, 2],
        [2*x, 8*y]
        ])
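
Here `Jf` is the Jacobian matrix of partial derivatives of $f$:

$$J_f(x, y) = \begin{pmatrix} \partial f_1/\partial x & \partial f_1/\partial y \\ \partial f_2/\partial x & \partial f_2/\partial y \end{pmatrix} = \begin{pmatrix} 1 & 2 \\ 2x & 8y \end{pmatrix}.$$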


Pick an initial guess.

In [5]:
x = np.array([1, 2])


Now implement Newton's method.
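
One Newton step linearizes $f$ about the current guess $x_k$ and solves the resulting linear system for the update:

$$J_f(x_k)\, s_k = f(x_k), \qquad x_{k+1} = x_k - s_k,$$

which is exactly what the call to `la.solve` does below.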

In [6]:
x = x - la.solve(Jf(x), f(x))
print(x)

[-0.83333333  1.41666667]


Check if that's really a solution:

In [7]:
f(x)

Out[7]:
array([0.        , 4.72222222])
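
Not quite: the first residual $x + 2y - 2$ is already zero because that equation is linear, so a single Newton step satisfies it exactly, while the nonlinear residual $x^2 + 4y^2 - 4$ is still far from zero.
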
• What's the cost of one iteration?
• Is there still something like quadratic convergence?

Let's keep an error history and check, re-running the Newton step cell below several times so that each run appends one more error to the history.

In [8]:
xtrue = np.array([0, 1])
errors = []
x = np.array([1, 2])

In [19]:
x = x - la.solve(Jf(x), f(x))
errors.append(la.norm(x-xtrue))
print(x)

[1.50295992e-16 1.00000000e+00]

In [20]:
for e in errors:
    print(e)

0.931694990624912
0.21174886150566186
0.016858985788667225
0.0001252212359224592
7.011683691522178e-09
1.5029599174076677e-16
1.5029599174076677e-16
1.5029599174076677e-16
1.5029599174076677e-16
1.5029599174076677e-16

In [21]:
for i in range(len(errors)-1):
    print(errors[i+1]/errors[i]**2)

0.24393468845452265
0.37600123952862446
0.4405701781738273
0.4471634974555712
3.0570515787795163
6653537385912580.0
6653537385912580.0
6653537385912580.0
6653537385912580.0
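
The ratios $e_{k+1}/e_k^2$ stay roughly constant (between about 0.24 and 0.45) while the error is still shrinking, which is the signature of quadratic convergence; once the error bottoms out near machine precision, the ratios stop being meaningful.

As a sketch, the manual re-running of the step cell above could be wrapped in a loop (this reuses `f`, `Jf`, `xtrue`, `np`, and `la` from the cells above; the count of 10 iterations is an arbitrary choice):

x = np.array([1, 2], dtype=float)
errors = []
for _ in range(10):
    # one Newton step: solve Jf(x) s = f(x), then update x <- x - s
    x = x - la.solve(Jf(x), f(x))
    errors.append(la.norm(x - xtrue))
for i in range(len(errors) - 1):
    # ratio e_{k+1}/e_k**2: roughly constant until machine precision is reached
    print(errors[i + 1] / errors[i] ** 2)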