Rates of Convergence

In [2]:
import numpy as np
In [3]:
C = 1/2                      # error constant
e0 = 0.1*np.random.rand()    # random initial error in [0, 0.1)

order = 1                    # convergence order: e_{k+1} = C * e_k**order
In [4]:
e = e0
for i in range(20):
    print(e)
    e = C*e**order    # one step of the error model
0.0325355469290095
0.01626777346450475
0.008133886732252375
0.0040669433661261875
0.0020334716830630937
0.0010167358415315469
0.0005083679207657734
0.0002541839603828867
0.00012709198019144336
6.354599009572168e-05
3.177299504786084e-05
1.588649752393042e-05
7.94324876196521e-06
3.971624380982605e-06
1.9858121904913025e-06
9.929060952456512e-07
4.964530476228256e-07
2.482265238114128e-07
1.241132619057064e-07
6.20566309528532e-08
  • What do you observe about the number of iterations it takes to decrease the error by a factor of 10 for order = 1, 2, 3? (See the sketch after these questions.)
  • Is there a point to faster-than-cubic convergence?

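To experiment with the first question, one can rerun the recurrence above for each order and count how many iterations the first factor-of-10 decrease takes. A quick sketch, reusing C and e0 from above:

In [ ]:
# Count iterations until the error first drops by a factor of 10,
# for each convergence order. Reuses C and e0 from the cell above.
for order in [1, 2, 3]:
    e = e0
    count = 0
    while e > e0/10:
        e = C*e**order
        count += 1
    print(order, count)

Note that for orders above 1 the count per decade is not constant: it shrinks as the error gets smaller, which is what superlinear convergence means.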
Now let's see if we can figure out the convergence order from the data.

Here's a function that spits out some fake errors for a process that converges at $q$th order:

In [5]:
def make_rate_q_errors(q, e0, n=10, C=0.7):
    """Return n synthetic errors following e_{k+1} = C * e_k**q."""
    errors = []
    e = e0
    for i in range(n):
        errors.append(e)
        e = C*e**q

    return errors
In [6]:
errors = make_rate_q_errors(1, e0)
In [7]:
for e in errors:
    print(e)
0.0325355469290095
0.022774882850306648
0.015942417995214654
0.011159692596650258
0.00781178481765518
0.005468249372358625
0.0038277745606510377
0.002679442192455726
0.001875609534719008
0.0013129266743033055
In [8]:
for i in range(len(errors)-1):
    print(errors[i+1]/errors[i])   # for q = 1, this ratio is just the constant C
0.7
0.7000000000000001
0.7
0.7
0.7
0.7
0.7
0.7
0.7
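For $q > 1$ the successive ratio $e_{i+1}/e_i$ is no longer constant, so it does not reveal $q$ directly. Taking logs helps: the model $e_{k+1} = C e_k^q$ gives $\log e_{k+1} = \log C + q \log e_k$, so differencing consecutive log-errors cancels $C$ and leaves $q$. A minimal sketch (keeping $n$ small, since quadratically shrinking errors underflow quickly):

In [ ]:
# Estimate the order q from data: log(e_{k+1}) = log(C) + q*log(e_k),
# so the ratio of successive differences of log-errors equals q.
errors = make_rate_q_errors(2, e0, n=6)   # n kept small to avoid underflow
for i in range(len(errors)-2):
    num = np.log(errors[i+2]) - np.log(errors[i+1])
    den = np.log(errors[i+1]) - np.log(errors[i])
    print(num/den)   # recovers q = 2 (exactly, for this synthetic model)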