Rates of Convergence

In [1]:
import numpy as np
In [2]:
C = 1/2                     # asymptotic error constant
e0 = 0.1*np.random.rand()   # random initial error in (0, 0.1)

rate = 1                    # convergence order: try 1, 2, 3
In [3]:
e = e0
for i in range(20):
    print(e)
    e = C*e**rate   # one step of the error recurrence e_{k+1} = C*e_k**rate
0.00216056864310018
0.00108028432155009
0.000540142160775045
0.0002700710803875225
0.00013503554019376126
6.751777009688063e-05
3.3758885048440315e-05
1.6879442524220158e-05
8.439721262110079e-06
4.219860631055039e-06
2.1099303155275197e-06
1.0549651577637599e-06
5.274825788818799e-07
2.6374128944093996e-07
1.3187064472046998e-07
6.593532236023499e-08
3.2967661180117495e-08
1.6483830590058748e-08
8.241915295029374e-09
4.120957647514687e-09
  • What do you observe about the number of iterations it takes to decrease the error by a factor of 10 for rate = 1, 2, 3? (See the sketch after this list.)
  • Is there a point to faster-than-cubic convergence?
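To explore the first question, here is a minimal sketch (not part of the original run) that repeats the loop for each rate; it assumes the C = 1/2 from above, and the fixed starting error of 0.01 is an arbitrary choice made so the runs are comparable:

In [ ]:
# Sketch: count how many iterations it takes to shrink the error by 10x.
# Assumes C = 1/2 from above; the fixed e0 = 0.01 is an arbitrary choice.
for rate in [1, 2, 3]:
    e = 0.01
    iterations = 0
    while e > 0.001:   # stop once the error has dropped by a factor of 10
        e = C*e**rate
        iterations += 1
    print(f"rate={rate}: {iterations} iteration(s)")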

Now let's see if we can figure out the convergence order from the data.

Here's a function that spits out some fake errors from a process that converges at $q$th order:

In [4]:
def make_rate_q_errors(q, e0, n=10, C=0.7):
    """Return n synthetic errors following e_{k+1} = C * e_k**q."""
    errors = []
    e = e0
    for i in range(n):
        errors.append(e)
        e = C*e**q

    return errors
In [5]:
errors = make_rate_q_errors(1, e0)
In [6]:
for e in errors:
    print(e)
0.00216056864310018
0.001512398050170126
0.0010586786351190881
0.0007410750445833616
0.0005187525312083531
0.00036312677184584713
0.00025418874029209295
0.00017793211820446505
0.00012455248274312553
8.718673792018786e-05
In [7]:
for i in range(len(errors)-1):
    print(errors[i+1]/errors[i])   # ratio of successive errors; constant C for q = 1
0.7
0.7
0.7
0.6999999999999998
0.7
0.6999999999999998
0.7
0.7
0.7
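
For $q = 1$ the ratio of successive errors settles at the constant $C = 0.7$, which is exactly what linear convergence predicts. For general $q$, since $e_{k+1} \approx C e_k^q$, taking logarithms gives $\log e_{k+1} \approx \log C + q \log e_k$, so the order can be estimated from three consecutive errors as

$$q \approx \frac{\log(e_{k+2}/e_{k+1})}{\log(e_{k+1}/e_k)}.$$

A minimal sketch of this estimator (the helper name estimate_order is made up here, and the small n for $q = 2$ just avoids floating-point underflow):

In [ ]:
# Sketch (not from the original notebook): estimate the order q from a list
# of errors via q ~ log(e[k+2]/e[k+1]) / log(e[k+1]/e[k]).
def estimate_order(errors):
    e = np.array(errors)
    return np.log(e[2:]/e[1:-1]) / np.log(e[1:-1]/e[:-2])

print(estimate_order(make_rate_q_errors(1, 0.1)))       # hovers near 1
print(estimate_order(make_rate_q_errors(2, 0.1, n=6)))  # approaches 2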