5.1.3.4. numdifftools.extrapolation.Richardson

class Richardson(step_ratio=2.0, step=1, order=1, num_terms=2)[source]

Extrapolates a sequence with Richardson's method.

Parameters
step_ratio: real scalar

Ratio between the sequential step sizes, h, that are generated.

step: scalar integer

Defines the step between exponents in the error polynomial, i.e., step = k_1 - k_0 = k_2 - k_1 = … = k_{i+1} - k_i

order: scalar integer

Leading order of truncation error.

num_terms: scalar integer

Number of terms used in the polynomial fit.
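
For example, assuming a symmetric difference scheme whose truncation error contains only even powers of h, one would pass order=2 and step=2; the exponents of the error terms then follow k_i = order + step * i (see the Notes below):

>>> order, step = 2, 2
>>> [order + step * i for i in range(3)]
[2, 4, 6]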

Notes

Suppose f(h) is an approximation of L (exact value) that depends on a positive step size h described with a sequence of the form

L = f(h) + a_0 * h^k_0 + a_1 * h^k_1 + a_2 * h^k_2 + …

where the a_i are unknown constants and the k_i are known constants such that h^k_i > h^k_{i+1}.

If we evaluate the right-hand side for different step sizes h, we can fit a polynomial to that sequence of approximations; this is exactly what this class does. Here k_0 is the leading-order step-size behavior of the truncation error, i.e., L = f(h) + O(h^k_0) (f(h) -> L as h -> 0, but f(0) != L), and k_i = order + step * i.
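
As a sketch of that polynomial fit (not the class's internal code), take the simplest case num_terms=1 with an assumed leading order of 2 and a step ratio of 2: evaluating L = f(h) + a_0 * h^2 at h and h/2 gives a 2x2 linear system in (L, a_0 * h^2), and solving it for L reproduces the classical Richardson weights -1/3 and 4/3:

>>> import numpy as np
>>> A = np.array([[1.0, 1.0],
...               [1.0, 0.5 ** 2]])  # rows: expansions of f(h) and f(h/2)
>>> np.allclose(np.linalg.inv(A)[0], [-1. / 3, 4. / 3])
True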

Examples

>>> import numpy as np
>>> import numdifftools as nd
>>> n = 3
>>> Ei = np.zeros((n,1))
>>> h = np.zeros((n,1))
>>> linfun = lambda i : np.linspace(0, np.pi/2., 2**(i+5)+1)
>>> for k in np.arange(n):
...    x = linfun(k)
...    h[k] = x[1]
...    Ei[k] = np.trapz(np.sin(x),x)
>>> En, err, step = nd.Richardson(step=1, order=1)(Ei, h)
>>> truErr = np.abs(En-1.)
>>> np.all(truErr < err)
True
>>> np.all(np.abs(Ei-1)<1e-3)
True
>>> np.allclose(En, 1)
True
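
For comparison, the classical one-step rule can be written out by hand for this trapezoidal example: the trapezoidal rule has a leading error of order h^2, so combining successive estimates as (4 * f(h/2) - f(h)) / 3 cancels that term. This is only a hand-rolled illustration, not the weights the class applies with step=1, order=1 above:

>>> R1 = (4 * Ei[1:] - Ei[:-1]) / 3
>>> np.all(np.abs(R1 - 1) < np.abs(Ei[1:] - 1))
True
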
__init__(step_ratio=2.0, step=1, order=1, num_terms=2)[source]

Methods

__init__([step_ratio, step, order, num_terms])

extrapolate(sequence, steps)
    Extrapolate sequence.

rule([sequence_length])
    Returns extrapolation rule.
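
A minimal usage sketch of rule, assuming it can be called without arguments as the signature above suggests, and interpreting the returned rule as the weights applied to successive sequence elements (an assumption, not documented behavior):

>>> from numdifftools.extrapolation import Richardson
>>> r = Richardson(step_ratio=2.0, step=1, order=1, num_terms=2)
>>> weights = r.rule()  # extrapolation rule; presumed combination weights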