5.1.1.3. numdifftools.core.Jacobian

class Jacobian(fun, step=None, method='central', order=2, n=1, **options)

Calculate Jacobian with finite difference approximation

Parameters
fun : function

function of one array fun(x, *args, **kwds)

step : float, array-like or StepGenerator object, optional

Defines the spacing used in the approximation. Default is MinStepGenerator(**step_options) if method in ['complex', 'multicomplex'], otherwise MaxStepGenerator(**step_options). The results are extrapolated if the StepGenerator generates more than 3 steps.

method : {'central', 'complex', 'multicomplex', 'forward', 'backward'}

Defines the method used in the approximation.

order : int, optional

Defines the order of the error term in the Taylor approximation used. For 'central' and 'complex' methods, it must be an even number.

richardson_terms : scalar integer, default 2

Number of terms used in the Richardson extrapolation.

full_output : bool, optional

If full_output is False, only the derivative is returned. If full_output is True, then (der, r) is returned, where der is the derivative and r is a Results object.

**step_options:

Options to pass on to the step generator used (MinStepGenerator or MaxStepGenerator).

Returns
jacob : array

Jacobian
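
As a quick illustration of the parameters above, the estimator can be configured with an explicit method, error order and a fixed step. The function, evaluation point and the values step=1e-5 and order=4 below are arbitrary choices for this sketch, not library defaults.

>>> import numpy as np
>>> import numdifftools as nd
>>> f = lambda x: np.array([x[0]**2 + x[1], np.sin(x[0]) * x[1]])
>>> jf = nd.Jacobian(f, step=1e-5, method='central', order=4)  # fixed step, 4th-order central differences
>>> np.allclose(jf([1.0, 2.0]), [[2.0, 1.0], [2.0 * np.cos(1.0), np.sin(1.0)]])
True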

Notes

Complex methods are usually the most accurate, provided the function to differentiate is analytic. The complex-step methods also require fewer steps than the other methods and can work very close to the support of a function. The complex-step derivative has truncation error O(steps**2) for n=1 and O(steps**4) for n larger, so the truncation error can be eliminated by choosing steps to be very small. In particular, the first-order complex-step derivative avoids the problem of round-off error with small steps because there is no subtraction. However, this method fails if fun(x) does not support complex numbers or involves non-analytic functions such as abs, max or min. Central difference methods are almost as accurate and have no restriction on the type of function. For this reason the 'central' method is the default, but sometimes one can only allow evaluation in the forward or backward direction.
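
A small sketch of this comparison: for an analytic function that accepts complex input, both the 'complex' and the default 'central' method recover the Jacobian to within np.allclose tolerance (the test function and evaluation point are arbitrary).

>>> import numpy as np
>>> import numdifftools as nd
>>> f = lambda x: np.exp(x[0]) * np.sin(x[1])  # analytic, works with complex arguments
>>> exact = [[np.exp(0.5) * np.sin(0.3), np.exp(0.5) * np.cos(0.3)]]
>>> np.allclose(nd.Jacobian(f, method='complex')([0.5, 0.3]), exact)
True
>>> np.allclose(nd.Jacobian(f, method='central')([0.5, 0.3]), exact)
True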

For all methods, one should be careful not to decrease the step size too much, because of round-off errors.

Higher-order approximation methods will generally be more accurate, but may also suffer more from numerical problems. First-order methods are usually not recommended.

If fun returns a 1d array, it returns a Jacobian. If a 2d array is returned by fun (e.g., with a value for each observation), it returns a 3d array with the Jacobian of each observation, with shape xk x nobs x xk. I.e., the Jacobian of the first observation would be [:, 0, :].

References

Ridout, M.S. (2009). Statistical applications of the complex-step method of numerical differentiation. The American Statistician, 63, 66-74.

Lai, K.-L., Crassidis, J.L., Cheng, Y., Kim, J. (2005). New complex step derivative approximations with application to second-order Kalman filtering. AIAA Guidance, Navigation and Control Conference, San Francisco, California, August 2005, AIAA-2005-5944.

Lyness, J. M., Moler, C. B. (1966). Vandermonde Systems and Numerical Differentiation. Numerische Mathematik.

Lyness, J. M., Moler, C. B. (1969). Generalized Romberg Methods for Integrals of Derivatives. Numerische Mathematik.

Examples

>>> import numpy as np
>>> import numdifftools as nd

#(nonlinear least squares)

>>> xdata = np.arange(0,1,0.1)
>>> ydata = 1+2*np.exp(0.75*xdata)
>>> fun = lambda c: (c[0]+c[1]*np.exp(c[2]*xdata) - ydata)**2
>>> np.allclose(fun([1, 2, 0.75]).shape,  (10,))
True
>>> jfun = nd.Jacobian(fun)
>>> val = jfun([1, 2, 0.75])
>>> np.allclose(val, np.zeros((10,3)))
True
>>> fun2 = lambda x : x[0]*x[1]*x[2]**2
>>> jfun2 = nd.Jacobian(fun2)
>>> np.allclose(jfun2([1.,2.,3.]), [[18., 9., 12.]])
True
>>> fun3 = lambda x : np.vstack((x[0]*x[1]*x[2]**2, x[0]*x[1]*x[2]))
>>> jfun3 = nd.Jacobian(fun3)
>>> np.allclose(jfun3([1., 2., 3.]), [[[18.], [9.], [12.]], [[6.], [3.], [2.]]])
True
>>> np.allclose(jfun3([4., 5., 6.]), [[[180.], [144.], [240.]], [[30.], [24.], [20.]]])
True
>>> np.allclose(jfun3(np.array([[1.,2.,3.]]).T), [[[18.], [9.], [12.]], [[6.], [3.], [2.]]])
True
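
The full_output option from the Parameters section can also be passed through **options. The sketch below only checks the derivative part of the returned tuple and does not assert the contents of the Results object:

>>> jfun4 = nd.Jacobian(lambda x: np.array([x[0]**2, x[0] * x[1]]), full_output=True)
>>> val, info = jfun4([1.0, 2.0])
>>> np.allclose(val, [[2.0, 0.0], [2.0, 1.0]])
True
>>> # info is the Results object mentioned under Parameters (error estimates etc.)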

Methods

__init__(fun, step=None, method='central', order=2, n=1, **options)

__call__(x, *args, **kwds)

Call self as a function.

set_richardson_rule(step_ratio[, num_terms])

Set Richardson extrapolation options.
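
A hedged sketch of set_richardson_rule, assuming the signature listed above; the step_ratio and num_terms values are arbitrary choices for illustration:

>>> import numpy as np
>>> import numdifftools as nd
>>> jfun = nd.Jacobian(lambda x: np.array([x[0] * x[1], x[1]**2]))
>>> jfun.set_richardson_rule(step_ratio=2.0, num_terms=3)  # tune the extrapolation rule
>>> np.allclose(jfun([1.0, 2.0]), [[2.0, 1.0], [0.0, 4.0]])
True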

Attributes

method

Defines the method used in the finite difference approximation.

method_order

Defines the leading order of the error term in the Richardson extrapolation method.

n

Order of the derivative.

order

Defines the order of the error term in the Taylor approximation used.

step

The step spacing(s) used in the approximation.
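
These attributes echo the configuration of an instance and can be inspected directly; a minimal sketch (the values shown are simply those passed to the constructor):

>>> import numdifftools as nd
>>> jfun = nd.Jacobian(lambda x: x**2, method='forward', order=2)
>>> jfun.method
'forward'
>>> jfun.order
2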