5.1.1.5. numdifftools.core.Hessian

class Hessian(f, step=None, method='central', order=None, **options)

Calculate Hessian with finite difference approximation

Parameters
fun : function
    function of one array, fun(x, *args, **kwds)
step : float, array-like or StepGenerator object, optional
    Defines the spacing used in the approximation.
    Default is MinStepGenerator(**step_options) if method is in ['complex', 'multicomplex'], otherwise MaxStepGenerator(**step_options).
    The results are extrapolated if the StepGenerator generates more than 3 steps.
method : {'central', 'complex', 'multicomplex', 'forward', 'backward'}
    defines the method used in the approximation
richardson_terms : scalar integer, default 2
    number of terms used in the Richardson extrapolation.
full_output : bool, optional
    If full_output is False, only the derivative is returned.
    If full_output is True, then (der, r) is returned, where der is the derivative and r is a Results object (see the sketch after this parameter list).
**step_options :
    options to pass on to the XXXStepGenerator used.
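
The sketch below is not part of the original docstring; it only illustrates how a StepGenerator and the full_output option described above might be combined, assuming MinStepGenerator is exposed as nd.MinStepGenerator and that the second return value is the Results object mentioned under full_output.

>>> import numpy as np
>>> import numdifftools as nd
>>> fun = lambda x: np.sum(x**2)  # simple quadratic; exact Hessian is 2*I
>>> Hfun = nd.Hessian(fun, step=nd.MinStepGenerator(), method='central', full_output=True)
>>> h, info = Hfun(np.array([1.0, 2.0]))  # (der, r) as described above
>>> np.allclose(h, 2 * np.eye(2))
True
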
Returns
hess : ndarray
    array of partial second derivatives, Hessian

See also

Derivative, Hessian

Notes

Complex methods are usually the most accurate, provided the function to differentiate is analytic. The complex-step methods also require fewer steps than the other methods and can work very close to the support of a function. The complex-step derivative has truncation error O(steps**2) for n=1 and O(steps**4) for larger n, so the truncation error can be eliminated by choosing very small steps. In particular, the first-order complex-step derivative avoids the round-off problems of small steps because there is no subtraction. However, this method fails if fun(x) does not support complex numbers or involves non-analytic functions such as abs, max or min. Central difference methods are almost as accurate and have no restriction on the type of function. For this reason the 'central' method is the default, but sometimes one can only allow evaluation in the forward or backward direction.

For all methods one should be careful not to decrease the step size too much, due to round-off errors.
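
As a rough illustration of the accuracy remarks above (this example is not in the original docstring, and the comparison tolerance is only an assumption), the 'complex' and 'central' methods can be compared on an analytic function:

>>> import numpy as np
>>> import numdifftools as nd
>>> fun = lambda x: np.exp(x[0] * x[1])  # analytic, so the complex method applies
>>> x0 = np.array([0.5, -0.3])
>>> h_complex = nd.Hessian(fun, method='complex')(x0)
>>> h_central = nd.Hessian(fun, method='central')(x0)
>>> np.allclose(h_complex, h_central, atol=1e-6)
True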

Computes the Hessian according to method as 'forward' (5.1), 'central' (5.2) and 'complex' (5.3) below:

(5.1)\[(f(x + d_j e_j + d_k e_k) + f(x) - f(x + d_j e_j) - f(x + d_k e_k)) / (d_j d_k)\]
(5.2)\[((f(x + d_j e_j + d_k e_k) - f(x + d_j e_j - d_k e_k)) - (f(x - d_j e_j + d_k e_k) - f(x - d_j e_j - d_k e_k))) / (4 d_j d_k)\]
(5.3)\[\mathrm{imag}(f(x + i d_j e_j + d_k e_k) - f(x + i d_j e_j - d_k e_k)) / (2 d_j d_k)\]

where \(e_j\) is a vector whose \(j\)'th element is one and the rest are zero, and \(d_j\) is a scalar spacing \(steps_j\).
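
To make the formulas concrete, the following sketch (not part of numdifftools; central_hessian and the single fixed step d are hypothetical) implements the 'central' rule (5.2) directly with one spacing d for every coordinate:

>>> import numpy as np
>>> def central_hessian(f, x, d=1e-3):
...     """Hessian via the central rule (5.2) with one fixed spacing d."""
...     x = np.asarray(x, dtype=float)
...     n = x.size
...     H = np.empty((n, n))
...     for j in range(n):
...         for k in range(n):
...             ej = np.zeros(n); ej[j] = d  # d_j e_j
...             ek = np.zeros(n); ek[k] = d  # d_k e_k
...             H[j, k] = (f(x + ej + ek) - f(x + ej - ek)
...                        - f(x - ej + ek) + f(x - ej - ek)) / (4 * d * d)
...     return H
>>> f = lambda x: x[0]**2 * x[1]  # exact Hessian at [1, 2] is [[4, 2], [2, 0]]
>>> np.allclose(central_hessian(f, [1.0, 2.0]), [[4.0, 2.0], [2.0, 0.0]])
True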

References

Ridout, M.S. (2009), Statistical applications of the complex-step method of numerical differentiation. The American Statistician, 63, 66-74.

Lai, K.-L., Crassidis, J.L., Cheng, Y., Kim, J. (2005), New complex step derivative approximations with application to second-order Kalman filtering, AIAA Guidance, Navigation and Control Conference, San Francisco, California, August 2005, AIAA-2005-5944.

Lyness, J.M., Moler, C.B. (1966), Vandermonde Systems and Numerical Differentiation. Numerische Mathematik.

Lyness, J.M., Moler, C.B. (1969), Generalized Romberg Methods for Integrals of Derivatives. Numerische Mathematik.

Examples

>>> import numpy as np
>>> import numdifftools as nd

# Rosenbrock function, minimized at [1,1]

>>> rosen = lambda x : (1.-x[0])**2 + 105*(x[1]-x[0]**2)**2
>>> Hfun = nd.Hessian(rosen)
>>> h = Hfun([1, 1])
>>> h
array([[ 842., -420.],
       [-420.,  210.]])

# cos(x-y), at (0,0)

>>> cos = np.cos
>>> fun = lambda xy : cos(xy[0]-xy[1])
>>> Hfun2 = nd.Hessian(fun)
>>> h2 = Hfun2([0, 0])
>>> h2
array([[-1.,  1.],
       [ 1., -1.]])
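
A further example (not from the original docstring) with three variables; the exact Hessian of this polynomial is known, so the comparison below is expected to hold under the default settings:

>>> fun3 = lambda x: x[0]*x[1] + x[1]*x[2]**2
>>> H3 = nd.Hessian(fun3)([1.0, 2.0, 3.0])
>>> np.allclose(H3, [[0., 1., 0.],
...                  [1., 0., 6.],
...                  [0., 6., 4.]])
True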

Methods

__init__(f, step=None, method='central', order=None, **options)

__call__(x, *args, **kwds)

Call self as a function.

set_richardson_rule(step_ratio[, num_terms])

Set Richardson extrapolation options
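
A small usage sketch for set_richardson_rule (not from the original page; the keyword names simply follow the signature listed above, and the all-zero Hessian of sum(sin(x)) at the origin is used only as an easy check):

>>> import numpy as np
>>> import numdifftools as nd
>>> Hfun = nd.Hessian(lambda x: np.sum(np.sin(x)))
>>> Hfun.set_richardson_rule(step_ratio=2.0, num_terms=3)
>>> np.allclose(Hfun(np.zeros(3)), 0.0)
True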

Attributes

method

Defines the method used in the finite difference approximation.

method_order

Defines the leading order of the error term in the Richardson extrapolation method.

n

Order of the derivative.

order

Defines the order of the error term in the Taylor approximation used.

step

The step spacing(s) used in the approximation.
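
For instance (a sketch not taken from the docstring), the method attribute simply reflects what was passed to the constructor:

>>> import numpy as np
>>> import numdifftools as nd
>>> Hfun = nd.Hessian(lambda x: np.sum(x**3), method='forward')
>>> Hfun.method
'forward'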