5.1.7.1. numdifftools.nd_scipy.Gradient

class Gradient(fun, step=None, method='central', order=2, bounds=(-inf, inf), sparsity=None)

Calculate the gradient with a finite difference approximation.

Parameters
fun : function

Function of one array, fun(x, *args, **kwds).

step : float, optional

Stepsize. If None, the optimal stepsize is used: x * _EPS for method=='complex', x * _EPS**(1/2) for method=='forward', and x * _EPS**(1/3) for method=='central'. (A sketch combining step and method follows this parameter list.)

method : {'central', 'complex', 'forward'}, optional

Defines the method used in the approximation.
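
Both arguments can be supplied explicitly. As a minimal sketch (the test function, step value, and tolerance below are illustrative assumptions, not documented defaults), a forward difference with a fixed step:

>>> import numpy as np
>>> import numdifftools.nd_scipy as nd
>>> f = lambda x: np.sum(np.sin(x))  # analytic gradient is cos(x), elementwise
>>> g_fwd = nd.Gradient(f, step=1e-6, method='forward')
>>> np.allclose(g_fwd([0.0, 0.5]), np.cos([0.0, 0.5]), atol=1e-4)
True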

See also

Hessian, Jacobian

Examples

>>> import numpy as np
>>> import numdifftools.nd_scipy as nd
>>> fun = lambda x: np.sum(x**2)
>>> dfun = nd.Gradient(fun)
>>> np.allclose(dfun([1,2,3]), [ 2.,  4.,  6.])
True
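
For a vector-valued function, the companion Jacobian class listed under "See also" plays the analogous role. A minimal sketch (the example function is an illustrative assumption, not part of the original docs):

>>> fun_v = lambda x: np.array([x[0]**2, x[0]*x[1]])  # two outputs, two inputs
>>> jac = nd.Jacobian(fun_v)
>>> np.allclose(jac([2., 3.]), [[4., 0.], [3., 2.]])
True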

# At [x, y] = [1, 1], compute the numerical gradient
# of the function sin(x-y) + y*exp(x).

>>> sin = np.sin; exp = np.exp
>>> z = lambda xy: sin(xy[0]-xy[1]) + xy[1]*exp(xy[0])
>>> dz = nd.Gradient(z)
>>> grad2 = dz([1, 1])
>>> np.allclose(grad2, [ 3.71828183,  1.71828183])
True

# At the global minimizer (1, 1) of the Rosenbrock function,
# compute the gradient. It should be essentially zero.

>>> rosen = lambda x: (1-x[0])**2 + 105.*(x[1]-x[0]**2)**2
>>> rd = nd.Gradient(rosen)
>>> grad3 = rd([1,1])
>>> np.allclose(grad3,[0, 0], atol=1e-7)
True
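
Because a Gradient instance is callable, it can be passed wherever a gradient callback is expected. As a minimal sketch (assuming scipy.optimize is available; the starting point and tolerance are illustrative assumptions), it can serve as the jac argument of scipy.optimize.minimize:

>>> from scipy.optimize import minimize
>>> res = minimize(rosen, x0=[2., 2.], jac=nd.Gradient(rosen))
>>> np.allclose(res.x, [1., 1.], atol=1e-4)
True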

__init__(fun, step=None, method='central', order=2, bounds=(-inf, inf), sparsity=None)

Methods

__init__(fun[, step, method, order, bounds, ...])