5.2. Numdifftools package details¶
5.2.1. Subpackages¶
- 5.2.1.1. numdifftools.tests package
- 5.2.1.1.1. Submodules
- 5.2.1.1.2. numdifftools.tests.conftest module
- 5.2.1.1.3. numdifftools.tests.test_extrapolation module
- 5.2.1.1.4. numdifftools.tests.test_hessian module
- 5.2.1.1.5. numdifftools.tests.test_limits module
- 5.2.1.1.6. numdifftools.tests.test_multicomplex module
- 5.2.1.1.7. numdifftools.tests.test_nd_algopy module
- 5.2.1.1.8. numdifftools.tests.test_numdifftools module
- 5.2.1.1.9. numdifftools.tests.test_numdifftools_docstrings module
- 5.2.1.1.10. Module contents
5.2.2. Submodules¶
5.2.2.1. Core module¶
Numerical differentiation functions: Derivative, Gradient, Jacobian, and Hessian.
Author: Per A. Brodtkorb. Created: 01.08.2008. Copyright: (c) pab 2008. Licence: New BSD.
dea3(v0, v1, v2, symmetric=False)[source]¶
Extrapolate a slowly convergent sequence
Parameters: - v0, v1, v2 : array-like
3 values of a convergent sequence to extrapolate
Returns: - result : array-like
extrapolated value
- abserr : array-like
absolute error estimate
See also
dea
Notes
DEA3 attempts to extrapolate nonlinearly to a better estimate of the sequence’s limiting value, thus improving the rate of convergence. The routine is based on the epsilon algorithm of P. Wynn, see [1].
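The simplest instance of this kind of nonlinear extrapolation is Aitken's delta-squared process, which corresponds to the first column of Wynn's epsilon table. A minimal NumPy sketch of that step (for illustration only, not numdifftools' implementation):

```python
import numpy as np

def aitken(v0, v1, v2):
    # Aitken's delta-squared step: cancel the dominant geometric
    # error term using three consecutive sequence values.
    d1, d2 = v1 - v0, v2 - v1
    return v2 - d2**2 / (d2 - d1)

# Partial sums of the geometric series 1 + 1/2 + 1/4 + ... -> limit 2
s = np.cumsum(1.0 / 2.0**np.arange(3))   # [1.0, 1.5, 1.75]
print(aitken(*s))                        # extrapolates to 2.0 exactly here
```

For a purely geometric error the step is exact, which is why three partial sums already recover the limit.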
References
[1] C. Brezinski and M. Redivo Zaglia (1991) “Extrapolation Methods. Theory and Practice”, North-Holland.
[2] C. Brezinski (1977) “Acceleration de la convergence en analyse numerique”, Lecture Notes in Math., vol. 584, Springer-Verlag, New York.
[3] E. J. Weniger (1989) “Nonlinear sequence transformations for the acceleration of convergence and the summation of divergent series”, Computer Physics Reports, Vol. 10, 189-371. http://arxiv.org/abs/math/0306302v1
Examples
# integrate sin(x) from 0 to pi/2
>>> import numpy as np
>>> import numdifftools as nd
>>> Ei = np.zeros(3)
>>> linfun = lambda i: np.linspace(0, np.pi/2., 2**(i+5)+1)
>>> for k in np.arange(3):
...     x = linfun(k)
...     Ei[k] = np.trapz(np.sin(x), x)
>>> [En, err] = nd.dea3(Ei[0], Ei[1], Ei[2])
>>> truErr = np.abs(En-1.)
>>> np.all(truErr < err)
True
>>> np.allclose(En, 1)
True
>>> np.all(np.abs(Ei-1) < 1e-3)
True
class Derivative(fun, step=None, method='central', order=2, n=1, full_output=False, **step_options)[source]¶
Bases: numdifftools.limits._Limit
Calculate n-th derivative with finite difference approximation
Parameters: - fun : function
function of one array fun(x, *args, **kwds)
- step : float, array-like or StepGenerator object, optional
    Defines the spacing used in the approximation. The default is
    MinStepGenerator(base_step=step, step_ratio=None, num_extrap=0, **step_options)
    if step is given or method is in [‘complex’, ‘multicomplex’], otherwise
    MaxStepGenerator(step_ratio=None, num_extrap=14, **step_options).
    The results are extrapolated if the StepGenerator generates more than 3 steps.
- method : {‘central’, ‘complex’, ‘multicomplex’, ‘forward’, ‘backward’}
defines the method used in the approximation
- order : int, optional
defines the order of the error term in the Taylor approximation used. For ‘central’ and ‘complex’ methods, it must be an even number.
- n : int, optional
Order of the derivative.
- full_output : bool, optional
If full_output is False, only the derivative is returned. If full_output is True, then (der, r) is returned, where der is the derivative and r is a Results object.
- **step_options:
options to pass on to the XXXStepGenerator used.
Returns: - der : ndarray
array of derivatives
Notes
Complex methods are usually the most accurate, provided the function to differentiate is analytic. The complex-step methods also require fewer steps than the other methods and can work very close to the support of a function. The complex-step derivative has truncation error O(steps**2) for n=1 and O(steps**4) for larger n, so the truncation error can be eliminated by choosing very small steps. In particular, the first-order complex-step derivative avoids the round-off problem of small steps because no subtraction is involved. However, this method fails if fun(x) does not support complex numbers or involves non-analytic functions such as abs, max or min. Central difference methods are almost as accurate and have no restriction on the type of function. For this reason the ‘central’ method is the default, but sometimes evaluation is only possible in the forward or backward direction.
For all methods one should be careful not to decrease the step size too much, due to round-off errors.
Higher-order approximation methods will generally be more accurate, but may also suffer more from numerical problems. First-order methods are usually not recommended.
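The round-off advantage of the complex step can be seen in a few lines of plain NumPy. This is a sketch of the underlying idea only, not of numdifftools' adaptive step machinery:

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-20):
    # First derivative via the complex step: imag(f(x + i*h)) / h.
    # No subtraction occurs, so h can be tiny without round-off loss.
    return np.imag(f(x + 1j * h)) / h

def central_derivative(f, x, h=1e-6):
    # Standard central difference: subject to round-off as h shrinks.
    return (f(x + h) - f(x - h)) / (2 * h)

x = 1.0
exact = np.exp(x)
err_complex = abs(complex_step_derivative(np.exp, x) - exact)
err_central = abs(central_derivative(np.exp, x) - exact)
print(err_complex <= err_central)   # complex step is at least as accurate
```

Note that this trick requires fun to accept complex arguments, which is exactly the restriction described above.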
References
- Ridout, M.S. (2009) Statistical applications of the complex-step method of numerical differentiation. The American Statistician, 63, 66-74.
- K.-L. Lai, J.L. Crassidis, Y. Cheng, J. Kim (2005), New complex step derivative approximations with application to second-order Kalman filtering, AIAA Guidance, Navigation and Control Conference, San Francisco, California, August 2005, AIAA-2005-5944.
- Lyness, J. M., Moler, C. B. (1966). Vandermonde Systems and Numerical Differentiation. Numerische Mathematik.
- Lyness, J. M., Moler, C. B. (1969). Generalized Romberg Methods for Integrals of Derivatives. Numerische Mathematik.
Examples
>>> import numpy as np
>>> import numdifftools as nd

# 1st derivative of exp(x), at x == 1
>>> fd = nd.Derivative(np.exp)
>>> np.allclose(fd(1), 2.71828183)
True

>>> d2 = fd([1, 2])
>>> np.allclose(d2, [ 2.71828183, 7.3890561 ])
True

>>> def f(x):
...     return x**3 + x**2
>>> df = nd.Derivative(f)
>>> np.allclose(df(1), 5)
True
>>> ddf = nd.Derivative(f, n=2)
>>> np.allclose(ddf(1), 8)
True
Methods

__call__(x, *args, **kwds)¶
    x : array_like
        value at which the function derivative is evaluated
    args : tuple
        Arguments for function fun.
    kwds : dict
        Keyword arguments for function fun.

class info¶
Bases: tuple

    count()¶
    error_estimate¶
        Alias for field number 0
    final_step¶
        Alias for field number 1
    index¶
        Alias for field number 2

n¶
step¶
class Jacobian(fun, step=None, method='central', order=2, n=1, full_output=False, **step_options)[source]¶
Bases: numdifftools.core.Derivative
Calculate Jacobian with finite difference approximation
Parameters: - fun : function
function of one array fun(x, *args, **kwds)
- step : float, array-like or StepGenerator object, optional
    Defines the spacing used in the approximation. The default is
    MinStepGenerator(base_step=step, step_ratio=None, num_extrap=0, **step_options)
    if step is given or method is in [‘complex’, ‘multicomplex’], otherwise
    MaxStepGenerator(step_ratio=None, num_extrap=14, **step_options).
    The results are extrapolated if the StepGenerator generates more than 3 steps.
- method : {‘central’, ‘complex’, ‘multicomplex’, ‘forward’, ‘backward’}
defines the method used in the approximation
- order : int, optional
defines the order of the error term in the Taylor approximation used. For ‘central’ and ‘complex’ methods, it must be an even number.
- full_output : bool, optional
If full_output is False, only the derivative is returned. If full_output is True, then (der, r) is returned, where der is the derivative and r is a Results object.
- **step_options:
options to pass on to the XXXStepGenerator used.
Returns: - jacob : array
Jacobian
Notes
Complex methods are usually the most accurate, provided the function to differentiate is analytic. The complex-step methods also require fewer steps than the other methods and can work very close to the support of a function. The complex-step derivative has truncation error O(steps**2) for n=1 and O(steps**4) for larger n, so the truncation error can be eliminated by choosing very small steps. In particular, the first-order complex-step derivative avoids the round-off problem of small steps because no subtraction is involved. However, this method fails if fun(x) does not support complex numbers or involves non-analytic functions such as abs, max or min. Central difference methods are almost as accurate and have no restriction on the type of function. For this reason the ‘central’ method is the default, but sometimes evaluation is only possible in the forward or backward direction.
For all methods one should be careful not to decrease the step size too much, due to round-off errors.
Higher-order approximation methods will generally be more accurate, but may also suffer more from numerical problems. First-order methods are usually not recommended.
If fun returns a 1d array, a Jacobian is returned. If fun returns a 2d array (e.g., with a value for each observation), a 3d array is returned containing the Jacobian of each observation, with shape xk x nobs x xk; i.e., the Jacobian of the first observation is [:, 0, :].
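As a sketch of what a Jacobian assembles (one column of partial derivatives per element of x), here is a bare forward-difference version in plain NumPy. numdifftools layers step optimization and Richardson extrapolation on top of this basic idea; the helper name is hypothetical:

```python
import numpy as np

def jacobian_fd(f, x, h=1e-7):
    # One-sided (forward) difference Jacobian: column k holds d f / d x_k.
    x = np.asarray(x, dtype=float)
    f0 = np.atleast_1d(f(x))
    jac = np.empty((f0.size, x.size))
    for k in range(x.size):
        xk = x.copy()
        xk[k] += h               # perturb one coordinate at a time
        jac[:, k] = (np.atleast_1d(f(xk)) - f0) / h
    return jac

fun2 = lambda x: x[0] * x[1] * x[2]**2
print(jacobian_fd(fun2, [1., 2., 3.]))   # close to [[18., 9., 12.]]
```

The forward difference is only first-order accurate, which is why the package defaults to the central method with extrapolation.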
References
- Ridout, M.S. (2009) Statistical applications of the complex-step method of numerical differentiation. The American Statistician, 63, 66-74.
- K.-L. Lai, J.L. Crassidis, Y. Cheng, J. Kim (2005), New complex step derivative approximations with application to second-order Kalman filtering, AIAA Guidance, Navigation and Control Conference, San Francisco, California, August 2005, AIAA-2005-5944.
- Lyness, J. M., Moler, C. B. (1966). Vandermonde Systems and Numerical Differentiation. Numerische Mathematik.
- Lyness, J. M., Moler, C. B. (1969). Generalized Romberg Methods for Integrals of Derivatives. Numerische Mathematik.
Examples
>>> import numpy as np
>>> import numdifftools as nd

# (nonlinear least squares)
>>> xdata = np.arange(0, 1, 0.1)
>>> ydata = 1 + 2*np.exp(0.75*xdata)
>>> fun = lambda c: (c[0] + c[1]*np.exp(c[2]*xdata) - ydata)**2
>>> np.allclose(fun([1, 2, 0.75]).shape, (10,))
True

>>> jfun = nd.Jacobian(fun)
>>> val = jfun([1, 2, 0.75])
>>> np.allclose(val, np.zeros((10, 3)))
True

>>> fun2 = lambda x: x[0]*x[1]*x[2]**2
>>> jfun2 = nd.Jacobian(fun2)
>>> np.allclose(jfun2([1., 2., 3.]), [[18., 9., 12.]])
True

>>> fun3 = lambda x: np.vstack((x[0]*x[1]*x[2]**2, x[0]*x[1]*x[2]))
>>> jfun3 = nd.Jacobian(fun3)
>>> np.allclose(jfun3([1., 2., 3.]), [[18., 9., 12.], [6., 3., 2.]])
True
>>> np.allclose(jfun3([4., 5., 6.]), [[180., 144., 240.], [30., 24., 20.]])
True
>>> np.allclose(jfun3(np.array([[1., 2., 3.]]).T), [[18., 9., 12.],
...                                                 [6., 3., 2.]])
True
Methods

__call__(x, *args, **kwds)¶
    x : array_like
        value at which the function derivative is evaluated
    args : tuple
        Arguments for function fun.
    kwds : dict
        Keyword arguments for function fun.

class info¶
Bases: tuple

    count()¶
    error_estimate¶
        Alias for field number 0
    final_step¶
        Alias for field number 1
    index¶
        Alias for field number 2

n¶
set_richardson_rule(self, step_ratio, num_terms=2)¶
step¶
class Gradient(fun, step=None, method='central', order=2, n=1, full_output=False, **step_options)[source]¶
Bases: numdifftools.core.Jacobian
Calculate Gradient with finite difference approximation
Parameters: - fun : function
function of one array fun(x, *args, **kwds)
- step : float, array-like or StepGenerator object, optional
    Defines the spacing used in the approximation. The default is
    MinStepGenerator(base_step=step, step_ratio=None, num_extrap=0, **step_options)
    if step is given or method is in [‘complex’, ‘multicomplex’], otherwise
    MaxStepGenerator(step_ratio=None, num_extrap=14, **step_options).
    The results are extrapolated if the StepGenerator generates more than 3 steps.
- method : {‘central’, ‘complex’, ‘multicomplex’, ‘forward’, ‘backward’}
defines the method used in the approximation
- order : int, optional
defines the order of the error term in the Taylor approximation used. For ‘central’ and ‘complex’ methods, it must be an even number.
- full_output : bool, optional
If full_output is False, only the derivative is returned. If full_output is True, then (der, r) is returned, where der is the derivative and r is a Results object.
- **step_options:
options to pass on to the XXXStepGenerator used.
Returns: - grad : array
gradient
Notes
Complex methods are usually the most accurate, provided the function to differentiate is analytic. The complex-step methods also require fewer steps than the other methods and can work very close to the support of a function. The complex-step derivative has truncation error O(steps**2) for n=1 and O(steps**4) for larger n, so the truncation error can be eliminated by choosing very small steps. In particular, the first-order complex-step derivative avoids the round-off problem of small steps because no subtraction is involved. However, this method fails if fun(x) does not support complex numbers or involves non-analytic functions such as abs, max or min. Central difference methods are almost as accurate and have no restriction on the type of function. For this reason the ‘central’ method is the default, but sometimes evaluation is only possible in the forward or backward direction.
For all methods one should be careful not to decrease the step size too much, due to round-off errors.
Higher-order approximation methods will generally be more accurate, but may also suffer more from numerical problems. First-order methods are usually not recommended.
If x0 is an n x m array, then fun is assumed to be a function of n * m variables.
References
- Ridout, M.S. (2009) Statistical applications of the complex-step method of numerical differentiation. The American Statistician, 63, 66-74.
- K.-L. Lai, J.L. Crassidis, Y. Cheng, J. Kim (2005), New complex step derivative approximations with application to second-order Kalman filtering, AIAA Guidance, Navigation and Control Conference, San Francisco, California, August 2005, AIAA-2005-5944.
- Lyness, J. M., Moler, C. B. (1966). Vandermonde Systems and Numerical Differentiation. Numerische Mathematik.
- Lyness, J. M., Moler, C. B. (1969). Generalized Romberg Methods for Integrals of Derivatives. Numerische Mathematik.
Examples
>>> import numpy as np
>>> import numdifftools as nd
>>> fun = lambda x: np.sum(x**2)
>>> dfun = nd.Gradient(fun)
>>> np.allclose(dfun([1, 2, 3]), [2., 4., 6.])
True

# At [x, y] = [1, 1], compute the numerical gradient
# of the function sin(x-y) + y*exp(x)
>>> sin = np.sin; exp = np.exp
>>> x, y = 1, 1
>>> z = lambda xy: sin(xy[0]-xy[1]) + xy[1]*exp(xy[0])
>>> dz = nd.Gradient(z)
>>> dz_dx, dz_dy = dz([x, y])
>>> np.allclose([dz_dx, dz_dy],
...             [3.7182818284590686, 1.7182818284590162])
True

# At the global minimizer (1, 1) of the Rosenbrock function,
# compute the gradient. It should be essentially zero.
>>> rosen = lambda x: (1-x[0])**2 + 105.*(x[1]-x[0]**2)**2
>>> grad_rosen = nd.Gradient(rosen)
>>> df_dx, df_dy = grad_rosen([x, y])
>>> np.allclose([df_dx, df_dy], [0, 0])
True
Methods

__call__(x, *args, **kwds)¶
    x : array_like
        value at which the function derivative is evaluated
    args : tuple
        Arguments for function fun.
    kwds : dict
        Keyword arguments for function fun.

class info¶
Bases: tuple

    count()¶
    error_estimate¶
        Alias for field number 0
    final_step¶
        Alias for field number 1
    index¶
        Alias for field number 2

n¶
set_richardson_rule(self, step_ratio, num_terms=2)¶
step¶
class Hessian(f, step=None, method='central', order=2, full_output=False, **step_options)[source]¶
Bases: numdifftools.core.Hessdiag
Calculate Hessian with finite difference approximation
Parameters: - fun : function
function of one array fun(x, *args, **kwds)
- step : float, array-like or StepGenerator object, optional
    Defines the spacing used in the approximation. The default is
    MinStepGenerator(base_step=step, step_ratio=None, num_extrap=0, **step_options)
    if step is given or method is in [‘complex’, ‘multicomplex’], otherwise
    MaxStepGenerator(step_ratio=None, num_extrap=14, **step_options).
    The results are extrapolated if the StepGenerator generates more than 3 steps.
- method : {‘central’, ‘complex’, ‘multicomplex’, ‘forward’, ‘backward’}
defines the method used in the approximation
- full_output : bool, optional
If full_output is False, only the derivative is returned. If full_output is True, then (der, r) is returned, where der is the derivative and r is a Results object.
- **step_options:
options to pass on to the XXXStepGenerator used.
Returns: - hess : ndarray
array of partial second derivatives, Hessian
Notes
Complex methods are usually the most accurate, provided the function to differentiate is analytic. The complex-step methods also require fewer steps than the other methods and can work very close to the support of a function. The complex-step derivative has truncation error O(steps**2) for n=1 and O(steps**4) for larger n, so the truncation error can be eliminated by choosing very small steps. In particular, the first-order complex-step derivative avoids the round-off problem of small steps because no subtraction is involved. However, this method fails if fun(x) does not support complex numbers or involves non-analytic functions such as abs, max or min. Central difference methods are almost as accurate and have no restriction on the type of function. For this reason the ‘central’ method is the default, but sometimes evaluation is only possible in the forward or backward direction.
For all methods one should be careful not to decrease the step size too much, due to round-off errors.
Computes the Hessian according to method as: ‘forward’ (1), ‘central’ (2) and ‘complex’ (3):
‘forward’:
\[(f(x + d_j e_j + d_k e_k) - f(x + d_j e_j)) / (d_j d_k)\]
‘central’:
\[((f(x + d_j e_j + d_k e_k) - f(x + d_j e_j - d_k e_k)) - (f(x - d_j e_j + d_k e_k) - f(x - d_j e_j - d_k e_k))) / (4 d_j d_k)\]
‘complex’:
\[\mathrm{imag}(f(x + i d_j e_j + d_k e_k) - f(x + i d_j e_j - d_k e_k)) / (2 d_j d_k)\]
where \(e_j\) is a vector with element \(j\) equal to one and the rest zero, and \(d_j\) is a scalar spacing \(steps_j\).
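The central cross-difference formula can be checked directly in plain NumPy. This is a sketch with a single spacing d for both directions; the helper name is hypothetical and not part of the package:

```python
import numpy as np

def hessian_element_central(f, x, j, k, d=1e-4):
    # Central cross-difference for H[j, k], following the 'central'
    # formula: four evaluations around x, combined and scaled by 4*d*d.
    x = np.asarray(x, dtype=float)
    ej = np.zeros_like(x); ej[j] = d
    ek = np.zeros_like(x); ek[k] = d
    return (f(x + ej + ek) - f(x + ej - ek)
            - f(x - ej + ek) + f(x - ej - ek)) / (4.0 * d * d)

f = lambda x: x[0] * x[1]**2              # d^2 f / dx0 dx1 = 2*x1
print(hessian_element_central(f, [1.0, 3.0], 0, 1))   # close to 6.0
```

With j == k the same stencil degenerates to an ordinary second difference with spacing 2d, which is why the diagonal is handled by Hessdiag.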
References
- Ridout, M.S. (2009) Statistical applications of the complex-step method of numerical differentiation. The American Statistician, 63, 66-74.
- K.-L. Lai, J.L. Crassidis, Y. Cheng, J. Kim (2005), New complex step derivative approximations with application to second-order Kalman filtering, AIAA Guidance, Navigation and Control Conference, San Francisco, California, August 2005, AIAA-2005-5944.
- Lyness, J. M., Moler, C. B. (1966). Vandermonde Systems and Numerical Differentiation. Numerische Mathematik.
- Lyness, J. M., Moler, C. B. (1969). Generalized Romberg Methods for Integrals of Derivatives. Numerische Mathematik.
Examples
>>> import numpy as np
>>> import numdifftools as nd

# Rosenbrock function, minimized at [1, 1]
>>> rosen = lambda x: (1.-x[0])**2 + 105*(x[1]-x[0]**2)**2
>>> Hfun = nd.Hessian(rosen)
>>> h = Hfun([1, 1])
>>> h
array([[ 842., -420.],
       [-420.,  210.]])

# cos(x-y), at (0, 0)
>>> cos = np.cos
>>> fun = lambda xy: cos(xy[0]-xy[1])
>>> Hfun2 = nd.Hessian(fun)
>>> h2 = Hfun2([0, 0])
>>> h2
array([[-1.,  1.],
       [ 1., -1.]])
Methods

__call__(x, *args, **kwds)¶
    x : array_like
        value at which the function derivative is evaluated
    args : tuple
        Arguments for function fun.
    kwds : dict
        Keyword arguments for function fun.

class info¶
Bases: tuple

    count()¶
    error_estimate¶
        Alias for field number 0
    final_step¶
        Alias for field number 1
    index¶
        Alias for field number 2

n¶
order¶
set_richardson_rule(self, step_ratio, num_terms=2)¶
step¶
class Hessdiag(f, step=None, method='central', order=2, full_output=False, **step_options)[source]¶
Bases: numdifftools.core.Derivative
Calculate Hessian diagonal with finite difference approximation
Parameters: - fun : function
function of one array fun(x, *args, **kwds)
- step : float, array-like or StepGenerator object, optional
    Defines the spacing used in the approximation. The default is
    MinStepGenerator(base_step=step, step_ratio=None, num_extrap=0, **step_options)
    if step is given or method is in [‘complex’, ‘multicomplex’], otherwise
    MaxStepGenerator(step_ratio=None, num_extrap=14, **step_options).
    The results are extrapolated if the StepGenerator generates more than 3 steps.
- method : {‘central’, ‘complex’, ‘multicomplex’, ‘forward’, ‘backward’}
    defines the method used in the approximation
- order : int, optional
    defines the order of the error term in the Taylor approximation used. For ‘central’ and ‘complex’ methods, it must be an even number.
- full_output : bool, optional
If full_output is False, only the derivative is returned. If full_output is True, then (der, r) is returned, where der is the derivative and r is a Results object.
- **step_options:
options to pass on to the XXXStepGenerator used.
Returns: - hessdiag : array
hessian diagonal
Notes
Complex methods are usually the most accurate, provided the function to differentiate is analytic. The complex-step methods also require fewer steps than the other methods and can work very close to the support of a function. The complex-step derivative has truncation error O(steps**2) for n=1 and O(steps**4) for larger n, so the truncation error can be eliminated by choosing very small steps. In particular, the first-order complex-step derivative avoids the round-off problem of small steps because no subtraction is involved. However, this method fails if fun(x) does not support complex numbers or involves non-analytic functions such as abs, max or min. Central difference methods are almost as accurate and have no restriction on the type of function. For this reason the ‘central’ method is the default, but sometimes evaluation is only possible in the forward or backward direction.
For all methods one should be careful not to decrease the step size too much, due to round-off errors.
Higher-order approximation methods will generally be more accurate, but may also suffer more from numerical problems. First-order methods are usually not recommended.
References
- Ridout, M.S. (2009) Statistical applications of the complex-step method of numerical differentiation. The American Statistician, 63, 66-74.
- K.-L. Lai, J.L. Crassidis, Y. Cheng, J. Kim (2005), New complex step derivative approximations with application to second-order Kalman filtering, AIAA Guidance, Navigation and Control Conference, San Francisco, California, August 2005, AIAA-2005-5944.
- Lyness, J. M., Moler, C. B. (1966). Vandermonde Systems and Numerical Differentiation. Numerische Mathematik.
- Lyness, J. M., Moler, C. B. (1969). Generalized Romberg Methods for Integrals of Derivatives. Numerische Mathematik.
Examples
>>> import numpy as np
>>> import numdifftools as nd
>>> fun = lambda x: x[0] + x[1]**2 + x[2]**3
>>> Hfun = nd.Hessdiag(fun, full_output=True)
>>> hd, info = Hfun([1, 2, 3])
>>> np.allclose(hd, [0., 2., 18.])
True
>>> np.all(info.error_estimate < 1e-11)
True
Methods

__call__(x, *args, **kwds)¶
    x : array_like
        value at which the function derivative is evaluated
    args : tuple
        Arguments for function fun.
    kwds : dict
        Keyword arguments for function fun.

class info¶
Bases: tuple

    count()¶
    error_estimate¶
        Alias for field number 0
    final_step¶
        Alias for field number 1
    index¶
        Alias for field number 2

n¶
set_richardson_rule(self, step_ratio, num_terms=2)¶
step¶
class MinStepGenerator(base_step=None, step_ratio=2.0, num_steps=None, step_nom=None, offset=0, num_extrap=0, use_exact_steps=True, check_num_steps=True, scale=None)[source]¶
Bases: object
Generates a sequence of steps
where
    steps = step_nom * base_step * step_ratio ** (i + offset)
for i = num_steps-1, ..., 1, 0.
Parameters: - base_step : float, array-like, optional
    Defines the minimum step. If None, the value is set to EPS**(1/scale).
- step_ratio : real scalar, optional, default 2
    Ratio between sequential steps generated. Note: ratio > 1. If None, then step_ratio is 2 for n=1, otherwise 1.6.
- num_steps : scalar integer, optional, default min_num_steps + num_extrap
    defines the number of steps generated. It should be larger than min_num_steps = (n + order - 1) / fact, where fact is 1, 2 or 4 depending on the differentiation method used.
- step_nom : default maximum(log(1+|x|), 1)
    Nominal step, where x is supplied at runtime through the __call__ method.
- offset : real scalar, optional, default 0
    offset to the base step
- num_extrap : scalar integer, default 0
    number of points used for extrapolation
- check_num_steps : boolean, default True
    If True, make sure num_steps is larger than the minimum required number of steps.
- use_exact_steps : boolean, default True
    If True, make sure exact steps are generated.
- scale : real scalar, optional
    scale used in base step. If not None, it overrides the default computed with the default_scale function.
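The documented step rule can be sketched in plain NumPy. Parameter names mirror the docstring, but this is an illustration only: the real generator additionally enforces the minimum number of steps and, with use_exact_steps, rounds to exactly representable steps:

```python
import numpy as np

EPS = np.finfo(float).eps

def min_step_sequence(x, base_step=None, step_ratio=2.0, num_steps=7,
                      offset=0, scale=2.5):
    # steps = step_nom * base_step * step_ratio ** (i + offset)
    # for i = num_steps-1, ..., 1, 0 (largest step first).
    step_nom = max(np.log1p(abs(x)), 1.0)
    if base_step is None:
        base_step = EPS ** (1.0 / scale)   # the minimum step
    return [step_nom * base_step * step_ratio ** (i + offset)
            for i in range(num_steps - 1, -1, -1)]

steps = min_step_sequence(1.0, num_steps=5)
print(steps[0] > steps[-1])   # sequence decreases towards the minimum step
```

The decreasing sequence is what feeds the Richardson extrapolation described elsewhere in this module.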
base_step¶
min_num_steps¶
num_steps¶
scale¶
step_nom¶
step_ratio¶
class MaxStepGenerator(base_step=2.0, step_ratio=2.0, num_steps=15, step_nom=None, offset=0, num_extrap=0, use_exact_steps=False, check_num_steps=True, scale=500)[source]¶
Bases: numdifftools.step_generators.MinStepGenerator
Generates a sequence of steps
where
    steps = step_nom * base_step * step_ratio ** (-i + offset)
for i = 0, 1, ..., num_steps-1.
Parameters: - base_step : float, array-like, default 2.0
    Defines the maximum step. If None, the value is set to EPS**(1/scale).
- step_ratio : real scalar, optional, default 2
    Ratio between sequential steps generated. Note: ratio > 1. If None, then step_ratio is 2 for n=1, otherwise 1.6.
- num_steps : scalar integer, optional, default min_num_steps + num_extrap
    defines the number of steps generated. It should be larger than min_num_steps = (n + order - 1) / fact, where fact is 1, 2 or 4 depending on the differentiation method used.
- step_nom : default maximum(log(1+|x|), 1)
    Nominal step, where x is supplied at runtime through the __call__ method.
- offset : real scalar, optional, default 0
    offset to the base step
- num_extrap : scalar integer, default 0
    number of points used for extrapolation
- check_num_steps : boolean, default True
    If True, make sure num_steps is larger than the minimum required number of steps.
- use_exact_steps : boolean, default False
    If True, make sure exact steps are generated.
- scale : real scalar, default 500
    scale used in base step.
base_step¶
min_num_steps¶
num_steps¶
scale¶
step_generator_function(self, x, method='forward', n=1, order=2)¶
step_nom¶
step_ratio¶
class Richardson(step_ratio=2.0, step=1, order=1, num_terms=2)[source]¶
Bases: object
Extrapolates a sequence with Richardson's method
Notes
Suppose you have a series expansion of the form
L = f(h) + a0 * h^p_0 + a1 * h^p_1 + a2 * h^p_2 + …
where p_i = order + step * i, and f(h) -> L as h -> 0, but f(0) != L.
If we evaluate the right hand side for different stepsizes h we can fit a polynomial to that sequence of approximations. This is exactly what this class does.
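A single Richardson step can be sketched in plain NumPy for a first-order forward difference (p_0 = 1, so one combination cancels the leading error term). This is a sketch of the principle, not the multi-step polynomial fit the class performs:

```python
import numpy as np

def forward_diff(f, x, h):
    # A(h) = L + a0*h + a1*h**2 + ...  (first-order accurate)
    return (f(x + h) - f(x)) / h

def richardson_once(f, x, h, p=1, ratio=2.0):
    # Combine estimates at h and h/ratio to cancel the O(h**p) term:
    #   L ~ (ratio**p * A(h/ratio) - A(h)) / (ratio**p - 1)
    a_h = forward_diff(f, x, h)
    a_h2 = forward_diff(f, x, h / ratio)
    return (ratio**p * a_h2 - a_h) / (ratio**p - 1)

exact = np.exp(1.0)
plain = forward_diff(np.exp, 1.0, 1e-3)
extrap = richardson_once(np.exp, 1.0, 1e-3)
print(abs(extrap - exact) < abs(plain - exact))   # extrapolation wins
```

Fitting a polynomial through estimates at many step sizes, as the class does, generalizes this single cancellation to several error terms at once.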
Examples
>>> import numpy as np
>>> import numdifftools as nd
>>> n = 3
>>> Ei = np.zeros((n, 1))
>>> h = np.zeros((n, 1))
>>> linfun = lambda i: np.linspace(0, np.pi/2., 2**(i+5)+1)
>>> for k in np.arange(n):
...     x = linfun(k)
...     h[k] = x[1]
...     Ei[k] = np.trapz(np.sin(x), x)
>>> En, err, step = nd.Richardson(step=1, order=1)(Ei, h)
>>> truErr = np.abs(En-1.)
>>> np.all(truErr < err)
True
>>> np.all(np.abs(Ei-1) < 1e-3)
True
>>> np.allclose(En, 1)
True
directionaldiff(f, x0, vec, **options)[source]¶
Return directional derivative of a function of n variables
Parameters: - f: function
analytical function to differentiate.
- x0: array
vector location at which to differentiate ‘f’. If x0 is an nXm array, then ‘f’ is assumed to be a function of n*m variables.
- vec: array
vector defining the line along which to take the derivative. It should be the same size as x0, but need not be a vector of unit length.
- **options:
optional arguments to pass on to Derivative.
Returns: - dder: scalar
estimate of the first derivative of ‘f’ in the specified direction.
Examples
At the global minimizer (1,1) of the Rosenbrock function, compute the directional derivative in the direction [1 2]
>>> import numpy as np
>>> import numdifftools as nd
>>> vec = np.r_[1, 2]
>>> rosen = lambda x: (1-x[0])**2 + 105*(x[1]-x[0]**2)**2
>>> dd, info = nd.directionaldiff(rosen, [1, 1], vec, full_output=True)
>>> np.allclose(dd, 0)
True
>>> np.abs(info.error_estimate) < 1e-14
True
5.2.2.2. Extrapolation module¶
Created on 28. aug. 2015
@author: pab
class Dea(limexp=3)[source]¶
Bases: object
Extrapolate a slowly convergent sequence
LIMEXP is the maximum number of elements the epsilon table data can contain. The epsilon table is stored in the first (LIMEXP+2) entries of EPSTAB.
Notes
List of major variables:
- E0, E1, E2, E3 - DOUBLE PRECISION
    The 4 elements on which the computation of a new element in the epsilon table is based.
- NRES - INTEGER
    Number of extrapolation results actually generated by the epsilon algorithm in prior calls to the routine.
- NEWELM - INTEGER
    Number of elements to be computed in the new diagonal of the epsilon table. The condensed epsilon table is computed. Only those elements needed for the computation of the next diagonal are preserved.
- RES - DOUBLE PRECISION
    New element in the new diagonal of the epsilon table.
- ERROR - DOUBLE PRECISION
    An estimate of the absolute error of RES. The routine decides whether RESULT=RES or RESULT=SVALUE by comparing ERROR with abserr from the previous call.
- RES3LA - DOUBLE PRECISION
    Vector of DIMENSION 3 containing at most the last 3 results.
limexp¶
class EpsAlg(limexp=3)[source]¶
Bases: object
Extrapolate a slowly convergent sequence
This implementation is from [1].
References
[1] E. J. Weniger (1989) “Nonlinear sequence transformations for the acceleration of convergence and the summation of divergent series” Computer Physics Reports Vol. 10, 189 - 371 http://arxiv.org/abs/math/0306302v1
class Richardson(step_ratio=2.0, step=1, order=1, num_terms=2)[source]¶
Bases: object
Extrapolates a sequence with Richardson's method
Notes
Suppose you have a series expansion of the form
L = f(h) + a0 * h^p_0 + a1 * h^p_1 + a2 * h^p_2 + …
where p_i = order + step * i, and f(h) -> L as h -> 0, but f(0) != L.
If we evaluate the right hand side for different stepsizes h we can fit a polynomial to that sequence of approximations. This is exactly what this class does.
Examples
>>> import numpy as np
>>> import numdifftools as nd
>>> n = 3
>>> Ei = np.zeros((n, 1))
>>> h = np.zeros((n, 1))
>>> linfun = lambda i: np.linspace(0, np.pi/2., 2**(i+5)+1)
>>> for k in np.arange(n):
...     x = linfun(k)
...     h[k] = x[1]
...     Ei[k] = np.trapz(np.sin(x), x)
>>> En, err, step = nd.Richardson(step=1, order=1)(Ei, h)
>>> truErr = np.abs(En-1.)
>>> np.all(truErr < err)
True
>>> np.all(np.abs(Ei-1) < 1e-3)
True
>>> np.allclose(En, 1)
True
convolve(sequence, rule, **kwds)[source]¶
Wrapper around scipy.ndimage.convolve1d that allows complex input.
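The technique behind such a wrapper is to filter the real and imaginary parts separately and recombine them. A dependency-free sketch, using numpy.convolve as a stand-in for scipy.ndimage.convolve1d (an assumption made to keep the example self-contained; the boundary handling of the two routines differs):

```python
import numpy as np

def convolve_complex(sequence, rule):
    # Convolve real and imaginary parts separately and recombine,
    # mirroring how a complex-capable wrapper around a real-only
    # convolution routine can work.
    seq = np.asarray(sequence)
    if np.iscomplexobj(seq):
        return (np.convolve(seq.real, rule, mode='same')
                + 1j * np.convolve(seq.imag, rule, mode='same'))
    return np.convolve(seq, rule, mode='same')

z = np.array([1 + 1j, 2 + 2j, 3 + 3j])
print(convolve_complex(z, [0, 1, 0]))   # identity kernel returns z unchanged
```

This works because convolution is linear, so it commutes with splitting a complex array into its real and imaginary components.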
dea3(v0, v1, v2, symmetric=False)[source]¶
Extrapolate a slowly convergent sequence
Parameters: - v0, v1, v2 : array-like
3 values of a convergent sequence to extrapolate
Returns: - result : array-like
extrapolated value
- abserr : array-like
absolute error estimate
See also
dea
Notes
DEA3 attempts to extrapolate nonlinearly to a better estimate of the sequence’s limiting value, thus improving the rate of convergence. The routine is based on the epsilon algorithm of P. Wynn, see [1].
References
[1] C. Brezinski and M. Redivo Zaglia (1991) “Extrapolation Methods. Theory and Practice”, North-Holland.
[2] C. Brezinski (1977) “Acceleration de la convergence en analyse numerique”, Lecture Notes in Math., vol. 584, Springer-Verlag, New York.
[3] E. J. Weniger (1989) “Nonlinear sequence transformations for the acceleration of convergence and the summation of divergent series”, Computer Physics Reports, Vol. 10, 189-371. http://arxiv.org/abs/math/0306302v1
Examples
# integrate sin(x) from 0 to pi/2
>>> import numpy as np
>>> import numdifftools as nd
>>> Ei = np.zeros(3)
>>> linfun = lambda i: np.linspace(0, np.pi/2., 2**(i+5)+1)
>>> for k in np.arange(3):
...     x = linfun(k)
...     Ei[k] = np.trapz(np.sin(x), x)
>>> [En, err] = nd.dea3(Ei[0], Ei[1], Ei[2])
>>> truErr = np.abs(En-1.)
>>> np.all(truErr < err)
True
>>> np.allclose(En, 1)
True
>>> np.all(np.abs(Ei-1) < 1e-3)
True
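For three terms, the epsilon algorithm reduces to Aitken's delta-squared process; a minimal numpy sketch of that core step (not the packaged dea3, which vectorizes it and adds error estimates and safeguards against division by zero):

```python
import numpy as np

def aitken(v0, v1, v2):
    # Aitken's delta-squared: the three-term case of Wynn's epsilon algorithm
    d1, d2 = v1 - v0, v2 - v1
    return v2 - d2**2 / (d2 - d1)      # assumes d2 != d1

# partial sums of the geometric series sum 1/2**k -> 2
s = np.cumsum(1.0 / 2.0**np.arange(8))
acc = aitken(s[4], s[5], s[6])         # exact for a geometric sequence
```

For a geometric sequence the transformation recovers the limit exactly, which is why it accelerates slowly convergent sequences whose errors decay roughly geometrically.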
-
dea_demo
()[source]¶

>>> from numdifftools.extrapolation import dea_demo
>>> dea_demo()
NO. PANELS      TRAP. APPROX          APPROX W/EA           abserr
       1       0.78539816       0.78539816       0.78539816
       2       0.94805945       0.94805945       0.97596771
       4       0.98711580       0.99945672       0.21405856
       8       0.99678517       0.99996674       0.00306012
      16       0.99919668       0.99999988       0.00115259
      32       0.99979919       1.00000000       0.00057665
      64       0.99994980       1.00000000       0.00003338
     128       0.99998745       1.00000000       0.00000012
     256       0.99999686       1.00000000       0.00000000
     512       0.99999922       1.00000000       0.00000000
    1024       0.99999980       1.00000000       0.00000000
    2048       0.99999995       1.00000000       0.00000000
-
epsalg_demo
()[source]¶

>>> from numdifftools.extrapolation import epsalg_demo
>>> epsalg_demo()
NO. PANELS      TRAP. APPROX          APPROX W/EA           abserr
       1       0.78539816       0.78539816       0.21460184
       2       0.94805945       0.94805945       0.05194055
       4       0.98711580       0.99945672       0.00054328
       8       0.99678517       0.99996674       0.00003326
      16       0.99919668       0.99999988       0.00000012
      32       0.99979919       1.00000000       0.00000000
      64       0.99994980       1.00000000       0.00000000
     128       0.99998745       1.00000000       0.00000000
     256       0.99999686       1.00000000       0.00000000
     512       0.99999922       1.00000000       0.00000000
5.2.2.3. Fornberg finite difference approximations¶
-
class
Taylor
(fun, n=1, r=0.0061, num_extrap=3, step_ratio=1.6, **kwds)[source]¶ Bases:
object
Return Taylor coefficients of complex analytic function using FFT
Parameters: - fun : callable
function to differentiate
- z0 : real or complex scalar at which to evaluate the derivatives
- n : scalar integer, default 1
Number of Taylor coefficients to compute. The maximum number is 100.
- r : real scalar, default 0.0061
Initial radius at which to evaluate. For well-behaved functions, the computation should be insensitive to the initial radius to within about four orders of magnitude.
- num_extrap : scalar integer, default 3
number of extrapolation steps used in the calculation
- step_ratio : real scalar, default 1.6
Initial grow/shrink factor for finding the best radius.
- max_iter : scalar integer, default 30
Maximum number of iterations
- min_iter : scalar integer, default max_iter // 2
Minimum number of iterations before the solution may be deemed degenerate. A larger number allows the algorithm to correct a bad initial radius.
- full_output : bool, optional
If full_output is False, only the coefficients are returned (default). If full_output is True, then (coefs, status) is returned.
Returns: - coefs : ndarray
array of Taylor coefficients
- status: Optional object into which output information is written:
degenerate: True if the algorithm was unable to bound the error
iterations: Number of iterations executed
function_count: Number of function calls
final_radius: Ending radius of the algorithm
failed: True if the maximum number of iterations was reached
error_estimate: Approximate bounds of the rounding error
Notes
This module uses the method of Fornberg to compute the Taylor series coefficients of a complex analytic function along with error bounds. The method uses a Fast Fourier Transform to invert function evaluations around a circle into Taylor series coefficients and uses Richardson Extrapolation to improve and bound the estimate. Unlike real-valued finite differences, the method searches for a desirable radius and so is reasonably insensitive to the initial radius, to within a number of orders of magnitude at least. For most cases, the default configuration is likely to succeed.
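The FFT inversion at the heart of the method can be sketched in a few lines of numpy: sample f on a circle of radius r about z0, and the k-th DFT bin recovers approximately a_k * r^k. This is a bare illustration with a fixed radius; the package adds the radius search and the Richardson-based error bounds:

```python
import numpy as np
from math import factorial

def taylor_fft(f, z0=0.0, r=0.5, m=16):
    # m samples on the circle z0 + r*exp(2*pi*1j*j/m); the k-th DFT bin
    # is m * (a_k*r**k + a_{k+m}*r**(k+m) + ...), so dividing by m*r**k
    # gives a_k up to a tiny aliasing term.
    z = z0 + r * np.exp(2j * np.pi * np.arange(m) / m)
    return np.fft.fft(f(z)) / m / r**np.arange(m)

coefs = taylor_fft(np.exp)   # Taylor coefficients of exp: a_k = 1/k!
```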
Restrictions
The method uses the coefficients themselves to control the truncation error, so the error will not be properly bounded for functions like low-order polynomials whose Taylor series coefficients are nearly zero. If the error cannot be bounded, the degenerate flag will be set to true, and an answer will still be computed and returned but should be used with caution.
References
[1] Fornberg, B. (1981). Numerical Differentiation of Analytic Functions. ACM Transactions on Mathematical Software (TOMS), 7(4), 512-526. http://doi.org/10.1145/355972.355979
Examples
Compute the first 6 Taylor coefficients of 1 / (1 - z) expanded around z0 = 0:

>>> import numdifftools.fornberg as ndf
>>> import numpy as np
>>> c, info = ndf.Taylor(lambda x: 1./(1-x), n=6, full_output=True)(z0=0)
>>> np.allclose(c, np.ones(8))
True
>>> np.all(info.error_estimate < 1e-9)
True
>>> (info.function_count, info.iterations, info.failed) == (144, 18, False)
True
-
derivative
(fun, z0, n=1, **kwds)[source]¶ Calculate n-th derivative of complex analytic function using FFT
Parameters: - fun : callable
function to differentiate
- z0 : real or complex scalar at which to evaluate the derivatives
- n : scalar integer, default 1
Number of derivatives to compute where 0 represents the value of the function and n represents the nth derivative. Maximum number is 100.
- r : real scalar, default 0.0061
Initial radius at which to evaluate. For well-behaved functions, the computation should be insensitive to the initial radius to within about four orders of magnitude.
- max_iter : scalar integer, default 30
Maximum number of iterations
- min_iter : scalar integer, default max_iter // 2
Minimum number of iterations before the solution may be deemed degenerate. A larger number allows the algorithm to correct a bad initial radius.
- step_ratio : real scalar, default 1.6
Initial grow/shrink factor for finding the best radius.
- num_extrap : scalar integer, default 3
number of extrapolation steps used in the calculation
- full_output : bool, optional
If full_output is False, only the derivative is returned (default). If full_output is True, then (der, status) is returned, where der is the derivative and status is a Results object.
Returns: - der : ndarray
array of derivatives
- status: Optional object into which output information is written. Fields:
degenerate: True if the algorithm was unable to bound the error
iterations: Number of iterations executed
function_count: Number of function calls
final_radius: Ending radius of the algorithm
failed: True if the maximum number of iterations was reached
error_estimate: Approximate bounds of the rounding error
Notes

This module uses the method of Fornberg to compute the derivatives of a complex analytic function along with error bounds. The method uses a Fast Fourier Transform to invert function evaluations around a circle into Taylor series coefficients, uses Richardson Extrapolation to improve and bound the estimate, then multiplies by a factorial to compute the derivatives. Unlike real-valued finite differences, the method searches for a desirable radius and so is reasonably insensitive to the initial radius, to within a number of orders of magnitude at least. For most cases, the default configuration is likely to succeed.

Restrictions

The method uses the coefficients themselves to control the truncation error, so the error will not be properly bounded for functions like low-order polynomials whose Taylor series coefficients are nearly zero. If the error cannot be bounded, the degenerate flag will be set to true, and an answer will still be computed and returned but should be used with caution.
References
[1] Fornberg, B. (1981). Numerical Differentiation of Analytic Functions. ACM Transactions on Mathematical Software (TOMS), 7(4), 512-526. http://doi.org/10.1145/355972.355979
Examples
Compute the first 6 derivatives of 1 / (1 - z) at z0 = 0:

>>> import numdifftools.fornberg as ndf
>>> import numpy as np
>>> def fun(x):
...     return 1./(1-x)
>>> c, info = ndf.derivative(fun, z0=0, n=6, full_output=True)
>>> np.allclose(c, [1, 1, 2, 6, 24, 120, 720, 5040])
True
>>> np.all(info.error_estimate < 1e-9*c.real)
True
>>> (info.function_count, info.iterations, info.failed) == (144, 18, False)
True
-
fd_derivative
(fx, x, n=1, m=2)[source]¶ Return the n'th derivative for all points using the finite difference method.
Parameters: - fx : vector
function values which are evaluated on x i.e. fx[i] = f(x[i])
- x : vector
abscissas on which fx is evaluated. The x values can be arbitrarily spaced but must be distinct and len(x) > n.
- n : scalar integer
order of derivative.
- m : scalar integer
defines the stencil size. The stencil size is 2 * mm + 1 points in the interior, and 2 * mm + 2 points for each of the 2 * mm boundary points, where mm = n // 2 + m.
Notes

fd_derivative evaluates an approximation for the n'th derivative of the vector function f(x) using the Fornberg finite difference method.

Restrictions: 0 < n < len(x) and 2 * mm + 2 <= len(x).
See also
Examples
>>> import numpy as np
>>> import numdifftools.fornberg as ndf
>>> x = np.linspace(-1, 1, 25)
>>> fx = np.exp(x)
>>> df = ndf.fd_derivative(fx, x, n=1)
>>> np.allclose(df, fx)
True
-
fd_weights
(x, x0=0, n=1)[source]¶ Return finite difference weights for the n’th derivative.
Parameters: - x : vector
abscissas used for the evaluation for the derivative at x0.
- x0 : scalar
location where approximations are to be accurate
- n : scalar integer
order of derivative. Note for n=0 this can be used to evaluate the interpolating polynomial itself.
See also
Examples
>>> import numpy as np
>>> import numdifftools.fornberg as ndf
>>> x = np.linspace(-1, 1, 5) * 1e-3
>>> w = ndf.fd_weights(x, x0=0, n=1)
>>> df = np.dot(w, np.exp(x))
>>> np.allclose(df, 1)
True
-
fd_weights_all
(x, x0=0, n=1)[source]¶ Return finite difference weights for derivatives of all orders up to n.
Parameters: - x : vector, length m
x-coordinates for grid points
- x0 : scalar
location where approximations are to be accurate
- n : scalar integer
highest derivative that we want to find weights for
Returns: - weights : array, shape n+1 x m
contains coefficients for the j’th derivative in row j (0 <= j <= n)
See also
Notes
The x values can be arbitrarily spaced but must be distinct and len(x) > n.
The Fornberg algorithm is much more stable numerically than regular Vandermonde systems for large values of n.
References
B. Fornberg (1998) "Calculation of weights in finite difference formulas", SIAM Review 40, pp. 685-691.
http://www.scholarpedia.org/article/Finite_difference_method
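The Fornberg recursion referenced above is short enough to sketch directly. This hypothetical stand-alone helper (not the packaged fd_weights_all) follows the classic algorithm from the 1998 SIAM Review paper:

```python
import numpy as np

def fornberg_weights(x, x0=0.0, n=1):
    # Weights for derivatives 0..n at x0 from grid points x, via Fornberg's
    # recursion; column k of the result holds the weights for the k-th
    # derivative.  The x values must be distinct and len(x) > n.
    x = np.asarray(x, dtype=float)
    m = len(x)
    C = np.zeros((m, n + 1))
    C[0, 0] = 1.0
    c1, c4 = 1.0, x[0] - x0
    for i in range(1, m):
        mn = min(i, n)
        k = np.arange(1, mn + 1)
        c2, c5, c4 = 1.0, c4, x[i] - x0
        for j in range(i):
            c3 = x[i] - x[j]
            c2 *= c3
            if j == i - 1:
                C[i, 1:mn + 1] = c1 * (k * C[i - 1, :mn] - c5 * C[i - 1, 1:mn + 1]) / c2
                C[i, 0] = -c1 * c5 * C[i - 1, 0] / c2
            C[j, 1:mn + 1] = (c4 * C[j, 1:mn + 1] - k * C[j, :mn]) / c3
            C[j, 0] = c4 * C[j, 0] / c3
        c1 = c2
    return C

# central-difference weights on x = [-1, 0, 1] at x0 = 0
W = fornberg_weights([-1.0, 0.0, 1.0], x0=0.0, n=2)
# W[:, 1] -> [-0.5, 0, 0.5]   (first derivative)
# W[:, 2] -> [ 1, -2, 1]      (second derivative)
```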
-
taylor
(fun, z0=0, n=1, r=0.0061, num_extrap=3, step_ratio=1.6, **kwds)[source]¶ Return Taylor coefficients of complex analytic function using FFT
Parameters: - fun : callable
function to differentiate
- z0 : real or complex scalar at which to evaluate the derivatives
- n : scalar integer, default 1
Number of Taylor coefficients to compute. The maximum number is 100.
- r : real scalar, default 0.0061
Initial radius at which to evaluate. For well-behaved functions, the computation should be insensitive to the initial radius to within about four orders of magnitude.
- num_extrap : scalar integer, default 3
number of extrapolation steps used in the calculation
- step_ratio : real scalar, default 1.6
Initial grow/shrink factor for finding the best radius.
- max_iter : scalar integer, default 30
Maximum number of iterations
- min_iter : scalar integer, default max_iter // 2
Minimum number of iterations before the solution may be deemed degenerate. A larger number allows the algorithm to correct a bad initial radius.
- full_output : bool, optional
If full_output is False, only the coefficients are returned (default). If full_output is True, then (coefs, status) is returned.
Returns: - coefs : ndarray
array of Taylor coefficients
- status: Optional object into which output information is written:
degenerate: True if the algorithm was unable to bound the error
iterations: Number of iterations executed
function_count: Number of function calls
final_radius: Ending radius of the algorithm
failed: True if the maximum number of iterations was reached
error_estimate: Approximate bounds of the rounding error
Notes
This module uses the method of Fornberg to compute the Taylor series coefficients of a complex analytic function along with error bounds. The method uses a Fast Fourier Transform to invert function evaluations around a circle into Taylor series coefficients and uses Richardson Extrapolation to improve and bound the estimate. Unlike real-valued finite differences, the method searches for a desirable radius and so is reasonably insensitive to the initial radius, to within a number of orders of magnitude at least. For most cases, the default configuration is likely to succeed.
Restrictions
The method uses the coefficients themselves to control the truncation error, so the error will not be properly bounded for functions like low-order polynomials whose Taylor series coefficients are nearly zero. If the error cannot be bounded, the degenerate flag will be set to true, and an answer will still be computed and returned but should be used with caution.
References
[1] Fornberg, B. (1981). Numerical Differentiation of Analytic Functions. ACM Transactions on Mathematical Software (TOMS), 7(4), 512-526. http://doi.org/10.1145/355972.355979
Examples
Compute the first 6 Taylor coefficients of 1 / (1 - z) expanded around z0 = 0:

>>> import numdifftools.fornberg as ndf
>>> import numpy as np
>>> c, info = ndf.taylor(lambda x: 1./(1-x), z0=0, n=6, full_output=True)
>>> np.allclose(c, np.ones(8))
True
>>> np.all(info.error_estimate < 1e-9)
True
>>> (info.function_count, info.iterations, info.failed) == (144, 18, False)
True
5.2.2.4. Limits module¶
Created on 27. aug. 2015
@author: pab

Author: John D'Errico
e-mail: woodchips@rochester.rr.com
Release: 1.0
Release date: 5/23/2008
-
class
CStepGenerator
(base_step=None, step_ratio=4.0, num_steps=None, step_nom=None, offset=0, scale=1.2, use_exact_steps=True, path='radial', dtheta=0.39269908169872414, **kwds)[source]¶ Bases:
numdifftools.step_generators.MinStepGenerator
Generates a sequence of steps
where

    steps = base_step * step_nom * (exp(1j*dtheta) * step_ratio) ** (i + offset)

for i = 0, 1, ..., num_steps-1.
Parameters: - base_step : float, array-like, default None
Defines the minimum step; if None, the value is set to EPS**(1/scale)
- step_ratio : real scalar, optional, default 4.0
Ratio between sequential steps generated.
- num_steps : scalar integer, optional,
defines number of steps generated. If None the value is 2 * int(round(16.0/log(abs(step_ratio)))) + 1
- step_nom : default maximum(log(1+|x|), 1)
Nominal step where x is supplied at runtime through the __call__ method.
- offset : real scalar, optional, default 0
offset to the base step
- use_exact_steps : boolean
If true, make sure exact steps are generated
- scale : real scalar, default 1.2
scale used in base step.
- path : ‘spiral’ or ‘radial’
Specifies the type of path to take the limit along.
- dtheta: real scalar
If the path is spiral it will follow an exponential spiral into the limit, with angular steps at dtheta radians.
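The step sequence defined above is easy to write out directly; here is a numpy sketch with illustrative values (the class itself derives base_step and step_nom from scale and the runtime x):

```python
import numpy as np

# steps = base_step * step_nom * (exp(1j*dtheta) * step_ratio)**(i + offset)
base_step, step_nom, step_ratio, offset = 1e-8, 1.0, 4.0, 0
dtheta, num_steps = np.pi / 8, 4              # pi/8 ~ the documented default
i = np.arange(num_steps)
steps = base_step * step_nom * (np.exp(1j * dtheta) * step_ratio)**(i + offset)
# |steps| grows by step_ratio each step; arg(steps) advances by dtheta,
# tracing an exponential spiral toward/away from the limit point
```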
-
dtheta
¶
-
num_steps
¶
-
step_ratio
¶
-
class
Limit
(fun, step=None, method='above', order=4, full_output=False, **options)[source]¶ Bases:
numdifftools.limits._Limit
Compute limit of a function at a given point
Parameters: - fun : callable
function fun(z, *args, **kwds) to compute the limit for z->z0. The function, fun, is assumed to return a result of the same shape and size as its input, z.
- step: float, complex, array-like or StepGenerator object, optional
Defines the spacing used in the approximation. Default is CStepGenerator(base_step=step, **options)
- method : {‘above’, ‘below’}
defines if the limit is taken from above or below
- order: positive scalar integer, optional.
defines the order of the approximation used to find the specified limit. The order must be a member of [1, 2, 3, 4, 5, 6, 7, 8]; 4 is a good compromise.
- full_output: bool
If true return additional info.
- options:
options to pass on to CStepGenerator
Returns: - limit_fz: array like
estimated limit of f(z) as z -> z0
- info:
Only given if full_output is True and contains the following:
- error estimate: ndarray
95 % uncertainty estimate around the limit, such that abs(limit_fz - lim z->z0 f(z)) < error_estimate
- final_step: ndarray
final step used in approximation
Notes
Limit computes the limit of a given function at a specified point, z0. When the function is evaluable at the point in question, this is a simple task. But when the function cannot be evaluated at that location due to a singularity, you may need a tool to compute the limit. Limit does this, as well as produce an uncertainty estimate in the final result.
The methods used by Limit are Richardson extrapolation in combination with Wynn's epsilon algorithm, which also yields an error estimate. The user can specify the method order, as well as the path into z0. z0 may be real or complex. Limit uses a proportionally cascaded series of function evaluations, moving away from your point of evaluation along a path on the real line (or in the complex plane for complex z0 or step). The step_ratio is the ratio used between sequential steps. The sign of step allows you to specify a limit from above or below. Negative values of step will cause the limit to be taken approaching z0 from below.
A smaller step_ratio means that Limit will take more function evaluations to evaluate the limit, but the result will potentially be less accurate. The step_ratio MUST be a scalar larger than 1. A value in the range [2,100] is recommended. 4 seems a good compromise.
>>> import numpy as np
>>> from numdifftools.limits import Limit
>>> def f(x): return np.sin(x)/x
>>> lim_f0, err = Limit(f, full_output=True)(0)
>>> np.allclose(lim_f0, 1)
True
>>> np.allclose(err.error_estimate, 1.77249444610966e-15)
True
Compute the derivative of cos(x) at x == pi/2. It should be -1. The limit will be taken as a function of the differential parameter, dx.
>>> x0 = np.pi/2
>>> def g(x): return (np.cos(x0+x)-np.cos(x0))/x
>>> lim_g0, err = Limit(g, full_output=True)(0)
>>> np.allclose(lim_g0, -1)
True
>>> err.error_estimate < 1e-14
True
Compute the residue at a first order pole at z = 0. The function 1./(1-exp(2*z)) has a pole at z == 0. The residue is given by the limit of z*fun(z) as z -> 0. Here, that residue should be -0.5.
>>> def h(z): return -z/(np.expm1(2*z))
>>> lim_h0, err = Limit(h, full_output=True)(0)
>>> np.allclose(lim_h0, -0.5)
True
>>> err.error_estimate < 1e-14
True
Compute the residue of the function 1./sin(z)**2 at z = 0. This pole is of second order, thus the residue is given by the limit of z**2*fun(z) as z -> 0.
>>> def g(z): return z**2/(np.sin(z)**2)
>>> lim_gpi, err = Limit(g, full_output=True)(0)
>>> np.allclose(lim_gpi, 1)
True
>>> err.error_estimate < 1e-14
True
A more difficult limit is one where there is significant subtractive cancellation at the limit point. In the following example, the cancellation is second order. The true limit should be 0.5.
>>> def k(x): return (x*np.exp(x)-np.expm1(x))/x**2
>>> lim_k0, err = Limit(k, full_output=True)(0)
>>> np.allclose(lim_k0, 0.5)
True
>>> err.error_estimate < 1.0e-8
True

>>> def h(x): return (x-np.sin(x))/x**3
>>> lim_h0, err = Limit(h, full_output=True)(0)
>>> np.allclose(lim_h0, 1./6)
True
>>> err.error_estimate < 1e-8
True
-
class
Residue
(f, step=None, method='above', order=None, pole_order=1, full_output=False, **options)[source]¶ Bases:
numdifftools.limits.Limit
Compute residue of a function at a given point
Parameters: - fun : callable
function fun(z, *args, **kwds) to compute the Residue at z=z0. The function, fun, is assumed to return a result of the same shape and size as its input, z.
- step: float, complex, array-like or StepGenerator object, optional
Defines the spacing used in the approximation. Default is CStepGenerator(base_step=step, **options)
- method : {‘above’, ‘below’}
defines if the limit is taken from above or below
- order: positive scalar integer, optional.
defines the order of the approximation used to find the specified limit. The order must be a member of [1, 2, 3, 4, 5, 6, 7, 8]; 4 is a good compromise.
- pole_order : scalar integer
specifies the order of the pole at z0.
- full_output: bool
If true return additional info.
- options:
options to pass on to CStepGenerator
Returns: - res_fz: array like
estimated residue, i.e., the limit of f(z)*(z-z0)**pole_order as z -> z0. When the residue is estimated as approximately zero, the wrong order pole may have been specified.
- info: namedtuple,
Only given if full_output is True and contains the following:
- error estimate: ndarray
95 % uncertainty estimate around the residue, such that abs(res_fz - lim z->z0 f(z)*(z-z0)**pole_order) < error_estimate. Large uncertainties here suggest that the wrong order pole was specified for f(z0).
- final_step: ndarray
final step used in approximation
Notes
Residue computes the residue of a given function at a simple first order pole, or at a second order pole.
The methods used by residue are polynomial extrapolants, which also yield an error estimate. The user can specify the method order, as well as the order of the pole.
z0 is the scalar point at which to compute the residue; it may be real or complex.

See the document DERIVEST.pdf for more explanation of the algorithms behind the parameters of Residue. In most cases, the user should never need to specify anything other than possibly the pole_order.
Examples
A first order pole at z = 0
>>> import numpy as np
>>> from numdifftools.limits import Residue
>>> def f(z): return -1./(np.expm1(2*z))
>>> res_f, info = Residue(f, full_output=True)(0)
>>> np.allclose(res_f, -0.5)
True
>>> info.error_estimate < 1e-14
True
A second order pole around z = 0 and z = pi:

>>> def h(z): return 1.0/np.sin(z)**2
>>> res_h, info = Residue(h, full_output=True, pole_order=2)([0, np.pi])
>>> np.allclose(res_h, 1)
True
>>> (info.error_estimate < 1e-10).all()
True
5.2.2.5. Numdifftools Algopy module¶
5.2.2.5.1. Numdifftools.nd_algopy¶
This module provides an easy-to-use interface to derivatives calculated with AlgoPy. AlgoPy stands for Algorithmic Differentiation in Python.
The purpose of AlgoPy is the evaluation of higher-order derivatives in the forward and reverse modes of Algorithmic Differentiation (AD) of functions that are implemented as Python programs. The particular focus is on functions that contain numerical linear algebra operations, as these often appear in statistically motivated computations. The intended use of AlgoPy is easy prototyping at reasonable execution speeds. More precisely, for a typical program a directional derivative takes on the order of 10 times as much time as the function evaluation itself. This is approximately also true for the gradient.
5.2.2.5.2. Algorithmic differentiation¶
Algorithmic differentiation (AD) is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.
Algorithmic differentiation is not:
Symbolic differentiation, nor numerical differentiation (the method of finite differences). These classical methods run into problems: symbolic differentiation leads to inefficient code (unless carefully done) and faces the difficulty of converting a computer program into a single expression, while numerical differentiation can introduce round-off errors in the discretization process and cancellation. Both classical methods have problems with calculating higher derivatives, where complexity and errors increase. Finally, both classical methods are slow at computing the partial derivatives of a function with respect to many inputs, as is needed for gradient-based optimization algorithms. Algorithmic differentiation solves all of these problems.
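The chain-rule bookkeeping described above can be made concrete with forward-mode "dual numbers". This toy sketch (not AlgoPy's implementation) carries a (value, derivative) pair through each elementary operation:

```python
import math

class Dual:
    # (value, derivative) pair; each operator applies the chain rule
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        # product rule: (uv)' = u'v + uv'
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def sin(x):
    # elementary function with its known derivative: (sin u)' = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

x = Dual(2.0, 1.0)          # seed the derivative dx/dx = 1
y = x * sin(x) + x * x      # f(x) = x*sin(x) + x**2
# y.der == sin(2) + 2*cos(2) + 4, exact to machine precision
```

Reverse mode (as used for gradients of many-input functions) records the same elementary operations on a tape and propagates adjoints backwards instead.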
References
Sebastian F. Walter and Lutz Lehmann 2013, “Algorithmic differentiation in Python with AlgoPy”, in Journal of Computational Science, vol 4, no 5, pp 334 - 344, http://www.sciencedirect.com/science/article/pii/S1877750311001013
https://en.wikipedia.org/wiki/Automatic_differentiation
https://pythonhosted.org/algopy/index.html
-
class
Derivative
(fun, n=1, method='forward', full_output=False)[source]¶ Bases:
numdifftools.nd_algopy._Derivative
Calculate n-th derivative with Algorithmic Differentiation method
Parameters: - fun : function
function of one array fun(x, *args, **kwds)
- n : int, optional
Order of the derivative.
- method : string, optional {‘forward’, ‘reverse’}
defines method used in the approximation
Returns: - der : ndarray
array of derivatives
Notes
Algorithmic differentiation is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.
References
Sebastian F. Walter and Lutz Lehmann 2013, “Algorithmic differentiation in Python with AlgoPy”, in Journal of Computational Science, vol 4, no 5, pp 334 - 344, http://www.sciencedirect.com/science/article/pii/S1877750311001013
https://en.wikipedia.org/wiki/Automatic_differentiation
Examples
# 1'st and 5'th derivative of exp(x), at x == 1
>>> import numpy as np
>>> import numdifftools.nd_algopy as nda
>>> fd = nda.Derivative(np.exp)        # 1'st derivative
>>> np.allclose(fd(1), 2.718281828459045)
True
>>> fd5 = nda.Derivative(np.exp, n=5)  # 5'th derivative
>>> np.allclose(fd5(1), 2.718281828459045)
True
# 1’st derivative of x^3+x^4, at x = [0,1]
>>> fun = lambda x: x**3 + x**4
>>> fd3 = nda.Derivative(fun)
>>> np.allclose(fd3([0, 1]), [0., 7.])
True
Methods
__call__ : callable with the following parameters:
    x : array_like
        value at which the function derivative is evaluated
    args : tuple
        Arguments for function fun.
    kwds : dict
        Keyword arguments for function fun.
-
class
Gradient
(fun, n=1, method='forward', full_output=False)[source]¶ Bases:
numdifftools.nd_algopy._Derivative
Calculate Gradient with Algorithmic Differentiation method
Parameters: - fun : function
function of one array fun(x, *args, **kwds)
- method : string, optional {‘forward’, ‘reverse’}
defines method used in the approximation
Returns: - grad : array
gradient
See also
Notes
Algorithmic differentiation is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.
References
Sebastian F. Walter and Lutz Lehmann 2013, “Algorithmic differentiation in Python with AlgoPy”, in Journal of Computational Science, vol 4, no 5, pp 334 - 344, http://www.sciencedirect.com/science/article/pii/S1877750311001013
https://en.wikipedia.org/wiki/Automatic_differentiation
Examples
>>> import numpy as np
>>> import numdifftools.nd_algopy as nda
>>> fun = lambda x: np.sum(x**2)
>>> df = nda.Gradient(fun, method='reverse')
>>> np.allclose(df([1, 2, 3]), [2., 4., 6.])
True
# At [x,y] = [1,1], compute the numerical gradient of the function sin(x-y) + y*exp(x)
>>> sin = np.sin; exp = np.exp
>>> z = lambda xy: sin(xy[0]-xy[1]) + xy[1]*exp(xy[0])
>>> dz = nda.Gradient(z)
>>> grad2 = dz([1, 1])
>>> np.allclose(grad2, [3.71828183, 1.71828183])
True
# At the global minimizer (1,1) of the Rosenbrock function, compute the gradient. It should be essentially zero.
>>> rosen = lambda x: (1-x[0])**2 + 105.*(x[1]-x[0]**2)**2
>>> rd = nda.Gradient(rosen)
>>> grad3 = rd([1, 1])
>>> np.allclose(grad3, [0., 0.])
True
Methods
__call__ : callable with the following parameters:
    x : array_like
        value at which the function derivative is evaluated
    args : tuple
        Arguments for function fun.
    kwds : dict
        Keyword arguments for function fun.
-
class
Hessdiag
(f, method='forward', full_output=False)[source]¶ Bases:
numdifftools.nd_algopy.Hessian
Calculate Hessian diagonal with Algorithmic Differentiation method
Parameters: - fun : function
function of one array fun(x, *args, **kwds)
- method : string, optional {‘forward’, ‘reverse’}
defines method used in the approximation
Returns: - hessdiag : ndarray
Hessian diagonal array of partial second order derivatives.
See also
Notes
Algorithmic differentiation is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.
References
Sebastian F. Walter and Lutz Lehmann 2013, “Algorithmic differentiation in Python with AlgoPy”, in Journal of Computational Science, vol 4, no 5, pp 334 - 344, http://www.sciencedirect.com/science/article/pii/S1877750311001013
https://en.wikipedia.org/wiki/Automatic_differentiation
Examples
>>> import numpy as np
>>> import numdifftools.nd_algopy as nda
# Rosenbrock function, minimized at [1,1]
>>> rosen = lambda x: (1.-x[0])**2 + 105*(x[1]-x[0]**2)**2
>>> Hfun = nda.Hessdiag(rosen)
>>> h = Hfun([1, 1])  # h = [842, 210]
>>> np.allclose(h, [842., 210.])
True
# cos(x-y), at (0,0)
>>> cos = np.cos
>>> fun = lambda xy: cos(xy[0]-xy[1])
>>> Hfun2 = nda.Hessdiag(fun)
>>> h2 = Hfun2([0, 0])  # h2 = [-1, -1]
>>> np.allclose(h2, [-1., -1.])
True
>>> Hfun3 = nda.Hessdiag(fun, method='reverse')
>>> h3 = Hfun3([0, 0])  # h3 = [-1, -1]
>>> np.allclose(h3, [-1., -1.])
True
Methods
__call__ : callable with the following parameters:
    x : array_like
        value at which the function derivative is evaluated
    args : tuple
        Arguments for function fun.
    kwds : dict
        Keyword arguments for function fun.
-
class
Hessian
(f, method='forward', full_output=False)[source]¶ Bases:
numdifftools.nd_algopy._Derivative
Calculate Hessian with Algorithmic Differentiation method
Parameters: - fun : function
function of one array fun(x, *args, **kwds)
- method : string, optional {‘forward’, ‘reverse’}
defines method used in the approximation
Returns: - hess : ndarray
array of partial second derivatives, Hessian
See also
Notes
Algorithmic differentiation is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.
References
Sebastian F. Walter and Lutz Lehmann 2013, “Algorithmic differentiation in Python with AlgoPy”, in Journal of Computational Science, vol 4, no 5, pp 334 - 344, http://www.sciencedirect.com/science/article/pii/S1877750311001013
https://en.wikipedia.org/wiki/Automatic_differentiation
Examples
>>> import numpy as np
>>> import numdifftools.nd_algopy as nda

# Rosenbrock function, minimized at [1,1]

>>> rosen = lambda x : (1.-x[0])**2 + 105*(x[1]-x[0]**2)**2
>>> Hf = nda.Hessian(rosen)
>>> h = Hf([1, 1])  # h = [842, -420; -420, 210]
>>> np.allclose(h, [[ 842., -420.],
...                 [-420., 210.]])
True
# cos(x-y), at (0,0)
# cos(x-y), at (0,0)

>>> cos = np.cos
>>> fun = lambda xy : cos(xy[0]-xy[1])
>>> Hfun2 = nda.Hessian(fun)
>>> h2 = Hfun2([0, 0])  # h2 = [-1, 1; 1, -1]
>>> np.allclose(h2, [[-1., 1.],
...                  [ 1., -1.]])
True

>>> Hfun3 = nda.Hessian(fun, method='reverse')
>>> h3 = Hfun3([0, 0])  # h3 = [-1, 1; 1, -1]
>>> np.allclose(h3, [[-1., 1.],
...                  [ 1., -1.]])
True
Methods
__call__ : callable with the following parameters:
- x : array_like
  value at which function derivative is evaluated
- args : tuple
  Arguments for function fun.
- kwds : dict
  Keyword arguments for function fun.
-
class
Jacobian
(fun, n=1, method='forward', full_output=False)[source]¶ Bases:
numdifftools.nd_algopy.Gradient
Calculate Jacobian with Algorithmic Differentiation method
Parameters: - fun : function
function of one array fun(x, *args, **kwds)
- method : string, optional {‘forward’, ‘reverse’}
defines method used in the approximation
Returns: - jacob : array
Jacobian
See also
Notes
Algorithmic differentiation is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.
References
Sebastian F. Walter and Lutz Lehmann 2013, “Algorithmic differentiation in Python with AlgoPy”, in Journal of Computational Science, vol 4, no 5, pp 334 - 344, http://www.sciencedirect.com/science/article/pii/S1877750311001013
https://en.wikipedia.org/wiki/Automatic_differentiation
Examples
>>> import numpy as np
>>> import numdifftools.nd_algopy as nda

# (nonlinear least squares)

>>> xdata = np.arange(0, 1, 0.1)
>>> ydata = 1 + 2*np.exp(0.75*xdata)
>>> fun = lambda c: (c[0] + c[1]*np.exp(c[2]*xdata) - ydata)**2

The forward-mode call currently fails (marked as a TODO upstream):

Jfun = nda.Jacobian(fun)  # TODO: This does not work
Jfun([1, 2, 0.75]).T      # should be numerically zero

The reverse mode works:

>>> Jfun2 = nda.Jacobian(fun, method='reverse')
>>> res = Jfun2([1, 2, 0.75]).T  # should be numerically zero
>>> np.allclose(res,
...             [[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
...              [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
...              [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
True
>>> f2 = lambda x : x[0]*x[1]*x[2]**2
>>> Jfun2 = nda.Jacobian(f2)
>>> np.allclose(Jfun2([1., 2., 3.]), [[ 18., 9., 12.]])
True

>>> Jfun21 = nda.Jacobian(f2, method='reverse')
>>> np.allclose(Jfun21([1., 2., 3.]), [[ 18., 9., 12.]])
True

>>> def fun3(x):
...     n = int(np.prod(np.shape(x[0])))
...     out = nda.algopy.zeros((2, n), dtype=x)
...     out[0] = x[0]*x[1]*x[2]**2
...     out[1] = x[0]*x[1]*x[2]
...     return out
>>> Jfun3 = nda.Jacobian(fun3)

>>> np.allclose(Jfun3([1., 2., 3.]), [[[18., 9., 12.]], [[6., 3., 2.]]])
True
>>> np.allclose(Jfun3([4., 5., 6.]), [[[180., 144., 240.]],
...                                   [[30., 24., 20.]]])
True
>>> np.allclose(Jfun3(np.array([[1., 2., 3.], [4., 5., 6.]]).T),
...             [[[18., 0., 9., 0., 12., 0.],
...               [0., 180., 0., 144., 0., 240.]],
...              [[6., 0., 3., 0., 2., 0.],
...               [0., 30., 0., 24., 0., 20.]]])
True
Methods
__call__ : callable with the following parameters:
- x : array_like
  value at which function derivative is evaluated
- args : tuple
  Arguments for function fun.
- kwds : dict
  Keyword arguments for function fun.
-
directionaldiff
(f, x0, vec, **options)[source]¶ Return directional derivative of a function of n variables
Parameters: - fun: callable
analytical function to differentiate.
- x0: array
vector location at which to differentiate fun. If x0 is an nxm array, then fun is assumed to be a function of n*m variables.
- vec: array
vector defining the line along which to take the derivative. It should be the same size as x0, but need not be a vector of unit length.
- **options:
optional arguments to pass on to Derivative.
Returns: - dder: scalar
estimate of the first derivative of fun in the specified direction.
See also
Examples
At the global minimizer (1,1) of the Rosenbrock function, compute the directional derivative in the direction [1, 2]:

>>> import numpy as np
>>> import numdifftools as nd
>>> vec = np.r_[1, 2]
>>> rosen = lambda x: (1-x[0])**2 + 105*(x[1]-x[0]**2)**2
>>> dd, info = nd.directionaldiff(rosen, [1, 1], vec, full_output=True)
>>> np.allclose(dd, 0)
True
>>> np.abs(info.error_estimate) < 1e-14
True
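The quantity that directionaldiff returns can be cross-checked by hand: the directional derivative is the ordinary derivative of g(t) = f(x0 + t*u) at t = 0, where u is vec scaled to unit length. Below is a minimal central-difference sketch of that definition (the helper `directional_fd` is hypothetical, not part of numdifftools):

```python
import numpy as np

def directional_fd(f, x0, vec, h=1e-6):
    """Central-difference estimate of the derivative of f along vec at x0."""
    x0 = np.asarray(x0, dtype=float)
    u = np.asarray(vec, dtype=float)
    u = u / np.linalg.norm(u)          # direction is normalized, as in the docs
    return (f(x0 + h * u) - f(x0 - h * u)) / (2 * h)

rosen = lambda x: (1 - x[0])**2 + 105 * (x[1] - x[0]**2)**2
print(directional_fd(rosen, [1, 1], [1, 2]))  # ≈ 0 at the minimizer
```

At the minimizer the gradient vanishes, so the derivative along any direction should be numerically zero, matching the doctest above.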
5.2.2.6. Numdifftools Scipy module¶
-
class
Gradient
(fun, step=None, method='central', order=2, bounds=(-inf, inf), sparsity=None)[source]¶ Bases:
numdifftools.nd_scipy.Jacobian
Calculate Gradient with finite difference approximation
Parameters: - fun : function
function of one array fun(x, *args, **kwds)
- step : float, optional
Stepsize; if None, the optimal stepsize is used, i.e.,
x * _EPS for method==`complex`,
x * _EPS**(1/2) for method==`forward`,
x * _EPS**(1/3) for method==`central`.
- method : {‘central’, ‘complex’, ‘forward’}
defines the method used in the approximation.
See also
Hessian
,Jacobian
Examples
>>> import numpy as np
>>> import numdifftools.nd_scipy as nd
>>> fun = lambda x: np.sum(x**2)
>>> dfun = nd.Gradient(fun)
>>> np.allclose(dfun([1, 2, 3]), [ 2., 4., 6.])
True

# At [x,y] = [1,1], compute the numerical gradient
# of the function sin(x-y) + y*exp(x)

>>> sin = np.sin; exp = np.exp
>>> z = lambda xy: sin(xy[0]-xy[1]) + xy[1]*exp(xy[0])
>>> dz = nd.Gradient(z)
>>> grad2 = dz([1, 1])
>>> np.allclose(grad2, [ 3.71828183, 1.71828183])
True

# At the global minimizer (1,1) of the Rosenbrock function,
# compute the gradient. It should be essentially zero.

>>> rosen = lambda x : (1-x[0])**2 + 105.*(x[1]-x[0]**2)**2
>>> rd = nd.Gradient(rosen)
>>> grad3 = rd([1, 1])
>>> np.allclose(grad3, [0, 0], atol=1e-7)
True
-
class
Jacobian
(fun, step=None, method='central', order=2, bounds=(-inf, inf), sparsity=None)[source]¶ Bases:
numdifftools.nd_scipy._Common
Calculate Jacobian with finite difference approximation
Parameters: - fun : function
function of one array fun(x, *args, **kwds)
- step : float, optional
Stepsize; if None, the optimal stepsize is used, i.e.,
x * _EPS for method==`complex`,
x * _EPS**(1/2) for method==`forward`,
x * _EPS**(1/3) for method==`central`.
- method : {‘central’, ‘complex’, ‘forward’}
defines the method used in the approximation.
Examples
>>> import numpy as np
>>> import numdifftools.nd_scipy as nd

# (nonlinear least squares)

>>> xdata = np.arange(0, 1, 0.1)
>>> ydata = 1 + 2*np.exp(0.75*xdata)
>>> fun = lambda c: (c[0] + c[1]*np.exp(c[2]*xdata) - ydata)**2
>>> np.allclose(fun([1, 2, 0.75]).shape, (10,))
True
>>> dfun = nd.Jacobian(fun)
>>> np.allclose(dfun([1, 2, 0.75]), np.zeros((10, 3)))
True

>>> fun2 = lambda x : x[0]*x[1]*x[2]**2
>>> dfun2 = nd.Jacobian(fun2)
>>> np.allclose(dfun2([1., 2., 3.]), [[18., 9., 12.]])
True
>>> fun3 = lambda x : np.vstack((x[0]*x[1]*x[2]**2, x[0]*x[1]*x[2]))
TODO: The following does not work:

der3 = nd.Jacobian(fun3)([1., 2., 3.])
np.allclose(der3,
            [[18., 9., 12.], [6., 3., 2.]])
True
np.allclose(nd.Jacobian(fun3)([4., 5., 6.]),
            [[180., 144., 240.], [30., 24., 20.]])
True
np.allclose(nd.Jacobian(fun3)(np.array([[1., 2., 3.], [4., 5., 6.]]).T),
            [[[ 18., 180.],
              [  9., 144.],
              [ 12., 240.]],
             [[  6.,  30.],
              [  3.,  24.],
              [  2.,  20.]]])
True
5.2.2.7. Numdifftools Statsmodels module¶
5.2.2.7.1. Numdifftools.nd_statsmodels¶
This module provides an easy-to-use interface to derivatives calculated with statsmodels.numdiff.
-
class
Gradient
(fun, step=None, method='central', order=None)[source]¶ Bases:
numdifftools.nd_statsmodels.Jacobian
Calculate Gradient with finite difference approximation
Parameters: - fun : function
function of one array fun(x, *args, **kwds)
- step : float, optional
Stepsize; if None, the optimal stepsize is used, i.e.,
x * _EPS for method==`complex`,
x * _EPS**(1/2) for method==`forward`,
x * _EPS**(1/3) for method==`central`.
- method : {‘central’, ‘complex’, ‘forward’, ‘backward’}
defines the method used in the approximation.
Examples
>>> import numpy as np
>>> import numdifftools.nd_statsmodels as nd
>>> fun = lambda x: np.sum(x**2)
>>> dfun = nd.Gradient(fun)
>>> np.allclose(dfun([1, 2, 3]), [ 2., 4., 6.])
True

# At [x,y] = [1,1], compute the numerical gradient
# of the function sin(x-y) + y*exp(x)

>>> sin = np.sin; exp = np.exp
>>> z = lambda xy: sin(xy[0]-xy[1]) + xy[1]*exp(xy[0])
>>> dz = nd.Gradient(z)
>>> grad2 = dz([1, 1])
>>> np.allclose(grad2, [ 3.71828183, 1.71828183])
True

# At the global minimizer (1,1) of the Rosenbrock function,
# compute the gradient. It should be essentially zero.

>>> rosen = lambda x : (1-x[0])**2 + 105.*(x[1]-x[0]**2)**2
>>> rd = nd.Gradient(rosen)
>>> grad3 = rd([1, 1])
>>> np.allclose(grad3, [0, 0])
True
-
class
Hessian
(fun, step=None, method='central', order=None)[source]¶ Bases:
numdifftools.nd_statsmodels._Common
Calculate Hessian with finite difference approximation
Parameters: - fun : function
function of one array fun(x, *args, **kwds)
- step : float, optional
Stepsize; if None, the optimal stepsize is used, i.e.,
x * _EPS**(1/3) for method==`forward`, `complex` or `central2`,
x * _EPS**(1/4) for method==`central`.
- method : {‘central’, ‘complex’, ‘forward’, ‘backward’}
defines the method used in the approximation.
Examples
>>> import numpy as np
>>> import numdifftools.nd_statsmodels as nd

# Rosenbrock function, minimized at [1,1]

>>> rosen = lambda x : (1.-x[0])**2 + 105*(x[1]-x[0]**2)**2
>>> Hfun = nd.Hessian(rosen)
>>> h = Hfun([1, 1])
>>> np.allclose(h, [[ 842., -420.], [-420., 210.]])
True

# cos(x-y), at (0,0)

>>> cos = np.cos
>>> fun = lambda xy : cos(xy[0]-xy[1])
>>> Hfun2 = nd.Hessian(fun)
>>> h2 = Hfun2([0, 0])
>>> np.allclose(h2, [[-1., 1.], [ 1., -1.]])
True
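The cos(x-y) Hessian above can also be reproduced with the textbook second-order central-difference stencil. The sketch below is a hypothetical helper for illustration only, not the statsmodels implementation, which uses an optimal step and vectorized evaluation:

```python
import numpy as np

def hessian_fd(f, x, s=1e-4):
    """Central-difference Hessian:
    H[i, j] ≈ (f(x+ei+ej) - f(x+ei-ej) - f(x-ei+ej) + f(x-ei-ej)) / (4 s^2),
    where ei, ej are steps of size s along coordinates i and j."""
    x = np.asarray(x, dtype=float)
    n = x.size
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = s
            ej = np.zeros(n); ej[j] = s
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * s * s)
    return H

fun = lambda xy: np.cos(xy[0] - xy[1])
print(hessian_fd(fun, [0.0, 0.0]))  # ≈ [[-1, 1], [1, -1]]
```

The result agrees with the doctest above to a few digits; the truncation error of this stencil is O(s**2), which is why a fixed step of 1e-4 suffices here.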
-
n
¶
-
class
Jacobian
(fun, step=None, method='central', order=None)[source]¶ Bases:
numdifftools.nd_statsmodels._Common
Calculate Jacobian with finite difference approximation
Parameters: - fun : function
function of one array fun(x, *args, **kwds)
- step : float, optional
Stepsize; if None, the optimal stepsize is used, i.e.,
x * _EPS for method==`complex`,
x * _EPS**(1/2) for method==`forward`,
x * _EPS**(1/3) for method==`central`.
- method : {‘central’, ‘complex’, ‘forward’, ‘backward’}
defines the method used in the approximation.
Examples
>>> import numpy as np
>>> import numdifftools.nd_statsmodels as nd

# (nonlinear least squares)

>>> xdata = np.arange(0, 1, 0.1)
>>> ydata = 1 + 2*np.exp(0.75*xdata)
>>> fun = lambda c: (c[0] + c[1]*np.exp(c[2]*xdata) - ydata)**2
>>> np.allclose(fun([1, 2, 0.75]).shape, (10,))
True
>>> dfun = nd.Jacobian(fun)
>>> np.allclose(dfun([1, 2, 0.75]), np.zeros((10, 3)))
True

>>> fun2 = lambda x : x[0]*x[1]*x[2]**2
>>> dfun2 = nd.Jacobian(fun2)
>>> np.allclose(dfun2([1., 2., 3.]), [[18., 9., 12.]])
True

>>> fun3 = lambda x : np.vstack((x[0]*x[1]*x[2]**2, x[0]*x[1]*x[2]))
>>> np.allclose(nd.Jacobian(fun3)([1., 2., 3.]),
...             [[18., 9., 12.], [6., 3., 2.]])
True
>>> np.allclose(nd.Jacobian(fun3)([4., 5., 6.]),
...             [[180., 144., 240.], [30., 24., 20.]])
True

>>> np.allclose(nd.Jacobian(fun3)(np.array([[1., 2., 3.], [4., 5., 6.]]).T),
...             [[[ 18., 180.],
...               [  9., 144.],
...               [ 12., 240.]],
...              [[  6.,  30.],
...               [  3.,  24.],
...               [  2.,  20.]]])
True
-
approx_fprime
(x, f, epsilon=None, args=(), kwargs=None, centered=True)[source]¶ Gradient of function, or Jacobian if function fun returns 1d array
Parameters: - x : array
parameters at which the derivative is evaluated
- fun : function
fun(*((x,)+args), **kwargs) returning either one value or 1d array
- epsilon : float, optional
Stepsize; if None, the optimal stepsize is used: _EPS**(1/2)*x for centered == False and _EPS**(1/3)*x for centered == True.
- args : tuple
Tuple of additional arguments for function fun.
- kwargs : dict
Dictionary of additional keyword arguments for function fun.
- centered : bool
Whether to use central differences. If False, forward differences are used.
Returns: - grad : array
gradient or Jacobian
Notes
If fun returns a 1d array, it returns a Jacobian. If a 2d array is returned by fun (e.g., with a value for each observation), it returns a 3d array with the Jacobian of each observation with shape xk x nobs x xk, i.e., the Jacobian of the first observation would be [:, 0, :].
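The forward-difference branch of this idea (centered=False) amounts to perturbing one parameter at a time and differencing against the unperturbed value. The sketch below is a hypothetical minimal version for illustration, not the statsmodels implementation; `fprime_forward` and its epsilon heuristic are assumptions:

```python
import numpy as np

def fprime_forward(f, x, eps=1.49e-8):
    """One-sided forward-difference gradient/Jacobian sketch.

    Perturbs each parameter x[k] in turn by a step scaled to its magnitude
    and differences f against the baseline value f(x).
    """
    x = np.asarray(x, dtype=float)
    f0 = np.asarray(f(x))
    grad = np.empty(x.shape + f0.shape)   # 1d for scalar f, 2d for 1d-array f
    for k in range(x.size):
        dx = np.zeros_like(x)
        dx[k] = eps * max(abs(x[k]), 1.0)  # relative step, floored at eps
        grad[k] = (np.asarray(f(x + dx)) - f0) / dx[k]
    return grad

f = lambda x: np.sum(x**2)
print(fprime_forward(f, [1.0, 2.0, 3.0]))  # ≈ [2, 4, 6]
```

A step near _EPS**(1/2) balances truncation error (which grows with the step) against floating-point cancellation (which grows as the step shrinks), which is the rationale behind the epsilon default documented above.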