5.2. Numdifftools package details¶
- 5.2.1. numdifftools.tests package
- 5.2.1.1. numdifftools.tests.hamiltonian module
- 5.2.1.2. numdifftools.tests.test_extrapolation module
- 5.2.1.3. numdifftools.tests.test_fornberg module
ExampleFunctions
ExampleFunctions.fun0()
ExampleFunctions.fun1()
ExampleFunctions.fun10()
ExampleFunctions.fun11()
ExampleFunctions.fun12()
ExampleFunctions.fun13()
ExampleFunctions.fun14()
ExampleFunctions.fun2()
ExampleFunctions.fun3()
ExampleFunctions.fun4()
ExampleFunctions.fun5()
ExampleFunctions.fun6()
ExampleFunctions.fun7()
ExampleFunctions.fun8()
ExampleFunctions.fun9()
test_all_weights()
test_fd_derivative()
test_high_order_derivative()
test_low_order_derivative_on_example_functions()
test_weights()
- 5.2.1.4. numdifftools.tests.test_limits module
- 5.2.1.5. numdifftools.tests.test_multicomplex module
TestBicomplex
TestBicomplex.test_add()
TestBicomplex.test_arccos()
TestBicomplex.test_arcsin()
TestBicomplex.test_arg_c()
TestBicomplex.test_assign()
TestBicomplex.test_conjugate()
TestBicomplex.test_cos()
TestBicomplex.test_der_abs()
TestBicomplex.test_der_arccos()
TestBicomplex.test_der_arccosh()
TestBicomplex.test_der_arctan()
TestBicomplex.test_der_cos()
TestBicomplex.test_der_log()
TestBicomplex.test_division()
TestBicomplex.test_dot()
TestBicomplex.test_eq()
TestBicomplex.test_flat()
TestBicomplex.test_ge()
TestBicomplex.test_gt()
TestBicomplex.test_init()
TestBicomplex.test_le()
TestBicomplex.test_lt()
TestBicomplex.test_mod_c()
TestBicomplex.test_multiplication()
TestBicomplex.test_neg()
TestBicomplex.test_norm()
TestBicomplex.test_pow()
TestBicomplex.test_rdivision()
TestBicomplex.test_repr()
TestBicomplex.test_rpow()
TestBicomplex.test_rsub()
TestBicomplex.test_shape()
TestBicomplex.test_sub()
TestBicomplex.test_subsref()
TestDerivative
- 5.2.1.6. numdifftools.tests.test_nd_algopy module
TestDerivative
TestDerivative.test_derivative_cube()
TestDerivative.test_derivative_exp()
TestDerivative.test_derivative_on_log()
TestDerivative.test_derivative_on_sinh()
TestDerivative.test_derivative_sin()
TestDerivative.test_directional_diff()
TestDerivative.test_fun_with_additional_parameters()
TestDerivative.test_high_order_derivative_cos()
TestGradient
TestHessdiag
TestHessian
TestJacobian
- 5.2.1.7. numdifftools.tests.test_nd_scipy module
- 5.2.1.8. numdifftools.tests.test_numdifftools module
TestDerivative
TestDerivative.test_backward_derivative_on_sinh()
TestDerivative.test_central_and_forward_derivative_on_log()
TestDerivative.test_default_scale()
TestDerivative.test_derivative_cube()
TestDerivative.test_derivative_exp()
TestDerivative.test_derivative_of_cos_x()
TestDerivative.test_derivative_sin()
TestDerivative.test_derivative_with_step_options()
TestDerivative.test_directional_diff()
TestDerivative.test_fun_with_additional_parameters()
TestDerivative.test_high_order_derivative_cos()
TestDerivative.test_infinite_functions()
TestGradient
TestHessdiag
TestHessian
TestJacobian
TestRichardson
- 5.2.1.9. numdifftools.tests.test_scripts module
- 5.2.1.10. numdifftools.tests.test_step_generators module
5.2.2. numdifftools.core module¶
Numerical differentiation functions: Derivative, Gradient, Jacobian, and Hessian.
Author: Per A. Brodtkorb
Created: 01.08.2008
Copyright: (c) pab 2008
Licence: New BSD
- class Derivative(fun, step=None, method='central', order=2, n=1, **options)[source]¶
Bases:
_Limit
Calculate n-th derivative with finite difference approximation
- Parameters
- fun : function
function of one array fun(x, *args, **kwds)
- step : float, array-like or StepGenerator object, optional
Defines the spacing used in the approximation. Default is MinStepGenerator(**step_options) if method is in [‘complex’, ‘multicomplex’], otherwise MaxStepGenerator(**step_options).
The results are extrapolated if the StepGenerator generates more than 3 steps.
- method : {‘central’, ‘complex’, ‘multicomplex’, ‘forward’, ‘backward’}
defines the method used in the approximation
- order : int, optional
defines the order of the error term in the Taylor approximation used. For ‘central’ and ‘complex’ methods, it must be an even number.
- n : int, optional
Order of the derivative.
- richardson_terms : scalar integer, default 2
number of terms used in the Richardson extrapolation.
- full_output : bool, optional
If full_output is False, only the derivative is returned. If full_output is True, then (der, r) is returned, where der is the derivative and r is a Results object.
- **step_options :
options to pass on to the XXXStepGenerator used.
- Returns
- der : ndarray
array of derivatives
Notes
Complex methods are usually the most accurate, provided the function to differentiate is analytic. The complex-step methods also require fewer steps than the other methods and can work very close to the support of a function. The complex-step derivative has truncation error O(steps**2) for n=1 and O(steps**4) for n larger, so the truncation error can be eliminated by choosing steps to be very small. In particular, the first-order complex-step derivative avoids the problem of round-off error with small steps because there is no subtraction. However, this method fails if fun(x) does not support complex numbers or involves non-analytic functions such as abs, max, or min. Central difference methods are almost as accurate and have no restriction on the type of function. For this reason the ‘central’ method is the default, but sometimes one can only allow evaluation in the forward or backward direction.
For all methods one should be careful not to decrease the step size too much because of round-off errors.
Higher order approximation methods will generally be more accurate, but may also suffer more from numerical problems. First order methods are usually not recommended.
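The following is a hedged, stand-alone sketch (not part of the class API) of the first-order complex-step idea described above; the helper name is illustrative only:
>>> import numpy as np
>>> def complex_step_derivative(f, x, h=1e-20):
...     # first-order complex step: no subtraction, so cancellation error is avoided
...     return np.imag(f(x + 1j * h)) / h
>>> np.allclose(complex_step_derivative(np.exp, 1.0), np.exp(1.0))
True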
References
- Ridout, M.S. (2009) Statistical applications of the complex-step method
of numerical differentiation. The American Statistician, 63, 66-74
- K.-L. Lai, J.L. Crassidis, Y. Cheng, J. Kim (2005), New complex step
derivative approximations with application to second-order Kalman filtering, AIAA Guidance, Navigation and Control Conference, San Francisco, California, August 2005, AIAA-2005-5944.
- Lyness, J. M., Moler, C. B. (1966). Vandermonde Systems and Numerical
Differentiation. Numerische Mathematik.
- Lyness, J. M., Moler, C. B. (1969). Generalized Romberg Methods for
Integrals of Derivatives. Numerische Mathematik.
Examples
>>> import numpy as np
>>> import numdifftools as nd

# 1st derivative of exp(x), at x == 1

>>> fd = nd.Derivative(np.exp)
>>> np.allclose(fd(1), 2.71828183)
True

>>> d2 = fd([1, 2])
>>> np.allclose(d2, [ 2.71828183, 7.3890561 ])
True

>>> def f(x):
...     return x**3 + x**2

>>> df = nd.Derivative(f)
>>> np.allclose(df(1), 5)
True
>>> ddf = nd.Derivative(f, n=2)
>>> np.allclose(ddf(1), 8)
True
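A hedged sketch of the full_output option documented above (the tolerance is indicative, not a guaranteed bound; the info fields follow the info tuple documented below):
>>> fdd = nd.Derivative(np.exp, full_output=True)
>>> value, info = fdd(1)
>>> np.allclose(value, 2.71828183)
True
>>> np.all(info.error_estimate < 1e-6)
True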
Methods
__call__(x, *args, **kwds)
Call self as a function.
- class info(f_value, error_estimate, final_step, index)¶
Bases:
tuple
- count(value, /)¶
Return number of occurrences of value.
- property error_estimate¶
Alias for field number 1
- property f_value¶
Alias for field number 0
- property final_step¶
Alias for field number 2
- property index¶
Alias for field number 3
- property method¶
Defines the method used in the finite difference approximation.
- property method_order¶
Defines the leading order of the error term in the Richardson extrapolation method.
- property n¶
Order of the derivative.
- property order¶
Defines the order of the error term in the Taylor approximation used.
- property step¶
The step spacing(s) used in the approximation
- class Gradient(fun, step=None, method='central', order=2, n=1, **options)[source]¶
Bases:
Jacobian
Calculate Gradient with finite difference approximation
- Parameters
- fun : function
function of one array fun(x, *args, **kwds)
- step : float, array-like or StepGenerator object, optional
Defines the spacing used in the approximation. Default is MinStepGenerator(**step_options) if method is in [‘complex’, ‘multicomplex’], otherwise MaxStepGenerator(**step_options).
The results are extrapolated if the StepGenerator generates more than 3 steps.
- method : {‘central’, ‘complex’, ‘multicomplex’, ‘forward’, ‘backward’}
defines the method used in the approximation
- order : int, optional
defines the order of the error term in the Taylor approximation used. For ‘central’ and ‘complex’ methods, it must be an even number.
- richardson_terms : scalar integer, default 2
number of terms used in the Richardson extrapolation.
- full_output : bool, optional
If full_output is False, only the derivative is returned. If full_output is True, then (der, r) is returned, where der is the derivative and r is a Results object.
- **step_options :
options to pass on to the XXXStepGenerator used.
- Returns
- grad : array
gradient
See also
Notes
Complex methods are usually the most accurate, provided the function to differentiate is analytic. The complex-step methods also require fewer steps than the other methods and can work very close to the support of a function. The complex-step derivative has truncation error O(steps**2) for n=1 and O(steps**4) for n larger, so the truncation error can be eliminated by choosing steps to be very small. In particular, the first-order complex-step derivative avoids the problem of round-off error with small steps because there is no subtraction. However, this method fails if fun(x) does not support complex numbers or involves non-analytic functions such as abs, max, or min. Central difference methods are almost as accurate and have no restriction on the type of function. For this reason the ‘central’ method is the default, but sometimes one can only allow evaluation in the forward or backward direction.
For all methods one should be careful not to decrease the step size too much because of round-off errors.
Higher order approximation methods will generally be more accurate, but may also suffer more from numerical problems. First order methods are usually not recommended.
If x0 is an n x m array, then fun is assumed to be a function of n * m variables.
References
- Ridout, M.S. (2009) Statistical applications of the complex-step method
of numerical differentiation. The American Statistician, 63, 66-74
- K.-L. Lai, J.L. Crassidis, Y. Cheng, J. Kim (2005), New complex step
derivative approximations with application to second-order Kalman filtering, AIAA Guidance, Navigation and Control Conference, San Francisco, California, August 2005, AIAA-2005-5944.
- Lyness, J. M., Moler, C. B. (1966). Vandermonde Systems and Numerical
Differentiation. Numerische Mathematik.
- Lyness, J. M., Moler, C. B. (1969). Generalized Romberg Methods for
Integrals of Derivatives. Numerische Mathematik.
Examples
>>> import numpy as np
>>> import numdifftools as nd
>>> fun = lambda x: np.sum(x**2)
>>> dfun = nd.Gradient(fun)
>>> np.allclose(dfun([1,2,3]), [ 2., 4., 6.])
True

# At [x,y] = [1,1], compute the numerical gradient
# of the function sin(x-y) + y*exp(x)

>>> sin = np.sin; exp = np.exp
>>> x, y = 1, 1
>>> z = lambda xy: sin(xy[0]-xy[1]) + xy[1]*exp(xy[0])
>>> dz = nd.Gradient(z)
>>> dz_dx, dz_dy = dz([x, y])
>>> np.allclose([dz_dx, dz_dy],
...             [ 3.7182818284590686, 1.7182818284590162])
True

# At the global minimizer (1,1) of the Rosenbrock function,
# compute the gradient. It should be essentially zero.

>>> rosen = lambda x : (1-x[0])**2 + 105.*(x[1]-x[0]**2)**2
>>> grad_rosen = nd.Gradient(rosen)
>>> df_dx, df_dy = grad_rosen([x, y])
>>> np.allclose([df_dx, df_dy], [0, 0])
True
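Since the quadratic used in the first example is analytic, the ‘complex’ method can also be used; this is a hedged extra sketch, not part of the original docstring:
>>> fun_c = lambda x: np.sum(np.asarray(x)**2)
>>> dfun_c = nd.Gradient(fun_c, method='complex')
>>> np.allclose(dfun_c([1., 2., 3.]), [2., 4., 6.])
True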
Methods
__call__(x, *args, **kwds)
Call self as a function.
- class info(f_value, error_estimate, final_step, index)¶
Bases:
tuple
- count(value, /)¶
Return number of occurrences of value.
- property error_estimate¶
Alias for field number 1
- property f_value¶
Alias for field number 0
- property final_step¶
Alias for field number 2
- property index¶
Alias for field number 3
- property method¶
Defines the method used in the finite difference approximation.
- property method_order¶
Defines the leading order of the error term in the Richardson extrapolation method.
- property n¶
Order of the derivative.
- property order¶
Defines the order of the error term in the Taylor approximation used.
- set_richardson_rule(step_ratio, num_terms=2)¶
Set Richardson extrapolation options
- property step¶
The step spacing(s) used in the approximation
- class Hessdiag(f, step=None, method='central', order=2, **options)[source]¶
Bases:
Derivative
Calculate Hessian diagonal with finite difference approximation
- Parameters
- fun : function
function of one array fun(x, *args, **kwds)
- step : float, array-like or StepGenerator object, optional
Defines the spacing used in the approximation. Default is MinStepGenerator(**step_options) if method is in [‘complex’, ‘multicomplex’], otherwise MaxStepGenerator(**step_options).
The results are extrapolated if the StepGenerator generates more than 3 steps.
- method : {‘central’, ‘complex’, ‘multicomplex’, ‘forward’, ‘backward’}
defines the method used in the approximation
- order : int, optional
defines the order of the error term in the Taylor approximation used. For ‘central’ and ‘complex’ methods, it must be an even number.
- richardson_terms : scalar integer, default 2
number of terms used in the Richardson extrapolation.
- full_output : bool, optional
If full_output is False, only the derivative is returned. If full_output is True, then (der, r) is returned, where der is the derivative and r is a Results object.
- **step_options :
options to pass on to the XXXStepGenerator used.
- Returns
- hessdiag : array
hessian diagonal
See also
Notes
Complex methods are usually the most accurate, provided the function to differentiate is analytic. The complex-step methods also require fewer steps than the other methods and can work very close to the support of a function. The complex-step derivative has truncation error O(steps**2) for n=1 and O(steps**4) for n larger, so the truncation error can be eliminated by choosing steps to be very small. In particular, the first-order complex-step derivative avoids the problem of round-off error with small steps because there is no subtraction. However, this method fails if fun(x) does not support complex numbers or involves non-analytic functions such as abs, max, or min. Central difference methods are almost as accurate and have no restriction on the type of function. For this reason the ‘central’ method is the default, but sometimes one can only allow evaluation in the forward or backward direction.
For all methods one should be careful not to decrease the step size too much because of round-off errors.
Higher order approximation methods will generally be more accurate, but may also suffer more from numerical problems. First order methods are usually not recommended.
References
- Ridout, M.S. (2009) Statistical applications of the complex-step method
of numerical differentiation. The American Statistician, 63, 66-74
- K.-L. Lai, J.L. Crassidis, Y. Cheng, J. Kim (2005), New complex step
derivative approximations with application to second-order Kalman filtering, AIAA Guidance, Navigation and Control Conference, San Francisco, California, August 2005, AIAA-2005-5944.
- Lyness, J. M., Moler, C. B. (1966). Vandermonde Systems and Numerical
Differentiation. Numerische Mathematik.
- Lyness, J. M., Moler, C. B. (1969). Generalized Romberg Methods for
Integrals of Derivatives. Numerische Mathematik.
Examples
>>> import numpy as np
>>> import numdifftools as nd
>>> fun = lambda x : x[0] + x[1]**2 + x[2]**3
>>> Hfun = nd.Hessdiag(fun, full_output=True)
>>> hd, info = Hfun([1,2,3])
>>> np.allclose(hd, [0., 2., 18.])
True

>>> np.all(info.error_estimate < 1e-11)
True
Methods
__call__(x, *args, **kwds)
Call self as a function.
- class info(f_value, error_estimate, final_step, index)¶
Bases:
tuple
- count(value, /)¶
Return number of occurrences of value.
- property error_estimate¶
Alias for field number 1
- property f_value¶
Alias for field number 0
- property final_step¶
Alias for field number 2
- property index¶
Alias for field number 3
- property method¶
Defines the method used in the finite difference approximation.
- property method_order¶
Defines the leading order of the error term in the Richardson extrapolation method.
- property n¶
Order of the derivative.
- property order¶
Defines the order of the error term in the Taylor approximation used.
- set_richardson_rule(step_ratio, num_terms=2)¶
Set Richardson extrapolation options
- property step¶
The step spacing(s) used in the approximation
- class Hessian(f, step=None, method='central', order=None, **options)[source]¶
Bases:
Hessdiag
Calculate Hessian with finite difference approximation
- Parameters
- fun : function
function of one array fun(x, *args, **kwds)
- step : float, array-like or StepGenerator object, optional
Defines the spacing used in the approximation. Default is MinStepGenerator(**step_options) if method is in [‘complex’, ‘multicomplex’], otherwise MaxStepGenerator(**step_options).
The results are extrapolated if the StepGenerator generates more than 3 steps.
- method : {‘central’, ‘complex’, ‘multicomplex’, ‘forward’, ‘backward’}
defines the method used in the approximation
- richardson_terms : scalar integer, default 2
number of terms used in the Richardson extrapolation.
- full_output : bool, optional
If full_output is False, only the derivative is returned. If full_output is True, then (der, r) is returned, where der is the derivative and r is a Results object.
- **step_options :
options to pass on to the XXXStepGenerator used.
- Returns
- hess : ndarray
array of partial second derivatives, Hessian
See also
Notes
Complex methods are usually the most accurate, provided the function to differentiate is analytic. The complex-step methods also require fewer steps than the other methods and can work very close to the support of a function. The complex-step derivative has truncation error O(steps**2) for n=1 and O(steps**4) for n larger, so the truncation error can be eliminated by choosing steps to be very small. In particular, the first-order complex-step derivative avoids the problem of round-off error with small steps because there is no subtraction. However, this method fails if fun(x) does not support complex numbers or involves non-analytic functions such as abs, max, or min. Central difference methods are almost as accurate and have no restriction on the type of function. For this reason the ‘central’ method is the default, but sometimes one can only allow evaluation in the forward or backward direction.
For all methods one should be careful not to decrease the step size too much because of round-off errors.
Computes the Hessian according to method as: ‘forward’ (5.4), ‘central’ (5.5) and ‘complex’ (5.6):
(5.4)¶
\[\frac{f(x + d_j e_j + d_k e_k) + f(x) - f(x + d_j e_j) - f(x + d_k e_k)}{d_j d_k}\]
(5.5)¶
\[\frac{\big(f(x + d_j e_j + d_k e_k) - f(x + d_j e_j - d_k e_k)\big) - \big(f(x - d_j e_j + d_k e_k) - f(x - d_j e_j - d_k e_k)\big)}{4 d_j d_k}\]
(5.6)¶
\[\frac{\mathrm{imag}\big(f(x + i d_j e_j + d_k e_k) - f(x + i d_j e_j - d_k e_k)\big)}{2 d_j d_k}\]
where \(e_j\) is a vector whose \(j\)'th element is one and the rest are zero, and \(d_j\) is a scalar spacing \(steps_j\).
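As a hedged cross-check of the ‘forward’ formula (5.4) above (a plain NumPy sketch, not library code), the mixed partial of f(x) = x_0 * x_1 is reproduced directly:
>>> import numpy as np
>>> f = lambda x: x[0] * x[1]
>>> x = np.array([1.0, 2.0])
>>> d = np.array([1e-4, 1e-4])
>>> e0, e1 = np.array([1., 0.]), np.array([0., 1.])
>>> h01 = (f(x + d[0]*e0 + d[1]*e1) + f(x)
...        - f(x + d[0]*e0) - f(x + d[1]*e1)) / (d[0]*d[1])
>>> np.allclose(h01, 1.0)  # the true mixed partial d2f/dx0dx1 is 1
True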
References
- Ridout, M.S. (2009) Statistical applications of the complex-step method
of numerical differentiation. The American Statistician, 63, 66-74
- K.-L. Lai, J.L. Crassidis, Y. Cheng, J. Kim (2005), New complex step
derivative approximations with application to second-order Kalman filtering, AIAA Guidance, Navigation and Control Conference, San Francisco, California, August 2005, AIAA-2005-5944.
- Lyness, J. M., Moler, C. B. (1966). Vandermonde Systems and Numerical
Differentiation. Numerische Mathematik.
- Lyness, J. M., Moler, C. B. (1969). Generalized Romberg Methods for
Integrals of Derivatives. Numerische Mathematik.
Examples
>>> import numpy as np
>>> import numdifftools as nd

# Rosenbrock function, minimized at [1,1]

>>> rosen = lambda x : (1.-x[0])**2 + 105*(x[1]-x[0]**2)**2
>>> Hfun = nd.Hessian(rosen)
>>> h = Hfun([1, 1])
>>> h
array([[ 842., -420.],
       [-420.,  210.]])

# cos(x-y), at (0,0)

>>> cos = np.cos
>>> fun = lambda xy : cos(xy[0]-xy[1])
>>> Hfun2 = nd.Hessian(fun)
>>> h2 = Hfun2([0, 0])
>>> h2
array([[-1.,  1.],
       [ 1., -1.]])
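A hedged consistency check (not from the original docstring): for a simple separable function, the diagonal of the Hessian should agree with Hessdiag.
>>> fun3 = lambda x: x[0] + x[1]**2 + x[2]**3
>>> Hfun3 = nd.Hessian(fun3)
>>> np.allclose(np.diag(Hfun3([1., 2., 3.])), nd.Hessdiag(fun3)([1., 2., 3.]))
True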
Methods
__call__(x, *args, **kwds)
Call self as a function.
- class info(f_value, error_estimate, final_step, index)¶
Bases:
tuple
- count(value, /)¶
Return number of occurrences of value.
- property error_estimate¶
Alias for field number 1
- property f_value¶
Alias for field number 0
- property final_step¶
Alias for field number 2
- property index¶
Alias for field number 3
- property method¶
Defines the method used in the finite difference approximation.
- property method_order¶
Defines the leading order of the error term in the Richardson extrapolation method.
- property n¶
Order of the derivative.
- property order¶
Defines the order of the error term in the Taylor approximation used.
- set_richardson_rule(step_ratio, num_terms=2)¶
Set Richardson extrapolation options
- property step¶
The step spacing(s) used in the approximation
- class Jacobian(fun, step=None, method='central', order=2, n=1, **options)[source]¶
Bases:
Derivative
Calculate Jacobian with finite difference approximation
- Parameters
- fun : function
function of one array fun(x, *args, **kwds)
- step : float, array-like or StepGenerator object, optional
Defines the spacing used in the approximation. Default is MinStepGenerator(**step_options) if method is in [‘complex’, ‘multicomplex’], otherwise MaxStepGenerator(**step_options).
The results are extrapolated if the StepGenerator generates more than 3 steps.
- method : {‘central’, ‘complex’, ‘multicomplex’, ‘forward’, ‘backward’}
defines the method used in the approximation
- order : int, optional
defines the order of the error term in the Taylor approximation used. For ‘central’ and ‘complex’ methods, it must be an even number.
- richardson_terms : scalar integer, default 2
number of terms used in the Richardson extrapolation.
- full_output : bool, optional
If full_output is False, only the derivative is returned. If full_output is True, then (der, r) is returned, where der is the derivative and r is a Results object.
- **step_options :
options to pass on to the XXXStepGenerator used.
- Returns
- jacob : array
Jacobian
See also
Notes
Complex methods are usually the most accurate, provided the function to differentiate is analytic. The complex-step methods also require fewer steps than the other methods and can work very close to the support of a function. The complex-step derivative has truncation error O(steps**2) for n=1 and O(steps**4) for n larger, so the truncation error can be eliminated by choosing steps to be very small. In particular, the first-order complex-step derivative avoids the problem of round-off error with small steps because there is no subtraction. However, this method fails if fun(x) does not support complex numbers or involves non-analytic functions such as abs, max, or min. Central difference methods are almost as accurate and have no restriction on the type of function. For this reason the ‘central’ method is the default, but sometimes one can only allow evaluation in the forward or backward direction.
For all methods one should be careful not to decrease the step size too much because of round-off errors.
Higher order approximation methods will generally be more accurate, but may also suffer more from numerical problems. First order methods are usually not recommended.
If fun returns a 1d array, it returns a Jacobian. If a 2d array is returned by fun (e.g., with a value for each observation), it returns a 3d array with the Jacobian of each observation with shape xk x nobs x xk. I.e., the Jacobian of the first observation would be [:, 0, :]
References
- Ridout, M.S. (2009) Statistical applications of the complex-step method
of numerical differentiation. The American Statistician, 63, 66-74
- K.-L. Lai, J.L. Crassidis, Y. Cheng, J. Kim (2005), New complex step
derivative approximations with application to second-order Kalman filtering, AIAA Guidance, Navigation and Control Conference, San Francisco, California, August 2005, AIAA-2005-5944.
- Lyness, J. M., Moler, C. B. (1966). Vandermonde Systems and Numerical
Differentiation. Numerische Mathematik.
- Lyness, J. M., Moler, C. B. (1969). Generalized Romberg Methods for
Integrals of Derivatives. Numerische Mathematik.
Examples
>>> import numpy as np
>>> import numdifftools as nd

# (nonlinear least squares)

>>> xdata = np.arange(0,1,0.1)
>>> ydata = 1+2*np.exp(0.75*xdata)
>>> fun = lambda c: (c[0]+c[1]*np.exp(c[2]*xdata) - ydata)**2
>>> np.allclose(fun([1, 2, 0.75]).shape, (10,))
True

>>> jfun = nd.Jacobian(fun)
>>> val = jfun([1, 2, 0.75])
>>> np.allclose(val, np.zeros((10,3)))
True

>>> fun2 = lambda x : x[0]*x[1]*x[2]**2
>>> jfun2 = nd.Jacobian(fun2)
>>> np.allclose(jfun2([1.,2.,3.]), [[18., 9., 12.]])
True

>>> fun3 = lambda x : np.vstack((x[0]*x[1]*x[2]**2, x[0]*x[1]*x[2]))
>>> jfun3 = nd.Jacobian(fun3)
>>> np.allclose(jfun3([1., 2., 3.]), [[[18.], [9.], [12.]], [[6.], [3.], [2.]]])
True
>>> np.allclose(jfun3([4., 5., 6.]), [[[180.], [144.], [240.]], [[30.], [24.], [20.]]])
True
>>> np.allclose(jfun3(np.array([[1.,2.,3.]]).T), [[[18.], [9.], [12.]], [[6.], [3.], [2.]]])
True
Methods
__call__(x, *args, **kwds)
Call self as a function.
- class info(f_value, error_estimate, final_step, index)¶
Bases:
tuple
- count(value, /)¶
Return number of occurrences of value.
- property error_estimate¶
Alias for field number 1
- property f_value¶
Alias for field number 0
- property final_step¶
Alias for field number 2
- property index¶
Alias for field number 3
- property method¶
Defines the method used in the finite difference approximation.
- property method_order¶
Defines the leading order of the error term in the Richardson extrapolation method.
- property n¶
Order of the derivative.
- property order¶
Defines the order of the error term in the Taylor approximation used.
- set_richardson_rule(step_ratio, num_terms=2)¶
Set Richardson extrapolation options
- property step¶
The step spacing(s) used in the approximation
- class MaxStepGenerator(base_step=2.0, step_ratio=None, num_steps=15, step_nom=None, offset=0, num_extrap=9, use_exact_steps=False, check_num_steps=True, scale=500)[source]¶
Bases:
MinStepGenerator
Generates a sequence of steps
- where
steps = step_nom * base_step * step_ratio ** (-i + offset)
for i = 0, 1, …, num_steps-1.
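For illustration, here is a hedged numeric sketch of the formula above with made-up parameter values (plain NumPy, not the output of the generator itself, which also rescales step_nom at call time):
>>> import numpy as np
>>> base_step, step_ratio, num_steps, offset, step_nom = 2.0, 2.0, 4, 0, 1.0
>>> i = np.arange(num_steps)
>>> steps = step_nom * base_step * step_ratio ** (-i + offset)
>>> np.allclose(steps, [2., 1., 0.5, 0.25])
True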
- Parameters
- base_step : float, array-like, default 2.0
Defines the maximum step. If None, the value is set to EPS**(1/scale).
- step_ratio : real scalar, optional, default 2 or 1.6
Ratio between sequential steps generated. Note: Ratio > 1. If None then step_ratio is 2 for n=1, otherwise step_ratio is 1.6.
- num_steps : scalar integer, optional, default min_num_steps + num_extrap
defines number of steps generated. It should be larger than min_num_steps = (n + order - 1) / fact, where fact is 1, 2 or 4 depending on the differentiation method used.
- step_nom : default maximum(log(exp(1)+|x|), 1)
Nominal step, where x is supplied at runtime through the __call__ method.
- offset : real scalar, optional, default 0
offset to the base step
- num_extrap : scalar integer, default 0
number of points used for extrapolation
- check_num_steps : boolean, default True
If True make sure num_steps is larger than the minimum required steps.
- use_exact_steps : boolean, default True
If true make sure exact steps are generated.
- scale : real scalar, default 500
scale used in base step.
- property base_step¶
Base step defines the minimum or maximum step when offset==0.
- property min_num_steps¶
Minimum number of steps required given the differentiation method and order.
- property num_steps¶
Defines number of steps generated
- property scale¶
Scale used in base step.
- step_generator_function(x, method='forward', n=1, order=2)¶
Step generator function
- property step_nom¶
Nominal step
- property step_ratio¶
Ratio between sequential steps generated
- class MinStepGenerator(base_step=None, step_ratio=None, num_steps=None, step_nom=None, offset=0, num_extrap=0, use_exact_steps=True, check_num_steps=True, scale=None)[source]¶
Bases:
object
Generates a sequence of steps
- where
steps = step_nom * base_step * step_ratio ** (i + offset)
for i = num_steps-1,… 1, 0.
- Parameters
- base_step : float, array-like, optional
Defines the minimum step. If None, the value is set to EPS**(1/scale).
- step_ratio : real scalar, optional, default 2
Ratio between sequential steps generated. Note: Ratio > 1. If None then step_ratio is 2 for n=1, otherwise step_ratio is 1.6.
- num_steps : scalar integer, optional, default min_num_steps + num_extrap
defines number of steps generated. It should be larger than min_num_steps = (n + order - 1) / fact, where fact is 1, 2 or 4 depending on the differentiation method used.
- step_nom : default maximum(log(exp(1)+|x|), 1)
Nominal step, where x is supplied at runtime through the __call__ method.
- offset : real scalar, optional, default 0
offset to the base step
- num_extrap : scalar integer, default 0
number of points used for extrapolation
- check_num_steps : boolean, default True
If True make sure num_steps is larger than the minimum required steps.
- use_exact_steps : boolean, default True
If true make sure exact steps are generated.
- scale : real scalar, optional
scale used in base step. If not None it will override the default computed with the default_scale function.
- property base_step¶
Base step defines the minimum or maximum step when offset==0.
- property min_num_steps¶
Minimum number of steps required given the differentiation method and order.
- property num_steps¶
Defines number of steps generated
- property scale¶
Scale used in base step.
- property step_nom¶
Nominal step
- property step_ratio¶
Ratio between sequential steps generated
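Examples
A hedged usage sketch (parameter values are illustrative): a MinStepGenerator instance can be passed as the step argument of Derivative, as described for the step parameter of that class.
>>> import numpy as np
>>> import numdifftools as nd
>>> dexp = nd.Derivative(np.exp, step=nd.MinStepGenerator(num_steps=10))
>>> np.allclose(dexp(0.0), 1.0)
True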
- class Richardson(step_ratio=2.0, step=1, order=1, num_terms=2)[source]¶
Bases:
object
Extrapolates a sequence with Richardson’s method
- Parameters
- step_ratio: real scalar
Ratio between sequential steps, h, generated.
- step: scalar integer
Defines the step between exponents in the error polynomial, i.e., step = k_1 - k_0 = k_2 - k_1 = … = k_{i+1} - k_i
- order: scalar integer
Leading order of truncation error.
- num_terms: scalar integer
Number of terms used in the polynomial fit.
Notes
Suppose f(h) is an approximation of L (exact value) that depends on a positive step size h described with a sequence of the form
L = f(h) + a0 * h^k_0 + a1 * h^k_1 + a2 * h^k_2 + …
where the a_i are unknown constants and the k_i are known constants such that h^k_i > h^k_{i+1}.
If we evaluate the right hand side for different stepsizes h we can fit a polynomial to that sequence of approximations. This is exactly what this class does. Here k_0 is the leading order step size behavior of truncation error as L = f(h)+O(h^k_0) (f(h) -> L as h -> 0, but f(0) != L) and k_i = order + step * i .
Examples
>>> import numpy as np
>>> import numdifftools as nd
>>> n = 3
>>> Ei = np.zeros((n,1))
>>> h = np.zeros((n,1))
>>> linfun = lambda i : np.linspace(0, np.pi/2., 2**(i+5)+1)
>>> for k in np.arange(n):
...     x = linfun(k)
...     h[k] = x[1]
...     Ei[k] = np.trapz(np.sin(x),x)
>>> En, err, step = nd.Richardson(step=1, order=1)(Ei, h)
>>> truErr = np.abs(En-1.)
>>> np.all(truErr < err)
True
>>> np.all(np.abs(Ei-1) < 1e-3)
True
>>> np.allclose(En, 1)
True
- dea3(v_0, v_1, v_2, symmetric=False)[source]¶
Extrapolate a slowly convergent sequence using Shanks transformations.
- Parameters
- v_0, v_1, v_2 : array-like
3 values of a convergent sequence to extrapolate
- Returns
- result : array-like
extrapolated value
- abserr : array-like
absolute error estimate
See also
Dea
Notes
DEA3 attempts to extrapolate nonlinearly by Shanks transformations to a better estimate of the sequence’s limiting value based on only three values. The epsilon algorithm of P. Wynn, see [1], is used to perform the non-linear Shanks transformations. The routine is a vectorized translation of the DQELG function found in the QUADPACK fortran library for LIMEXP=3, see [2] and [3].
References
- 1
Wynn, P. (1956) “On a Device for Computing the em(Sn) Transformation”, Mathematical Tables and Other Aids to Computation, 10, 91-96.
- 2
R. Piessens, E. De Doncker-Kapenga and C. W. Uberhuber (1983), “QUADPACK: a subroutine package for automatic integration”, Springer, ISBN: 3-540-12553-1, 1983.
- 3
- 4
Examples
# integrate sin(x) from 0 to pi/2
>>> import numpy as np
>>> import numdifftools as nd
>>> Ei = np.zeros(3)
>>> linfun = lambda i : np.linspace(0, np.pi/2., 2**(i+5)+1)
>>> for k in np.arange(3):
...     x = linfun(k)
...     Ei[k] = np.trapz(np.sin(x),x)
>>> [En, err] = nd.dea3(Ei[0], Ei[1], Ei[2])
>>> truErr = np.abs(En-1.)
>>> np.all(truErr < err)
True
>>> np.allclose(En, 1)
True
>>> np.all(np.abs(Ei-1) < 1e-3)
True
- directionaldiff(f, x0, vec, **options)[source]¶
Return directional derivative of a function of n variables
- Parameters
- f: function
analytical function to differentiate.
- x0: array
vector location at which to differentiate ‘f’. If x0 is an nXm array, then ‘f’ is assumed to be a function of n*m variables.
- vec: array
vector defining the line along which to take the derivative. It should be the same size as x0, but need not be a vector of unit length.
- **options:
optional arguments to pass on to Derivative.
- Returns
- dder: scalar
estimate of the first derivative of ‘f’ in the specified direction.
See also
Examples
At the global minimizer (1,1) of the Rosenbrock function, compute the directional derivative in the direction [1 2]
>>> import numpy as np
>>> import numdifftools as nd
>>> vec = np.r_[1, 2]
>>> rosen = lambda x: (1-x[0])**2 + 105*(x[1]-x[0]**2)**2
>>> dd, info = nd.directionaldiff(rosen, [1, 1], vec, full_output=True)
>>> np.allclose(dd, 0)
True
>>> np.abs(info.error_estimate) < 1e-14
True
5.2.3. numdifftools.extrapolation module¶
Created on 28. aug. 2015
@author: pab
- class Dea(limexp=50)[source]¶
Bases:
object
Extrapolate a slowly convergent sequence using repeated Shanks transformations.
Notes
DEA attempts to extrapolate nonlinearly by Shanks transformations to a better estimate of the sequence’s limiting value, thus improving the rate of convergence. The epsilon algorithm of P. Wynn, see [1], is used to perform the non-linear Shanks transformations. The routine is a translation of the DQELG function found in the QUADPACK fortran library, see [2] and [3].
List of major variables:
- LIMEXP: scalar integer
The maximum number of elements the epsilon table data can contain. The epsilon table is stored in the first (LIMEXP+2) entries of EPSTAB.
- EPSTAB: real vector of size (LIMEXP+2+3)
The first LIMEXP+2 elements contains the two lower diagonals of the triangular epsilon table. The elements are numbered starting at the right-hand corner of the
triangle.
- E0,E1,E2,E3: real scalars
The 4 elements on which the computation of a new element in the epsilon table is based.
- NRES: scalar integer
Number of extrapolation results actually generated by the epsilon algorithm in prior calls to the routine.
- NEWELM: scalar integer
Number of elements to be computed in the new diagonal of the epsilon table. The condensed epsilon table is computed. Only those elements needed for the computation of the next diagonal are preserved.
- RES: real scalar
New element in the new diagonal of the epsilon table.
- ERROR: real scalar
An estimate of the absolute error of RES. The routine decides whether RESULT=RES or RESULT=SVALUE by comparing ERROR with abserr from the previous call.
- RES3LA: real vector of size 3
Contains at most the last 3 results.
- property limexp¶
Maximum number of elements the epsilon table data can contain.
- class EpsAlg[source]¶
Bases:
object
Extrapolate a slowly convergent sequence using Shanks transformation.
Notes
The iterated Shanks transformation is computed using the Wynn epsilon algorithm (see equations 4.3-10a to 4.3-10c on page 25 in [1]).
References
- 1
E. J. Weniger (1989) “Nonlinear sequence transformations for the acceleration of convergence and the summation of divergent series” Computer Physics Reports Vol. 10, 189 - 371 http://arxiv.org/abs/math/0306302v1
- 2
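Examples
The following is a hedged, from-scratch sketch of the Wynn epsilon recursion referenced above (it does not use the EpsAlg class API), applied to the partial sums of the alternating series for ln(2):
>>> import numpy as np
>>> s = np.cumsum([(-1.0)**k / (k + 1) for k in range(11)])  # partial sums of ln(2)
>>> e_prev, e_curr = np.zeros_like(s), s.copy()              # eps_{-1} and eps_0 columns
>>> for _ in range(len(s) - 1):
...     # eps_{k+1}^(n) = eps_{k-1}^(n+1) + 1/(eps_k^(n+1) - eps_k^(n))
...     e_next = e_prev[1:len(e_curr)] + 1.0 / (e_curr[1:] - e_curr[:-1])
...     e_prev, e_curr = e_curr, e_next
>>> bool(abs(e_curr[0] - np.log(2)) < 1e-5)  # far better than the last partial sum
True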
- class Richardson(step_ratio=2.0, step=1, order=1, num_terms=2)[source]¶
Bases:
object
Extrapolates a sequence with Richardson’s method
- Parameters
- step_ratio: real scalar
Ratio between sequential steps, h, generated.
- step: scalar integer
Defines the step between exponents in the error polynomial, i.e., step = k_1 - k_0 = k_2 - k_1 = … = k_{i+1} - k_i
- order: scalar integer
Leading order of truncation error.
- num_terms: scalar integer
Number of terms used in the polynomial fit.
Notes
Suppose f(h) is an approximation of L (exact value) that depends on a positive step size h described with a sequence of the form
L = f(h) + a0 * h^k_0 + a1 * h^k_1 + a2 * h^k_2 + …
where the a_i are unknown constants and the k_i are known constants such that h^k_i > h^k_{i+1}.
If we evaluate the right hand side for different stepsizes h we can fit a polynomial to that sequence of approximations. This is exactly what this class does. Here k_0 is the leading order step size behavior of truncation error as L = f(h)+O(h^k_0) (f(h) -> L as h -> 0, but f(0) != L) and k_i = order + step * i .
Examples
>>> import numpy as np
>>> import numdifftools as nd
>>> n = 3
>>> Ei = np.zeros((n,1))
>>> h = np.zeros((n,1))
>>> linfun = lambda i : np.linspace(0, np.pi/2., 2**(i+5)+1)
>>> for k in np.arange(n):
...     x = linfun(k)
...     h[k] = x[1]
...     Ei[k] = np.trapz(np.sin(x),x)
>>> En, err, step = nd.Richardson(step=1, order=1)(Ei, h)
>>> truErr = np.abs(En-1.)
>>> np.all(truErr < err)
True
>>> np.all(np.abs(Ei-1) < 1e-3)
True
>>> np.allclose(En, 1)
True
- convolve(sequence, rule, **kwds)[source]¶
Wrapper around scipy.ndimage.convolve1d that allows complex input.
- dea3(v_0, v_1, v_2, symmetric=False)[source]¶
Extrapolate a slowly convergent sequence using Shanks transformations.
- Parameters
- v_0, v_1, v_2 : array-like
3 values of a convergent sequence to extrapolate
- Returns
- result : array-like
extrapolated value
- abserr : array-like
absolute error estimate
See also
Notes
DEA3 attempts to extrapolate nonlinearly by Shanks transformations to a better estimate of the sequence’s limiting value based on only three values. The epsilon algorithm of P. Wynn, see [1], is used to perform the non-linear Shanks transformations. The routine is a vectorized translation of the DQELG function found in the QUADPACK fortran library for LIMEXP=3, see [2] and [3].
References
- 1
Wynn, P. (1956) “On a Device for Computing the em(Sn) Transformation”, Mathematical Tables and Other Aids to Computation, 10, 91-96.
- 2
R. Piessens, E. De Doncker-Kapenga and C. W. Uberhuber (1983), “QUADPACK: a subroutine package for automatic integration”, Springer, ISBN: 3-540-12553-1, 1983.
- 3
- 4
Examples
# integrate sin(x) from 0 to pi/2
>>> import numpy as np
>>> import numdifftools as nd
>>> Ei = np.zeros(3)
>>> linfun = lambda i : np.linspace(0, np.pi/2., 2**(i+5)+1)
>>> for k in np.arange(3):
...     x = linfun(k)
...     Ei[k] = np.trapz(np.sin(x),x)
>>> [En, err] = nd.dea3(Ei[0], Ei[1], Ei[2])
>>> truErr = np.abs(En-1.)
>>> np.all(truErr < err)
True
>>> np.allclose(En, 1)
True
>>> np.all(np.abs(Ei-1) < 1e-3)
True
- dea_demo()[source]¶
>>> from numdifftools.extrapolation import dea_demo
>>> dea_demo()
NO. PANELS      TRAP. APPROX          APPROX W/EA           abserr
     1           0.78539816           0.78539816           0.78539816
     2           0.94805945           0.94805945           0.97596771
     4           0.98711580           0.99945672           0.21405856
     8           0.99678517           0.99996674           0.05190729
    16           0.99919668           0.99999988           0.00057629
    32           0.99979919           1.00000000           0.00057665
    64           0.99994980           1.00000000           0.00003338
   128           0.99998745           1.00000000           0.00000012
   256           0.99999686           1.00000000           0.00000000
   512           0.99999922           1.00000000           0.00000000
  1024           0.99999980           1.00000000           0.00000000
  2048           0.99999995           1.00000000           0.00000000
- epsalg_demo()[source]¶
>>> from numdifftools.extrapolation import epsalg_demo
>>> epsalg_demo()
NO. PANELS      TRAP. APPROX          APPROX W/EA           abserr
     1           0.78539816           0.78539816           0.21460184
     2           0.94805945           0.94805945           0.05194055
     4           0.98711580           0.99945672           0.00054328
     8           0.99678517           0.99996674           0.00003326
    16           0.99919668           0.99999988           0.00000012
    32           0.99979919           1.00000000           0.00000000
    64           0.99994980           1.00000000           0.00000000
   128           0.99998745           1.00000000           0.00000000
   256           0.99999686           1.00000000           0.00000000
   512           0.99999922           1.00000000           0.00000000
- richardson_demo()[source]¶
>>> from numdifftools.extrapolation import richardson_demo
>>> richardson_demo()
NO. PANELS      TRAP. APPROX          APPROX W/R            abserr
     1           0.78539816           0.78539816           0.21460184
     2           0.94805945           1.11072073           0.11072073
     4           0.98711580           0.99798929           0.00201071
     8           0.99678517           0.99988201           0.00011799
    16           0.99919668           0.99999274           0.00000726
    32           0.99979919           0.99999955           0.00000045
    64           0.99994980           0.99999997           0.00000003
   128           0.99998745           1.00000000           0.00000000
   256           0.99999686           1.00000000           0.00000000
   512           0.99999922           1.00000000           0.00000000
5.2.4. numdifftools.finite_difference module¶
Finite difference methods module.
- class HessdiagDifferenceFunctions[source]¶
Bases:
object
Class defining Hessdiag difference functions
References
- Ridout, M.S. (2009) Statistical applications of the complex-step method
of numerical differentiation. The American Statistician, 63, 66-74
- class HessianDifferenceFunctions[source]¶
Bases:
object
Class defining Hessian difference functions
References
Ridout, M.S. (2009) “Statistical applications of the complex-step method of numerical differentiation”, The American Statistician, 63, 66-74
- class JacobianDifferenceFunctions[source]¶
Bases:
object
Class defining Jacobian difference functions
- class LogHessdiagRule(n=1, method='central', order=2)[source]¶
Bases:
LogRule
Log spaced finite difference Hessdiag rule class
- Parameters
- n : 2
Order of the derivative.
- method : {‘central’, ‘complex’, ‘multicomplex’, ‘forward’, ‘backward’}
defines the method used in the approximation
- order : int, optional
defines the order of the error term in the Taylor approximation used. For ‘central’ and ‘complex’ methods, it must be an even number.
Examples
>>> import numpy as np
>>> from numdifftools.finite_difference import LogHessdiagRule as Rule

>>> np.allclose(Rule(method='central', order=2).rule(step_ratio=2.0), 2)
True
>>> np.allclose(Rule(method='central', order=4).rule(step_ratio=2.),
...             [-0.66666667, 10.66666667])
True
>>> np.allclose(Rule(method='central', order=6).rule(step_ratio=2.),
...             [ 4.44444444e-02, -3.55555556e+00, 4.55111111e+01])
True
>>> np.allclose(Rule(method='forward', order=2).rule(step_ratio=2.), [ -4., 40., -64.])
True
>>> np.allclose(Rule(method='forward', order=4).rule(step_ratio=2.),
...             [-1.90476190e-01, 1.10476190e+01, -1.92000000e+02,
...              1.12152381e+03, -1.56038095e+03])
True
>>> np.allclose(Rule(method='forward', order=6).rule(step_ratio=2.),
...             [-4.09626216e-04, 1.02406554e-01, -8.33015873e+00, 2.76317460e+02,
...              -3.84893968e+03, 2.04024004e+04, -2.74895500e+04])
True
>>> step_ratio = 2.0
>>> fd_rule = Rule(method='forward', order=4)
>>> steps = 0.002*(1./step_ratio)**np.arange(6)
>>> x0 = np.array([0., 0.])
>>> f = lambda xy : np.cos(xy[0]-xy[1])
>>> f_x0 = f(x0)
>>> f_del = [f(x0+h) - f_x0 for h in steps]  # forward difference
>>> f_del = [fd_rule.diff(f, f_x0, x0, h) for h in steps]  # or alternatively
>>> fder, h, shape = fd_rule.apply(f_del, steps, step_ratio)

>>> np.allclose(fder, [[-1., -1.], [-1., -1.]])
True
- property n¶
- class LogHessianRule(n=1, method='central', order=2)[source]¶
Bases:
LogRule
Log spaced finite difference Hessian rule class
- Parameters
- n : 2
Order of the derivative.
- method : {‘central’, ‘complex’, ‘multicomplex’, ‘forward’, ‘backward’}
defines the method used in the approximation
- order : int, optional
defines the order of the error term in the Taylor approximation used. For ‘central’ and ‘complex’ methods, it must be an even number.
- apply(sequence, steps, step_ratio=2.0)[source]¶
Apply finite difference rule along the first axis.
Return derivative estimates of fun at x0 for a sequence of stepsizes h
- Parameters
- sequence: finite differences
- steps: steps
- property n¶
- property order¶
The order of the error term in the Taylor approximation used
- class LogJacobianRule(n=1, method='central', order=2)[source]¶
Bases:
LogRule
Log spaced finite difference Jacobian rule class
- Parameters
- n : 1
Order of the derivative.
- method : {‘central’, ‘complex’, ‘multicomplex’, ‘forward’, ‘backward’}
defines the method used in the approximation
- order : int, optional
defines the order of the error term in the Taylor approximation used. For ‘central’ and ‘complex’ methods, it must be an even number.
Examples
>>> import numpy as np
>>> from numdifftools.finite_difference import LogJacobianRule as Rule
>>> np.allclose(Rule(n=1, method='central', order=2).rule(step_ratio=2.0), 1)
True
>>> np.allclose(Rule(n=1, method='central', order=4).rule(step_ratio=2.),
...             [-0.33333333, 2.66666667])
True
>>> np.allclose(Rule(n=1, method='central', order=6).rule(step_ratio=2.),
...             [ 0.02222222, -0.88888889, 5.68888889])
True

>>> np.allclose(Rule(n=1, method='forward', order=2).rule(step_ratio=2.), [-1., 4.])
True

>>> np.allclose(Rule(n=1, method='forward', order=4).rule(step_ratio=2.),
...             [ -0.04761905, 1.33333333, -10.66666667, 24.38095238])
True
>>> np.allclose(Rule(n=1, method='forward', order=6).rule(step_ratio=2.),
...             [ -1.02406554e-04, 1.26984127e-02, -5.07936508e-01,
...              8.12698413e+00, -5.20126984e+01, 1.07381055e+02])
True
>>> step_ratio = 2.0
>>> fd_rule = Rule(n=1, method='forward', order=4)
>>> steps = 0.002*(1./step_ratio)**np.arange(6)

>>> x0 = np.atleast_1d(1.)
>>> f = np.exp
>>> f_x0 = f(x0)
>>> f_del = [f(x0+h) - f_x0 for h in steps]  # forward difference
>>> f_del = [fd_rule.diff(f, f_x0, x0, h) for h in steps[:, None]]  # or alternatively
>>> fder, h, shape = fd_rule.apply(f_del, steps, step_ratio)
>>> np.allclose(fder, f(x0))
True
- class LogRule(n=1, method='central', order=2)[source]¶
Bases:
object
Log spaced finite difference rule class
- Parameters
- n : int, optional
Order of the derivative.
- method : {‘central’, ‘complex’, ‘multicomplex’, ‘forward’, ‘backward’}
defines the method used in the approximation
- order : int, optional
defines the order of the error term in the Taylor approximation used. For ‘central’ and ‘complex’ methods, it must be an even number.
Examples
>>> import numpy as np
>>> from numdifftools.finite_difference import LogRule
>>> np.allclose(LogRule(n=1, method='central', order=2).rule(step_ratio=2.0), 1)
True
>>> np.allclose(LogRule(n=1, method='central', order=4).rule(step_ratio=2.),
...             [-0.33333333, 2.66666667])
True
>>> np.allclose(LogRule(n=1, method='central', order=6).rule(step_ratio=2.),
...             [ 0.02222222, -0.88888889, 5.68888889])
True

>>> np.allclose(LogRule(n=1, method='forward', order=2).rule(step_ratio=2.), [-1., 4.])
True

>>> np.allclose(LogRule(n=1, method='forward', order=4).rule(step_ratio=2.),
...             [ -0.04761905, 1.33333333, -10.66666667, 24.38095238])
True
>>> np.allclose(LogRule(n=1, method='forward', order=6).rule(step_ratio=2.),
...             [ -1.02406554e-04, 1.26984127e-02, -5.07936508e-01,
...              8.12698413e+00, -5.20126984e+01, 1.07381055e+02])
True
>>> step_ratio = 2.0
>>> fd_rule = LogRule(n=2, method='forward', order=4)
>>> h = 0.002*(1./step_ratio)**np.arange(6)
>>> x0 = 1.
>>> f = np.exp
>>> f_x0 = f(x0)
>>> f_del = f(x0+h) - f_x0  # forward difference
>>> f_del = fd_rule.diff(f, f_x0, x0, h)  # or alternatively
>>> fder, h, shape = fd_rule.apply(f_del, h, step_ratio)
>>> np.allclose(fder, f(x0))
True
- apply(sequence, steps, step_ratio=2.0)[source]¶
Apply finite difference rule along the first axis.
Return derivative estimates of fun at x0 for a sequence of stepsizes h
- Parameters
- sequence: finite differences
- steps: steps
- property diff¶
The difference function
- property eval_first_condition¶
True if f(x0) needs to be evaluated given the differentiation method.
- property method_order¶
The leading order of the truncation error of the Richardson extrapolation.
- property richardson_step¶
The step between exponents in the error polynomial of the Richardson extrapolation.
- rule(step_ratio=2.0)[source]¶
Return finite differencing rule.
- Parameters
- step_ratioreal scalar, optional, default 2.0
Ratio between sequential steps generated.
Notes
The rule is for a nominal unit step size, and must be scaled later to reflect the local step size.
Member method used: _fd_matrix
Member variables used: n, order, method
5.2.5. numdifftools.fornberg module¶
- class Taylor(fun, n=1, r=0.0059, num_extrap=3, step_ratio=1.6, **kwds)[source]¶
Bases:
object
Return Taylor coefficients of complex analytic function using FFT
- Parameters
- fun : callable
function to differentiate
- z0 : real or complex scalar at which to evaluate the derivatives
- n : scalar integer, default 1
Number of Taylor coefficients to compute. Maximum number is 100.
- r : real scalar, default 0.0059
Initial radius at which to evaluate. For well-behaved functions, the computation should be insensitive to the initial radius to within about four orders of magnitude.
- num_extrap : scalar integer, default 3
number of extrapolation steps used in the calculation
- step_ratio : real scalar, default 1.6
Initial grow/shrinking factor for finding the best radius.
- max_iter : scalar integer, default 30
Maximum number of iterations
- min_iter : scalar integer, default max_iter // 2
Minimum number of iterations before the solution may be deemed degenerate. A larger number allows the algorithm to correct a bad initial radius.
- full_output : bool, optional
If full_output is False, only the coefficients are returned (default). If full_output is True, then (coefs, status) is returned.
- Returns
- coefs : ndarray
array of Taylor coefficients
- status : Optional object into which output information is written:
degenerate: True if the algorithm was unable to bound the error
iterations: Number of iterations executed
function_count: Number of function calls
final_radius: Ending radius of the algorithm
failed: True if the maximum number of iterations was reached
error_estimate: approximate bounds of the rounding error
Notes
This module uses the method of Fornberg to compute the Taylor series coefficients of a complex analytic function along with error bounds. The method uses a Fast Fourier Transform to invert function evaluations around a circle into Taylor series coefficients and uses Richardson Extrapolation to improve and bound the estimate. Unlike real-valued finite differences, the method searches for a desirable radius and so is reasonably insensitive to the initial radius, to within a number of orders of magnitude at least. For most cases, the default configuration is likely to succeed.
Restrictions: The method uses the coefficients themselves to control the truncation error, so the error will not be properly bounded for functions like low-order polynomials whose Taylor series coefficients are nearly zero. If the error cannot be bounded, the degenerate flag will be set to True, and an answer will still be computed and returned but should be used with caution.
References
- [1] Fornberg, B. (1981).
Numerical Differentiation of Analytic Functions. ACM Transactions on Mathematical Software (TOMS), 7(4), 512-526. http://doi.org/10.1145/355972.355979
Examples
Compute the first 6 Taylor coefficients of 1 / (1 - z) expanded around z0 = 0:
>>> import numdifftools.fornberg as ndf
>>> import numpy as np
>>> c, info = ndf.Taylor(lambda x: 1./(1-x), n=6, full_output=True)(z0=0)
>>> np.allclose(c, np.ones(8))
True
>>> np.all(info.error_estimate < 1e-9)
True
>>> (info.function_count, info.iterations, info.failed) == (136, 17, False)
True
- derivative(fun, z0, n=1, **kwds)[source]¶
Calculate n-th derivative of complex analytic function using FFT
- Parameters
- fun : callable
function to differentiate
- z0 : real or complex scalar at which to evaluate the derivatives
- n : scalar integer, default 1
Number of derivatives to compute, where 0 represents the value of the function and n represents the nth derivative. Maximum number is 100.
- r : real scalar, default 0.0061
Initial radius at which to evaluate. For well-behaved functions, the computation should be insensitive to the initial radius to within about four orders of magnitude.
- max_iter : scalar integer, default 30
Maximum number of iterations
- min_iter : scalar integer, default max_iter // 2
Minimum number of iterations before the solution may be deemed degenerate. A larger number allows the algorithm to correct a bad initial radius.
- step_ratio : real scalar, default 1.6
Initial grow/shrinking factor for finding the best radius.
- num_extrap : scalar integer, default 3
number of extrapolation steps used in the calculation
- full_output : bool, optional
If full_output is False, only the derivative is returned (default). If full_output is True, then (der, status) is returned, where der is the derivative and status is a Results object.
- Returns
- der : ndarray
array of derivatives
- status : Optional object into which output information is written. Fields:
degenerate: True if the algorithm was unable to bound the error
iterations: Number of iterations executed
function_count: Number of function calls
final_radius: Ending radius of the algorithm
failed: True if the maximum number of iterations was reached
error_estimate: approximate bounds of the rounding error
Notes
This module uses the method of Fornberg to compute the derivatives of a complex analytic function along with error bounds. The method uses a Fast Fourier Transform to invert function evaluations around a circle into Taylor series coefficients, uses Richardson Extrapolation to improve and bound the estimate, then multiplies by a factorial to compute the derivatives. Unlike real-valued finite differences, the method searches for a desirable radius and so is reasonably insensitive to the initial radius, to within a number of orders of magnitude at least. For most cases, the default configuration is likely to succeed.
Restrictions: The method uses the coefficients themselves to control the truncation error, so the error will not be properly bounded for functions like low-order polynomials whose Taylor series coefficients are nearly zero. If the error cannot be bounded, the degenerate flag will be set to True, and an answer will still be computed and returned but should be used with caution.
References
- [1] Fornberg, B. (1981).
Numerical Differentiation of Analytic Functions. ACM Transactions on Mathematical Software (TOMS), 7(4), 512-526. http://doi.org/10.1145/355972.355979
Examples
Compute the first 6 derivatives of 1 / (1 - z) at z0 = 0:
>>> import numdifftools.fornberg as ndf
>>> import numpy as np
>>> def fun(x):
...     return 1./(1-x)
>>> c, info = ndf.derivative(fun, z0=0, n=6, full_output=True)
>>> np.allclose(c, [1, 1, 2, 6, 24, 120, 720, 5040])
True
>>> np.all(info.error_estimate < 1e-9*c.real)
True
>>> (info.function_count, info.iterations, info.failed) == (136, 17, False)
True
- fd_derivative(fx, x, n=1, m=2)[source]¶
Return the n’th derivative for all points using Finite Difference method.
- Parameters
- fx : vector
function values which are evaluated on x, i.e., fx[i] = f(x[i])
- x : vector
abscissas on which fx is evaluated. The x values can be arbitrarily spaced but must be distinct and len(x) > n.
- n : scalar integer
order of derivative.
- m : scalar integer
defines the stencil size. The stencil size is 2 * mm + 1 points in the interior, and 2 * mm + 2 points for each of the 2 * mm boundary points, where mm = n // 2 + m.
fd_derivative evaluates an approximation for the n’th derivative of the vector function f(x) using the Fornberg finite difference method.
Restrictions: 0 < n < len(x) and 2*mm+2 <= len(x)
See also
Examples
>>> import numpy as np
>>> import numdifftools.fornberg as ndf
>>> x = np.linspace(-1, 1, 25)
>>> fx = np.exp(x)
>>> df = ndf.fd_derivative(fx, x, n=1)
>>> np.allclose(df, fx)
True
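As a hedged follow-up to the example above (the explicit tolerance is indicative only), the same call with n=2 should recover the second derivative of exp as well:
>>> ddf = ndf.fd_derivative(fx, x, n=2)
>>> np.allclose(ddf, fx, atol=1e-3)
True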
- fd_weights(x, x0=0, n=1)[source]¶
Return finite difference weights for the n’th derivative.
- Parameters
- x : vector
abscissas used for the evaluation of the derivative at x0.
- x0 : scalar
location where approximations are to be accurate
- n : scalar integer
order of derivative. Note for n=0 this can be used to evaluate the interpolating polynomial itself.
See also
Examples
>>> import numpy as np
>>> import numdifftools.fornberg as ndf
>>> x = np.linspace(-1, 1, 5) * 1e-3
>>> w = ndf.fd_weights(x, x0=0, n=1)
>>> df = np.dot(w, np.exp(x))
>>> np.allclose(df, 1)
True
- fd_weights_all(x, x0=0, n=1)[source]¶
Return finite difference weights for derivatives of all orders up to n.
- Parameters
- x : vector, length m
x-coordinates for grid points
- x0 : scalar
location where approximations are to be accurate
- n : scalar integer
highest derivative that we want to find weights for
- Returns
- weights : array, shape n+1 x m
contains coefficients for the j’th derivative in row j (0 <= j <= n)
See also
Notes
The x values can be arbitrarily spaced but must be distinct and len(x) > n.
The Fornberg algorithm is much more stable numerically than regular vandermonde systems for large values of n.
References
B. Fornberg (1998) “Calculation of weights_and_points in finite difference formulas”, SIAM Review 40, pp. 685-691.
http://www.scholarpedia.org/article/Finite_difference_method
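A short usage sketch, added here for illustration and not taken from the docstring: the j’th row of the returned weight matrix, applied to the function values, approximates the j’th derivative at x0. For exp every derivative at 0 equals 1:

>>> import numpy as np
>>> import numdifftools.fornberg as ndf
>>> x = np.linspace(-1, 1, 7) * 1e-2
>>> weights = ndf.fd_weights_all(x, x0=0, n=2)
>>> derivatives = np.dot(weights, np.exp(x))  # approximations of f(0), f'(0), f''(0)
>>> np.allclose(derivatives, [1, 1, 1])
True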
- taylor(fun, z0=0, n=1, r=0.0059, num_extrap=3, step_ratio=1.6, **kwds)[source]¶
Return Taylor coefficients of complex analytic function using FFT
- Parameters
- funcallable
function to differentiate
- z0real or complex scalar at which to evaluate the derivatives
- nscalar integer, default 1
Number of Taylor coefficients to compute. The maximum number is 100.
- rreal scalar, default 0.0059
Initial radius at which to evaluate. For well-behaved functions, the computation should be insensitive to the initial radius to within about four orders of magnitude.
- num_extrapscalar integer, default 3
number of extrapolation steps used in the calculation
- step_ratioreal scalar, default 1.6
Initial grow/shrinking factor for finding the best radius.
- max_iterscalar integer, default 30
Maximum number of iterations
- min_iterscalar integer, default max_iter // 2
Minimum number of iterations before the solution may be deemed degenerate. A larger number allows the algorithm to correct a bad initial radius.
- full_outputbool, optional
If full_output is False, only the coefficients are returned (default). If full_output is True, then (coefs, status) is returned.
- Returns
- coefsndarray
array of Taylor coefficients
- status: Optional object into which output information is written:
degenerate: True if the algorithm was unable to bound the error
iterations: Number of iterations executed
function_count: Number of function calls
final_radius: Ending radius of the algorithm
failed: True if the maximum number of iterations was reached
error_estimate: approximate bounds of the rounding error.
Notes
This module uses the method of Fornberg to compute the Taylor series coefficients of a complex analytic function along with error bounds. The method uses a Fast Fourier Transform to invert function evaluations around a circle into Taylor series coefficients and uses Richardson Extrapolation to improve and bound the estimate. Unlike real-valued finite differences, the method searches for a desirable radius and so is reasonably insensitive to the initial radius, to within a number of orders of magnitude at least. For most cases, the default configuration is likely to succeed.
Restrictions: The method uses the coefficients themselves to control the truncation error, so the error will not be properly bounded for functions like low-order polynomials whose Taylor series coefficients are nearly zero. If the error cannot be bounded, the degenerate flag will be set to True, and an answer will still be computed and returned, but it should be used with caution.
References
- [1] Fornberg, B. (1981).
Numerical Differentiation of Analytic Functions. ACM Transactions on Mathematical Software (TOMS), 7(4), 512-526. http://doi.org/10.1145/355972.355979
Examples
Compute the first 6 Taylor coefficients of 1 / (1 - z) expanded around z0 = 0:
>>> import numdifftools.fornberg as ndf
>>> import numpy as np
>>> c, info = ndf.taylor(lambda x: 1./(1-x), z0=0, n=6, full_output=True)
>>> np.allclose(c, np.ones(8))
True
>>> np.all(info.error_estimate < 1e-9)
True
>>> (info.function_count, info.iterations, info.failed) == (136, 17, False)
True
5.2.6. numdifftools.limits module¶
Created on 27. aug. 2015
@author: pab
Author: John D’Errico
e-mail: woodchips@rochester.rr.com
Release: 1.0
Release date: 5/23/2008
- class CStepGenerator(base_step=None, step_ratio=4.0, num_steps=None, step_nom=None, offset=0, scale=1.2, **options)[source]¶
Bases:
MinStepGenerator
Generates a sequence of steps
- where
steps = base_step * step_nom * (exp(1j*dtheta) * step_ratio) ** (i + offset)
for i = 0, 1, …, num_steps-1
- Parameters
- base_stepfloat, array-like, default None
Defines the minimum step, if None, the value is set to EPS**(1/scale)
- step_ratioreal scalar, optional, default 4.0
Ratio between sequential steps generated.
- num_stepsscalar integer, optional,
defines number of steps generated. If None the value is 2 * int(round(16.0/log(abs(step_ratio)))) + 1
- step_nomdefault maximum(log(exp(1)+|x|), 1)
Nominal step where x is supplied at runtime through the __call__ method.
- offsetreal scalar, optional, default 0
offset to the base step
- use_exact_stepsboolean, default True.
If true make sure exact steps are generated.
- scalereal scalar, default 1.2
scale used in base step.
- path‘radial’ or ‘spiral’
Specifies the type of path to take the limit along. Default ‘radial’.
- dtheta: real scalar, default pi/8
If the path is ‘spiral’ it will follow an exponential spiral into the limit, with angular steps at dtheta radians.
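A minimal instantiation sketch, added for illustration and assuming (as for the other step generators) that the instance is called with the evaluation point x and yields the generated steps; the particular parameter values are arbitrary:

>>> from numdifftools.limits import CStepGenerator
>>> step_gen = CStepGenerator(num_steps=4, path='spiral')
>>> steps = [step for step in step_gen(0.0)]  # steps along the spiral path into 0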
- property dtheta¶
Angular steps in radians used for the exponential spiral path.
- property num_steps¶
The number of steps generated
- property step_ratio¶
Ratio between sequential steps generated.
- class Limit(fun, step=None, method='above', order=4, full_output=False, **options)[source]¶
Bases:
_Limit
Compute limit of a function at a given point
- Parameters
- funcallable
function fun(z, *args, **kwds) to compute the limit for z->z0. The function, fun, is assumed to return a result of the same shape and size as its input, z.
- step: float, complex, array-like or StepGenerator object, optional
Defines the spacing used in the approximation. Default is CStepGenerator(base_step=step, **options)
- method{‘above’, ‘below’}
defines if the limit is taken from above or below
- order: positive scalar integer, optional.
defines the order of the approximation used to find the specified limit. The order must be a member of [1, 2, 3, 4, 5, 6, 7, 8]. 4 is a good compromise.
- full_output: bool
If true return additional info.
- options:
options to pass on to CStepGenerator
- Returns
- limit_fz: array like
estimated limit of f(z) as z -> z0
- info:
Only given if full_output is True and contains the following:
- error estimate: ndarray
95 % uncertainty estimate around the limit, such that abs(limit_fz - lim z->z0 f(z)) < error_estimate
- final_step: ndarray
final step used in approximation
Notes
Limit computes the limit of a given function at a specified point, z0. When the function can be evaluated at the point in question, this is a simple task. But when the function cannot be evaluated at that location due to a singularity, you may need a tool to compute the limit. Limit does this, and also produces an uncertainty estimate in the final result.
The methods used by Limit are Richardson extrapolation in combination with Wynn’s epsilon algorithm, which also yields an error estimate. The user can specify the method order, as well as the path into z0. z0 may be real or complex. Limit uses a proportionally cascaded series of function evaluations, moving away from your point of evaluation along a path on the real line (or in the complex plane for complex z0 or step). The step_ratio is the ratio used between sequential steps. The sign of step allows you to specify a limit from above or below. Negative values of step will cause the limit to be taken approaching z0 from below.
A smaller step_ratio means that Limit will take more function evaluations to evaluate the limit, but the result will potentially be less accurate. The step_ratio MUST be a scalar larger than 1. A value in the range [2,100] is recommended. 4 seems a good compromise.
>>> import numpy as np
>>> from numdifftools.limits import Limit
>>> def f(x): return np.sin(x)/x
>>> lim_f0, err = Limit(f, full_output=True)(0)
>>> np.allclose(lim_f0, 1)
True
>>> np.allclose(err.error_estimate, 1.77249444610966e-15)
True
Compute the derivative of cos(x) at x == pi/2. It should be -1. The limit will be taken as a function of the differential parameter, dx.
>>> x0 = np.pi/2
>>> def g(x): return (np.cos(x0+x)-np.cos(x0))/x
>>> lim_g0, err = Limit(g, full_output=True)(0)
>>> np.allclose(lim_g0, -1)
True
>>> err.error_estimate < 1e-14
True
Compute the residue at a first order pole at z = 0. The function 1./(1-exp(2*z)) has a pole at z == 0. The residue is given by the limit of z*fun(z) as z -> 0. Here, that residue should be -0.5.
>>> def h(z): return -z/(np.expm1(2*z))
>>> lim_h0, err = Limit(h, full_output=True)(0)
>>> np.allclose(lim_h0, -0.5)
True
>>> err.error_estimate < 1e-14
True
Compute the residue of the function 1./sin(z)**2 at z = 0. This pole is of second order, thus the residue is given by the limit of z**2*fun(z) as z -> 0.
>>> def g(z): return z**2/(np.sin(z)**2)
>>> lim_gpi, err = Limit(g, full_output=True)(0)
>>> np.allclose(lim_gpi, 1)
True
>>> err.error_estimate < 1e-14
True
A more difficult limit is one where there is significant subtractive cancellation at the limit point. In the following example, the cancellation is second order. The true limit should be 0.5.
>>> def k(x): return (x*np.exp(x)-np.expm1(x))/x**2
>>> lim_k0, err = Limit(k, full_output=True)(0)
>>> np.allclose(lim_k0, 0.5)
True
>>> err.error_estimate < 1.0e-8
True
>>> def h(x): return (x-np.sin(x))/x**3
>>> lim_h0, err = Limit(h, full_output=True)(0)
>>> np.allclose(lim_h0, 1./6)
True
>>> err.error_estimate < 1e-8
True
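Options for the underlying CStepGenerator, such as step_ratio, can be passed straight through **options. A brief sketch, added for illustration with an arbitrary ratio; the attainable accuracy will differ somewhat from the defaults:

>>> def f(x): return np.sin(x)/x
>>> lim_f0, err = Limit(f, step_ratio=2.0, full_output=True)(0)
>>> np.allclose(lim_f0, 1)
True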
- class Residue(f, step=None, method='above', order=None, pole_order=1, full_output=False, **options)[source]¶
Bases:
Limit
Compute residue of a function at a given point
- Parameters
- funcallable
function fun(z, *args, **kwds) to compute the Residue at z=z0. The function, fun, is assumed to return a result of the same shape and size as its input, z.
- step: float, complex, array-like or StepGenerator object, optional
Defines the spacing used in the approximation. Default is CStepGenerator(base_step=step, **options)
- method{‘above’, ‘below’}
defines if the limit is taken from above or below
- order: positive scalar integer, optional.
defines the order of the approximation used to find the specified limit. The order must be a member of [1, 2, 3, 4, 5, 6, 7, 8]. 4 is a good compromise.
- pole_orderscalar integer
specifies the order of the pole at z0.
- full_output: bool
If true return additional info.
- options:
options to pass on to CStepGenerator
- Returns
- res_fz: array like
estimated residue, i.e., the limit of f(z)*(z-z0)**pole_order as z -> z0. When the residue is estimated as approximately zero, the wrong order pole may have been specified.
- info: namedtuple,
Only given if full_output is True and contains the following:
- error estimate: ndarray
95 % uncertainty estimate around the residue, such that abs(res_fz - lim z->z0 f(z)*(z-z0)**pole_order) < error_estimate. Large uncertainties here suggest that the wrong order pole was specified for f(z0).
- final_step: ndarray
final step used in approximation
Notes
Residue computes the residue of a given function at a simple first order pole, or at a second order pole.
The methods used by residue are polynomial extrapolants, which also yield an error estimate. The user can specify the method order, as well as the order of the pole.
- z0 - scalar point at which to compute the residue. z0 may be real or complex.
See the document DERIVEST.pdf for more explanation of the algorithms behind the parameters of Residue. In most cases, the user should never need to specify anything other than possibly the PoleOrder.
Examples
A first order pole at z = 0
>>> import numpy as np
>>> from numdifftools.limits import Residue
>>> def f(z): return -1./(np.expm1(2*z))
>>> res_f, info = Residue(f, full_output=True)(0)
>>> np.allclose(res_f, -0.5)
True
>>> info.error_estimate < 1e-14
True
A second order pole around z = 0 and z = pi:

>>> def h(z): return 1.0/np.sin(z)**2
>>> res_h, info = Residue(h, full_output=True, pole_order=2)([0, np.pi])
>>> np.allclose(res_h, 1)
True
>>> (info.error_estimate < 1e-10).all()
True
5.2.7. numdifftools.multicomplex module¶
Created on 22. apr. 2015
@author: pab
5.2.7.1. References¶
A methodology for robust optimization of low-thrust trajectories in multi-body environments Gregory Lantoine (2010) Phd thesis, Georgia Institute of Technology
Using multicomplex variables for automatic computation of high-order derivatives Gregory Lantoine, Ryan P. Russell, and Thierry Dargent ACM Transactions on Mathematical Software, Vol. 38, No. 3, Article 16, April 2012, 21 pages, http://doi.acm.org/10.1145/2168773.2168774
Bicomplex Numbers and Their Elementary Functions M.E. Luna-Elizarraras, M. Shapiro, D.C. Struppa, A. Vajiac (2012) CUBO A Mathematical Journal Vol. 14, No 2, (61-80). June 2012.
Computation of higher-order derivatives using the multi-complex step method Adriaen Verheyleweghen, (2014) Project report, NTNU
5.2.8. numdifftools.nd_algopy module¶
5.2.8.1. Numdifftools.nd_algopy¶
This module provides an easy to use interface to derivatives calculated with AlgoPy. AlgoPy stands for Algorithmic Differentiation in Python.
The purpose of AlgoPy is the evaluation of higher-order derivatives in the forward and reverse modes of Algorithmic Differentiation (AD) of functions that are implemented as Python programs. The particular focus is on functions that contain numerical linear algebra operations, as they often appear in statistically motivated functions. The intended use of AlgoPy is easy prototyping at reasonable execution speeds. More precisely, for a typical program a directional derivative takes about 10 times as much time as the function evaluation. This is approximately also true for the gradient.
5.2.8.2. Algorithmic differentiation¶
Algorithmic differentiation (AD) is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.
Algorithmic differentiation is not:
Symbolic differentiation, nor Numerical differentiation (the method of finite differences). These classical methods run into problems: symbolic differentiation leads to inefficient code (unless carefully done) and faces the difficulty of converting a computer program into a single expression, while numerical differentiation can introduce round-off errors in the discretization process and cancellation. Both classical methods have problems with calculating higher derivatives, where the complexity and errors increase. Finally, both classical methods are slow at computing the partial derivatives of a function with respect to many inputs, as is needed for gradient-based optimization algorithms. Algorithmic differentiation solves all of these problems.
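To make the chain-rule mechanics concrete, here is a minimal, self-contained sketch of forward-mode AD using dual numbers. It is only an illustration of the idea; it is not how AlgoPy is implemented:

>>> class Dual:
...     """A value paired with its derivative; arithmetic applies the chain rule."""
...     def __init__(self, value, deriv=0.0):
...         self.value, self.deriv = value, deriv
...     def __add__(self, other):
...         other = other if isinstance(other, Dual) else Dual(other)
...         return Dual(self.value + other.value, self.deriv + other.deriv)
...     def __mul__(self, other):
...         other = other if isinstance(other, Dual) else Dual(other)
...         return Dual(self.value * other.value,
...                     self.deriv * other.value + self.value * other.deriv)
>>> def f(x):
...     return x * x + x  # f(x) = x**2 + x, so f'(3) = 2*3 + 1 = 7
>>> x = Dual(3.0, 1.0)  # seed the input derivative with 1
>>> y = f(x)
>>> (y.value, y.deriv)
(12.0, 7.0)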
5.2.8.2.1. References¶
Sebastian F. Walter and Lutz Lehmann 2013, “Algorithmic differentiation in Python with AlgoPy”, in Journal of Computational Science, vol 4, no 5, pp 334 - 344, http://www.sciencedirect.com/science/article/pii/S1877750311001013
- class Derivative(fun, n=1, method='forward', full_output=False)[source]¶
Bases:
_Derivative
Calculate n-th derivative with Algorithmic Differentiation method
- Parameters
- fun: function
function of one array fun(x, *args, **kwds)
- n: int, optional
Order of the derivative.
- method: string, optional {‘forward’, ‘reverse’}
defines method used in the approximation
- Returns
- der: ndarray
array of derivatives
Notes
Algorithmic differentiation is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.
References
Sebastian F. Walter and Lutz Lehmann 2013, “Algorithmic differentiation in Python with AlgoPy”, in Journal of Computational Science, vol 4, no 5, pp 334 - 344, http://www.sciencedirect.com/science/article/pii/S1877750311001013
https://en.wikipedia.org/wiki/Automatic_differentiation
Examples
# 1’st and 5’th derivative of exp(x), at x == 1
>>> import numpy as np
>>> import numdifftools.nd_algopy as nda
>>> fd = nda.Derivative(np.exp)        # 1'st derivative
>>> np.allclose(fd(1), 2.718281828459045)
True
>>> fd5 = nda.Derivative(np.exp, n=5)  # 5'th derivative
>>> np.allclose(fd5(1), 2.718281828459045)
True
# 1’st derivative of x^3+x^4, at x = [0,1]
>>> fun = lambda x: x**3 + x**4
>>> fd3 = nda.Derivative(fun)
>>> np.allclose(fd3([0,1]), [ 0.,  7.])
True
Methods
__call__: callable with the following parameters:
x: array_like
value at which function derivative is evaluated
args: tuple
Arguments for function fun.
kwds: dict
Keyword arguments for function fun.
- class Gradient(fun, n=1, method='forward', full_output=False)[source]¶
Bases:
_Derivative
Calculate Gradient with Algorithmic Differentiation method
- Parameters
- fun: function
function of one array fun(x, *args, **kwds)
- method: string, optional {‘forward’, ‘reverse’}
defines method used in the approximation
- Returns
- grad: array
gradient
See also
Notes
Algorithmic differentiation is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.
References
Sebastian F. Walter and Lutz Lehmann 2013, “Algorithmic differentiation in Python with AlgoPy”, in Journal of Computational Science, vol 4, no 5, pp 334 - 344, http://www.sciencedirect.com/science/article/pii/S1877750311001013
https://en.wikipedia.org/wiki/Automatic_differentiation
Examples
>>> import numpy as np
>>> import numdifftools.nd_algopy as nda
>>> fun = lambda x: np.sum(x**2)
>>> df = nda.Gradient(fun, method='reverse')
>>> np.allclose(df([1,2,3]), [ 2.,  4.,  6.])
True
# At [x,y] = [1,1], compute the numerical gradient of the function sin(x-y) + y*exp(x)
>>> sin = np.sin; exp = np.exp
>>> z = lambda xy: sin(xy[0]-xy[1]) + xy[1]*exp(xy[0])
>>> dz = nda.Gradient(z)
>>> grad2 = dz([1, 1])
>>> np.allclose(grad2, [ 3.71828183,  1.71828183])
True
# At the global minimizer (1,1) of the Rosenbrock function, compute the gradient. It should be essentially zero.
>>> rosen = lambda x : (1-x[0])**2 + 105.*(x[1]-x[0]**2)**2
>>> rd = nda.Gradient(rosen)
>>> grad3 = rd([1,1])
>>> np.allclose(grad3, [ 0.,  0.])
True
Methods
__call__: callable with the following parameters:
x: array_like
value at which function derivative is evaluated
args: tuple
Arguments for function fun.
kwds: dict
Keyword arguments for function fun.
- class Hessdiag(f, method='forward', full_output=False)[source]¶
Bases:
Hessian
Calculate Hessian diagonal with Algorithmic Differentiation method
- Parameters
- fun: function
function of one array fun(x, *args, **kwds)
- method: string, optional {‘forward’, ‘reverse’}
defines method used in the approximation
- Returns
- hessdiagndarray
Hessian diagonal array of partial second order derivatives.
See also
Notes
Algorithmic differentiation is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.
References
Sebastian F. Walter and Lutz Lehmann 2013, “Algorithmic differentiation in Python with AlgoPy”, in Journal of Computational Science, vol 4, no 5, pp 334 - 344, http://www.sciencedirect.com/science/article/pii/S1877750311001013
https://en.wikipedia.org/wiki/Automatic_differentiation
Examples
>>> import numpy as np
>>> import numdifftools.nd_algopy as nda
# Rosenbrock function, minimized at [1,1]
>>> rosen = lambda x : (1.-x[0])**2 + 105*(x[1]-x[0]**2)**2
>>> Hfun = nda.Hessdiag(rosen)
>>> h = Hfun([1, 1])  # h = [842, 210]
>>> np.allclose(h, [ 842.,  210.])
True
# cos(x-y), at (0,0)
>>> cos = np.cos
>>> fun = lambda xy : cos(xy[0]-xy[1])
>>> Hfun2 = nda.Hessdiag(fun)
>>> h2 = Hfun2([0, 0])  # h2 = [-1, -1]
>>> np.allclose(h2, [-1., -1.])
True
>>> Hfun3 = nda.Hessdiag(fun, method='reverse')
>>> h3 = Hfun3([0, 0])  # h3 = [-1, -1]
>>> np.allclose(h3, [-1., -1.])
True
Methods
__call__: callable with the following parameters:
x: array_like
value at which function derivative is evaluated
args: tuple
Arguments for function fun.
kwds: dict
Keyword arguments for function fun.
- class Hessian(f, method='forward', full_output=False)[source]¶
Bases:
_Derivative
Calculate Hessian with Algorithmic Differentiation method
- Parameters
- fun: function
function of one array fun(x, *args, **kwds)
- method: string, optional {‘forward’, ‘reverse’}
defines method used in the approximation
- Returns
- hessndarray
array of partial second derivatives, Hessian
See also
Notes
Algorithmic differentiation is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.
References
Sebastian F. Walter and Lutz Lehmann 2013, “Algorithmic differentiation in Python with AlgoPy”, in Journal of Computational Science, vol 4, no 5, pp 334 - 344, http://www.sciencedirect.com/science/article/pii/S1877750311001013
https://en.wikipedia.org/wiki/Automatic_differentiation
Examples
>>> import numpy as np
>>> import numdifftools.nd_algopy as nda
# Rosenbrock function, minimized at [1,1]
>>> rosen = lambda x : (1.-x[0])**2 + 105*(x[1]-x[0]**2)**2
>>> Hf = nda.Hessian(rosen)
>>> h = Hf([1, 1])  # h = [842, -420; -420, 210]
>>> np.allclose(h, [[ 842., -420.],
...                 [-420.,  210.]])
True
# cos(x-y), at (0,0)
>>> cos = np.cos
>>> fun = lambda xy : cos(xy[0]-xy[1])
>>> Hfun2 = nda.Hessian(fun)
>>> h2 = Hfun2([0, 0])  # h2 = [-1, 1; 1, -1]
>>> np.allclose(h2, [[-1.,  1.],
...                  [ 1., -1.]])
True
>>> Hfun3 = nda.Hessian(fun, method='reverse')
>>> h3 = Hfun3([0, 0])  # h3 = [-1, 1; 1, -1]
>>> np.allclose(h3, [[-1.,  1.],
...                  [ 1., -1.]])
True
Methods
__call__: callable with the following parameters:
x: array_like
value at which function derivative is evaluated
args: tuple
Arguments for function fun.
kwds: dict
Keyword arguments for function fun.
- class Jacobian(fun, n=1, method='forward', full_output=False)[source]¶
Bases:
Gradient
Calculate Jacobian with Algorithmic Differentiation method
- Parameters
- fun: function
function of one array fun(x, *args, **kwds)
- method: string, optional {‘forward’, ‘reverse’}
defines method used in the approximation
- Returns
- jacob: array
Jacobian
See also
Notes
Algorithmic differentiation is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.
References
Sebastian F. Walter and Lutz Lehmann 2013, “Algorithmic differentiation in Python with AlgoPy”, in Journal of Computational Science, vol 4, no 5, pp 334 - 344, http://www.sciencedirect.com/science/article/pii/S1877750311001013
https://en.wikipedia.org/wiki/Automatic_differentiation
Examples
>>> import numpy as np
>>> import numdifftools.nd_algopy as nda
#(nonlinear least squares)
>>> xdata = np.arange(0,1,0.1)
>>> ydata = 1+2*np.exp(0.75*xdata)
>>> fun = lambda c: (c[0]+c[1]*np.exp(c[2]*xdata) - ydata)**2
Jfun = nda.Jacobian(fun)  # Todo: This does not work
Jfun([1,2,0.75]).T  # should be numerically zero
array([[ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.]])
>>> Jfun2 = nda.Jacobian(fun, method='reverse')
>>> res = Jfun2([1,2,0.75]).T  # should be numerically zero
>>> np.allclose(res,
...             [[ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
...              [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
...              [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.]])
True
>>> f2 = lambda x : x[0]*x[1]*x[2]**2
>>> Jfun2 = nda.Jacobian(f2)
>>> np.allclose(Jfun2([1., 2., 3.]), [[ 18.,   9.,  12.]])
True
>>> Jfun21 = nda.Jacobian(f2, method='reverse')
>>> np.allclose(Jfun21([1., 2., 3.]), [[ 18.,   9.,  12.]])
True
>>> def fun3(x):
...     n = int(np.prod(np.shape(x[0])))
...     out = nda.algopy.zeros((2, n), dtype=x)
...     out[0] = x[0]*x[1]*x[2]**2
...     out[1] = x[0]*x[1]*x[2]
...     return out
>>> Jfun3 = nda.Jacobian(fun3)
>>> np.allclose(Jfun3([1., 2., 3.]), [[[18., 9., 12.]], [[6., 3., 2.]]])
True
>>> np.allclose(Jfun3([4., 5., 6.]), [[[180., 144., 240.]],
...                                   [[30., 24., 20.]]])
True
>>> np.allclose(Jfun3(np.array([[1.,2.,3.], [4., 5., 6.]]).T),
...             [[[18., 0., 9., 0., 12., 0.],
...               [0., 180., 0., 144., 0., 240.]],
...              [[6., 0., 3., 0., 2., 0.],
...               [0., 30., 0., 24., 0., 20.]]])
True
Methods
__call__: callable with the following parameters:
x: array_like
value at which function derivative is evaluated
args: tuple
Arguments for function fun.
kwds: dict
Keyword arguments for function fun.
- class Taylor(fun, n=1)[source]¶
Bases:
object
Return Taylor coefficients of function using algorithmic differentiation
- Parameters
- fun: callable
function to differentiate
- z0: real or complex array
at which to evaluate the derivatives
- n: scalar integer, default 1
Number of Taylor coefficients to compute.
- Returns
- coefs: ndarray
array of taylor coefficents
Examples
Compute the first 6 Taylor coefficients of 1 + 2*z + 3*z**2 expanded around z0 = 0:
>>> import numpy as np
>>> import numdifftools.nd_algopy as nda
>>> c = nda.Taylor(lambda x: 1+2*x+3*x**2, n=6)(z0=0)
>>> np.allclose(c, [1, 2, 3, 0, 0, 0])
True
- directionaldiff(f, x0, vec, **options)[source]¶
Return directional derivative of a function of n variables
- Parameters
- fun: callable
analytical function to differentiate.
- x0: array
vector location at which to differentiate fun. If x0 is an nxm array, then fun is assumed to be a function of n*m variables.
- vec: array
vector defining the line along which to take the derivative. It should be the same size as x0, but need not be a vector of unit length.
- **options:
optional arguments to pass on to Derivative.
- Returns
- dder: scalar
estimate of the first derivative of fun in the specified direction.
See also
Examples
At the global minimizer (1,1) of the Rosenbrock function, compute the directional derivative in the direction [1 2]
>>> import numpy as np
>>> import numdifftools.nd_algopy as nda
>>> vec = np.r_[1, 2]
>>> rosen = lambda x: (1-x[0])**2 + 105*(x[1]-x[0]**2)**2
>>> dd = nda.directionaldiff(rosen, [1, 1], vec)
>>> np.allclose(dd, 0)
True
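As a cross-check, added here as a sketch and relying on the statement above that vec need not have unit length (i.e., it is normalised internally), the directional derivative should equal the gradient projected onto the unit vector along vec:

>>> z = lambda xy: np.sin(xy[0]-xy[1]) + xy[1]*np.exp(xy[0])
>>> dd2 = nda.directionaldiff(z, [1, 1], vec)
>>> grad = nda.Gradient(z)([1, 1])
>>> np.allclose(dd2, np.dot(grad, vec)/np.linalg.norm(vec))
True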
5.2.9. numdifftools.nd_scipy module¶
- class Gradient(fun, step=None, method='central', order=2, bounds=(-inf, inf), sparsity=None)[source]¶
Bases:
Jacobian
Calculate Gradient with finite difference approximation
- Parameters
- funfunction
function of one array fun(x, *args, **kwds)
- stepfloat, optional
Stepsize, if None, optimal stepsize is used, i.e.,
x * _EPS for method==`complex`
x * _EPS**(1/2) for method==`forward`
x * _EPS**(1/3) for method==`central`.
- method{‘central’, ‘complex’, ‘forward’}
defines the method used in the approximation.
See also
Hessian, Jacobian
Examples
>>> import numpy as np
>>> import numdifftools.nd_scipy as nd
>>> fun = lambda x: np.sum(x**2)
>>> dfun = nd.Gradient(fun)
>>> np.allclose(dfun([1,2,3]), [ 2.,  4.,  6.])
True
# At [x,y] = [1,1], compute the numerical gradient of the function sin(x-y) + y*exp(x)
>>> sin = np.sin; exp = np.exp
>>> z = lambda xy: sin(xy[0]-xy[1]) + xy[1]*exp(xy[0])
>>> dz = nd.Gradient(z)
>>> grad2 = dz([1, 1])
>>> np.allclose(grad2, [ 3.71828183,  1.71828183])
True
# At the global minimizer (1,1) of the Rosenbrock function, compute the gradient. It should be essentially zero.
>>> rosen = lambda x : (1-x[0])**2 + 105.*(x[1]-x[0]**2)**2
>>> rd = nd.Gradient(rosen)
>>> grad3 = rd([1,1])
>>> np.allclose(grad3, [0, 0], atol=1e-7)
True
- class Jacobian(fun, step=None, method='central', order=2, bounds=(-inf, inf), sparsity=None)[source]¶
Bases:
_Common
Calculate Jacobian with finite difference approximation
- Parameters
- funfunction
function of one array fun(x, *args, **kwds)
- stepfloat, optional
Stepsize, if None, optimal stepsize is used, i.e.,
x * _EPS for method==`complex`
x * _EPS**(1/2) for method==`forward`
x * _EPS**(1/3) for method==`central`.
- method{‘central’, ‘complex’, ‘forward’}
defines the method used in the approximation.
Examples
>>> import numpy as np
>>> import numdifftools.nd_scipy as nd
#(nonlinear least squares)
>>> xdata = np.arange(0,1,0.1)
>>> ydata = 1+2*np.exp(0.75*xdata)
>>> fun = lambda c: (c[0]+c[1]*np.exp(c[2]*xdata) - ydata)**2
>>> np.allclose(fun([1, 2, 0.75]).shape, (10,))
True
>>> dfun = nd.Jacobian(fun)
>>> np.allclose(dfun([1, 2, 0.75]), np.zeros((10,3)))
True
>>> fun2 = lambda x : x[0]*x[1]*x[2]**2
>>> dfun2 = nd.Jacobian(fun2)
>>> np.allclose(dfun2([1.,2.,3.]), [[18., 9., 12.]])
True
>>> fun3 = lambda x : np.vstack((x[0]*x[1]*x[2]**2, x[0]*x[1]*x[2]))
TODO: The following does not work:

der3 = nd.Jacobian(fun3)([1., 2., 3.])
np.allclose(der3, [[18., 9., 12.], [6., 3., 2.]])
True
np.allclose(nd.Jacobian(fun3)([4., 5., 6.]), [[180., 144., 240.], [30., 24., 20.]])
True
np.allclose(nd.Jacobian(fun3)(np.array([[1.,2.,3.], [4., 5., 6.]]).T),
            [[[ 18., 180.],
              [  9., 144.],
              [ 12., 240.]],
             [[  6.,  30.],
              [  3.,  24.],
              [  2.,  20.]]])
True
5.2.10. numdifftools.nd_statsmodels module¶
5.2.10.1. Numdifftools.nd_statsmodels¶
This module provides an easy to use interface to derivatives calculated with statsmodels.numdiff.
- class Gradient(fun, step=None, method='central', order=None)[source]¶
Bases:
Jacobian
Calculate Gradient with finite difference approximation
- Parameters
- funfunction
function of one array fun(x, *args, **kwds)
- stepfloat, optional
Stepsize, if None, optimal stepsize is used, i.e.,
x * _EPS for method==`complex`
x * _EPS**(1/2) for method==`forward`
x * _EPS**(1/3) for method==`central`.
- method{‘central’, ‘complex’, ‘forward’, ‘backward’}
defines the method used in the approximation.
Examples
>>> import numpy as np
>>> import numdifftools.nd_statsmodels as nd
>>> fun = lambda x: np.sum(x**2)
>>> dfun = nd.Gradient(fun)
>>> np.allclose(dfun([1,2,3]), [ 2.,  4.,  6.])
True
# At [x,y] = [1,1], compute the numerical gradient of the function sin(x-y) + y*exp(x)
>>> sin = np.sin; exp = np.exp
>>> z = lambda xy: sin(xy[0]-xy[1]) + xy[1]*exp(xy[0])
>>> dz = nd.Gradient(z)
>>> grad2 = dz([1, 1])
>>> np.allclose(grad2, [ 3.71828183,  1.71828183])
True
# At the global minimizer (1,1) of the Rosenbrock function, compute the gradient. It should be essentially zero.
>>> rosen = lambda x : (1-x[0])**2 + 105.*(x[1]-x[0]**2)**2
>>> rd = nd.Gradient(rosen)
>>> grad3 = rd([1,1])
>>> np.allclose(grad3, [0, 0])
True
- class Hessian(fun, step=None, method='central', order=None)[source]¶
Bases:
_Common
Calculate Hessian with finite difference approximation
- Parameters
- funfunction
function of one array fun(x, *args, **kwds)
- stepfloat, optional
Stepsize, if None, optimal stepsize is used, i.e.,
x * _EPS**(1/3) for method==`forward`, `complex` or `central2`
x * _EPS**(1/4) for method==`central`.
- method{‘central’, ‘complex’, ‘forward’, ‘backward’}
defines the method used in the approximation.
Examples
>>> import numpy as np
>>> import numdifftools.nd_statsmodels as nd
# Rosenbrock function, minimized at [1,1]
>>> rosen = lambda x : (1.-x[0])**2 + 105*(x[1]-x[0]**2)**2
>>> Hfun = nd.Hessian(rosen)
>>> h = Hfun([1, 1])
>>> np.allclose(h, [[ 842., -420.], [-420.,  210.]])
True
# cos(x-y), at (0,0)
>>> cos = np.cos
>>> fun = lambda xy : cos(xy[0]-xy[1])
>>> Hfun2 = nd.Hessian(fun)
>>> h2 = Hfun2([0, 0])
>>> np.allclose(h2, [[-1.,  1.], [ 1., -1.]])
True
- property n¶
- class Jacobian(fun, step=None, method='central', order=None)[source]¶
Bases:
_Common
Calculate Jacobian with finite difference approximation
- Parameters
- funfunction
function of one array fun(x, *args, **kwds)
- stepfloat, optional
Stepsize, if None, optimal stepsize is used, i.e.,
x * _EPS for method==`complex`
x * _EPS**(1/2) for method==`forward`
x * _EPS**(1/3) for method==`central`.
- method{‘central’, ‘complex’, ‘forward’, ‘backward’}
defines the method used in the approximation.
Examples
>>> import numpy as np
>>> import numdifftools.nd_statsmodels as nd
#(nonlinear least squares)
>>> xdata = np.arange(0,1,0.1)
>>> ydata = 1+2*np.exp(0.75*xdata)
>>> fun = lambda c: (c[0]+c[1]*np.exp(c[2]*xdata) - ydata)**2
>>> np.allclose(fun([1, 2, 0.75]).shape, (10,))
True
>>> dfun = nd.Jacobian(fun)
>>> np.allclose(dfun([1, 2, 0.75]), np.zeros((10,3)))
True
>>> fun2 = lambda x : x[0]*x[1]*x[2]**2
>>> dfun2 = nd.Jacobian(fun2)
>>> np.allclose(dfun2([1.,2.,3.]), [[18., 9., 12.]])
True
>>> fun3 = lambda x : np.vstack((x[0]*x[1]*x[2]**2, x[0]*x[1]*x[2]))
>>> np.allclose(nd.Jacobian(fun3)([1., 2., 3.]),
...             [[[18.], [9.], [12.]], [[6.], [3.], [2.]]])
True
>>> np.allclose(nd.Jacobian(fun3)([4., 5., 6.]),
...             [[[180.], [144.], [240.]], [[30.], [24.], [20.]]])
True
>>> np.allclose(nd.Jacobian(fun3)(np.array([[1.,2.,3.], [4., 5., 6.]]).T),
...             [[[ 18., 180.],
...               [  9., 144.],
...               [ 12., 240.]],
...              [[  6.,  30.],
...               [  3.,  24.],
...               [  2.,  20.]]])
True
- approx_fprime(x, f, epsilon=None, args=(), kwargs=None, centered=True)[source]¶
Gradient of function, or Jacobian if function fun returns 1d array
- Parameters
- xarray
parameters at which the derivative is evaluated
- funfunction
fun(*((x,)+args), **kwargs) returning either one value or 1d array
- epsilonfloat, optional
Stepsize, if None, optimal stepsize is used. This is _EPS**(1/2)*x for centered == False and _EPS**(1/3)*x for centered == True.
- argstuple
Tuple of additional arguments for function fun.
- kwargsdict
Dictionary of additional keyword arguments for function fun.
- centeredbool
Whether central difference should be returned. If not, does forward differencing.
- Returns
- gradarray
gradient or Jacobian
Notes
If fun returns a 1d array, it returns a Jacobian. If a 2d array is returned by fun (e.g., with a value for each observation), it returns a 3d array with the Jacobian of each observation with shape xk x nobs x xk. I.e., the Jacobian of the first observation would be [:, 0, :]
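A brief usage sketch based on the signature above (not an example from the original docstring):

>>> import numpy as np
>>> import numdifftools.nd_statsmodels as nd
>>> fun = lambda x: np.sum(x**2)
>>> np.allclose(nd.approx_fprime(np.array([1., 2., 3.]), fun, centered=True), [2., 4., 6.])
True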
- approx_fprime_cs(x, f, epsilon=None, args=(), kwargs=None)[source]¶
Calculate gradient or Jacobian with complex step derivative approximation
- Parameters
- xarray
parameters at which the derivative is evaluated
- ffunction
f(*((x,)+args), **kwargs) returning either one value or 1d array
- epsilonfloat, optional
Stepsize, if None, optimal stepsize is used. Optimal step-size is EPS*x. See note.
- argstuple
Tuple of additional arguments for function f.
- kwargsdict
Dictionary of additional keyword arguments for function f.
- Returns
- partialsndarray
array of partial derivatives, Gradient or Jacobian
Notes
The complex-step derivative has truncation error O(epsilon**2), so truncation error can be eliminated by choosing epsilon to be very small. The complex-step derivative avoids the problem of round-off error with small epsilon because there is no subtraction.
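A corresponding sketch for the complex-step variant, added for illustration; note that the function must accept complex input and be analytic for this to work:

>>> import numpy as np
>>> import numdifftools.nd_statsmodels as nd
>>> fun = lambda x: np.sum(x**2)
>>> np.allclose(nd.approx_fprime_cs(np.array([1., 2., 3.]), fun), [2., 4., 6.])
True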
5.2.11. numdifftools.step_generators module¶
Step generators module
- class BasicMaxStepGenerator(base_step, step_ratio, num_steps, offset=0)[source]¶
Bases:
object
Generates a sequence of steps of decreasing magnitude
- where
steps = base_step * step_ratio ** (-i + offset)
for i = 0, 1, ..., num_steps-1.
- Parameters
- base_stepfloat, array-like.
Defines the start step, i.e., maximum step
- step_ratioreal scalar.
Ratio between sequential steps generated. Note: Ratio > 1
- num_stepsscalar integer.
defines number of steps generated.
- offsetreal scalar, optional, default 0
offset to the base step
Examples
>>> from numdifftools.step_generators import BasicMaxStepGenerator
>>> step_gen = BasicMaxStepGenerator(base_step=2.0, step_ratio=2,
...                                  num_steps=4)
>>> [s for s in step_gen()]
[2.0, 1.0, 0.5, 0.25]
- class BasicMinStepGenerator(base_step, step_ratio, num_steps, offset=0)[source]¶
Bases:
BasicMaxStepGenerator
Generates a sequence of steps of decreasing magnitude
- where
steps = base_step * step_ratio ** (i + offset), for i = num_steps-1, ..., 1, 0.
- Parameters
- base_stepfloat, array-like.
Defines the end step, i.e., minimum step
- step_ratioreal scalar.
Ratio between sequential steps generated. Note: Ratio > 1
- num_stepsscalar integer.
defines number of steps generated.
- offsetreal scalar, optional, default 0
offset to the base step
Examples
>>> from numdifftools.step_generators import BasicMinStepGenerator
>>> step_gen = BasicMinStepGenerator(base_step=0.25, step_ratio=2,
...                                  num_steps=4)
>>> [s for s in step_gen()]
[2.0, 1.0, 0.5, 0.25]
- class MaxStepGenerator(base_step=2.0, step_ratio=None, num_steps=15, step_nom=None, offset=0, num_extrap=9, use_exact_steps=False, check_num_steps=True, scale=500)[source]¶
Bases:
MinStepGenerator
Generates a sequence of steps
- where
steps = step_nom * base_step * step_ratio ** (-i + offset)
for i = 0, 1, …, num_steps-1.
- Parameters
- base_stepfloat, array-like, default 2.0
Defines the maximum step, if None, the value is set to EPS**(1/scale)
- step_ratioreal scalar, optional, default 2 or 1.6
Ratio between sequential steps generated. Note: Ratio > 1 If None then step_ratio is 2 for n=1 otherwise step_ratio is 1.6
- num_stepsscalar integer, optional, default min_num_steps + num_extrap
defines number of steps generated. It should be larger than min_num_steps = (n + order - 1) / fact where fact is 1, 2 or 4 depending on differentiation method used.
- step_nomdefault maximum(log(exp(1)+|x|), 1)
Nominal step where x is supplied at runtime through the __call__ method.
- offsetreal scalar, optional, default 0
offset to the base step
- num_extrapscalar integer, default 0
number of points used for extrapolation
- check_num_stepsboolean, default True
If True make sure num_steps is larger than the minimum required steps.
- use_exact_stepsboolean, default True
If true make sure exact steps are generated
- scalereal scalar, default 500
scale used in base step.
- class MinStepGenerator(base_step=None, step_ratio=None, num_steps=None, step_nom=None, offset=0, num_extrap=0, use_exact_steps=True, check_num_steps=True, scale=None)[source]¶
Bases:
object
Generates a sequence of steps
- where
steps = step_nom * base_step * step_ratio ** (i + offset)
for i = num_steps-1, ..., 1, 0.
- Parameters
- base_stepfloat, array-like, optional
Defines the minimum step, if None, the value is set to EPS**(1/scale)
- step_ratioreal scalar, optional, default 2
Ratio between sequential steps generated. Note: Ratio > 1 If None then step_ratio is 2 for n=1 otherwise step_ratio is 1.6
- num_stepsscalar integer, optional, default min_num_steps + num_extrap
defines number of steps generated. It should be larger than min_num_steps = (n + order - 1) / fact where fact is 1, 2 or 4 depending on differentiation method used.
- step_nomdefault maximum(log(exp(1)+|x|), 1)
Nominal step where x is supplied at runtime through the __call__ method.
- offsetreal scalar, optional, default 0
offset to the base step
- num_extrapscalar integer, default 0
number of points used for extrapolation
- check_num_stepsboolean, default True
If True make sure num_steps is larger than the minimum required steps.
- use_exact_stepsboolean, default True
If true make sure exact steps are generated
- scalereal scalar, optional
scale used in base step. If not None it will override the default computed with the default_scale function.
- property base_step¶
Base step defines the minimum or maximum step when offset==0.
- property min_num_steps¶
Minimum number of steps required given the differentiation method and order.
- property num_steps¶
Defines number of steps generated
- property scale¶
Scale used in base step.
- property step_nom¶
Nominal step
- property step_ratio¶
Ratio between sequential steps generated
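A usage sketch, added for illustration: a step generator instance can be passed as the step argument of the differentiation classes, here numdifftools.Derivative; the particular num_steps value is arbitrary:

>>> import numpy as np
>>> import numdifftools as nd
>>> from numdifftools.step_generators import MinStepGenerator
>>> d_exp = nd.Derivative(np.exp, step=MinStepGenerator(num_steps=10))
>>> np.allclose(d_exp(0.0), 1.0)
True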