finite_differences module¶
Gradient approximation by finite differences.
- class gemseo.utils.derivatives.finite_differences.FirstOrderFD(f_pointer, step=1e-06, parallel=False, design_space=None, normalize=True, **parallel_args)[source]¶
Bases:
GradientApproximator
First-order finite differences approximator.
\[\frac{df(x)}{dx} \approx \frac{f(x+\delta x)-f(x)}{\delta x}\]
- Parameters:
f_pointer (Callable[[ndarray], ndarray]) – The pointer to the function to derive.
step (float | ndarray) –
The default differentiation step.
By default it is set to 1e-06.
parallel (bool) –
Whether to differentiate the function in parallel.
By default it is set to False.
design_space (DesignSpace | None) – The design space containing the upper bounds of the input variables. If None, the input variables are assumed to be unbounded.
normalize (bool) –
If True, then the functions are normalized.
By default it is set to True.
**parallel_args (int | bool | float) – The parallel execution options; see gemseo.core.parallel_execution.
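The forward-difference formula above can be sketched in plain NumPy. This is a minimal illustration of the math, not a call to the gemseo API; the helper name `forward_fd_gradient` is hypothetical:

```python
import numpy as np

def forward_fd_gradient(f, x, step=1e-6):
    """Approximate df/dx by first-order forward differences.

    Component i is estimated as (f(x + step * e_i) - f(x)) / step,
    where e_i is the i-th unit vector.
    """
    x = np.asarray(x, dtype=float)
    f_x = f(x)
    grad = np.empty_like(x)
    for i in range(x.size):
        x_pert = x.copy()
        x_pert[i] += step  # perturb one component at a time
        grad[i] = (f(x_pert) - f_x) / step
    return grad

# f(x) = x0**2 + 3*x1 has the exact gradient (2*x0, 3).
grad = forward_fd_gradient(lambda x: x[0] ** 2 + 3.0 * x[1], [1.0, 2.0])
```

With the gemseo class itself, the equivalent computation is `FirstOrderFD(f).f_gradient(x_vect)`, as documented below.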
- compute_optimal_step(x_vect, numerical_error=2.220446049250313e-16, **kwargs)[source]¶
Compute the optimal differentiation step for each component of the input vector.
- Parameters:
x_vect (ndarray) – The input vector.
numerical_error (float) –
The numerical error associated with the calculation of \(f\). By default, the machine epsilon (approximately 1e-16), but it can be higher when the calculation of \(f\) requires a numerical resolution.
By default it is set to 2.220446049250313e-16.
**kwargs – The additional arguments passed to the function.
- Returns:
The optimal steps. The errors.
- Return type:
tuple[ndarray, ndarray]
- f_gradient(x_vect, step=None, x_indices=None, **kwargs)[source]¶
Approximate the gradient of the function for a given input vector.
- Parameters:
x_vect (ndarray) – The input vector.
step (float | ndarray | None) – The differentiation step. If None, use the default differentiation step.
x_indices (Sequence[int] | None) – The components of the input vector to be used for the differentiation. If None, use all the components.
**kwargs (Any) – The optional arguments for the function.
- Returns:
The approximated gradient.
- Return type:
ndarray
- generate_perturbations(n_dim, x_vect, x_indices=None, step=None)¶
Generate the input perturbations from the differentiation step.
These perturbations will be used to compute the output ones.
- Parameters:
n_dim (int) – The input dimension.
x_vect (ndarray) – The input vector.
x_indices (Sequence[int] | None) – The components of the input vector to be used for the differentiation. If None, use all the components.
step (float | None) – The differentiation step. If None, use the default differentiation step.
- Returns:
The input perturbations.
The differentiation step, either one global step or one step by input component.
- Return type:
tuple[ndarray, float | ndarray]
- ALIAS = 'finite_differences'¶
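The role of generate_perturbations can be illustrated with a small NumPy sketch, in which each differentiated component gets one perturbed copy of the input vector. The column layout and the helper name `make_perturbations` are assumptions for illustration, not the actual gemseo implementation:

```python
import numpy as np

def make_perturbations(x_vect, step=1e-6, x_indices=None):
    """Build one perturbed copy of x_vect per differentiated component.

    Column j holds x_vect with `step` added to component x_indices[j];
    hypothetical helper mirroring the documented behavior.
    """
    x_vect = np.asarray(x_vect, dtype=float)
    indices = list(range(x_vect.size)) if x_indices is None else list(x_indices)
    # One column per differentiated component, all starting from x_vect.
    perturbations = np.tile(x_vect[:, None], (1, len(indices)))
    for col, i in enumerate(indices):
        perturbations[i, col] += step
    return perturbations

# Perturb only components 0 and 2 of a 3-dimensional input.
pert = make_perturbations([1.0, 2.0, 3.0], step=0.1, x_indices=[0, 2])
```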
- gemseo.utils.derivatives.finite_differences.approx_hess(f_p, f_x, f_m, step)[source]¶
Compute the second-order approximation of the Hessian matrix \(d^2f/dx^2\).
- Parameters:
f_p (ndarray) – The value of the function \(f\) at the next step \(x+\delta_x\).
f_x (ndarray) – The value of the function \(f\) at the current step \(x\).
f_m (ndarray) – The value of the function \(f\) at the previous step \(x-\delta_x\).
step (float) – The differentiation step \(\delta_x\).
- Returns:
The approximation of the Hessian matrix at the current step \(x\).
- Return type:
ndarray
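The standard second-order central-difference formula for this quantity is \((f(x+\delta_x) - 2f(x) + f(x-\delta_x)) / \delta_x^2\); a one-line sketch under the assumption that approx_hess implements this formula:

```python
def central_second_difference(f_p, f_x, f_m, step):
    """Second-order approximation of d2f/dx2 from three stencil values:
    (f(x + step) - 2*f(x) + f(x - step)) / step**2.
    """
    return (f_p - 2.0 * f_x + f_m) / step ** 2

# f(x) = x**2 around x = 1: the exact second derivative is 2.
step = 1e-3
hess = central_second_difference((1 + step) ** 2, 1.0, (1 - step) ** 2, step)
```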
- gemseo.utils.derivatives.finite_differences.comp_best_step(f_p, f_x, f_m, step, epsilon_mach=2.220446049250313e-16)[source]¶
Compute the optimal step for finite differentiation.
Applied to a forward first-order finite-difference gradient approximation.
Requires a first evaluation of the perturbed function values.
The optimal step is reached when the truncation error (from cutting off the Taylor expansion) and the numerical cancellation error (the round-off when computing \(f(x+\delta_x)-f(x)\)) are equal.
See also
https://en.wikipedia.org/wiki/Numerical_differentiation and Numerical Algorithms and Digital Representation, Knut Morken, Chapter 11, “Numerical Differentiation”
- Parameters:
f_p (ndarray) – The value of the function \(f\) at the next step \(x+\delta_x\).
f_x (ndarray) – The value of the function \(f\) at the current step \(x\).
f_m (ndarray) – The value of the function \(f\) at the previous step \(x-\delta_x\).
step (float) – The differentiation step \(\delta_x\).
epsilon_mach (float) –
The machine epsilon.
By default it is set to 2.220446049250313e-16.
- Returns:
The estimation of the truncation error, or None if the Hessian approximation is too small to compute the optimal step.
The estimation of the cancellation error, or None if the Hessian approximation is too small to compute the optimal step.
The optimal step.
- Return type:
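The balance described above can be written out explicitly. For a forward difference, the truncation error scales as \(|f''|\,\delta_x/2\) and the cancellation error as \(2\,\epsilon_{mach}\,|f(x)|/\delta_x\); equating the two gives the optimal step. The sketch below follows that arithmetic for a scalar function; it is an illustrative assumption, not the gemseo implementation, and the zero threshold on the Hessian estimate is invented:

```python
import math

def best_step_sketch(f_p, f_x, f_m, step, epsilon_mach=2.220446049250313e-16):
    """Estimate the optimal forward-difference step for a scalar function.

    Returns (truncation_error, cancellation_error, optimal_step); the two
    errors are None when the Hessian estimate is too small to exploit.
    """
    # Second-order Hessian estimate from the three available values.
    hess = (f_p - 2.0 * f_x + f_m) / step ** 2
    if abs(hess) < 1e-24:  # assumption: threshold for "too small"
        return None, None, step
    # Truncation error |f''| * s / 2 equals cancellation error
    # 2 * eps * |f| / s at s = 2 * sqrt(eps * |f| / |f''|).
    opt_step = 2.0 * math.sqrt(epsilon_mach * abs(f_x) / abs(hess))
    truncation = abs(hess) * opt_step / 2.0
    cancellation = 2.0 * epsilon_mach * abs(f_x) / opt_step
    return truncation, cancellation, opt_step

# f(x) = x**2 at x = 1 with an initial step of 1e-3.
t_err, c_err, s_opt = best_step_sketch((1 + 1e-3) ** 2, 1.0, (1 - 1e-3) ** 2, 1e-3)
```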
- gemseo.utils.derivatives.finite_differences.compute_cancellation_error(f_x, step, epsilon_mach=2.220446049250313e-16)[source]¶
Estimate the cancellation error.
This is the round-off error made when computing \(f(x+\delta_x)-f(x)\).
- Parameters:
f_x (ndarray) – The value of the function \(f\) at the current step \(x\).
step (float) – The differentiation step \(\delta_x\).
epsilon_mach (float) – The machine epsilon. By default it is set to 2.220446049250313e-16.
- Returns:
The cancellation error.
- Return type:
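A sketch of the estimate, using a common formula assumed for illustration rather than taken from the gemseo source: each evaluation of \(f\) is accurate only to a relative precision of about \(\epsilon_{mach}\), so the subtraction loses up to \(2\,\epsilon_{mach}\,|f(x)|\), which the division by the step then amplifies.

```python
def cancellation_error_sketch(f_x, step, epsilon_mach=2.220446049250313e-16):
    """Round-off error of (f(x + step) - f(x)) / step.

    Both evaluations carry an absolute error of about epsilon_mach * |f(x)|,
    so the difference loses up to twice that, divided by the step.
    """
    return 2.0 * epsilon_mach * abs(f_x) / step

# With f(x) = 1 and a step of 1e-8, the round-off alone is about 4.4e-8.
err = cancellation_error_sketch(1.0, 1e-8)
```

This explains why shrinking the step below the optimum makes the approximation worse, not better: the cancellation term grows as the step shrinks.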