
finite_differences module

Gradient approximation by finite differences.

class gemseo.utils.derivatives.finite_differences.FirstOrderFD(f_pointer, step=1e-06, parallel=False, design_space=None, normalize=True, **parallel_args)[source]

Bases: GradientApproximator

First-order finite differences approximator.

\[\frac{df(x)}{dx} \approx \frac{f(x+\delta x)-f(x)}{\delta x}\]

Parameters:
  • f_pointer (Callable[[ndarray], ndarray]) – The pointer to the function to derive.

  • step (float | ndarray) –

    The default differentiation step.

    By default it is set to 1e-06.

  • parallel (bool) –

    Whether to differentiate the function in parallel.

    By default it is set to False.

  • design_space (DesignSpace | None) – The design space containing the upper bounds of the input variables. If None, consider that the input variables are unbounded.

  • normalize (bool) –

    If True, then the functions are normalized.

    By default it is set to True.

  • **parallel_args (int | bool | float) – The parallel execution options, see gemseo.core.parallel_execution.
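
The following sketch illustrates a typical use of this approximator on a simple NumPy function; the quadratic test function and the evaluation point are arbitrary choices for illustration, not part of the GEMSEO API:

    from numpy import array, ndarray

    from gemseo.utils.derivatives.finite_differences import FirstOrderFD


    def f(x: ndarray) -> ndarray:
        """An arbitrary smooth test function: f(x) = sum(x**2)."""
        return array([(x**2).sum()])


    # Build the approximator with the default step of 1e-06.
    approximator = FirstOrderFD(f)

    # Approximate df/dx at x = (1, 2); the result should be close to the exact gradient (2, 4).
    gradient = approximator.f_gradient(array([1.0, 2.0]))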

compute_optimal_step(x_vect, numerical_error=2.220446049250313e-16, **kwargs)[source]

Compute the optimal differentiation steps and the corresponding errors for a given input vector.

Parameters:
  • x_vect (ndarray) – The input vector.

  • numerical_error (float) –

The numerical error associated with the calculation of \(f\). By default, the machine epsilon (approximately 1e-16), but it can be higher when the calculation of \(f\) requires a numerical solution.

    By default it is set to 2.220446049250313e-16.

  • **kwargs – The additional arguments passed to the function.

Returns:

  • The optimal steps.

  • The errors.

Return type:

tuple[numpy.ndarray, numpy.ndarray]
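
As a sketch, with the same kind of arbitrary quadratic test function as above, the optimal steps and error estimates for a given input vector could be obtained as follows:

    from numpy import array, ndarray

    from gemseo.utils.derivatives.finite_differences import FirstOrderFD


    def f(x: ndarray) -> ndarray:
        return array([(x**2).sum()])


    approximator = FirstOrderFD(f)

    # One optimal step and one error estimate per input component.
    optimal_steps, errors = approximator.compute_optimal_step(array([1.0, 2.0]))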

f_gradient(x_vect, step=None, x_indices=None, **kwargs)[source]

Approximate the gradient of the function for a given input vector.

Parameters:
  • x_vect (ndarray) – The input vector.

  • step (float | ndarray | None) – The differentiation step. If None, use the default differentiation step.

  • x_indices (Sequence[int] | None) – The components of the input vector to be used for the differentiation. If None, use all the components.

  • **kwargs (Any) – The optional arguments for the function.

Returns:

The approximated gradient.

Return type:

ndarray
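
A minimal sketch of differentiating with respect to a subset of the input components while overriding the default step; the test function, point and indices are arbitrary illustrations:

    from numpy import array, ndarray

    from gemseo.utils.derivatives.finite_differences import FirstOrderFD


    def f(x: ndarray) -> ndarray:
        return array([(x**2).sum()])


    approximator = FirstOrderFD(f)

    # Differentiate only with respect to the second component (index 1),
    # using a custom step instead of the default one.
    partial_gradient = approximator.f_gradient(
        array([1.0, 2.0, 3.0]), step=1e-07, x_indices=[1]
    )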

generate_perturbations(n_dim, x_vect, x_indices=None, step=None)

Generate the input perturbations from the differentiation step.

These perturbations will be used to compute the output ones.

Parameters:
  • n_dim (int) – The input dimension.

  • x_vect (ndarray) – The input vector.

  • x_indices (Sequence[int] | None) – The components of the input vector to be used for the differentiation. If None, use all the components.

  • step (float | None) – The differentiation step. If None, use the default differentiation step.

Returns:

  • The input perturbations.

  • The differentiation step, either one global step or one step by input component.

Return type:

tuple[ndarray, float | ndarray]
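
For illustration, a minimal sketch of generating the perturbations for an arbitrary two-dimensional input point:

    from numpy import array, ndarray

    from gemseo.utils.derivatives.finite_differences import FirstOrderFD


    def f(x: ndarray) -> ndarray:
        return array([(x**2).sum()])


    approximator = FirstOrderFD(f, step=1e-06)

    x = array([1.0, 2.0])
    # 'perturbations' gathers the input perturbations used by the scheme;
    # 'used_step' is either one global step or one step per input component.
    perturbations, used_step = approximator.generate_perturbations(x.size, x)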

ALIAS = 'finite_differences'
f_pointer: Callable[[ndarray], ndarray]

The pointer to the function to derive.

property step: float

The default approximation step.

gemseo.utils.derivatives.finite_differences.approx_hess(f_p, f_x, f_m, step)[source]

Compute the second-order approximation of the Hessian matrix \(d^2f/dx^2\).

Parameters:
  • f_p (ndarray) – The value of the function \(f\) at the next step \(x+\delta_x\).

  • f_x (ndarray) – The value of the function \(f\) at the current step \(x\).

  • f_m (ndarray) – The value of the function \(f\) at the previous step \(x-\delta_x\).

  • step (float) – The differentiation step \(\delta_x\).

Returns:

The approximation of the Hessian matrix at the current step \(x\).

Return type:

ndarray
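
A minimal sketch for a one-dimensional quadratic whose exact second derivative is known; the test function, point and step are arbitrary illustrations, and the call assumes the documented signature:

    from numpy import array

    from gemseo.utils.derivatives.finite_differences import approx_hess

    step = 1e-04
    x = 1.0


    def f(z: float) -> float:
        """An arbitrary test function, f(z) = z**2, whose second derivative is 2."""
        return z**2


    # f_p, f_x and f_m are the function values at x + step, x and x - step.
    hess = approx_hess(
        f_p=array([f(x + step)]),
        f_x=array([f(x)]),
        f_m=array([f(x - step)]),
        step=step,
    )
    # 'hess' should be close to [2.].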

gemseo.utils.derivatives.finite_differences.comp_best_step(f_p, f_x, f_m, step, epsilon_mach=2.220446049250313e-16)[source]

Compute the optimal step for finite differentiation.

Applied to a forward first-order finite difference gradient approximation.

Requires a first evaluation of the perturbed function values.

The optimal step is reached when the truncation error (due to the truncation of the Taylor expansion) and the numerical cancellation error (round-off when computing \(f(x+\delta_x)-f(x)\)) are equal.

See also

https://en.wikipedia.org/wiki/Numerical_differentiation and Numerical Algorithms and Digital Representation, Knut Morken, Chapter 11, “Numerical Differentiation”

Parameters:
  • f_p (ndarray) – The value of the function \(f\) at the next step \(x+\delta_x\).

  • f_x (ndarray) – The value of the function \(f\) at the current step \(x\).

  • f_m (ndarray) – The value of the function \(f\) at the previous step \(x-\delta_x\).

  • step (float) – The differentiation step \(\delta_x\).

  • epsilon_mach (float) –

    The machine epsilon.

    By default it is set to 2.220446049250313e-16.

Returns:

  • The estimation of the truncation error; None if the Hessian approximation is too small to compute the optimal step.

  • The estimation of the cancellation error; None if the Hessian approximation is too small to compute the optimal step.

  • The optimal step.

Return type:

tuple[ndarray | None, ndarray | None, float]
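
A hedged usage sketch, reusing an arbitrary one-dimensional quadratic; the error estimates are None only when the Hessian approximation is too small, which is not the case here:

    from numpy import array

    from gemseo.utils.derivatives.finite_differences import comp_best_step

    step = 1e-04
    x = 1.0


    def f(z: float) -> float:
        return z**2


    # The truncation and cancellation error estimates and the optimal step.
    trunc_error, cancel_error, optimal_step = comp_best_step(
        f_p=array([f(x + step)]),
        f_x=array([f(x)]),
        f_m=array([f(x - step)]),
        step=step,
    )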

gemseo.utils.derivatives.finite_differences.compute_cancellation_error(f_x, step, epsilon_mach=2.220446049250313e-16)[source]

Estimate the cancellation error.

This is the round-off error when computing \(f(x+\delta_x)-f(x)\).

Parameters:
  • f_x (ndarray) – The value of the function at the current step \(x\).

  • step (float) – The step used for the calculations of the perturbed functions values.

  • epsilon_mach

    The machine epsilon.

    By default it is set to 2.220446049250313e-16.

Returns:

The cancellation error.

Return type:

ndarray
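
A small sketch showing that, for a fixed function value, the estimated cancellation error grows as the step shrinks; the values are arbitrary:

    from numpy import array

    from gemseo.utils.derivatives.finite_differences import compute_cancellation_error

    f_x = array([1.0])

    # The round-off error of f(x + step) - f(x) increases when the step decreases.
    error_small_step = compute_cancellation_error(f_x, 1e-12)
    error_large_step = compute_cancellation_error(f_x, 1e-04)
    # error_small_step is expected to be much larger than error_large_step.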

gemseo.utils.derivatives.finite_differences.compute_truncature_error(hess, step)[source]

Estimate the truncation error.

Defined for a first-order finite differences scheme.

Parameters:
  • hess (ndarray) – The second-order derivative \(d^2f/dx^2\).

  • step (float) – The differentiation step.

Returns:

The truncation error.

Return type:

ndarray
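
A small sketch showing that, for a fixed second-order derivative, the estimated truncation error grows with the step; the values are arbitrary:

    from numpy import array

    from gemseo.utils.derivatives.finite_differences import compute_truncature_error

    # An arbitrary second-order derivative d^2f/dx^2 at the current point.
    hess = array([2.0])

    # The truncation error of a first-order scheme increases with the step size.
    error_small_step = compute_truncature_error(hess, 1e-08)
    error_large_step = compute_truncature_error(hess, 1e-02)
    # error_large_step is expected to be larger than error_small_step.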