derivatives_approx module

Finite differences approximation.

Classes:

DisciplineJacApprox(discipline[, ...])

Approximates a discipline Jacobian using finite differences or complex step.

Functions:

approx_hess(f_p, f_x, f_m, step)

Compute the second-order approximation of the Hessian matrix \(d^2f/dx^2\).

comp_best_step(f_p, f_x, f_m, step[, ...])

Compute the optimal step for finite differentiation.

compute_cancellation_error(f_x, step[, ...])

Estimate the cancellation error.

compute_truncature_error(hess, step)

Estimate the truncation error.

class gemseo.utils.derivatives.derivatives_approx.DisciplineJacApprox(discipline, approx_method='finite_differences', step=1e-07, parallel=False, n_processes=2, use_threading=False, wait_time_between_fork=0)[source]

Bases: object

Approximates a discipline Jacobian using finite differences or complex step.

Parameters
  • discipline – The discipline for which the Jacobian approximation shall be made.

  • approx_method

    The approximation method, either “complex_step” or “finite_differences”.

    By default it is set to finite_differences.

  • step

    The differentiation step.

    By default it is set to 1e-07.

  • parallel

    Whether to differentiate the discipline in parallel.

    By default it is set to False.

  • n_processes

    The maximum number of processors on which to run.

    By default it is set to 2.

  • use_threading

    Whether to use threads instead of processes to parallelize the execution. Multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. This is important to note: if you want to execute the same discipline multiple times, you shall use multiprocessing.

    By default it is set to False.

  • wait_time_between_fork

    The time waited between two forks of the process / thread.

    By default it is set to 0.
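
For illustration, here is a minimal instantiation sketch; it assumes an already-created MDODiscipline instance named discipline, which is not defined here:

    from gemseo.utils.derivatives.derivatives_approx import DisciplineJacApprox

    # `discipline` is assumed to be an existing MDODiscipline instance.
    approx = DisciplineJacApprox(
        discipline,
        approx_method="finite_differences",  # or "complex_step"
        step=1e-7,
    )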

Attributes:

COMPLEX_STEP

FINITE_DIFFERENCES

N_CPUS

Methods:

auto_set_step(outputs, inputs[, ...])

Compute the optimal step.

check_jacobian(analytic_jacobian, outputs, ...)

Check if the analytical Jacobian is correct with respect to a reference one.

compute_approx_jac(outputs, inputs[, x_indices])

Approximate the Jacobian.

plot_jac_errors(computed_jac, approx_jac[, ...])

Generate a plot of the exact vs approximated Jacobian.

COMPLEX_STEP = 'complex_step'
FINITE_DIFFERENCES = 'finite_differences'
N_CPUS = 2
auto_set_step(outputs, inputs, print_errors=True, numerical_error=2.220446049250313e-16)[source]

Compute the optimal step.

Requires a first evaluation of the perturbed function values.

The optimal step is reached when the truncation error (from truncating the Taylor expansion) and the numerical cancellation error (round-off when computing \(f(x+\delta_x)-f(x)\)) are equal.

See:

  • https://en.wikipedia.org/wiki/Numerical_differentiation

  • Numerical Algorithms and Digital Representation, Knut Morken, Chapter 11, “Numerical Differentiation”.

Parameters
  • inputs (Sequence[str]) – The names of the inputs used to differentiate the outputs.

  • outputs (Sequence[str]) – The names of the outputs to be differentiated.

  • print_errors (bool) –

    Whether to log the cancellation and truncation error estimates.

    By default it is set to True.

  • numerical_error (float) –

    The numerical error associated with the calculation of \(f\). By default, the machine epsilon (about 1e-16), but it can be higher when the calculation of \(f\) requires a numerical resolution.

    By default it is set to 2.220446049250313e-16.

Returns

The Jacobian of the function.

Return type

numpy.ndarray
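
A hypothetical usage sketch, reusing the approx object from the class example; the output name "y" and input name "x" are placeholders:

    # Compute the optimal differentiation step(s) for d y / d x and
    # log the cancellation and truncation error estimates.
    approx.auto_set_step(["y"], ["x"], print_errors=True)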

check_jacobian(analytic_jacobian, outputs, inputs, discipline, threshold=1e-08, plot_result=False, file_path='jacobian_errors.pdf', show=False, figsize_x=10, figsize_y=10, reference_jacobian_path=None, save_reference_jacobian=False, indices=None)[source]

Check if the analytical Jacobian is correct with respect to a reference one.

If reference_jacobian_path is not None and save_reference_jacobian is True, compute the reference Jacobian with the approximation method and save it in reference_jacobian_path.

If reference_jacobian_path is not None and save_reference_jacobian is False, do not compute the reference Jacobian but read it from reference_jacobian_path.

If reference_jacobian_path is None, compute the reference Jacobian without saving it.

Parameters
  • analytic_jacobian (Dict[str,Dict[str,ndarray]]) – The Jacobian to validate.

  • inputs (Iterable[str]) – The names of the inputs used to differentiate the outputs.

  • outputs (Iterable[str]) – The names of the outputs to be differentiated.

  • threshold (float) –

    The acceptance threshold for the Jacobian error.

    By default it is set to 1e-08.

  • plot_result (bool) –

    Whether to plot the result of the validation (computed vs approximated Jacobians).

    By default it is set to False.

  • file_path (Union[str,Path]) –

    The path to the output file if plot_result is True.

    By default it is set to jacobian_errors.pdf.

  • show (bool) –

    Whether to open the figure.

    By default it is set to False.

  • figsize_x (int) –

    The x-size of the figure in inches.

    By default it is set to 10.

  • figsize_y (int) –

    The y-size of the figure in inches.

    By default it is set to 10.

  • reference_jacobian_path (Optional[Union[str,Path]]) –

    The path of the reference Jacobian file.

    By default it is set to None.

  • save_reference_jacobian (bool) –

    Whether to save the reference Jacobian.

    By default it is set to False.

  • indices (Optional[Union[int,Sequence[int],slice,Ellipsis]]) –

    The indices of the inputs and outputs for the different sub-Jacobian matrices, formatted as {variable_name: variable_components} where variable_components can be either an integer, e.g. 2, a sequence of integers, e.g. [0, 3], a slice, e.g. slice(0,3), the ellipsis symbol (...) or None, which is the same as ellipsis. If a variable name is missing, consider all its components. If None, consider all the components of all the inputs and outputs.

    By default it is set to None.

  • discipline (MDODiscipline) –

Returns

Whether the analytical Jacobian is correct.

Return type

bool
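
A hypothetical usage sketch; analytic_jac is assumed to be the analytical Jacobian formatted as {output_name: {input_name: ndarray}}, and "x", "y" are placeholder variable names:

    # Validate the analytical Jacobian against the approximated reference
    # and save a plot of the errors.
    is_correct = approx.check_jacobian(
        analytic_jac,
        ["y"],
        ["x"],
        discipline,
        threshold=1e-8,
        plot_result=True,
        file_path="jacobian_errors.pdf",
    )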

compute_approx_jac(outputs, inputs, x_indices=None)[source]

Approximate the Jacobian.

Parameters
  • inputs (Iterable[str]) – The names of the inputs used to differentiate the outputs.

  • outputs (Iterable[str]) – The names of the outputs to be differentiated.

  • x_indices (Optional[Sequence[int]]) –

    The components of the input vector to be used for the differentiation. If None, use all the components.

    By default it is set to None.

Returns

The approximated Jacobian.

Return type

Dict[str, Dict[str, numpy.ndarray]]
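
A hypothetical usage sketch with the same placeholder names; the nesting of the returned dictionary is assumed to be {output_name: {input_name: ndarray}}:

    # Approximate d y / d x; each sub-Jacobian is a 2D numpy array.
    approx_jac = approx.compute_approx_jac(["y"], ["x"])
    dy_dx = approx_jac["y"]["x"]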

plot_jac_errors(computed_jac, approx_jac, file_path='jacobian_errors.pdf', show=False, figsize_x=10, figsize_y=10)[source]

Generate a plot of the exact vs approximated Jacobian.

Parameters
  • computed_jac (numpy.ndarray) – The Jacobian to validate.

  • approx_jac (numpy.ndarray) – The approximated Jacobian.

  • file_path (Union[str, pathlib.Path]) –

    The path to the output file in which to save the plot.

    By default it is set to jacobian_errors.pdf.

  • show (bool) –

    Whether to open the figure.

    By default it is set to False.

  • figsize_x (int) –

    The x-size of the figure in inches.

    By default it is set to 10.

  • figsize_y (int) –

    The y-size of the figure in inches.

    By default it is set to 10.

Return type

matplotlib.figure.Figure
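
A hypothetical usage sketch, assuming computed_jac and approx_jac already hold the Jacobian data in the form documented above:

    # Plot the exact vs. approximated Jacobian entries and save the figure.
    figure = approx.plot_jac_errors(
        computed_jac,
        approx_jac,
        file_path="jacobian_errors.pdf",
        show=False,
    )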

gemseo.utils.derivatives.derivatives_approx.approx_hess(f_p, f_x, f_m, step)[source]

Compute the second-order approximation of the Hessian matrix \(d^2f/dx^2\).

Parameters
  • f_p (numpy.ndarray) – The value of the function \(f\) at the next step \(x+\delta_x\).

  • f_x (numpy.ndarray) – The value of the function \(f\) at the current step \(x\).

  • f_m (numpy.ndarray) – The value of the function \(f\) at the previous step \(x-\delta_x\).

  • step (float) – The differentiation step \(\delta_x\).

Returns

The approximation of the Hessian matrix at the current step \(x\).

Return type

numpy.ndarray
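
As background (standard finite-difference theory, not extracted from the implementation), this corresponds to the central second-difference formula \(\frac{d^2f}{dx^2} \approx \frac{f(x+\delta_x) - 2 f(x) + f(x-\delta_x)}{\delta_x^2}\).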

gemseo.utils.derivatives.derivatives_approx.comp_best_step(f_p, f_x, f_m, step, epsilon_mach=2.220446049250313e-16)[source]

Compute the optimal step for finite differentiation.

Applied to a first-order forward finite difference gradient approximation.

Requires a first evaluation of the perturbed function values.

The optimal step is reached when the truncation error (from truncating the Taylor expansion) and the numerical cancellation error (round-off when computing \(f(x+\delta_x)-f(x)\)) are equal.

See:

  • https://en.wikipedia.org/wiki/Numerical_differentiation

  • Numerical Algorithms and Digital Representation, Knut Morken, Chapter 11, “Numerical Differentiation”.

Parameters
  • f_p (numpy.ndarray) – The value of the function \(f\) at the next step \(x+\delta_x\).

  • f_x (numpy.ndarray) – The value of the function \(f\) at the current step \(x\).

  • f_m (numpy.ndarray) – The value of the function \(f\) at the previous step \(x-\delta_x\).

  • step (float) – The differentiation step \(\delta_x\).

  • epsilon_mach (float) –

    The machine epsilon.

    By default it is set to 2.220446049250313e-16.

Returns

  • The estimation of the truncation error. None if the Hessian approximation is too small to compute the optimal step.

  • The estimation of the cancellation error. None if the Hessian approximation is too small to compute the optimal step.

  • The optimal step.

Return type

Tuple[Optional[numpy.ndarray], Optional[numpy.ndarray], float]
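
As background (standard finite-difference error analysis, not extracted from the implementation, so internal constants may differ): balancing the first-order truncation error \(E_t \approx \frac{|d^2f/dx^2|\,\delta_x}{2}\) against the cancellation error \(E_c \approx \frac{2\,\epsilon_{mach}\,|f(x)|}{\delta_x}\) yields the optimal step \(\delta_x^{opt} \approx 2\sqrt{\epsilon_{mach}\,|f(x)| / |d^2f/dx^2|}\).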

gemseo.utils.derivatives.derivatives_approx.compute_cancellation_error(f_x, step, epsilon_mach=2.220446049250313e-16)[source]

Estimate the cancellation error.

This is the round-off when doing \(f(x+\delta_x)-f(x)\).

Parameters
  • f_x (numpy.ndarray) – The value of the function at the current step \(x\).

  • step (float) – The step used for the calculations of the perturbed functions values.

  • epsilon_mach

    The machine epsilon.

    By default it is set to 2.220446049250313e-16.

Returns

The cancellation error.

Return type

numpy.ndarray
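
A hypothetical usage sketch with placeholder values:

    import numpy as np

    from gemseo.utils.derivatives.derivatives_approx import compute_cancellation_error

    # Round-off error estimate for a function value of order 1 and a 1e-7 step.
    f_x = np.array([1.0])
    cancellation_error = compute_cancellation_error(f_x, 1e-7)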

gemseo.utils.derivatives.derivatives_approx.compute_truncature_error(hess, step)[source]

Estimate the truncation error.

Defined for a first order finite differences scheme.

Parameters
  • hess (numpy.ndarray) – The second-order derivative \(d^2f/dx^2\).

  • step (float) – The differentiation step.

Returns

The truncation error.

Return type

numpy.ndarray
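
A hypothetical usage sketch with placeholder values, e.g. using a Hessian estimate obtained from approx_hess:

    import numpy as np

    from gemseo.utils.derivatives.derivatives_approx import compute_truncature_error

    # Truncation error estimate for a second derivative of order 100 and a 1e-7 step.
    hess = np.array([100.0])
    truncation_error = compute_truncature_error(hess, 1e-7)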