derivatives_approx module

Finite differences & complex step approximations

class gemseo.utils.derivatives_approx.ComplexStep(f_pointer, step=1e-20, parallel=False, **parallel_args)[source]

Bases: object

Complex step, second-order gradient calculation. Enables a much smaller step than real finite differences, typically fd_step = 1e-30, since there is no cancellation error due to a difference calculation.

grad = Im(f(x + j*fd_step)) / fd_step

Constructor

Parameters
  • f_pointer – pointer to the function to differentiate

  • step – differentiation step

  • parallel – if True, executes in parallel

  • parallel_args – arguments passed to the parallel execution, see gemseo.core.parallel_execution
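
Example — a minimal usage sketch, assuming the function accepts complex design vectors and returns a numpy array:

    import numpy as np
    from gemseo.utils.derivatives_approx import ComplexStep

    def f(x):
        # the function must accept complex perturbations of x
        return np.array([np.sum(x ** 2)])

    approx = ComplexStep(f, step=1e-30)
    grad = approx.f_gradient(np.array([1.0, 2.0]))
    # exact gradient of sum(x**2) is 2 * x, i.e. [2., 4.]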

f_gradient(x_vect, step=None, **kwargs)[source]

Compute gradient by complex step

Parameters
  • x_vect (numpy array) – design vector

  • step – differentiation step; if None, the step given to the constructor is used

  • kwargs – optional arguments for the function

Returns

function gradient

Return type

numpy array

generate_perturbations(n_dim, x_vect, step=None)[source]

Generates the perturbations x_perturb which will be used to compute f(x_vect+x_perturb)

Parameters
  • n_dim (integer) – dimension

  • x_vect (numpy array) – design vector

  • step – differentiation step

Returns

perturbations

Return type

numpy array

class gemseo.utils.derivatives_approx.DisciplineJacApprox(discipline, approx_method='finite_differences', step=1e-07, parallel=False, n_processes=2, use_threading=False, wait_time_between_fork=0)[source]

Bases: object

Approximates a discipline Jacobian using finite differences or complex step.

Constructor:

Parameters
  • discipline – the discipline for which the Jacobian approximation shall be made

  • approx_method – “complex_step” or “finite_differences”

  • step – the step for finite differences or complex step

  • parallel – if True, executes in parallel

  • n_processes – maximum number of processors on which to run

  • use_threading – if True, use threads instead of processes to parallelize the execution; multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. This is important to note: if you want to execute the same discipline multiple times, you shall use multiprocessing

  • wait_time_between_fork – time to wait between two forks of the process/thread

COMPLEX_STEP = 'complex_step'
FINITE_DIFFERENCES = 'finite_differences'
N_CPUS = 2
auto_set_step(outputs, inputs, print_errors=True, numerical_error=2.220446049250313e-16)[source]

Compute the optimal step for a forward first order finite differences gradient approximation. Requires a first evaluation of the perturbed function values. The optimal step is reached when the truncation error (from the cut in the Taylor expansion) and the numerical cancellation error (roundoff when computing f(x+step)-f(x)) are equal.

Parameters
  • outputs – the list of outputs to differentiate

  • inputs – the list of inputs with respect to which to differentiate

  • print_errors – if True logs the cancellation and truncation error estimates

  • numerical_error – numerical error associated with the calculation of f. By default, machine epsilon (approximately 1e-16), but it can be higher when the calculation of f requires a numerical resolution

Returns

the optimal steps

See https://en.wikipedia.org/wiki/Numerical_differentiation and “Numerical Algorithms and Digital Representation”, Knut Mørken, Chapter 11, “Numerical Differentiation”.
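
Example — a hedged sketch; discipline stands for any already-built GEMSEO discipline, and "y" and "x" are placeholder output and input names:

    from gemseo.utils.derivatives_approx import DisciplineJacApprox

    # `discipline` is assumed to exist; "y" and "x" are placeholders
    approx = DisciplineJacApprox(discipline, approx_method="finite_differences")
    approx.auto_set_step(["y"], ["x"], print_errors=True)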

check_jacobian(analytic_jacobian, outputs, inputs, discipline, threshold=1e-08, plot_result=False, file_path='jacobian_errors.pdf', show=False, figsize_x=10, figsize_y=10)[source]

Checks whether the Jacobian provided by the linearize() method is correct.

Parameters
  • analytic_jacobian – Jacobian to validate

  • outputs – list of outputs to differentiate

  • inputs – list of inputs wrt which to differentiate

  • threshold – acceptance threshold for the Jacobian error (Default value = 1e-8)

  • plot_result – plot the result of the validation (computed and approximate jacobians)

  • file_path – path to the output file if plot_result is True

  • show – if True, open the figure

  • figsize_x – x size of the figure in inches

  • figsize_y – y size of the figure in inches

Returns

True if the check is accepted, False otherwise
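
Example — a hedged sketch; discipline stands for any GEMSEO discipline whose linearize() has already been called, so that discipline.jac holds the analytic Jacobian; "y" and "x" are placeholder output and input names:

    from gemseo.utils.derivatives_approx import DisciplineJacApprox

    # `discipline`, "y" and "x" are placeholders, see above
    approx = DisciplineJacApprox(discipline, approx_method="complex_step")
    ok = approx.check_jacobian(
        discipline.jac, ["y"], ["x"], discipline, threshold=1e-8
    )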

compute_approx_jac(outputs, inputs)[source]

Computes the approximate Jacobian.

Parameters
  • outputs – the outputs to differentiate

  • inputs – the inputs with respect to which to differentiate

plot_jac_errors(computed_jac, approx_jac, file_path='jacobian_errors.pdf', show=False, figsize_x=10, figsize_y=10)[source]

Generate a plot of the exact versus approximate Jacobian.

Parameters
  • computed_jac – computed Jacobian from the linearize method

  • approx_jac – finite differences approximate jacobian

  • file_path – path to the output file if plot_result is True

  • show – if True, open the figure

  • figsize_x – x size of the figure in inches

  • figsize_y – y size of the figure in inches

class gemseo.utils.derivatives_approx.FirstOrderFD(f_pointer, step=1e-06, parallel=False, **parallel_args)[source]

Bases: object

Finite differences at first order, first-order gradient calculation.

grad = (f(x + fd_step) - f(x)) / fd_step

Constructor

Parameters
  • f_pointer – pointer to the function to differentiate

  • step – differentiation step

  • parallel – if True, executes in parallel

  • parallel_args – arguments passed to the parallel execution, see gemseo.core.parallel_execution
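
Example — a minimal usage sketch, assuming the function returns a numpy array:

    import numpy as np
    from gemseo.utils.derivatives_approx import FirstOrderFD

    def f(x):
        return np.array([np.sum(np.sin(x))])

    approx = FirstOrderFD(f, step=1e-7)
    grad = approx.f_gradient(np.array([0.0, 0.5]))
    # exact gradient is cos(x), i.e. approximately [1., 0.8776]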

compute_optimal_step(x_vect, numerical_error=2.220446049250313e-16, **kwargs)[source]

Compute the optimal step for a forward first order finite differences gradient approximation.

Parameters
  • x_vect (numpy array) – design vector

  • kwargs – additional arguments passed to the function

  • numerical_error – numerical error associated with the calculation of f. By default, machine epsilon (approximately 1e-16), but it can be higher when the calculation of f requires a numerical resolution

Returns

the optimal steps

Return type

numpy array

f_gradient(x_vect, step=None, **kwargs)[source]

Compute gradient by real step

Parameters
  • x_vect (numpy array) – design vector

  • step – differentiation step; if None, the step given to the constructor is used

  • kwargs – additional arguments passed to the function

Returns

function gradient

Return type

numpy array

generate_perturbations(n_dim, x_vect, step=None)[source]

Generates the perturbations x_perturb which will be used to compute f(x_vect + x_perturb)

Parameters
  • n_dim (integer) – dimension

  • x_vect (numpy array) – design vector

  • step – step for the finite differences

Returns

perturbations x_perturb

Return type

numpy array

gemseo.utils.derivatives_approx.approx_hess(f_p, f_x, f_m, step)[source]

Second order approximation of the Hessian (d²f/dx²)

Parameters
  • f_p – f(x+step)

  • f_x – f(x)

  • f_m – f(x-step)

  • step – step used for the calculations of perturbed functions values

Returns

Hessian approximation
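
Given the signature, this corresponds to the classical central second difference, (f_p - 2*f_x + f_m) / step**2. A quick sanity check, as a sketch:

    from gemseo.utils.derivatives_approx import approx_hess

    def f(x):
        return x ** 3  # exact second derivative is 6 * x

    x, step = 2.0, 1e-4
    hess = approx_hess(f(x + step), f(x), f(x - step), step)
    # hess should be close to 6 * 2.0 = 12.0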

gemseo.utils.derivatives_approx.comp_best_step(f_p, f_x, f_m, step, epsilon_mach=2.220446049250313e-16)[source]

Compute the optimal step for a forward first order finite differences gradient approximation. Requires a first evaluation of the perturbed function values. The optimal step is reached when the truncation error (from the cut in the Taylor expansion) and the numerical cancellation error (roundoff when computing f(x+step)-f(x)) are equal.

See https://en.wikipedia.org/wiki/Numerical_differentiation and “Numerical Algorithms and Digital Representation”, Knut Mørken, Chapter 11, “Numerical Differentiation”.

Parameters
  • f_p – f(x+step)

  • f_x – f(x)

  • f_m – f(x-step)

  • step – step used for the calculations of perturbed functions values

Returns

trunc_error, cancel_error, optimal step
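
Example — a minimal usage sketch; the function is evaluated at x, x+step and x-step beforehand:

    import numpy as np
    from gemseo.utils.derivatives_approx import comp_best_step

    def f(x):
        return np.exp(x)

    x, step = 1.0, 1e-4
    trunc_error, cancel_error, opt_step = comp_best_step(
        f(x + step), f(x), f(x - step), step
    )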

gemseo.utils.derivatives_approx.compute_cancellation_error(f_x, step, epsilon_mach=2.220446049250313e-16)[source]

Compute the cancellation error, i.e. the roundoff error when computing f(x+step)-f(x)

Parameters
  • f_x – value of the function at current point

  • step – step used for the calculations of perturbed functions values

  • epsilon_mach – machine epsilon

Returns

the cancellation error

gemseo.utils.derivatives_approx.compute_truncature_error(hess, step)[source]

Computes an estimate of the truncation error for a first order finite differences scheme

Parameters
  • hess – second order derivative (d²f/dx²)

  • step – step of the finite differences used for the derivatives approximation

Returns

trunc_error, the truncation error
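
The truncation and cancellation error estimates move in opposite directions as the step varies, which is what comp_best_step balances. A small sketch (the values of f_x and hess are arbitrary samples):

    from gemseo.utils.derivatives_approx import (
        compute_cancellation_error,
        compute_truncature_error,
    )

    f_x, hess = 1.0, 6.0  # sample function value and second derivative
    for step in (1e-3, 1e-6, 1e-9):
        trunc = compute_truncature_error(hess, step)    # grows with step
        cancel = compute_cancellation_error(f_x, step)  # shrinks with step
        print(step, trunc, cancel)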