gemseo.core.discipline.discipline module#
The discipline class.
- class Discipline(name='')[source]#
Bases:
BaseDiscipline
The base class for disciplines.
The execute() method is used to compute output data from input data. The linearize() method can be used to compute the Jacobian of the differentiable outputs with respect to the differentiated inputs; jac stores this Jacobian. This method can evaluate the true derivatives (default) if the _compute_jacobian() method is implemented, or approximate the derivatives, depending on the linearization_mode (one of LinearizationMode).
Initialize self. See help(type(self)) for accurate signature.
- Parameters:
name (str) --
The name of the discipline. If empty, use the name of the class.
By default it is set to "".
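A minimal sketch of a concrete subclass, for illustration only: the Doubler name, the grammar setup, and the convention that _run() receives the input data and returns the output data are assumptions for a recent GEMSEO version and may need adjusting.
```python
from numpy import array

from gemseo.core.discipline.discipline import Discipline


class Doubler(Discipline):
    """A toy discipline computing y = 2 * x (illustrative sketch only)."""

    def __init__(self) -> None:
        super().__init__()
        # Declare the input and output variables in the grammars.
        self.io.input_grammar.update_from_names(["x"])
        self.io.output_grammar.update_from_names(["y"])
        # Default value used when execute() is called without "x".
        self.io.input_grammar.defaults["x"] = array([1.0])

    def _run(self, input_data):
        # Assumed convention: _run() receives the input data and
        # returns the output data as a dictionary.
        return {"y": 2 * input_data["x"]}

    def _compute_jacobian(self, input_names=(), output_names=()):
        # Analytical derivative dy/dx = 2, stored in the nested jac dictionary.
        self.jac = {"y": {"x": array([[2.0]])}}
```
With _compute_jacobian() implemented, linearize() evaluates the true derivatives by default; without it, an approximation mode must be selected, e.g. with set_jacobian_approximation().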
- class ApproximationMode(value)#
Bases:
StrEnum
The approximation derivation modes.
- CENTERED_DIFFERENCES = 'centered_differences'#
The centered differences method used to approximate the Jacobians by perturbing each variable with a small real number.
- COMPLEX_STEP = 'complex_step'#
The complex step method used to approximate the Jacobians by perturbing each variable with a small complex number.
- FINITE_DIFFERENCES = 'finite_differences'#
The finite differences method used to approximate the Jacobians by perturbing each variable with a small real number.
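Since ApproximationMode derives from StrEnum, its members compare equal to their string values, so either form can be passed wherever an approximation mode is expected; a small illustration:
```python
from gemseo.core.discipline.discipline import Discipline

mode = Discipline.ApproximationMode.COMPLEX_STEP
assert mode == "complex_step"  # StrEnum members behave as plain strings
```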
- class InitJacobianType(value)[source]#
Bases:
StrEnum
The way to initialize the Jacobian matrices.
- DENSE = 'dense'#
Initialized as NumPy arrays filled with zeros.
- EMPTY = 'empty'#
Initialized as empty NumPy arrays.
- SPARSE = 'sparse'#
Initialized as SciPy CSR arrays filled with zeros.
- class LinearizationMode(value)#
Bases:
StrEnum
The linearization modes.
- ADJOINT = 'adjoint'#
- AUTO = 'auto'#
- CENTERED_DIFFERENCES = 'centered_differences'#
- COMPLEX_STEP = 'complex_step'#
- DIRECT = 'direct'#
- FINITE_DIFFERENCES = 'finite_differences'#
- REVERSE = 'reverse'#
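An illustrative sketch of selecting a linearization mode through the linearization_mode property documented at the bottom of this page; the AnalyticDiscipline is used only to have a concrete discipline, and the assumption that the property accepts assignment follows GEMSEO's user documentation:
```python
from gemseo.disciplines.analytic import AnalyticDiscipline

disc = AnalyticDiscipline({"y": "2*x"})
# LinearizationMode is a StrEnum, so the member and its string value
# are interchangeable here.
disc.linearization_mode = disc.LinearizationMode.FINITE_DIFFERENCES
disc.linearization_mode = "finite_differences"
```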
- add_differentiated_inputs(input_names=())[source]#
Add the inputs with respect to which to differentiate the outputs.
The inputs that do not represent continuous numbers are filtered out.
- Parameters:
input_names (Iterable[str]) --
The input variables with respect to which to differentiate the outputs. If empty, use all the inputs.
By default it is set to ().
- Raises:
ValueError -- When an input name is not the name of a discipline input.
- Return type:
None
- add_differentiated_outputs(output_names=())[source]#
Add the outputs to be differentiated.
The outputs that do not represent continuous numbers are filtered out.
- Parameters:
output_names (Iterable[str]) --
The outputs to be differentiated. If empty, use all the outputs.
By default it is set to ().
- Raises:
ValueError -- When an output name is not the name of a discipline output.
- Return type:
None
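A sketch of restricting the differentiation to a subset of variables before calling linearize(); the AnalyticDiscipline and the variable names are purely illustrative:
```python
from numpy import array

from gemseo.disciplines.analytic import AnalyticDiscipline

disc = AnalyticDiscipline({"y1": "2*x1 + x2", "y2": "x1*x2"})
disc.add_differentiated_inputs(["x1"])
disc.add_differentiated_outputs(["y1"])
jac = disc.linearize({"x1": array([1.0]), "x2": array([2.0])})
# Only the requested sub-Jacobian is returned:
# jac == {"y1": {"x1": array([[2.0]])}}
```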
- check_jacobian(input_data=mappingproxy({}), derr_approx=ApproximationMode.FINITE_DIFFERENCES, step=1e-07, threshold=1e-08, linearization_mode=LinearizationMode.AUTO, input_names=(), output_names=(), parallel=False, n_processes=2, use_threading=False, wait_time_between_fork=0, auto_set_step=False, plot_result=False, file_path='jacobian_errors.pdf', show=False, fig_size_x=10, fig_size_y=10, reference_jacobian_path='', save_reference_jacobian=False, indices=mappingproxy({}))[source]#
Check if the analytical Jacobian is correct with respect to a reference one.
If reference_jacobian_path is not None and save_reference_jacobian is True, compute the reference Jacobian with the approximation method and save it in reference_jacobian_path.
If reference_jacobian_path is not None and save_reference_jacobian is False, do not compute the reference Jacobian but read it from reference_jacobian_path.
If reference_jacobian_path is None, compute the reference Jacobian without saving it.
- Parameters:
input_data (Mapping[str, ndarray]) --
The input data needed to execute the discipline according to the discipline input grammar. If None, use the Discipline.io.input_grammar.defaults.
By default it is set to {}.
derr_approx (ApproximationMode) --
The approximation method, either "complex_step" or "finite_differences".
By default it is set to "finite_differences".
threshold (float) --
The acceptance threshold for the Jacobian error.
By default it is set to 1e-08.
linearization_mode (LinearizationMode) --
The mode of linearization: direct, adjoint, or an automated switch depending on the dimensions of the inputs and outputs.
By default it is set to "auto".
input_names (Iterable[str]) --
The names of the inputs wrt which to differentiate the outputs.
By default it is set to ().
output_names (Iterable[str]) --
The names of the outputs to be differentiated.
By default it is set to ().
step (float) --
The differentiation step.
By default it is set to 1e-07.
parallel (bool) --
Whether to differentiate the discipline in parallel.
By default it is set to False.
n_processes (int) --
The maximum simultaneous number of threads, if use_threading is True, or processes otherwise, used to parallelize the execution.
By default it is set to 2.
use_threading (bool) --
Whether to use threads instead of processes to parallelize the execution; multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. This is important to note: if you want to execute the same discipline multiple times, you should use multiprocessing.
By default it is set to False.
wait_time_between_fork (float) --
The time waited between two forks of the process / thread.
By default it is set to 0.
auto_set_step (bool) --
Whether to compute the optimal step for a forward first order finite differences gradient approximation.
By default it is set to False.
plot_result (bool) --
Whether to plot the result of the validation (computed vs approximated Jacobians).
By default it is set to False.
file_path (str | Path) --
The path to the output file if plot_result is True.
By default it is set to "jacobian_errors.pdf".
show (bool) --
Whether to open the figure.
By default it is set to False.
fig_size_x (float) --
The x-size of the figure in inches.
By default it is set to 10.
fig_size_y (float) --
The y-size of the figure in inches.
By default it is set to 10.
reference_jacobian_path (str | Path) --
The path of the reference Jacobian file.
By default it is set to "".
save_reference_jacobian (bool) --
Whether to save the reference Jacobian.
By default it is set to False.
indices (Mapping[str, int | Sequence[int] | Ellipsis | slice]) --
The indices of the inputs and outputs for the different sub-Jacobian matrices, formatted as {variable_name: variable_components} where variable_components can be either an integer, e.g. 2, a sequence of integers, e.g. [0, 3], a slice, e.g. slice(0, 3), the ellipsis symbol (...) or None, which is the same as ellipsis. If a variable name is missing, consider all its components. If None, consider all the components of all the inputs and outputs.
By default it is set to {}.
- Returns:
Whether the analytical Jacobian is correct with respect to the reference one.
- Return type:
bool
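A sketch of validating an analytical Jacobian against a finite-difference reference; the discipline and the threshold are illustrative:
```python
from numpy import array

from gemseo.disciplines.analytic import AnalyticDiscipline

disc = AnalyticDiscipline({"y": "x**2"})
# Compare the analytical Jacobian with a finite-difference approximation.
assert disc.check_jacobian({"x": array([2.0])}, threshold=1e-6)
```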
- execute(input_data=mappingproxy({}))[source]#
Execute the discipline, i.e. compute output data from input data.
If virtual_execution is True, this method returns the default_output_data. Otherwise, it calls the _run() method performing the true execution and returns the corresponding output data. This _run() method must be implemented in subclasses.
- Parameters:
input_data (StrKeyMapping) --
The input data, to be completed with the default_input_data.
By default it is set to {}.
- Returns:
The input and output data.
- Return type:
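A sketch of executing a discipline; the returned mapping contains both the input and the output data:
```python
from numpy import array

from gemseo.disciplines.analytic import AnalyticDiscipline

disc = AnalyticDiscipline({"y": "2*x"})
data = disc.execute({"x": array([3.0])})
print(data["x"], data["y"])  # [3.] [6.]
```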
- linearize(input_data=mappingproxy({}), compute_all_jacobians=False, execute=True)[source]#
Compute the Jacobians of some outputs with respect to some inputs.
- Parameters:
input_data (StrKeyMapping) --
The input data. If empty, use the default_input_data.
By default it is set to {}.
compute_all_jacobians (bool) --
Whether to compute the Jacobians of all the outputs with respect to all the inputs. Otherwise, set the output variables to differentiate with add_differentiated_outputs() and the input variables with respect to which to differentiate them with add_differentiated_inputs().
By default it is set to False.
execute (bool) --
Whether to start by executing the discipline to ensure that the discipline was executed with the right input data; it can be almost free if the corresponding output data have been stored in the cache.
By default it is set to True.
- Returns:
The Jacobian matrices in the dictionary form {output_name: {input_name: jacobian_matrix}} where jacobian_matrix[i, j] is the partial derivative of output_name[i] wrt input_name[j].
- Raises:
ValueError -- When either the inputs for which to differentiate the outputs or the outputs to differentiate are missing.
- Return type:
JacobianData
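A sketch of computing all the Jacobians at once with compute_all_jacobians=True, which avoids calling add_differentiated_inputs() and add_differentiated_outputs() beforehand:
```python
from numpy import array

from gemseo.disciplines.analytic import AnalyticDiscipline

disc = AnalyticDiscipline({"y": "x1**2 + 3*x2"})
jac = disc.linearize(
    {"x1": array([2.0]), "x2": array([1.0])}, compute_all_jacobians=True
)
print(jac["y"]["x1"])  # [[4.]]
print(jac["y"]["x2"])  # [[3.]]
```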
- set_jacobian_approximation(jac_approx_type=ApproximationMode.FINITE_DIFFERENCES, jax_approx_step=1e-07, jac_approx_n_processes=1, jac_approx_use_threading=False, jac_approx_wait_time=0)[source]#
Set the Jacobian approximation method.
Sets the linearization mode to the given approximation method and sets the parameters of the approximation for further use when calling Discipline.linearize().
- Parameters:
jac_approx_type (ApproximationMode) --
The approximation method, either "complex_step" or "finite_differences".
By default it is set to "finite_differences".
jax_approx_step (float) --
The differentiation step.
By default it is set to 1e-07.
jac_approx_n_processes (int) --
The maximum simultaneous number of threads, if jac_approx_use_threading is True, or processes otherwise, used to parallelize the execution.
By default it is set to 1.
jac_approx_use_threading (bool) --
Whether to use threads instead of processes to parallelize the execution; multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. This is important to note: if you want to execute the same discipline multiple times, you should use multiprocessing.
By default it is set to False.
jac_approx_wait_time (float) --
The time waited between two forks of the process / thread.
By default it is set to 0.
- Return type:
None
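A sketch of switching a discipline to approximated derivatives; note that the step argument is spelled jax_approx_step in the signature above:
```python
from numpy import array

from gemseo.disciplines.analytic import AnalyticDiscipline

disc = AnalyticDiscipline({"y": "x**3"})
# Approximate the Jacobian by finite differences instead of using
# the analytical derivatives.
disc.set_jacobian_approximation(
    jac_approx_type=disc.ApproximationMode.FINITE_DIFFERENCES,
    jax_approx_step=1e-7,
)
jac = disc.linearize({"x": array([2.0])}, compute_all_jacobians=True)
print(jac["y"]["x"])  # approximately [[12.]]
```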
- set_optimal_fd_step(output_names=(), input_names=(), compute_all_jacobians=False, print_errors=False, numerical_error=2.220446049250313e-16)[source]#
Compute the optimal finite-difference step.
Compute the optimal step for a forward first-order finite differences gradient approximation. This requires a first evaluation of the perturbed function values. The optimal step is reached when the truncation error (cut in the Taylor expansion) and the numerical cancellation error (round-off when computing f(x+step)-f(x)) are approximately equal.
Warning
This calls the discipline execution twice per input variable.
See also
https://en.wikipedia.org/wiki/Numerical_differentiation and "Numerical Algorithms and Digital Representation", Knut Morken, Chapter 11, "Numerical Differentiation".
- Parameters:
input_names (Iterable[str]) --
The inputs with respect to which the outputs are linearized. If empty, use the differentiated inputs defined by add_differentiated_inputs().
By default it is set to ().
output_names (Iterable[str]) --
The outputs to be linearized. If empty, use the outputs defined by add_differentiated_outputs().
By default it is set to ().
compute_all_jacobians (bool) --
Whether to compute the Jacobians of all the outputs with respect to all the inputs. Otherwise, set the input variables with respect to which to differentiate the output ones with add_differentiated_inputs() and set these output variables to differentiate with add_differentiated_outputs().
By default it is set to False.
print_errors (bool) --
Whether to display the estimated errors.
By default it is set to False.
numerical_error (float) --
The numerical error associated with the calculation of f. By default, this is the machine epsilon (approximately 1e-16), but it can be higher when the calculation of f requires a numerical resolution.
By default it is set to 2.220446049250313e-16.
- Returns:
The estimated truncation and cancellation errors.
- Raises:
ValueError -- When the Jacobian approximation method has not been set.
- Return type:
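A sketch of computing the optimal step; the approximation method must be set beforehand, otherwise a ValueError is raised. Setting the default input value through io.input_grammar.defaults is an assumption for illustration:
```python
from numpy import array

from gemseo.disciplines.analytic import AnalyticDiscipline

disc = AnalyticDiscipline({"y": "x**3"})
disc.io.input_grammar.defaults["x"] = array([2.0])
# The approximation method must be set before computing the optimal step.
disc.set_jacobian_approximation()
errors = disc.set_optimal_fd_step(compute_all_jacobians=True, print_errors=True)
```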
- jac: JacobianData#
The Jacobian matrices of the outputs.
The structure is {output_name: {input_name: jacobian_matrix}}.
- property linearization_mode: LinearizationMode#
The differentiation mode.