gemseo_calibration.scenario module

A module to calibrate a multidisciplinary system from data.

class gemseo_calibration.scenario.CalibrationScenario(disciplines, input_names, control_outputs, calibration_space, formulation='MDF', name=None, **formulation_options)[source]

Bases: gemseo.core.mdo_scenario.MDOScenario

A Scenario to calibrate a multidisciplinary system from reference data.

Given parameter values, this multidisciplinary system computes output data from input data.

The reference input-output data are used to calibrate the parameters so that the model output data are close to the reference output data for some outputs of interest. This distance is evaluated with a CalibrationMeasure comparing the discipline outputs with the reference data.

Warning

Just like inputs, the parameters should be defined in the input grammars of the disciplines.

The parameters are calibrated with the method execute(), using an optimizer and a reference input-output Dataset (see the example below).

Parameters
  • disciplines (MDODiscipline | list[MDODiscipline]) – The disciplines whose parameters must be calibrated from the reference data.

  • input_names (str | Iterable[str]) – The names of the inputs to be considered for the calibration.

  • control_outputs (CalibrationMeasure | Sequence[CalibrationMeasure]) – The outputs used to calibrate the disciplines, each defined by a CalibrationMeasure combining the name of the output, the name of the calibration measure and a weight comprised between 0 and 1 (the weights must sum to 1). When the output is a 1D function discretized over an irregular mesh, the name of the mesh can also be provided, e.g. CalibrationMeasure(output="z", measure="MSE"), CalibrationMeasure(output="z", measure="MSE", weight=0.3) or CalibrationMeasure(output="z", measure="MSE", mesh="z_mesh"). CalibrationMeasure can be imported from gemseo_calibration.scenario.

  • calibration_space (DesignSpace) – The space of the parameters to be calibrated, whose current values are considered as a prior for calibration.

  • formulation (str) –

    The name of a formulation to manage the multidisciplinary coupling.

    By default it is set to MDF.

  • name (str | None) –

    A name for this calibration scenario. If None, use the name of the class.

    By default it is set to None.

  • **formulation_options (Any) – The options of the formulation.

Return type

None
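
Example

A minimal usage sketch. The toy model, hand-made reference data, bounds and the NLOPT_COBYLA optimizer below are illustrative assumptions, not prescriptions of this reference.

    from numpy import array

    from gemseo.algos.design_space import DesignSpace
    from gemseo.core.dataset import Dataset
    from gemseo.disciplines.analytic import AnalyticDiscipline
    from gemseo_calibration.scenario import CalibrationMeasure, CalibrationScenario

    # A toy model y = a * x; the parameter "a" is declared in the input grammar.
    model = AnalyticDiscipline({"y": "a*x"}, name="model")

    # The space of the parameter to calibrate; its current value is the prior.
    calibration_space = DesignSpace()
    calibration_space.add_variable("a", l_b=0.0, u_b=10.0, value=1.0)

    # Reference input-output data, here built by hand.
    reference_data = Dataset()
    reference_data.add_variable("x", array([[1.0], [2.0]]))
    reference_data.add_variable("y", array([[2.0], [4.0]]))

    # Calibrate "a" so that the model output "y" matches the reference data
    # in the mean-squared-error sense.
    scenario = CalibrationScenario(
        model,
        "x",
        CalibrationMeasure(output="y", measure="MSE"),
        calibration_space,
    )
    scenario.execute(
        {"algo": "NLOPT_COBYLA", "max_iter": 100, "reference_data": reference_data}
    )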

classmethod activate_time_stamps()

Activate the time stamps.

For storing start and end times of execution and linearizations.

Return type

None

add_constraint(control_outputs, constraint_type='eq', constraint_name=None, value=None, positive=False)[source]

Define a constraint from a calibration measure related to discipline outputs.

Parameters
  • control_outputs (CalibrationMeasure | Iterable[CalibrationMeasure]) – The outputs used to calibrate the disciplines, each defined by a CalibrationMeasure combining the name of the output, the name of the calibration measure and a weight comprised between 0 and 1 (the weights must sum to 1). When the output is a 1D function discretized over an irregular mesh, the name of the mesh can also be provided, e.g. CalibrationMeasure(output="z", measure="MSE"), CalibrationMeasure(output="z", measure="MSE", weight=0.3) or CalibrationMeasure(output="z", measure="MSE", mesh="z_mesh"). CalibrationMeasure can be imported from gemseo_calibration.scenario.

  • constraint_type (str) –

    The type of constraint, "eq" for equality constraint and "ineq" for inequality constraint.

    By default it is set to eq.

  • constraint_name (str | None) –

    The name of the constraint to be stored. If None, the name of the constraint is generated from the output name.

    By default it is set to None.

  • value (str | float) –

    The value for which the constraint is active. If None, this value is 0.

    By default it is set to None.

  • positive (bool) –

    Whether to consider the inequality constraint as positive.

    By default it is set to False.

Raises

ValueError – If the constraint type is neither ‘eq’ nor ‘ineq’.

Return type

None
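
Example

A sketch continuing the usage example of the class; the output name, measure and constraint name are illustrative.

    # Before executing the scenario, require a second measure, the mean squared
    # error on output "z", to be zero at the calibrated parameters.
    scenario.add_constraint(
        CalibrationMeasure(output="z", measure="MSE"),
        constraint_type="eq",
        constraint_name="z_mse",
    )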

add_differentiated_inputs(inputs=None)

Add inputs against which to differentiate the outputs.

This method updates MDODiscipline._differentiated_inputs with inputs.

Parameters

inputs (Iterable[str] | None) –

The input variables against which to differentiate the outputs. If None, all the inputs of the discipline are used.

By default it is set to None.

Raises

ValueError – When the inputs with respect to which to differentiate the discipline are not inputs of the latter.

Return type

None

add_differentiated_outputs(outputs=None)

Add outputs to be differentiated.

This method updates MDODiscipline._differentiated_outputs with outputs.

Parameters

outputs (Iterable[str] | None) –

The output variables to be differentiated. If None, all the outputs of the discipline are used.

By default it is set to None.

Raises

ValueError – When the outputs to differentiate are not discipline outputs.

Return type

None

add_namespace_to_input(name, namespace)

Add a namespace prefix to an existing input grammar element.

The updated input grammar element name will be namespace + namespace_separator + name.

Parameters
  • name (str) – The element name to rename.

  • namespace (str) – The name of the namespace.

add_namespace_to_output(name, namespace)

Add a namespace prefix to an existing output grammar element.

The updated output grammar element name will be namespace + namespace_separator + name.

Parameters
  • name (str) – The element name to rename.

  • namespace (str) – The name of the namespace.

add_observable(output_names, observable_name=None, discipline=None)

Add an observable to the optimization problem.

The repartition strategy of the observable is defined in the formulation class. When more than one output name is provided, the observable function returns a concatenated array of the output values.

Parameters
  • output_names (Sequence[str]) – The names of the outputs to observe.

  • observable_name (Sequence[str] | None) –

    The name to be given to the observable. If None, the output name is used by default.

    By default it is set to None.

  • discipline (MDODiscipline | None) –

    The discipline used to build the observable function. If None, detect the discipline from the inner disciplines.

    By default it is set to None.

Return type

None
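
Example

A sketch continuing the usage example of the class; the output and observable names are illustrative.

    # Record an additional output of interest at every iteration,
    # without using it in the objective or the constraints.
    scenario.add_observable(["y"], observable_name="observed_y")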

add_status_observer(obs)

Add an observer for the status.

Add an observer to be notified when the status of self changes.

Parameters

obs (Any) – The observer to add.

Return type

None

auto_get_grammar_file(is_input=True, name=None, comp_dir=None)

Use a naming convention to associate a grammar file to the discipline.

Search in the directory comp_dir for either an input grammar file named name + "_input.json" or an output grammar file named name + "_output.json".

Parameters
  • is_input (bool) –

    Whether to search for an input or output grammar file.

    By default it is set to True.

  • name (str | None) –

    The name to be searched in the file names. If None, use the name of the discipline class.

    By default it is set to None.

  • comp_dir (str | Path | None) –

    The directory in which to search the grammar file. If None, use the GRAMMAR_DIRECTORY if any, or the directory of the discipline class module.

    By default it is set to None.

Returns

The grammar file path.

Return type

str

check_input_data(input_data, raise_exception=True)

Check the input data validity.

Parameters
  • input_data (dict[str, Any]) – The input data needed to execute the discipline according to the discipline input grammar.

  • raise_exception (bool) –

    Whether to raise on error.

    By default it is set to True.

Return type

None

check_jacobian(input_data=None, derr_approx='finite_differences', step=1e-07, threshold=1e-08, linearization_mode='auto', inputs=None, outputs=None, parallel=False, n_processes=2, use_threading=False, wait_time_between_fork=0, auto_set_step=False, plot_result=False, file_path='jacobian_errors.pdf', show=False, fig_size_x=10, fig_size_y=10, reference_jacobian_path=None, save_reference_jacobian=False, indices=None)

Check if the analytical Jacobian is correct with respect to a reference one.

If reference_jacobian_path is not None and save_reference_jacobian is True, compute the reference Jacobian with the approximation method and save it in reference_jacobian_path.

If reference_jacobian_path is not None and save_reference_jacobian is False, do not compute the reference Jacobian but read it from reference_jacobian_path.

If reference_jacobian_path is None, compute the reference Jacobian without saving it.

Parameters
  • input_data (dict[str, ndarray] | None) –

    The input data needed to execute the discipline according to the discipline input grammar. If None, use the MDODiscipline.default_inputs.

    By default it is set to None.

  • derr_approx (str) –

    The approximation method, either “complex_step” or “finite_differences”.

    By default it is set to finite_differences.

  • threshold (float) –

    The acceptance threshold for the Jacobian error.

    By default it is set to 1e-08.

  • linearization_mode (str) –

    The mode of linearization: "direct", "adjoint", or "auto" for an automated switch depending on the dimensions of the inputs and outputs.

    By default it is set to auto.

  • inputs (Iterable[str] | None) –

    The names of the inputs wrt which to differentiate the outputs.

    By default it is set to None.

  • outputs (Iterable[str] | None) –

    The names of the outputs to be differentiated.

    By default it is set to None.

  • step (float) –

    The differentiation step.

    By default it is set to 1e-07.

  • parallel (bool) –

    Whether to differentiate the discipline in parallel.

    By default it is set to False.

  • n_processes (int) –

    The maximum number of simultaneous threads, if use_threading is True, or processes otherwise, used to parallelize the execution.

    By default it is set to 2.

  • use_threading (bool) –

    Whether to use threads instead of processes to parallelize the execution. Multiprocessing copies (serializes) all the disciplines, while threading shares all the memory. This is important to note: if you want to execute the same discipline multiple times, you should use multiprocessing.

    By default it is set to False.

  • wait_time_between_fork (float) –

    The time waited between two forks of the process / thread.

    By default it is set to 0.

  • auto_set_step (bool) –

    Whether to compute the optimal step for a forward first order finite differences gradient approximation.

    By default it is set to False.

  • plot_result (bool) –

    Whether to plot the result of the validation (computed vs approximated Jacobians).

    By default it is set to False.

  • file_path (str | Path) –

    The path to the output file if plot_result is True.

    By default it is set to jacobian_errors.pdf.

  • show (bool) –

    Whether to open the figure.

    By default it is set to False.

  • fig_size_x (float) –

    The x-size of the figure in inches.

    By default it is set to 10.

  • fig_size_y (float) –

    The y-size of the figure in inches.

    By default it is set to 10.

  • reference_jacobian_path (str | Path | None) –

    The path of the reference Jacobian file.

    By default it is set to None.

  • save_reference_jacobian (bool) –

    Whether to save the reference Jacobian.

    By default it is set to False.

  • indices (Iterable[int] | None) –

    The indices of the inputs and outputs for the different sub-Jacobian matrices, formatted as {variable_name: variable_components} where variable_components can be either an integer, e.g. 2, a sequence of integers, e.g. [0, 3], a slice, e.g. slice(0, 3), the ellipsis symbol (...) or None, which is the same as ellipsis. If a variable name is missing, consider all its components. If None, consider all the components of all the inputs and outputs.

    By default it is set to None.

Returns

Whether the analytical Jacobian is correct with respect to the reference one.
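
Example

A sketch continuing the usage example of the class; it checks the analytical Jacobian of the toy model against finite differences before calibrating, with an illustrative threshold.

    assert model.check_jacobian(
        derr_approx="finite_differences", step=1e-7, threshold=1e-6
    )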

check_output_data(raise_exception=True)

Check the output data validity.

Parameters

raise_exception (bool) –

Whether to raise an exception when the data is invalid.

By default it is set to True.

Return type

None

classmethod deactivate_time_stamps()

Deactivate the time stamps.

For storing start and end times of execution and linearizations.

Return type

None

static deserialize(file_path)

Deserialize a discipline from a file.

Parameters

file_path (str | Path) – The path to the file containing the discipline.

Returns

The discipline instance.

Return type

MDODiscipline

execute(input_data=None)

Execute the discipline.

This method executes the discipline.

Parameters

input_data (Mapping[str, Any] | None) –

The input data needed to execute the discipline according to the discipline input grammar. If None, use the MDODiscipline.default_inputs.

By default it is set to None.

Returns

The discipline local data after execution.

Raises

RuntimeError – When residual_variables are declared but self.run_solves_residuals is False. This is not supported yet.

Return type

dict[str, Any]

export_to_dataset(name=None, by_group=True, categorize=True, opt_naming=True, export_gradients=False)

Export the database of the optimization problem to a Dataset.

The variables can be classified into groups: Dataset.DESIGN_GROUP or Dataset.INPUT_GROUP for the design variables and Dataset.FUNCTION_GROUP or Dataset.OUTPUT_GROUP for the functions (objective, constraints and observables).

Parameters
  • name (str | None) –

    The name to be given to the dataset. If None, use the name of the OptimizationProblem.database.

    By default it is set to None.

  • by_group (bool) –

    Whether to store the data by group in Dataset.data, in the sense of one unique NumPy array per group. If categorize is False, there is a unique group: Dataset.PARAMETER_GROUP. If categorize is True, the groups can be either Dataset.DESIGN_GROUP and Dataset.FUNCTION_GROUP if opt_naming is True, or Dataset.INPUT_GROUP and Dataset.OUTPUT_GROUP. If by_group is False, store the data by variable names.

    By default it is set to True.

  • categorize (bool) –

    Whether to distinguish between the different groups of variables. Otherwise, group all the variables in Dataset.PARAMETER_GROUP.

    By default it is set to True.

  • opt_naming (bool) –

    Whether to use Dataset.DESIGN_GROUP and Dataset.FUNCTION_GROUP as groups. Otherwise, use Dataset.INPUT_GROUP and Dataset.OUTPUT_GROUP.

    By default it is set to True.

  • export_gradients (bool) –

    Whether to export the gradients of the functions (objective function, constraints and observables) if the latter are available in the database of the optimization problem.

    By default it is set to False.

Returns

A dataset built from the database of the optimization problem.

Return type

Dataset
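
Example

A sketch continuing the usage example of the class; the dataset name is illustrative.

    # After execution, export the calibration history with input/output naming
    # (parameters in Dataset.INPUT_GROUP, measures in Dataset.OUTPUT_GROUP).
    history = scenario.export_to_dataset(name="calibration_history", opt_naming=False)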

get_all_inputs()

Return the local input data as a list.

The order is given by MDODiscipline.get_input_data_names().

Returns

The local input data.

Return type

list[Any]

get_all_outputs()

Return the local output data as a list.

The order is given by MDODiscipline.get_output_data_names().

Returns

The local output data.

Return type

list[Any]

get_attributes_to_serialize()

Define the names of the attributes to be serialized.

This method shall be overloaded by disciplines.

Returns

The names of the attributes to be serialized.

Return type

list[str]

get_available_driver_names()

Return the names of the available drivers.

Return type

list[str]

static get_data_list_from_dict(keys, data_dict)

Filter the dict from a list of keys or a single key.

If keys is a string, the method returns the value associated with the key. If keys is a list of strings, the method returns a generator of the values corresponding to the keys, which can be iterated over.

Parameters
  • keys (str | Iterable) – One or several names.

  • data_dict (dict[str, Any]) – The mapping from which to get the data.

Returns

Either a data or a generator of data.

Return type

Any | Generator[Any]

get_disciplines_in_dataflow_chain()

Return the disciplines that must be shown as blocks within the XDSM representation of a chain.

By default, only the discipline itself is shown. This method can be implemented differently by inherited disciplines.

Returns

The disciplines shown in the XDSM chain.

Return type

list[gemseo.core.discipline.MDODiscipline]

get_disciplines_statuses()

Retrieve the statuses of the disciplines.

Returns

The statuses of the disciplines.

Return type

dict[str, str]

get_expected_dataflow()

Return the expected data exchange sequence.

This method is used for the XDSM representation.

The default expected data exchange sequence is an empty list.

See also

MDOFormulation.get_expected_dataflow

Returns

The data exchange arcs.

Return type

list[tuple[gemseo.core.discipline.MDODiscipline, gemseo.core.discipline.MDODiscipline, list[str]]]

get_expected_workflow()

Return the expected execution sequence.

This method is used for the XDSM representation.

The default expected execution sequence is the execution of the discipline itself.

See also

MDOFormulation.get_expected_workflow

Returns

The expected execution sequence.

Return type

gemseo.core.execution_sequence.LoopExecSequence

get_input_data(with_namespaces=True)

Return the local input data as a dictionary.

Parameters

with_namespaces

Whether to keep the namespace prefix of the input names, if any.

By default it is set to True.

Returns

The local input data.

Return type

dict[str, Any]

get_input_data_names(with_namespaces=True)

Return the names of the input variables.

Parameters

with_namespaces

Whether to keep the namespace prefix of the input names, if any.

By default it is set to True.

Returns

The names of the input variables.

Return type

list[str]

get_input_output_data_names(with_namespaces=True)

Return the names of the input and output variables.

Parameters

with_namespaces

Whether to keep the namespace prefix of the input and output names, if any.

By default it is set to True.

Returns

The names of the input and output variables.

Return type

list[str]

get_inputs_asarray()

Return the local input data as a large NumPy array.

The order is the one of MDODiscipline.get_all_inputs().

Returns

The local input data.

Return type

numpy.ndarray

get_inputs_by_name(data_names)

Return the local data associated with input variables.

Parameters

data_names (Iterable[str]) – The names of the input variables.

Returns

The local data for the given input variables.

Raises

ValueError – When a variable is not an input of the discipline.

Return type

list[Any]

get_local_data_by_name(data_names)

Return the local data of the discipline associated with variable names.

Parameters

data_names (Iterable[str]) – The names of the variables.

Returns

The local data associated with the variable names.

Raises

ValueError – When a name is not a discipline input name.

Return type

Generator[Any]

get_optim_variables_names()

A convenience function to access the optimization variables.

Returns

The optimization variables of the scenario.

Return type

list[str]

get_optimum()

Return the optimization results.

Returns

The optimal solution found by the scenario if executed, None otherwise.

Return type

OptimizationResult | None
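
Example

A sketch continuing the usage example of the class.

    # Inspect the calibrated parameters and the associated measure value.
    result = scenario.get_optimum()
    if result is not None:
        print(result.x_opt, result.f_opt)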

get_output_data(with_namespaces=True)

Return the local output data as a dictionary.

Parameters

with_namespaces

Whether to keep the namespace prefix of the output names, if any.

By default it is set to True.

Returns

The local output data.

Return type

dict[str, Any]

get_output_data_names(with_namespaces=True)

Return the names of the output variables.

Parameters

with_namespaces

Whether to keep the namespace prefix of the output names, if any.

By default it is set to True.

Returns

The names of the output variables.

Return type

list[str]

get_outputs_asarray()

Return the local output data as a large NumPy array.

The order is the one of MDODiscipline.get_all_outputs().

Returns

The local output data.

Return type

numpy.ndarray

get_outputs_by_name(data_names)

Return the local data associated with output variables.

Parameters

data_names (Iterable[str]) – The names of the output variables.

Returns

The local data for the given output variables.

Raises

ValueError – When a variable is not an output of the discipline.

Return type

list[Any]

get_sub_disciplines()

Return the sub-disciplines if any.

Returns

The sub-disciplines.

Return type

list[gemseo.core.discipline.MDODiscipline]

is_all_inputs_existing(data_names)

Test if several variables are discipline inputs.

Parameters

data_names (Iterable[str]) – The names of the variables.

Returns

Whether all the variables are discipline inputs.

Return type

bool

is_all_outputs_existing(data_names)

Test if several variables are discipline outputs.

Parameters

data_names (Iterable[str]) – The names of the variables.

Returns

Whether all the variables are discipline outputs.

Return type

bool

is_input_existing(data_name)

Test if a variable is a discipline input.

Parameters

data_name (str) – The name of the variable.

Returns

Whether the variable is a discipline input.

Return type

bool

is_output_existing(data_name)

Test if a variable is a discipline output.

Parameters

data_name (str) – The name of the variable.

Returns

Whether the variable is a discipline output.

Return type

bool

static is_scenario()

Indicate if the current object is a Scenario.

Returns

True if the current object is a Scenario.

Return type

bool

linearize(input_data=None, force_all=False, force_no_exec=False)

Execute the linearized version of the code.

Parameters
  • input_data (dict[str, Any] | None) –

    The input data needed to linearize the discipline according to the discipline input grammar. If None, use the MDODiscipline.default_inputs.

    By default it is set to None.

  • force_all (bool) –

    If False, MDODiscipline._differentiated_inputs and MDODiscipline._differentiated_outputs are used to filter the differentiated variables; otherwise, all the outputs are differentiated with respect to all the inputs.

    By default it is set to False.

  • force_no_exec (bool) –

    If True, the discipline is not re-executed; the cache is loaded anyway.

    By default it is set to False.

Returns

The Jacobian of the discipline.

Return type

dict[str, dict[str, ndarray]]

notify_status_observers()

Notify all status observers that the status has changed.

Return type

None

post_process(post_name, **options)[source]

Post-process the optimization history.

Parameters
  • post_name (str) – The name of the post-processor, i.e. the name of a class inheriting from OptPostProcessor.

  • **options (Any) – The options for the post-processor.

Return type

None
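
Example

A sketch continuing the usage example of the class; the post-processor name and its options are illustrative.

    # Plot the history of the calibration measure and of the parameters.
    scenario.post_process("OptHistoryView", save=True, show=False)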

print_execution_metrics()

Print the total number of executions and cumulated runtime by discipline.

Return type

None

remove_status_observer(obs)

Remove an observer for the status.

Parameters

obs (Any) – The observer to remove.

Return type

None

reset_statuses_for_run()

Set all the statuses to MDODiscipline.STATUS_PENDING.

Raises

ValueError – When the discipline cannot be run because of its status.

Return type

None

save_optimization_history(file_path, file_format='hdf5', append=False)

Save the optimization history of the scenario to a file.

Parameters
  • file_path (str) – The path to the file to save the history.

  • file_format (str) –

    The format of the file, either “hdf5” or “ggobi”.

    By default it is set to hdf5.

  • append (bool) –

    If True, the history is appended to the file if not empty.

    By default it is set to False.

Raises

ValueError – If the file format is not correct.

Return type

None
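
Example

A sketch continuing the usage example of the class; the file name is illustrative.

    # Save the calibration history to an HDF5 file after execution.
    scenario.save_optimization_history("calibration_history.h5", file_format="hdf5")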

serialize(file_path)

Serialize the discipline and store it in a file.

Parameters

file_path (str | Path) – The path to the file to store the discipline.

Return type

None

set_cache_policy(cache_type='SimpleCache', cache_tolerance=0.0, cache_hdf_file=None, cache_hdf_node_name=None, is_memory_shared=True)

Set the type of cache to use and the tolerance level.

This method defines when the output data have to be cached according to the distance between the corresponding input data and the input data already cached for which output data are also cached.

The cache can be either a SimpleCache recording the last execution or a cache storing all executions, e.g. MemoryFullCache and HDF5Cache. Caching data can be either in-memory, e.g. SimpleCache and MemoryFullCache, or on the disk, e.g. HDF5Cache.

The attribute CacheFactory.caches provides the available caches types.

Parameters
  • cache_type (str) –

    The type of cache.

    By default it is set to SimpleCache.

  • cache_tolerance (float) –

    The maximum relative norm of the difference between two input arrays to consider that two input arrays are equal.

    By default it is set to 0.0.

  • cache_hdf_file (str | Path | None) –

    The path to the HDF file to store the data; this argument is mandatory when the MDODiscipline.HDF5_CACHE policy is used.

    By default it is set to None.

  • cache_hdf_node_name (str | None) –

    The name of the HDF file node to store the discipline data. If None, MDODiscipline.name is used.

    By default it is set to None.

  • is_memory_shared (bool) –

    Whether to store the data with a shared memory dictionary, which makes the cache compatible with multiprocessing.

    By default it is set to True.

Return type

None
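
Example

A sketch continuing the usage example of the class; the file name is illustrative.

    # Store every evaluation of the calibration scenario on disk.
    scenario.set_cache_policy(
        cache_type="HDF5Cache", cache_hdf_file="calibration_cache.h5"
    )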

set_differentiation_method(method='user', step=1e-06)

Set the differentiation method for the process.

Parameters
  • method (str | None) –

    The method to use to differentiate the process, either “user”, “finite_differences”, “complex_step” or “no_derivatives”, which is equivalent to None.

    By default it is set to user.

  • step (float) –

    The finite difference step.

    By default it is set to 1e-06.

Return type

None
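
Example

A sketch continuing the usage example of the class.

    # Approximate the derivatives of the calibration measures by finite
    # differences when analytical derivatives are not available.
    scenario.set_differentiation_method("finite_differences", step=1e-6)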

set_disciplines_statuses(status)

Set the sub-disciplines statuses.

To be implemented in subclasses.

Parameters

status (str) – The status.

Return type

None

set_jacobian_approximation(jac_approx_type='finite_differences', jax_approx_step=1e-07, jac_approx_n_processes=1, jac_approx_use_threading=False, jac_approx_wait_time=0)

Set the Jacobian approximation method.

Set the linearization mode to approx_method and set the parameters of the approximation for further use when calling MDODiscipline.linearize().

Parameters
  • jac_approx_type (str) –

    The approximation method, either “complex_step” or “finite_differences”.

    By default it is set to finite_differences.

  • jax_approx_step (float) –

    The differentiation step.

    By default it is set to 1e-07.

  • jac_approx_n_processes (int) –

    The maximum number of simultaneous threads, if jac_approx_use_threading is True, or processes otherwise, used to parallelize the execution.

    By default it is set to 1.

  • jac_approx_use_threading (bool) –

    Whether to use threads instead of processes to parallelize the execution. Multiprocessing copies (serializes) all the disciplines, while threading shares all the memory. This is important to note: if you want to execute the same discipline multiple times, you should use multiprocessing.

    By default it is set to False.

  • jac_approx_wait_time (float) –

    The time waited between two forks of the process / thread.

    By default it is set to 0.

Return type

None

set_optimal_fd_step(outputs=None, inputs=None, force_all=False, print_errors=False, numerical_error=2.220446049250313e-16)

Compute the optimal finite-difference step.

Compute the optimal step for a forward first-order finite differences gradient approximation. This requires a first evaluation of the perturbed function values. The optimal step is reached when the truncation error (cut in the Taylor development) and the numerical cancellation error (round-off when computing f(x+step)-f(x)) are approximately equal.

Warning

This calls the discipline execution twice per input variable.

See also

https://en.wikipedia.org/wiki/Numerical_differentiation and “Numerical Algorithms and Digital Representation”, Knut Morken, Chapter 11, “Numerical Differentiation”

Parameters
  • inputs (Iterable[str] | None) –

    The inputs wrt which the outputs are linearized. If None, use the MDODiscipline._differentiated_inputs.

    By default it is set to None.

  • outputs (Iterable[str] | None) –

    The outputs to be linearized. If None, use the MDODiscipline._differentiated_outputs.

    By default it is set to None.

  • force_all (bool) –

    Whether to consider all the inputs and outputs of the discipline.

    By default it is set to False.

  • print_errors (bool) –

    Whether to display the estimated errors.

    By default it is set to False.

  • numerical_error (float) –

    The numerical error associated with the calculation of f. By default, this is the machine epsilon (approximately 1e-16), but it can be higher when the calculation of f requires a numerical resolution.

    By default it is set to 2.220446049250313e-16.

Returns

The estimated errors of truncation and cancellation error.

Raises

ValueError – When the Jacobian approximation method has not been set.

set_optimization_history_backup(file_path, each_new_iter=False, each_store=True, erase=False, pre_load=False, generate_opt_plot=False)

Set the backup file for the optimization history during the run.

Parameters
  • file_path (str) – The path to the file to save the history.

  • each_new_iter (bool) –

    If True, callback at every iteration.

    By default it is set to False.

  • each_store (bool) –

    If True, callback at every call to store() in the database.

    By default it is set to True.

  • erase (bool) –

    If True, the backup file is erased before the run.

    By default it is set to False.

  • pre_load (bool) –

    If True, the backup file is loaded before run, useful after a crash.

    By default it is set to False.

  • generate_opt_plot (bool) –

    If True, generate the optimization history view at backup.

    By default it is set to False.

Raises

ValueError – If both erase and pre_load are True.

Return type

None
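
Example

A sketch continuing the usage example of the class; the file name is illustrative.

    # Back up the history at every database store and reload it after a crash.
    scenario.set_optimization_history_backup(
        "calibration_backup.h5", each_store=True, pre_load=True
    )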

store_local_data(**kwargs)

Store discipline data in local data.

Parameters

**kwargs (Any) – The data to be stored in MDODiscipline.local_data.

Return type

None

xdsmize(monitor=False, outdir='.', print_statuses=False, outfilename='xdsm.html', latex_output=False, open_browser=False, html_output=True, json_output=False)

Create a JSON file defining the XDSM related to the current scenario.

Parameters
  • monitor (bool) –

    If True, update the generated file at each discipline status change.

    By default it is set to False.

  • outdir (str | None) –

    The directory where the JSON file is generated. If None, the current working directory is used.

    By default it is set to "." (the current working directory).

  • print_statuses (bool) –

    If True, print the statuses in the console at each update.

    By default it is set to False.

  • outfilename (str) –

    The name of the output file. The basename is used and the extension is adapted for the HTML / JSON / PDF outputs.

    By default it is set to xdsm.html.

  • latex_output (bool) –

    If True, build TEX, TIKZ and PDF files.

    By default it is set to False.

  • open_browser (bool) –

    If True, open the web browser and display the XDSM.

    By default it is set to False.

  • html_output (bool) –

    If True, output a self-contained HTML file.

    By default it is set to True.

  • json_output (bool) –

    If True, output a JSON file for XDSMjs.

    By default it is set to False.

Return type

None
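
Example

A sketch continuing the usage example of the class; the file name is illustrative.

    # Generate a standalone HTML view of the calibration process.
    scenario.xdsmize(outfilename="calibration_xdsm.html", html_output=True)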

ALGO = 'algo'
ALGO_OPTIONS = 'algo_options'
APPROX_MODES = ['finite_differences', 'complex_step']
AVAILABLE_MODES = ('auto', 'direct', 'adjoint', 'reverse', 'finite_differences', 'complex_step')
AVAILABLE_STATUSES = ['DONE', 'FAILED', 'PENDING', 'RUNNING', 'VIRTUAL']
COMPLEX_STEP = 'complex_step'
FINITE_DIFFERENCES = 'finite_differences'
GRAMMAR_DIRECTORY: ClassVar[str | None] = None

The directory in which to search for the grammar files if not the class one.

HDF5_CACHE = 'HDF5Cache'
JSON_GRAMMAR_TYPE = 'JSONGrammar'
L_BOUNDS = 'l_bounds'
MAX_ITER = 'max_iter'
MEMORY_FULL_CACHE = 'MemoryFullCache'
N_CPUS = 2
RE_EXECUTE_DONE_POLICY = 'RE_EXEC_DONE'
RE_EXECUTE_NEVER_POLICY = 'RE_EXEC_NEVER'
SIMPLE_CACHE = 'SimpleCache'
SIMPLE_GRAMMAR_TYPE = 'SimpleGrammar'
STATUS_DONE = 'DONE'
STATUS_FAILED = 'FAILED'
STATUS_PENDING = 'PENDING'
STATUS_RUNNING = 'RUNNING'
STATUS_VIRTUAL = 'VIRTUAL'
U_BOUNDS = 'u_bounds'
X_0 = 'x_0'
X_OPT = 'x_opt'
activate_cache: bool = True

Whether to cache the discipline evaluations by default.

activate_counters: ClassVar[bool] = True

Whether to activate the counters (execution time, calls and linearizations).

activate_input_data_check: ClassVar[bool] = True

Whether to check the input data respect the input grammar.

activate_output_data_check: ClassVar[bool] = True

Whether to check the output data respect the output grammar.

cache: AbstractCache

The cache containing one or several executions of the discipline according to the cache policy.

property cache_tol: float

The cache input tolerance.

This is the tolerance for equality of the inputs in the cache. If norm(stored_input_data-input_data) <= cache_tol * norm(stored_input_data), the cached data for stored_input_data is returned when calling self.execute(input_data).

Raises

ValueError – When the discipline does not have a cache.

property calibrator: gemseo_calibration.calibrator.Calibrator

The discipline computing calibration measures from the parameter values.

clear_history_before_run: bool

If True, clear history before run.

data_processor: DataProcessor

A tool to pre- and post-process discipline data.

property default_inputs: dict[str, Any]

The default inputs.

Raises

TypeError – When the default inputs are not passed as a dictionary.

property design_space: gemseo.algos.design_space.DesignSpace

The design space on which the scenario is performed.

disciplines: list[MDODiscipline]

The disciplines.

exec_for_lin: bool

Whether the last execution was due to a linearization.

property exec_time: float | None

The cumulated execution time of the discipline.

This property is multiprocessing safe.

Raises

RuntimeError – When the discipline counters are disabled.

formulation: MDOFormulation

The MDO formulation.

formulation_name: str

The name of the MDO formulation.

property grammar_type: gemseo.core.grammars.base_grammar.BaseGrammar

The type of grammar to be used for inputs and outputs declaration.

input_grammar: BaseGrammar

The input grammar.

jac: dict[str, dict[str, ndarray]]

The Jacobians of the outputs wrt inputs of the form {output: {input: matrix}}.

property linearization_mode: str

The linearization mode among MDODiscipline.AVAILABLE_MODES.

Raises

ValueError – When the linearization mode is unknown.

property local_data: gemseo.core.discipline_data.DisciplineData

The current input and output data.

property n_calls: int | None

The number of times the discipline was executed.

This property is multiprocessing safe.

Raises

RuntimeError – When the discipline counters are disabled.

property n_calls_linearize: int | None

The number of times the discipline was linearized.

This property is multiprocessing safe.

Raises

RuntimeError – When the discipline counters are disabled.

name: str

The name of the discipline.

optimization_result: OptimizationResult

The optimization result.

output_grammar: BaseGrammar

The output grammar.

property post_factory: PostFactory | None

The factory of post-processors.

posterior_model_data: Dataset | None

The model data after the calibration.

property posterior_parameters: dict[str, numpy.ndarray]

The values of the parameters after the calibration stage.

property posts: list[str]

The available post-processors.

prior_model_data: Dataset | None

The model data before the calibration.

property prior_parameters: dict[str, numpy.ndarray]

The values of the parameters before the calibration stage.

re_exec_policy: str

The policy to re-execute the same discipline.

residual_variables: Mapping[str, str]

The output variables mapping to their inputs, to be considered as residuals; they shall be equal to zero.

run_solves_residuals: bool

If True, the run method shall solve the residuals.

property status: str

The status of the discipline.

time_stamps = None

property use_standardized_objective: bool

Whether to use the standardized OptimizationProblem.objective for logging and post-processing.