gemseo.problems.analytical.rosenbrock module

The Rosenbrock analytic problem

class gemseo.problems.analytical.rosenbrock.RosenMF(dimension=2)[source]

Bases: MDODiscipline

RosenMF, a multi-fidelity Rosenbrock MDODiscipline, returns the value:

\[\mathrm{fidelity} * \mathrm{Rosenbrock}(x)\]

where both \(\mathrm{fidelity}\) and \(x\) are provided as input data.

Parameters:

dimension (int) –

The dimension of the design space.

By default it is set to 2.
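
Example (a minimal usage sketch; the variable names "x", "fidelity" and "rosen" are assumptions inferred from the class description and should be checked against the instance grammars):

    from numpy import array, ones

    from gemseo.problems.analytical.rosenbrock import RosenMF

    discipline = RosenMF(dimension=2)
    # "x", "fidelity" and "rosen" are assumed names; verify them with
    # discipline.get_input_data_names() and discipline.get_output_data_names().
    output_data = discipline.execute({"x": ones(2), "fidelity": array([0.5])})
    print(output_data["rosen"])  # 0.5 * Rosenbrock([1, 1]) = 0.0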

classmethod activate_time_stamps()

Activate the time stamps.

For storing start and end times of execution and linearizations.

Return type:

None

add_differentiated_inputs(inputs=None)

Add inputs against which to differentiate the outputs.

This method updates MDODiscipline._differentiated_inputs with inputs.

Parameters:

inputs (Iterable[str] | None) – The input variables against which to differentiate the outputs. If None, all the inputs of the discipline are used.

Raises:

ValueError – When an input with respect to which to differentiate the outputs is not an input of the discipline.

Return type:

None

add_differentiated_outputs(outputs=None)

Add outputs to be differentiated.

This method updates MDODiscipline._differentiated_outputs with outputs.

Parameters:

outputs (Iterable[str] | None) – The output variables to be differentiated. If None, all the outputs of the discipline are used.

Raises:

ValueError – When the outputs to differentiate are not discipline outputs.

Return type:

None

add_namespace_to_input(name, namespace)

Add a namespace prefix to an existing input grammar element.

The updated input grammar element name will be namespace + namespaces_separator + name.

Parameters:
  • name (str) – The element name to rename.

  • namespace (str) – The name of the namespace.

add_namespace_to_output(name, namespace)

Add a namespace prefix to an existing output grammar element.

The updated output grammar element name will be namespace + namespaces_separator + name.

Parameters:
  • name (str) – The element name to rename.

  • namespace (str) – The name of the namespace.

add_status_observer(obs)

Add an observer for the status.

Add an observer to be notified when the status of the discipline changes.

Parameters:

obs (Any) – The observer to add.

Return type:

None

auto_get_grammar_file(is_input=True, name=None, comp_dir=None)

Use a naming convention to associate a grammar file to the discipline.

Search in the directory comp_dir for either an input grammar file named name + "_input.json" or an output grammar file named name + "_output.json".

Parameters:
  • is_input (bool) –

    Whether to search for an input or output grammar file.

    By default it is set to True.

  • name (str | None) – The name to be searched in the file names. If None, use the name of the discipline class.

  • comp_dir (str | Path | None) – The directory in which to search the grammar file. If None, use the GRAMMAR_DIRECTORY if any, or the directory of the discipline class module.

Returns:

The grammar file path.

Return type:

str

check_input_data(input_data, raise_exception=True)

Check the input data validity.

Parameters:
  • input_data (dict[str, Any]) – The input data needed to execute the discipline according to the discipline input grammar.

  • raise_exception (bool) –

    Whether to raise on error.

    By default it is set to True.

Return type:

None

check_jacobian(input_data=None, derr_approx='finite_differences', step=1e-07, threshold=1e-08, linearization_mode='auto', inputs=None, outputs=None, parallel=False, n_processes=2, use_threading=False, wait_time_between_fork=0, auto_set_step=False, plot_result=False, file_path='jacobian_errors.pdf', show=False, fig_size_x=10, fig_size_y=10, reference_jacobian_path=None, save_reference_jacobian=False, indices=None)

Check if the analytical Jacobian is correct with respect to a reference one.

If reference_jacobian_path is not None and save_reference_jacobian is True, compute the reference Jacobian with the approximation method and save it in reference_jacobian_path.

If reference_jacobian_path is not None and save_reference_jacobian is False, do not compute the reference Jacobian but read it from reference_jacobian_path.

If reference_jacobian_path is None, compute the reference Jacobian without saving it.

Parameters:
  • input_data (dict[str, ndarray] | None) – The input data needed to execute the discipline according to the discipline input grammar. If None, use the MDODiscipline.default_inputs.

  • derr_approx (str) –

    The approximation method, either “complex_step” or “finite_differences”.

    By default it is set to “finite_differences”.

  • threshold (float) –

    The acceptance threshold for the Jacobian error.

    By default it is set to 1e-08.

  • linearization_mode (str) –

    The mode of linearization: "direct", "adjoint" or "auto" to switch automatically between direct and adjoint depending on the dimensions of the inputs and outputs.

    By default it is set to “auto”.

  • inputs (Iterable[str] | None) – The names of the inputs wrt which to differentiate the outputs.

  • outputs (Iterable[str] | None) – The names of the outputs to be differentiated.

  • step (float) –

    The differentiation step.

    By default it is set to 1e-07.

  • parallel (bool) –

    Whether to differentiate the discipline in parallel.

    By default it is set to False.

  • n_processes (int) –

    The maximum simultaneous number of threads, if use_threading is True, or processes otherwise, used to parallelize the execution.

    By default it is set to 2.

  • use_threading (bool) –

    Whether to use threads instead of processes to parallelize the execution; multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. This is important to note: if you want to execute the same discipline multiple times, you shall use multiprocessing.

    By default it is set to False.

  • wait_time_between_fork (float) –

    The time waited between two forks of the process / thread.

    By default it is set to 0.

  • auto_set_step (bool) –

    Whether to compute the optimal step for a forward first order finite differences gradient approximation.

    By default it is set to False.

  • plot_result (bool) –

    Whether to plot the result of the validation (computed vs approximated Jacobians).

    By default it is set to False.

  • file_path (str | Path) –

    The path to the output file if plot_result is True.

    By default it is set to “jacobian_errors.pdf”.

  • show (bool) –

    Whether to open the figure.

    By default it is set to False.

  • fig_size_x (float) –

    The x-size of the figure in inches.

    By default it is set to 10.

  • fig_size_y (float) –

    The y-size of the figure in inches.

    By default it is set to 10.

  • reference_jacobian_path (str | Path | None) – The path of the reference Jacobian file.

  • save_reference_jacobian (bool) –

    Whether to save the reference Jacobian.

    By default it is set to False.

  • indices (Iterable[int] | None) – The indices of the inputs and outputs for the different sub-Jacobian matrices, formatted as {variable_name: variable_components} where variable_components can be either an integer, e.g. 2, a sequence of integers, e.g. [0, 3], a slice, e.g. slice(0,3), the ellipsis symbol (...) or None, which is the same as ellipsis. If a variable name is missing, consider all its components. If None, consider all the components of all the inputs and outputs.

Returns:

Whether the analytical Jacobian is correct with respect to the reference one.
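
Example (a minimal sketch validating the analytic Jacobian of RosenMF against finite differences, assuming the discipline defines default inputs, as the analytical problems usually do):

    from gemseo.problems.analytical.rosenbrock import RosenMF

    discipline = RosenMF(dimension=2)
    # Compare the analytic Jacobian with a finite-difference reference
    # at the default inputs; True means the error is below the threshold.
    assert discipline.check_jacobian(step=1e-7, threshold=1e-6)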

check_output_data(raise_exception=True)

Check the output data validity.

Parameters:

raise_exception (bool) –

Whether to raise an exception when the data is invalid.

By default it is set to True.

Return type:

None

classmethod deactivate_time_stamps()

Deactivate the time stamps.

For storing start and end times of execution and linearizations.

Return type:

None

static deserialize(file_path)

Deserialize a discipline from a file.

Parameters:

file_path (str | Path) – The path to the file containing the discipline.

Returns:

The discipline instance.

Return type:

MDODiscipline

execute(input_data=None)

Execute the discipline.

This method executes the discipline.

Parameters:

input_data (Mapping[str, Any] | None) – The input data needed to execute the discipline according to the discipline input grammar. If None, use the MDODiscipline.default_inputs.

Returns:

The discipline local data after execution.

Raises:

RuntimeError – When residual_variables are declared but self.run_solves_residuals is False. This is not supported yet.

Return type:

dict[str, Any]
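
Example (a sketch of the caching behavior, assuming default inputs are defined; with the default SimpleCache policy, a second call with identical inputs should be served from the cache):

    from gemseo.problems.analytical.rosenbrock import RosenMF

    discipline = RosenMF()
    local_data = discipline.execute()  # input_data=None: use the default inputs
    discipline.execute()  # same inputs: expected to hit the cache
    print(discipline.n_calls)  # expected to stay at 1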

get_all_inputs()

Return the local input data as a list.

The order is given by MDODiscipline.get_input_data_names().

Returns:

The local input data.

Return type:

list[Any]

get_all_outputs()

Return the local output data as a list.

The order is given by MDODiscipline.get_output_data_names().

Returns:

The local output data.

Return type:

list[Any]

get_attributes_to_serialize()

Define the names of the attributes to be serialized.

Shall be overloaded by disciplines.

Returns:

The names of the attributes to be serialized.

Return type:

list[str]

static get_data_list_from_dict(keys, data_dict)

Filter the dict from a list of keys or a single key.

If keys is a string, then the method returns the value associated with the key. If keys is a list of strings, then the method returns a generator yielding the values corresponding to the keys.

Parameters:
  • keys (str | Iterable) – One or several names.

  • data_dict (dict[str, Any]) – The mapping from which to get the data.

Returns:

Either a data or a generator of data.

Return type:

Any | Generator[Any]

get_disciplines_in_dataflow_chain()

Return the disciplines that must be shown as blocks within the XDSM representation of a chain.

By default, only the discipline itself is shown. This method can be implemented differently by inherited disciplines.

Returns:

The disciplines shown in the XDSM chain.

Return type:

list[gemseo.core.discipline.MDODiscipline]

get_expected_dataflow()

Return the expected data exchange sequence.

This method is used for the XDSM representation.

The default expected data exchange sequence is an empty list.

See also

MDOFormulation.get_expected_dataflow

Returns:

The data exchange arcs.

Return type:

list[tuple[gemseo.core.discipline.MDODiscipline, gemseo.core.discipline.MDODiscipline, list[str]]]

get_expected_workflow()

Return the expected execution sequence.

This method is used for the XDSM representation.

The default expected execution sequence is the execution of the discipline itself.

See also

MDOFormulation.get_expected_workflow

Returns:

The expected execution sequence.

Return type:

SerialExecSequence

get_input_data(with_namespaces=True)

Return the local input data as a dictionary.

Parameters:

with_namespaces

Whether to keep the namespace prefix of the input names, if any.

By default it is set to True.

Returns:

The local input data.

Return type:

dict[str, Any]

get_input_data_names(with_namespaces=True)

Return the names of the input variables.

Parameters:

with_namespaces

Whether to keep the namespace prefix of the input names, if any.

By default it is set to True.

Returns:

The names of the input variables.

Return type:

list[str]

get_input_output_data_names(with_namespaces=True)

Return the names of the input and output variables.

Parameters:

with_namespaces

Whether to keep the namespace prefix of the input and output names, if any.

By default it is set to True.

Returns:

The names of the input and output variables.

Return type:

list[str]

get_inputs_asarray()

Return the local input data as a large NumPy array.

The order is the one of MDODiscipline.get_all_inputs().

Returns:

The local input data.

Return type:

ndarray

get_inputs_by_name(data_names)

Return the local data associated with input variables.

Parameters:

data_names (Iterable[str]) – The names of the input variables.

Returns:

The local data for the given input variables.

Raises:

ValueError – When a variable is not an input of the discipline.

Return type:

list[Any]

get_local_data_by_name(data_names)

Return the local data of the discipline associated with variables names.

Parameters:

data_names (Iterable[str]) – The names of the variables.

Returns:

The local data associated with the variables names.

Raises:

ValueError – When a name is not a discipline input name.

Return type:

Generator[Any]

get_output_data(with_namespaces=True)

Return the local output data as a dictionary.

Parameters:

with_namespaces

Whether to keep the namespace prefix of the output names, if any.

By default it is set to True.

Returns:

The local output data.

Return type:

dict[str, Any]

get_output_data_names(with_namespaces=True)

Return the names of the output variables.

Parameters:

with_namespaces

Whether to keep the namespace prefix of the output names, if any.

By default it is set to True.

Returns:

The names of the output variables.

Return type:

list[str]

get_outputs_asarray()

Return the local output data as a large NumPy array.

The order is the one of MDODiscipline.get_all_outputs().

Returns:

The local output data.

Return type:

ndarray

get_outputs_by_name(data_names)

Return the local data associated with output variables.

Parameters:

data_names (Iterable[str]) – The names of the output variables.

Returns:

The local data for the given output variables.

Raises:

ValueError – When a variable is not an output of the discipline.

Return type:

list[Any]

get_sub_disciplines()

Return the sub-disciplines if any.

Returns:

The sub-disciplines.

Return type:

list[gemseo.core.discipline.MDODiscipline]

is_all_inputs_existing(data_names)

Test if several variables are discipline inputs.

Parameters:

data_names (Iterable[str]) – The names of the variables.

Returns:

Whether all the variables are discipline inputs.

Return type:

bool

is_all_outputs_existing(data_names)

Test if several variables are discipline outputs.

Parameters:

data_names (Iterable[str]) – The names of the variables.

Returns:

Whether all the variables are discipline outputs.

Return type:

bool

is_input_existing(data_name)

Test if a variable is a discipline input.

Parameters:

data_name (str) – The name of the variable.

Returns:

Whether the variable is a discipline input.

Return type:

bool

is_output_existing(data_name)

Test if a variable is a discipline output.

Parameters:

data_name (str) – The name of the variable.

Returns:

Whether the variable is a discipline output.

Return type:

bool

static is_scenario()

Whether the discipline is a scenario.

Return type:

bool

linearize(input_data=None, force_all=False, force_no_exec=False)

Execute the linearized version of the code.

Parameters:
  • input_data (dict[str, Any] | None) – The input data needed to linearize the discipline according to the discipline input grammar. If None, use the MDODiscipline.default_inputs.

  • force_all (bool) –

    If False, MDODiscipline._differentiated_inputs and MDODiscipline._differentiated_outputs are used to filter the differentiated variables. Otherwise, all the outputs are differentiated with respect to all the inputs.

    By default it is set to False.

  • force_no_exec (bool) –

    If True, the discipline is not re-executed; the cache is loaded anyway.

    By default it is set to False.

Returns:

The Jacobian of the discipline.

Return type:

dict[str, dict[str, ndarray]]
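
Example (a sketch computing the full Jacobian of RosenMF; the output and input names "rosen" and "x" are assumptions to check against the grammars):

    from gemseo.problems.analytical.rosenbrock import RosenMF

    discipline = RosenMF(dimension=2)
    # force_all=True: differentiate all the outputs wrt all the inputs.
    jac = discipline.linearize(force_all=True)
    # The Jacobian is nested as {output_name: {input_name: matrix}}.
    print(jac["rosen"]["x"])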

notify_status_observers()

Notify all status observers that the status has changed.

Return type:

None

remove_status_observer(obs)

Remove an observer for the status.

Parameters:

obs (Any) – The observer to remove.

Return type:

None

reset_statuses_for_run()

Set all the statuses to MDODiscipline.STATUS_PENDING.

Raises:

ValueError – When the discipline cannot be run because of its status.

Return type:

None

serialize(file_path)

Serialize the discipline and store it in a file.

Parameters:

file_path (str | Path) – The path to the file to store the discipline.

Return type:

None
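
Example (a serialization round trip; a sketch, the file name is arbitrary):

    from gemseo.core.discipline import MDODiscipline
    from gemseo.problems.analytical.rosenbrock import RosenMF

    RosenMF().serialize("rosen_mf.pkl")
    # deserialize is a static method of MDODiscipline.
    discipline = MDODiscipline.deserialize("rosen_mf.pkl")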

set_cache_policy(cache_type='SimpleCache', cache_tolerance=0.0, cache_hdf_file=None, cache_hdf_node_name=None, is_memory_shared=True)

Set the type of cache to use and the tolerance level.

This method defines when the output data have to be cached according to the distance between the corresponding input data and the input data already cached for which output data are also cached.

The cache can be either a SimpleCache recording the last execution or a cache storing all executions, e.g. MemoryFullCache and HDF5Cache. Caching data can be either in-memory, e.g. SimpleCache and MemoryFullCache, or on the disk, e.g. HDF5Cache.

The attribute CacheFactory.caches provides the available caches types.

Parameters:
  • cache_type (str) –

    The type of cache.

    By default it is set to “SimpleCache”.

  • cache_tolerance (float) –

    The maximum relative norm of the difference between two input arrays for them to be considered equal.

    By default it is set to 0.0.

  • cache_hdf_file (str | Path | None) – The path to the HDF file to store the data; this argument is mandatory when the MDODiscipline.HDF5_CACHE policy is used.

  • cache_hdf_node_name (str | None) – The name of the HDF file node to store the discipline data. If None, MDODiscipline.name is used.

  • is_memory_shared (bool) –

    Whether to store the data with a shared memory dictionary, which makes the cache compatible with multiprocessing.

    By default it is set to True.

Return type:

None
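
Example (a sketch switching a discipline to a disk-based HDF5 cache; the file name is arbitrary):

    from gemseo.problems.analytical.rosenbrock import RosenMF

    discipline = RosenMF()
    # cache_hdf_file is mandatory with the HDF5Cache policy.
    discipline.set_cache_policy(
        cache_type=discipline.HDF5_CACHE, cache_hdf_file="rosen_mf_cache.h5"
    )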

set_disciplines_statuses(status)

Set the sub-disciplines statuses.

To be implemented in subclasses.

Parameters:

status (str) – The status.

Return type:

None

set_jacobian_approximation(jac_approx_type='finite_differences', jax_approx_step=1e-07, jac_approx_n_processes=1, jac_approx_use_threading=False, jac_approx_wait_time=0)

Set the Jacobian approximation method.

Set the linearization mode to the given approximation method and set the parameters of the approximation for further use when calling MDODiscipline.linearize().

Parameters:
  • jac_approx_type (str) –

    The approximation method, either “complex_step” or “finite_differences”.

    By default it is set to “finite_differences”.

  • jax_approx_step (float) –

    The differentiation step.

    By default it is set to 1e-07.

  • jac_approx_n_processes (int) –

    The maximum simultaneous number of threads, if jac_approx_use_threading is True, or processes otherwise, used to parallelize the execution.

    By default it is set to 1.

  • jac_approx_use_threading (bool) –

    Whether to use threads instead of processes to parallelize the execution; multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. This is important to note: if you want to execute the same discipline multiple times, you shall use multiprocessing.

    By default it is set to False.

  • jac_approx_wait_time (float) –

    The time waited between two forks of the process / thread.

    By default it is set to 0.

Return type:

None
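
Example (a sketch replacing the analytic Jacobian of RosenMF with a complex-step approximation; note that the step argument is spelled jax_approx_step in the signature):

    from gemseo.problems.analytical.rosenbrock import RosenMF

    discipline = RosenMF(dimension=2)
    discipline.set_jacobian_approximation(
        jac_approx_type="complex_step", jax_approx_step=1e-30
    )
    # The next linearization uses the approximation instead of the analytic Jacobian.
    jac = discipline.linearize(force_all=True)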

set_optimal_fd_step(outputs=None, inputs=None, force_all=False, print_errors=False, numerical_error=2.220446049250313e-16)

Compute the optimal finite-difference step.

Compute the optimal step for a forward first-order finite-difference gradient approximation. This requires a first evaluation of the perturbed function values. The optimal step is reached when the truncation error (from cutting the Taylor expansion) and the numerical cancellation error (round-off when computing f(x+step)-f(x)) are approximately equal.

Warning

This calls the discipline execution twice per input variable.

See also

https://en.wikipedia.org/wiki/Numerical_differentiation and “Numerical Algorithms and Digital Representation”, Knut Morken, Chapter 11, “Numerical Differentiation”.

Parameters:
  • inputs (Iterable[str] | None) – The inputs wrt which the outputs are linearized. If None, use the MDODiscipline._differentiated_inputs.

  • outputs (Iterable[str] | None) – The outputs to be linearized. If None, use the MDODiscipline._differentiated_outputs.

  • force_all (bool) –

    Whether to consider all the inputs and outputs of the discipline.

    By default it is set to False.

  • print_errors (bool) –

    Whether to display the estimated errors.

    By default it is set to False.

  • numerical_error (float) –

    The numerical error associated with the calculation of f. By default, this is the machine epsilon (approximately 1e-16), but it can be higher when the calculation of f requires a numerical resolution.

    By default it is set to 2.220446049250313e-16.

Returns:

The estimated truncation and cancellation errors.

Raises:

ValueError – When the Jacobian approximation method has not been set.
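
Example (a sketch; set_jacobian_approximation() must be called first, otherwise ValueError is raised):

    from gemseo.problems.analytical.rosenbrock import RosenMF

    discipline = RosenMF(dimension=2)
    discipline.set_jacobian_approximation(jac_approx_type="finite_differences")
    # Estimate the optimal step; this executes the discipline
    # twice per input variable.
    errors = discipline.set_optimal_fd_step(force_all=True)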

store_local_data(**kwargs)

Store discipline data in local data.

Parameters:

**kwargs (Any) – The data to be stored in MDODiscipline.local_data.

Return type:

None

APPROX_MODES = ['finite_differences', 'complex_step']
AVAILABLE_MODES = ('auto', 'direct', 'adjoint', 'reverse', 'finite_differences', 'complex_step')
AVAILABLE_STATUSES = ['DONE', 'FAILED', 'PENDING', 'RUNNING', 'VIRTUAL']
COMPLEX_STEP = 'complex_step'
FINITE_DIFFERENCES = 'finite_differences'
GRAMMAR_DIRECTORY: ClassVar[str | None] = None

The directory in which to search for the grammar files if not the class one.

HDF5_CACHE = 'HDF5Cache'
JSON_GRAMMAR_TYPE = 'JSONGrammar'
MEMORY_FULL_CACHE = 'MemoryFullCache'
N_CPUS = 2
RE_EXECUTE_DONE_POLICY = 'RE_EXEC_DONE'
RE_EXECUTE_NEVER_POLICY = 'RE_EXEC_NEVER'
SIMPLE_CACHE = 'SimpleCache'
SIMPLE_GRAMMAR_TYPE = 'SimpleGrammar'
STATUS_DONE = 'DONE'
STATUS_FAILED = 'FAILED'
STATUS_PENDING = 'PENDING'
STATUS_RUNNING = 'RUNNING'
STATUS_VIRTUAL = 'VIRTUAL'
activate_cache: bool = True

Whether to cache the discipline evaluations by default.

activate_counters: ClassVar[bool] = True

Whether to activate the counters (execution time, calls and linearizations).

activate_input_data_check: ClassVar[bool] = True

Whether to check that the input data respect the input grammar.

activate_output_data_check: ClassVar[bool] = True

Whether to check that the output data respect the output grammar.

cache: AbstractCache

The cache containing one or several executions of the discipline according to the cache policy.

property cache_tol: float

The cache input tolerance.

This is the tolerance for equality of the inputs in the cache. If norm(stored_input_data-input_data) <= cache_tol * norm(stored_input_data), the cached data for stored_input_data is returned when calling self.execute(input_data).

Raises:

ValueError – When the discipline does not have a cache.

data_processor: DataProcessor

A tool to pre- and post-process discipline data.

property default_inputs: dict[str, Any]

The default inputs.

Raises:

TypeError – When the default inputs are not passed as a dictionary.

exec_for_lin: bool

Whether the last execution was due to a linearization.

property exec_time: float | None

The cumulative execution time of the discipline.

This property is multiprocessing safe.

Raises:

RuntimeError – When the discipline counters are disabled.

property grammar_type: BaseGrammar

The type of grammar to be used for inputs and outputs declaration.

input_grammar: BaseGrammar

The input grammar.

jac: dict[str, dict[str, ndarray]]

The Jacobians of the outputs wrt inputs of the form {output: {input: matrix}}.

property linearization_mode: str

The linearization mode among MDODiscipline.AVAILABLE_MODES.

Raises:

ValueError – When the linearization mode is unknown.

property local_data: DisciplineData

The current input and output data.

property n_calls: int | None

The number of times the discipline was executed.

This property is multiprocessing safe.

Raises:

RuntimeError – When the discipline counters are disabled.

property n_calls_linearize: int | None

The number of times the discipline was linearized.

This property is multiprocessing safe.

Raises:

RuntimeError – When the discipline counters are disabled.

name: str

The name of the discipline.

output_grammar: BaseGrammar

The output grammar.

re_exec_policy: str

The policy to re-execute the same discipline.

residual_variables: Mapping[str, str]

The output variables mapping to their inputs, to be considered as residuals; they shall be equal to zero.

run_solves_residuals: bool

If True, the run method shall solve the residuals.

property status: str

The status of the discipline.

time_stamps = None

class gemseo.problems.analytical.rosenbrock.Rosenbrock(n_x=2, l_b=-2.0, u_b=2.0, scalar_var=False, initial_guess=None)[source]

Bases: OptimizationProblem

The Rosenbrock OptimizationProblem uses the Rosenbrock objective function

\[f(x) = \sum_{i=2}^{n_x} 100(x_{i} - x_{i-1}^2)^2 + (1 - x_{i-1})^2\]

with the default DesignSpace \([-2, 2]^{n_x}\) (from the default bounds l_b and u_b).

Parameters:
  • n_x (int) –

    The dimension of the design space.

    By default it is set to 2.

  • l_b (float) –

    The lower bound (common value to all variables).

    By default it is set to -2.0.

  • u_b (float) –

    The upper bound (common value to all variables).

    By default it is set to 2.0.

  • scalar_var (bool) –

    If True, the design space will contain only scalar variables (as many as the problem dimension); if False, the design space will contain a single multidimensional variable (whose size equals the problem dimension).

    By default it is set to False.

  • initial_guess (ndarray | None) – The initial guess for the optimal solution.
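
Example (a minimal sketch solving the problem with an optimizer; the factory import path and the algorithm name "L-BFGS-B" follow GEMSEO 4.x conventions and should be checked against the installed version):

    from gemseo.algos.opt.opt_factory import OptimizersFactory
    from gemseo.problems.analytical.rosenbrock import Rosenbrock

    problem = Rosenbrock(n_x=2)
    result = OptimizersFactory().execute(problem, "L-BFGS-B", max_iter=100)
    print(result.x_opt, result.f_opt)  # expected close to [1, 1] and 0.0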

add_callback(callback_func, each_new_iter=True, each_store=False)

Add a callback function after each store operation or new iteration.

Parameters:
  • callback_func (Callable) – A function to be called after some event.

  • each_new_iter (bool) –

    If True, then callback at every iteration.

    By default it is set to True.

  • each_store (bool) –

    If True, then callback at every call to Database.store().

    By default it is set to False.

Return type:

None

add_constraint(cstr_func, value=None, cstr_type=None, positive=False)

Add a constraint (equality or inequality) to the optimization problem.

Parameters:
  • cstr_func (MDOFunction) – The constraint.

  • value (float | None) – The value for which the constraint is active. If None, this value is 0.

  • cstr_type (str | None) – The type of the constraint. Either equality or inequality.

  • positive (bool) –

    If True, then the inequality constraint is positive.

    By default it is set to False.

Return type:

None

add_eq_constraint(cstr_func, value=None)

Add an equality constraint to the optimization problem.

Parameters:
  • cstr_func (MDOFunction) – The constraint.

  • value (float | None) – The value for which the constraint is active. If None, this value is 0.

Return type:

None

add_ineq_constraint(cstr_func, value=None, positive=False)

Add an inequality constraint to the optimization problem.

Parameters:
  • cstr_func (MDOFunction) – The constraint.

  • value (float | None) – The value for which the constraint is active. If None, this value is 0.

  • positive (bool) –

    If True, then the inequality constraint is positive.

    By default it is set to False.

Return type:

None

add_observable(obs_func, new_iter=True)

Add a function to be observed.

When the OptimizationProblem is executed, the observables are called as part of its evaluation sequence.

Parameters:
  • obs_func (MDOFunction) – An observable to be observed.

  • new_iter (bool) –

    If True, then the observable will be called at each new iterate.

    By default it is set to True.

Return type:

None

aggregate_constraint(constr_id, method='max', groups=None, **options)

Aggregate a constraint to generate a reduced-dimension constraint.

Parameters:
  • constr_id (int) – The index of the constraint in constraints.

  • method (str | Callable[[Callable], Callable]) –

    The aggregation method, e.g. "max", "KS" or "IKS".

    By default it is set to “max”.

  • groups (tuple[ndarray] | None) – The groups for which to produce an output. If None, a single output constraint is produced.

  • **options (Any) – The options of the aggregation method.

Raises:

ValueError – When the given index is greater than or equal to the number of constraints, or when the aggregation method is unknown.

change_objective_sign()

Change the objective function sign in order to minimize its opposite.

The OptimizationProblem expresses any optimization problem as a minimization problem. Then, an objective function originally expressed as a performance function to maximize must be converted into a cost function to minimize, by means of this method.

Return type:

None

check()

Check if the optimization problem is ready for run.

Raises:

ValueError – If the objective function is missing.

Return type:

None

static check_format(input_function)

Check that a function is an instance of MDOFunction.

Parameters:

input_function (Any) – The function to be tested.

Raises:

TypeError – If the function is not a MDOFunction.

Return type:

None

clear_listeners()

Clear all the listeners.

Return type:

None

evaluate_functions(x_vect=None, eval_jac=False, eval_obj=True, eval_observables=False, normalize=True, no_db_no_norm=False, constraints_names=None, observables_names=None, jacobians_names=None)

Compute the functions of interest, and possibly their derivatives.

These functions of interest are the constraints, and possibly the objective.

Some optimization libraries require the number of constraints as an input parameter, which is unknown to the formulation or the scenario. Evaluating the initial point provides this mandatory information. This is also used in design of experiments to evaluate the samples.

Parameters:
  • x_vect (ndarray) – The input vector at which the functions must be evaluated; if None, the initial point x_0 is used.

  • eval_jac (bool) –

    Whether to compute the Jacobian matrices of the functions of interest. If True and jacobians_names is None then compute the Jacobian matrices (or gradients) of the functions that are selected for evaluation (with eval_obj, constraints_names, eval_observables and observables_names). If False and jacobians_names is None then no Jacobian matrix is evaluated. If jacobians_names is not None then the value of eval_jac is ignored.

    By default it is set to False.

  • eval_obj (bool) –

    Whether to consider the objective function as a function of interest.

    By default it is set to True.

  • eval_observables (bool) –

    Whether to evaluate the observables. If True and observables_names is None then all the observables are evaluated. If False and observables_names is None then no observable is evaluated. If observables_names is not None then the value of eval_observables is ignored.

    By default it is set to False.

  • normalize (bool) –

    Whether to consider the input vector x_vect normalized.

    By default it is set to True.

  • no_db_no_norm (bool) –

    If True, then do not use the pre-processed functions, so that there is neither database nor normalization.

    By default it is set to False.

  • constraints_names (Iterable[str] | None) – The names of the constraints to evaluate. If None then all the constraints are evaluated.

  • observables_names (Iterable[str] | None) – The names of the observables to evaluate. If None and eval_observables is True then all the observables are evaluated. If None and eval_observables is False then no observable is evaluated.

  • jacobians_names (Iterable[str] | None) – The names of the functions whose Jacobian matrices (or gradients) are to be computed. If None and eval_jac is True then compute the Jacobian matrices (or gradients) of the functions that are selected for evaluation (with eval_obj, constraints_names, eval_observables and observables_names). If None and eval_jac is False then no Jacobian matrix is computed.

Returns:

The output values of the functions of interest, as well as their Jacobian matrices if eval_jac is True.

Raises:

ValueError – If a name in jacobians_names is not the name of a function of the problem.

Return type:

tuple[dict[str, float | ndarray], dict[str, ndarray]]
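
Example (a sketch evaluating the objective and its gradient at an unnormalized point; the function name "rosen" used as a key is an assumption):

    from numpy import array

    from gemseo.problems.analytical.rosenbrock import Rosenbrock

    problem = Rosenbrock(n_x=2)
    values, jacobians = problem.evaluate_functions(
        x_vect=array([1.0, 1.0]), eval_jac=True, normalize=False
    )
    print(values["rosen"], jacobians["rosen"])  # 0.0 and the zero gradient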

execute_observables_callback(last_x)

The callback function to be passed to the database.

Call all the observables with the last design variables values as argument.

Parameters:

last_x (ndarray) – The design variables values from the last evaluation.

Return type:

None

export_hdf(file_path, append=False)

Export the optimization problem to an HDF file.

Parameters:
  • file_path (str | Path) – The path of the file to store the data.

  • append (bool) –

    If True, then the data are appended to the file if not empty.

    By default it is set to False.

Return type:

None

export_to_dataset(name=None, by_group=True, categorize=True, opt_naming=True, export_gradients=False, input_values=None)

Export the database of the optimization problem to a Dataset.

The variables can be classified into groups: Dataset.DESIGN_GROUP or Dataset.INPUT_GROUP for the design variables and Dataset.FUNCTION_GROUP or Dataset.OUTPUT_GROUP for the functions (objective, constraints and observables).

Parameters:
  • name (str | None) – The name to be given to the dataset. If None, use the name of the OptimizationProblem.database.

  • by_group (bool) –

    Whether to store the data by group in Dataset.data, in the sense of one unique NumPy array per group. If categorize is False, there is a unique group: Dataset.PARAMETER_GROUP. If categorize is True, the groups can be either Dataset.DESIGN_GROUP and Dataset.FUNCTION_GROUP if opt_naming is True, or Dataset.INPUT_GROUP and Dataset.OUTPUT_GROUP. If by_group is False, store the data by variable names.

    By default it is set to True.

  • categorize (bool) –

    Whether to distinguish between the different groups of variables. Otherwise, group all the variables in Dataset.PARAMETER_GROUP.

    By default it is set to True.

  • opt_naming (bool) –

    Whether to use Dataset.DESIGN_GROUP and Dataset.FUNCTION_GROUP as groups. Otherwise, use Dataset.INPUT_GROUP and Dataset.OUTPUT_GROUP.

    By default it is set to True.

  • export_gradients (bool) –

    Whether to export the gradients of the functions (objective function, constraints and observables) if the latter are available in the database of the optimization problem.

    By default it is set to False.

  • input_values (Iterable[ndarray] | None) – The input values to be considered. If None, consider all the input values of the database.

Returns:

A dataset built from the database of the optimization problem.

Return type:

Dataset
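
Example (a sketch exporting a solved problem's database to a Dataset with input/output naming; the optimizer call follows the earlier sketch and its import path is an assumption to verify):

    from gemseo.algos.opt.opt_factory import OptimizersFactory
    from gemseo.problems.analytical.rosenbrock import Rosenbrock

    problem = Rosenbrock(n_x=2)
    OptimizersFactory().execute(problem, "L-BFGS-B", max_iter=50)
    # opt_naming=False: use Dataset.INPUT_GROUP / Dataset.OUTPUT_GROUP
    # instead of the design/function group naming.
    dataset = problem.export_to_dataset(name="rosenbrock_run", opt_naming=False)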

get_active_ineq_constraints(x_vect, tol=1e-06)

For each constraint, indicate if its different components are active.

Parameters:
  • x_vect (ndarray) – The vector of design variables.

  • tol (float) –

    The tolerance for deciding whether a constraint is active.

    By default it is set to 1e-06.

Returns:

For each constraint, a boolean indicator of activation of its different components.

Return type:

dict[gemseo.core.mdofunctions.mdo_function.MDOFunction, numpy.ndarray]

get_all_functions()

Retrieve all the functions of the optimization problem.

These functions are the constraints, the objective function and the observables.

Returns:

All the functions of the optimization problem.

Return type:

list[gemseo.core.mdofunctions.mdo_function.MDOFunction]

get_all_functions_names()

Retrieve the names of all the functions of the optimization problem.

These functions are the constraints, the objective function and the observables.

Returns:

The names of all the functions of the optimization problem.

Return type:

list[str]

get_best_infeasible_point()

Retrieve the best infeasible point within a given tolerance.

Returns:

The best infeasible point expressed as the design variables values, the objective function value, the feasibility of the point and the functions values.

Return type:

Tuple[Optional[ndarray], Optional[ndarray], bool, Dict[str, ndarray]]

get_constraints_names()

Retrieve the names of the constraints.

Returns:

The names of the constraints.

Return type:

list[str]

get_constraints_number()

Retrieve the number of constraints.

Returns:

The number of constraints.

Return type:

int

get_data_by_names(names, as_dict=True, filter_non_feasible=False)

Return the data for specific names of variables.

Parameters:
  • names (str | Iterable[str]) – The names of the variables.

  • as_dict (bool) –

    If True, return values as dictionary.

    By default it is set to True.

  • filter_non_feasible (bool) –

    If True, remove the non-feasible points from the data.

    By default it is set to False.

Returns:

The data related to the variables.

Return type:

ndarray | dict[str, ndarray]

get_design_variable_names()

Retrieve the names of the design variables.

Returns:

The names of the design variables.

Return type:

list[str]

get_dimension()

Retrieve the total number of design variables.

Returns:

The dimension of the design space.

Return type:

int

get_eq_constraints()

Retrieve all the equality constraints.

Returns:

The equality constraints.

Return type:

list[gemseo.core.mdofunctions.mdo_function.MDOFunction]

get_eq_constraints_number()

Retrieve the number of equality constraints.

Returns:

The number of equality constraints.

Return type:

int

get_eq_cstr_total_dim()

Retrieve the total dimension of the equality constraints.

This dimension is the sum of all the outputs dimensions of all the equality constraints.

Returns:

The total dimension of the equality constraints.

Return type:

int

get_feasible_points()

Retrieve the feasible points within a given tolerance.

This tolerance is defined by OptimizationProblem.eq_tolerance for equality constraints and OptimizationProblem.ineq_tolerance for inequality ones.

Returns:

The values of the design variables and objective function for the feasible points.

Return type:

tuple[list[ndarray], list[dict[str, float | list[int]]]]

get_function_dimension(name)

Return the dimension of a function of the problem (e.g. a constraint).

Parameters:

name (str) – The name of the function.

Returns:

The dimension of the function.

Raises:
  • ValueError – If the function name is unknown to the problem.

  • RuntimeError – If the function dimension is not available.

Return type:

int

get_function_names(names)

Return the names of the functions stored in the database.

Parameters:

names (Iterable[str]) – The names of the outputs or constraints specified by the user.

Returns:

The names of the functions stored in the database.

Return type:

list[str]

get_functions_dimensions(names=None)

Return the dimensions of the outputs of the problem functions.

Parameters:

names (Iterable[str] | None) – The names of the functions. If None, then the objective and all the constraints are considered.

Returns:

The dimensions of the outputs of the problem functions. The dictionary keys are the functions names and the values are the functions dimensions.

Return type:

dict[str, int]

get_ineq_constraints()

Retrieve all the inequality constraints.

Returns:

The inequality constraints.

Return type:

list[gemseo.core.mdofunctions.mdo_function.MDOFunction]

get_ineq_constraints_number()

Retrieve the number of inequality constraints.

Returns:

The number of inequality constraints.

Return type:

int

get_ineq_cstr_total_dim()

Retrieve the total dimension of the inequality constraints.

This dimension is the sum of all the outputs dimensions of all the inequality constraints.

Returns:

The total dimension of the inequality constraints.

Return type:

int

get_nonproc_constraints()

Retrieve the non-processed constraints.

Returns:

The non-processed constraints.

Return type:

list[gemseo.core.mdofunctions.mdo_function.MDOFunction]

get_nonproc_objective()

Retrieve the non-processed objective function.

Return type:

MDOFunction

get_number_of_unsatisfied_constraints(design_variables)

Return the number of scalar constraints not satisfied by design variables.

Parameters:

design_variables (ndarray) – The design variables.

Returns:

The number of unsatisfied scalar constraints.

Return type:

int

get_objective_name(standardize=True)

Retrieve the name of the objective function.

Parameters:

standardize (bool) –

Whether to use the name of the objective expressed as a cost, e.g. "-f" when the user seeks to maximize "f".

By default it is set to True.

Returns:

The name of the objective function.

Return type:

str

get_observable(name)

Return an observable of the problem.

Parameters:

name (str) – The name of the observable.

Returns:

The pre-processed observable if the functions of the problem have already been pre-processed, otherwise the original one.

Return type:

MDOFunction

get_optimum()

Return the optimum solution within the given feasibility tolerances.

Returns:

The optimum result, defined by:

  • the value of the objective function,

  • the value of the design variables,

  • the indicator of feasibility of the optimal solution,

  • the value of the constraints,

  • the value of the gradients of the constraints.

Return type:

Tuple[ndarray, ndarray, bool, Dict[str, ndarray], Dict[str, ndarray]]

get_scalar_constraints_names()

Return the names of the scalar constraints.

Returns:

The names of the scalar constraints.

Return type:

list[str]

get_solution()[source]

Return the theoretical optimal solution.

Returns:

The design variables and the objective at optimum.

Return type:

tuple[numpy.ndarray, float]
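
Example (a sketch comparing the numerical optimum with the theoretical solution, [1, ..., 1] with objective 0; the optimizer import path is an assumption to verify):

    from gemseo.algos.opt.opt_factory import OptimizersFactory
    from gemseo.problems.analytical.rosenbrock import Rosenbrock

    problem = Rosenbrock(n_x=2)
    OptimizersFactory().execute(problem, "L-BFGS-B", max_iter=100)
    # get_optimum returns the objective value first, then the design variables.
    f_opt, x_opt, is_feasible, _, _ = problem.get_optimum()
    x_exact, f_exact = problem.get_solution()
    print(abs(f_opt - f_exact))  # expected to be small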

get_violation_criteria(x_vect)

Compute a violation measure associated with an iteration.

For each constraint, when it is violated, add the absolute distance to zero, in L2 norm.

If the measure is 0, all the constraints are satisfied.

Parameters:

x_vect (ndarray) – The vector of the design variables values.

Returns:

The feasibility of the point and the violation measure.

Return type:

tuple[bool, float]

get_x0_normalized(cast_to_real=False)

Return the current values of the design variables after normalization.

Parameters:

cast_to_real (bool) –

Whether to cast the return value to real.

By default it is set to False.

Returns:

The current values of the design variables normalized between 0 and 1 from their lower and upper bounds.

Return type:

ndarray

has_constraints()

Check if the problem has equality or inequality constraints.

Returns:

True if the problem has equality or inequality constraints.

has_eq_constraints()

Check if the problem has equality constraints.

Returns:

True if the problem has equality constraints.

Return type:

bool

has_ineq_constraints()

Check if the problem has inequality constraints.

Returns:

True if the problem has inequality constraints.

Return type:

bool

has_nonlinear_constraints()

Check if the problem has non-linear constraints.

Returns:

True if the problem has equality or inequality constraints.

Return type:

bool

classmethod import_hdf(file_path, x_tolerance=0.0)

Import an optimization history from an HDF file.

Parameters:
  • file_path (str | Path) – The file containing the optimization history.

  • x_tolerance (float) –

    The tolerance on the design variables when reading the file.

    By default it is set to 0.0.

Returns:

The read optimization problem.

Return type:

OptimizationProblem
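
Example (a sketch of an HDF round trip for the optimization history; the file name is arbitrary and the import paths follow GEMSEO 4.x conventions):

    from gemseo.algos.opt.opt_factory import OptimizersFactory
    from gemseo.algos.opt_problem import OptimizationProblem
    from gemseo.problems.analytical.rosenbrock import Rosenbrock

    problem = Rosenbrock(n_x=2)
    OptimizersFactory().execute(problem, "L-BFGS-B", max_iter=50)
    problem.export_hdf("rosenbrock_history.h5")
    # Re-import the history later, e.g. for post-processing.
    same_problem = OptimizationProblem.import_hdf("rosenbrock_history.h5")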

is_max_iter_reached()

Check if the maximum number of iterations has been reached.

Returns:

Whether the maximum number of iterations has been reached.

Return type:

bool

is_point_feasible(out_val, constraints=None)

Check if a point is feasible.

Note

If the value of a constraint is absent from this point, then this constraint will be considered satisfied.

Parameters:
  • out_val (dict[str, ndarray]) – The values of the objective function, and possibly of the constraints.

  • constraints (Iterable[MDOFunction] | None) – The constraints whose values are to be tested. If None, then take all constraints of the problem.

Returns:

The feasibility of the point.

Return type:

bool

preprocess_functions(is_function_input_normalized=True, use_database=True, round_ints=True, eval_obs_jac=False)

Pre-process all the functions and possibly their gradients.

This is required to wrap the objective function and the constraints with the database, and possibly to approximate the gradients by complex step or finite differences.

Parameters:
  • is_function_input_normalized (bool) –

    Whether to consider the function input as normalized and unnormalize it before the evaluation takes place.

    By default it is set to True.

  • use_database (bool) –

    Whether to wrap the functions in the database.

    By default it is set to True.

  • round_ints (bool) –

    Whether to round the integer variables.

    By default it is set to True.

  • eval_obs_jac (bool) –

    Whether to evaluate the Jacobian of the observables.

    By default it is set to False.

Return type:

None

static repr_constraint(func, ctype, value=None, positive=False)

Express a constraint as a string expression.

Parameters:
  • func (MDOFunction) – The constraint function.

  • ctype (str) – The type of the constraint. Either equality or inequality.

  • value (float | None) – The value for which the constraint is active. If None, this value is 0.

  • positive (bool) –

    If True, then the inequality constraint is positive.

    By default it is set to False.

Returns:

A string representation of the constraint.

Return type:

str

reset(database=True, current_iter=True, design_space=True, function_calls=True, preprocessing=True)

Partially or fully reset the optimization problem.

Parameters:
  • database (bool) –

    Whether to clear the database.

    By default it is set to True.

  • current_iter (bool) –

    Whether to reset the current iteration OptimizationProblem.current_iter.

    By default it is set to True.

  • design_space (bool) –

    Whether to reset the current point of the OptimizationProblem.design_space to its initial value (possibly none).

    By default it is set to True.

  • function_calls (bool) –

    Whether to reset the number of calls of the functions.

    By default it is set to True.

  • preprocessing (bool) –

    Whether to reset the pre-processing of the functions, i.e. mark them as not pre-processed.

    By default it is set to True.

Return type:

None

AVAILABLE_PB_TYPES: ClassVar[str] = ['linear', 'non-linear']
COMPLEX_STEP: Final[str] = 'complex_step'
CONSTRAINTS_GROUP: Final[str] = 'constraints'
DESIGN_SPACE_ATTRS: Final[str] = ['u_bounds', 'l_bounds', 'x_0', 'x_names', 'dimension']
DESIGN_SPACE_GROUP: Final[str] = 'design_space'
DESIGN_VAR_NAMES: Final[str] = 'x_names'
DESIGN_VAR_SIZE: Final[str] = 'x_size'
DIFFERENTIATION_METHODS: ClassVar[str] = ['user', 'complex_step', 'finite_differences', 'no_derivatives']
FINITE_DIFFERENCES: Final[str] = 'finite_differences'
FUNCTIONS_ATTRS: ClassVar[str] = ['objective', 'constraints']
GGOBI_FORMAT: Final[str] = 'ggobi'
HDF5_FORMAT: Final[str] = 'hdf5'
KKT_RESIDUAL_NORM: Final[str] = 'KKT residual norm'
LINEAR_PB: Final[str] = 'linear'
NON_LINEAR_PB: Final[str] = 'non-linear'
NO_DERIVATIVES: Final[str] = 'no_derivatives'
OBJECTIVE_GROUP: Final[str] = 'objective'
OBSERVABLES_GROUP: Final[str] = 'observables'
OPTIM_DESCRIPTION: ClassVar[str] = ['minimize_objective', 'fd_step', 'differentiation_method', 'pb_type', 'ineq_tolerance', 'eq_tolerance']
OPT_DESCR_GROUP: Final[str] = 'opt_description'
SOLUTION_GROUP: Final[str] = 'solution'
USER_GRAD: Final[str] = 'user'
activate_bound_check: ClassVar[bool] = True

Whether to check if a point is in the design space before calling functions.

constraint_names: dict[str, list[str]]

The standardized constraint names bound to the original ones.

constraints: list[MDOFunction]

The constraints.

database: Database

The database to store the optimization problem data.

design_space: DesignSpace

The design space on which the optimization problem is solved.

property differentiation_method: str

The differentiation method.

property dimension: int

The dimension of the design space.

eq_tolerance: float

The tolerance for the equality constraints.

fd_step: float

The finite differences step.

ineq_tolerance: float

The tolerance for the inequality constraints.

property is_mono_objective: bool

Whether the optimization problem is mono-objective.

minimize_objective: bool

Whether to minimize the objective.

new_iter_observables: list[MDOFunction]

The observables to be called at each new iterate.

nonproc_constraints: list[MDOFunction]

The non-processed constraints.

nonproc_new_iter_observables: list[MDOFunction]

The non-processed observables to be called at each new iterate.

nonproc_objective: MDOFunction

The non-processed objective function.

nonproc_observables: list[MDOFunction]

The non-processed observables.

property objective: MDOFunction

The objective function.

observables: list[MDOFunction]

The observables.

property parallel_differentiation: bool

Whether to approximate the derivatives in parallel.

property parallel_differentiation_options: bool

The options to approximate the derivatives in parallel.

pb_type: str

The type of optimization problem.

preprocess_options: dict

The options to pre-process the functions.

solution: OptimizationResult

The solution of the optimization problem.

stop_if_nan: bool

Whether the optimization stops when a function returns NaN.

use_standardized_objective: bool

Whether to use standardized objective for logging and post-processing.

The standardized objective corresponds to the original one expressed as a cost function to minimize. A DriverLib works with this standardized objective and the Database stores its values. However, for convenience, it may be more relevant to log the expression and the values of the original objective.