rosenbrock module¶
The Rosenbrock analytic problem.
- class gemseo.problems.analytical.rosenbrock.RosenMF(dimension=2)[source]¶
Bases:
MDODiscipline
A multi-fidelity Rosenbrock discipline.
Its expression is \(\mathrm{fidelity} * \mathrm{Rosenbrock}(x)\) where both \(\mathrm{fidelity}\) and \(x\) are provided as input data.
- Parameters:
dimension (int) –
The dimension of the design space.
By default it is set to 2.
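A minimal usage sketch (the input names "x" and "fidelity" follow the class description above; passing fidelity as a one-element array and printing the local output data, rather than assuming a specific output name, are illustrative choices):

    from numpy import array

    from gemseo.problems.analytical.rosenbrock import RosenMF

    discipline = RosenMF(dimension=2)
    # 0.5 * Rosenbrock((1, 1)) = 0.0, since (1, 1) is the Rosenbrock minimum.
    discipline.execute({"x": array([1.0, 1.0]), "fidelity": array([0.5])})
    print(discipline.get_output_data())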
- class ApproximationMode(value)¶
Bases:
StrEnum
The approximation derivation modes.
- CENTERED_DIFFERENCES = 'centered_differences'¶
The centered differences method used to approximate the Jacobians by perturbing each variable with a small real number.
- COMPLEX_STEP = 'complex_step'¶
The complex step method used to approximate the Jacobians by perturbing each variable with a small complex number.
- FINITE_DIFFERENCES = 'finite_differences'¶
The finite differences method used to approximate the Jacobians by perturbing each variable with a small real number.
- class CacheType(value)¶
Bases:
StrEnum
The name of the cache class.
- HDF5 = 'HDF5Cache'¶
- MEMORY_FULL = 'MemoryFullCache'¶
- NONE = ''¶
No cache is used.
- SIMPLE = 'SimpleCache'¶
- class ExecutionStatus(value)¶
Bases:
StrEnum
The execution statuses of a discipline.
- DONE = 'DONE'¶
- FAILED = 'FAILED'¶
- LINEARIZE = 'LINEARIZE'¶
- PENDING = 'PENDING'¶
- RUNNING = 'RUNNING'¶
- VIRTUAL = 'VIRTUAL'¶
- class GrammarType(value)¶
Bases:
StrEnum
The name of the grammar class.
- JSON = 'JSONGrammar'¶
- PYDANTIC = 'PydanticGrammar'¶
- SIMPLE = 'SimpleGrammar'¶
- SIMPLER = 'SimplerGrammar'¶
- class InitJacobianType(value)¶
Bases:
StrEnum
The way to initialize Jacobian matrices.
- DENSE = 'dense'¶
The Jacobian is initialized as a NumPy ndarray filled in with zeros.
- EMPTY = 'empty'¶
The Jacobian is initialized as an empty NumPy ndarray.
- SPARSE = 'sparse'¶
The Jacobian is initialized as a SciPy CSR array with zero elements.
- class LinearizationMode(value)¶
Bases:
StrEnum
The linearization modes.
- ADJOINT = 'adjoint'¶
- AUTO = 'auto'¶
- CENTERED_DIFFERENCES = 'centered_differences'¶
- COMPLEX_STEP = 'complex_step'¶
- DIRECT = 'direct'¶
- FINITE_DIFFERENCES = 'finite_differences'¶
- REVERSE = 'reverse'¶
- class ReExecutionPolicy(value)¶
Bases:
StrEnum
The re-execution policy of a discipline.
- DONE = 'RE_EXEC_DONE'¶
- NEVER = 'RE_EXEC_NEVER'¶
- classmethod activate_time_stamps()¶
Activate the time stamps.
For storing start and end times of execution and linearizations.
- Return type:
None
- add_differentiated_inputs(inputs=None)¶
Add the inputs for differentiation.
The inputs that do not represent continuous numbers are filtered out.
- Parameters:
inputs (Iterable[str] | None) – The input variables against which to differentiate the outputs. If None, all the inputs of the discipline are used.
- Raises:
ValueError – When inputs are not in the input grammar.
- Return type:
None
- add_differentiated_outputs(outputs=None)¶
Add the outputs for differentiation.
The outputs that do not represent continuous numbers are filtered out.
- Parameters:
outputs (Iterable[str] | None) – The output variables to be differentiated. If None, all the outputs of the discipline are used.
- Raises:
ValueError – When outputs are not in the output grammar.
- Return type:
None
- add_namespace_to_input(name, namespace)¶
Add a namespace prefix to an existing input grammar element.
The updated input grammar element name will be namespace + namespaces_separator + name.
- add_namespace_to_output(name, namespace)¶
Add a namespace prefix to an existing output grammar element.
The updated output grammar element name will be namespace + namespaces_separator + name.
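A small sketch of this naming rule (the namespace name "ns" is illustrative, and ":" is assumed to be the default separator):

    from gemseo.problems.analytical.rosenbrock import RosenMF

    discipline = RosenMF()
    discipline.add_namespace_to_input("x", "ns")
    # The input "x" is now exposed as "ns" + namespaces_separator + "x",
    # i.e. "ns:x" with the default separator.
    print(discipline.get_input_data_names())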
- add_status_observer(obs)¶
Add an observer for the status.
The observer is notified whenever the status of the discipline changes.
- Parameters:
obs (Any) – The observer to add.
- Return type:
None
- auto_get_grammar_file(is_input=True, name=None, comp_dir=None)¶
Use a naming convention to associate a grammar file to the discipline.
Search in the directory comp_dir for either an input grammar file named name + "_input.json" or an output grammar file named name + "_output.json".
- Parameters:
is_input (bool) –
Whether to search for an input or output grammar file.
By default it is set to True.
name (str | None) – The name to be searched in the file names. If None, use the name of the discipline class.
comp_dir (str | Path | None) – The directory in which to search the grammar file. If None, use the GRAMMAR_DIRECTORY if any, or the directory of the discipline class module.
- Returns:
The grammar file path.
- Return type:
Path
- check_input_data(input_data, raise_exception=True)¶
Check the input data validity.
- check_jacobian(input_data=None, derr_approx=ApproximationMode.FINITE_DIFFERENCES, step=1e-07, threshold=1e-08, linearization_mode='auto', inputs=None, outputs=None, parallel=False, n_processes=2, use_threading=False, wait_time_between_fork=0, auto_set_step=False, plot_result=False, file_path='jacobian_errors.pdf', show=False, fig_size_x=10, fig_size_y=10, reference_jacobian_path=None, save_reference_jacobian=False, indices=None)¶
Check if the analytical Jacobian is correct with respect to a reference one.
If reference_jacobian_path is not None and save_reference_jacobian is True, compute the reference Jacobian with the approximation method and save it in reference_jacobian_path.
If reference_jacobian_path is not None and save_reference_jacobian is False, do not compute the reference Jacobian but read it from reference_jacobian_path.
If reference_jacobian_path is None, compute the reference Jacobian without saving it.
- Parameters:
input_data (Mapping[str, ndarray] | None) – The input data needed to execute the discipline according to the discipline input grammar. If None, use the MDODiscipline.default_inputs.
derr_approx (ApproximationMode) –
The approximation method, either “complex_step” or “finite_differences”.
By default it is set to “finite_differences”.
threshold (float) –
The acceptance threshold for the Jacobian error.
By default it is set to 1e-08.
linearization_mode (str) –
The mode of linearization: “direct”, “adjoint” or “auto” to switch automatically depending on the dimensions of the inputs and outputs.
By default it is set to “auto”.
inputs (Iterable[str] | None) – The names of the inputs wrt which to differentiate the outputs.
outputs (Iterable[str] | None) – The names of the outputs to be differentiated.
step (float) –
The differentiation step.
By default it is set to 1e-07.
parallel (bool) –
Whether to differentiate the discipline in parallel.
By default it is set to False.
n_processes (int) –
The maximum simultaneous number of threads, if use_threading is True, or processes otherwise, used to parallelize the execution.
By default it is set to 2.
use_threading (bool) –
Whether to use threads instead of processes to parallelize the execution; multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. This is important to note: if you want to execute the same discipline multiple times, you should use multiprocessing.
By default it is set to False.
wait_time_between_fork (float) –
The time waited between two forks of the process / thread.
By default it is set to 0.
auto_set_step (bool) –
Whether to compute the optimal step for a forward first order finite differences gradient approximation.
By default it is set to False.
plot_result (bool) –
Whether to plot the result of the validation (computed vs approximated Jacobians).
By default it is set to False.
file_path (str | Path) –
The path to the output file if plot_result is True.
By default it is set to “jacobian_errors.pdf”.
show (bool) –
Whether to open the figure.
By default it is set to False.
fig_size_x (float) –
The x-size of the figure in inches.
By default it is set to 10.
fig_size_y (float) –
The y-size of the figure in inches.
By default it is set to 10.
reference_jacobian_path (str | Path | None) – The path of the reference Jacobian file.
save_reference_jacobian (bool) –
Whether to save the reference Jacobian.
By default it is set to False.
indices (Iterable[int] | None) – The indices of the inputs and outputs for the different sub-Jacobian matrices, formatted as {variable_name: variable_components} where variable_components can be either an integer, e.g. 2, a sequence of integers, e.g. [0, 3], a slice, e.g. slice(0, 3), the ellipsis symbol (…) or None, which is the same as ellipsis. If a variable name is missing, consider all its components. If None, consider all the components of all the inputs and outputs.
- Returns:
Whether the analytical Jacobian is correct with respect to the reference one.
- Return type:
bool
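A hedged validation sketch (the discipline, step and threshold are illustrative):

    from gemseo.problems.analytical.rosenbrock import RosenMF

    discipline = RosenMF(dimension=2)
    # Compare the analytic Jacobian with a finite-difference approximation.
    ok = discipline.check_jacobian(step=1e-7, threshold=1e-6)
    print(ok)  # True when both Jacobians match within the threshold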
- check_output_data(raise_exception=True)¶
Check the output data validity.
- Parameters:
raise_exception (bool) –
Whether to raise an exception when the data is invalid.
By default it is set to True.
- Return type:
None
- classmethod deactivate_time_stamps()¶
Deactivate the time stamps.
For storing start and end times of execution and linearizations.
- Return type:
None
- execute(input_data=None)¶
Execute the discipline.
This method executes the discipline:
- Adds the default inputs to the input_data if some inputs are not defined in input_data but exist in MDODiscipline.default_inputs.
- Checks whether the last execution of the discipline was called with identical inputs, i.e. cached in MDODiscipline.cache; if so, directly returns self.cache.get_output_cache(inputs).
- Caches the inputs.
- Checks the input data against MDODiscipline.input_grammar.
- If MDODiscipline.data_processor is not None, runs the preprocessor.
- Updates the status to MDODiscipline.ExecutionStatus.RUNNING.
- Calls the MDODiscipline._run() method, that shall be defined.
- If MDODiscipline.data_processor is not None, runs the postprocessor.
- Checks the output data.
- Caches the outputs.
- Updates the status to MDODiscipline.ExecutionStatus.DONE or MDODiscipline.ExecutionStatus.FAILED.
- Updates the summed execution time.
- Parameters:
input_data (Mapping[str, Any] | None) – The input data needed to execute the discipline according to the discipline input grammar. If None, use the MDODiscipline.default_inputs.
- Returns:
The discipline local data after execution.
- Return type:
DisciplineData
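A small sketch of the caching behavior described above (assuming the default SimpleCache and active counters; the expected counter value is an assumption based on the cache-hit step):

    from numpy import array

    from gemseo.problems.analytical.rosenbrock import RosenMF

    discipline = RosenMF()
    input_data = {"x": array([0.0, 0.0]), "fidelity": array([1.0])}
    discipline.execute(input_data)
    discipline.execute(input_data)  # identical inputs: served from the cache
    print(discipline.n_calls)  # expected to remain 1 after the cache hit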
- static from_pickle(file_path)¶
Deserialize a discipline from a file.
- Parameters:
file_path (str | Path) – The path to the file containing the discipline.
- Returns:
The discipline instance.
- Return type:
MDODiscipline
- get_all_inputs()¶
Return the local input data.
The order is given by MDODiscipline.get_input_data_names().
- Returns:
The local input data.
- Return type:
Iterator[Any]
- get_all_outputs()¶
Return the local output data.
The order is given by MDODiscipline.get_output_data_names().
- Returns:
The local output data.
- Return type:
Iterator[Any]
- static get_data_list_from_dict(keys, data_dict)¶
Filter the dict from a list of keys or a single key.
If keys is a string, the method returns the value associated with the key. If keys is a list of strings, the method returns a generator of the values corresponding to the keys, which can be iterated.
- get_disciplines_in_dataflow_chain()¶
Return the disciplines that must be shown as blocks in the XDSM.
By default, only the discipline itself is shown. This function can be implemented differently for any type of inherited discipline.
- Returns:
The disciplines shown in the XDSM chain.
- Return type:
list[MDODiscipline]
- get_expected_dataflow()¶
Return the expected data exchange sequence.
This method is used for the XDSM representation.
The default expected data exchange sequence is an empty list.
See also
MDOFormulation.get_expected_dataflow
- Returns:
The data exchange arcs.
- Return type:
- get_expected_workflow()¶
Return the expected execution sequence.
This method is used for the XDSM representation.
The default expected execution sequence is the execution of the discipline itself.
See also
MDOFormulation.get_expected_workflow
- Returns:
The expected execution sequence.
- Return type:
- get_input_data(with_namespaces=True)¶
Return the local input data as a dictionary.
- get_input_data_names(with_namespaces=True)¶
Return the names of the input variables.
- get_input_output_data_names(with_namespaces=True)¶
Return the names of the input and output variables.
- get_inputs_asarray()¶
Return the local input data as a large NumPy array.
The order is the one of MDODiscipline.get_all_inputs().
- Returns:
The local input data.
- Return type:
ndarray
- get_inputs_by_name(data_names)¶
Return the local data associated with input variables.
- Parameters:
data_names (Iterable[str]) – The names of the input variables.
- Returns:
The local data for the given input variables.
- Raises:
ValueError – When a variable is not an input of the discipline.
- Return type:
Iterator[Any]
- get_local_data_by_name(data_names)¶
Return the local data of the discipline associated with variables names.
- Parameters:
data_names (Iterable[str]) – The names of the variables.
- Returns:
The local data associated with the variables names.
- Raises:
ValueError – When a name is not the name of a discipline variable.
- Return type:
Iterator[Any]
- get_output_data(with_namespaces=True)¶
Return the local output data as a dictionary.
- get_output_data_names(with_namespaces=True)¶
Return the names of the output variables.
- get_outputs_asarray()¶
Return the local output data as a large NumPy array.
The order is the one of MDODiscipline.get_all_outputs().
- Returns:
The local output data.
- Return type:
ndarray
- get_outputs_by_name(data_names)¶
Return the local data associated with output variables.
- Parameters:
data_names (Iterable[str]) – The names of the output variables.
- Returns:
The local data for the given output variables.
- Raises:
ValueError – When a variable is not an output of the discipline.
- Return type:
Iterator[Any]
- get_sub_disciplines(recursive=False)¶
Determine the sub-disciplines.
This method lists the sub-disciplines' disciplines. It lists up to one level of disciplines contained inside another one unless the recursive argument is set to True.
- Parameters:
recursive (bool) –
If True, the method looks inside any discipline that has other disciplines inside, until it reaches a discipline without sub-disciplines; in this case the return value does not include any discipline that has sub-disciplines. If False, the method lists up to one level of disciplines contained inside another one; in this case the return value may include disciplines that contain sub-disciplines.
By default it is set to False.
- Returns:
The sub-disciplines.
- Return type:
list[MDODiscipline]
- is_all_inputs_existing(data_names)¶
Test if several variables are discipline inputs.
- is_all_outputs_existing(data_names)¶
Test if several variables are discipline outputs.
- is_input_existing(data_name)¶
Test if a variable is a discipline input.
- is_output_existing(data_name)¶
Test if a variable is a discipline output.
- linearize(input_data=None, compute_all_jacobians=False, execute=True)¶
Compute the Jacobians of some outputs with respect to some inputs.
- Parameters:
input_data (Mapping[str, Any] | None) – The input data for which to compute the Jacobian. If None, use the MDODiscipline.default_inputs.
compute_all_jacobians (bool) –
Whether to compute the Jacobians of all the outputs with respect to all the inputs. Otherwise, set the input variables against which to differentiate the outputs with add_differentiated_inputs() and the output variables to differentiate with add_differentiated_outputs().
By default it is set to False.
execute (bool) –
Whether to start by executing the discipline with the input data for which to compute the Jacobian; this ensures that the discipline was executed with the right input data. It can be almost free if the corresponding output data have been stored in the cache.
By default it is set to True.
- Returns:
The Jacobian of the discipline shaped as {output_name: {input_name: jacobian_array}} where jacobian_array[i, j] is the partial derivative of output_name[i] with respect to input_name[j].
- Raises:
ValueError – When either the inputs for which to differentiate the outputs or the outputs to differentiate are missing.
- Return type:
dict[str, dict[str, ndarray]]
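A sketch of the typical linearization workflow (querying the output names instead of assuming them):

    from numpy import array

    from gemseo.problems.analytical.rosenbrock import RosenMF

    discipline = RosenMF(dimension=2)
    discipline.add_differentiated_inputs(["x"])
    discipline.add_differentiated_outputs(list(discipline.get_output_data_names()))
    # jac is shaped as {output_name: {input_name: jacobian_array}}.
    jac = discipline.linearize({"x": array([1.0, 1.0]), "fidelity": array([1.0])})
    print(jac)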
- notify_status_observers()¶
Notify all status observers that the status has changed.
- Return type:
None
- remove_status_observer(obs)¶
Remove an observer for the status.
- Parameters:
obs (Any) – The observer to remove.
- Return type:
None
- reset_statuses_for_run()¶
Set all the statuses to MDODiscipline.ExecutionStatus.PENDING.
- Raises:
ValueError – When the discipline cannot be run because of its status.
- Return type:
None
- set_cache_policy(cache_type=CacheType.SIMPLE, cache_tolerance=0.0, cache_hdf_file=None, cache_hdf_node_path=None, is_memory_shared=True)¶
Set the type of cache to use and the tolerance level.
This method defines when the output data have to be cached, according to the distance between the corresponding input data and the input data already cached for which output data are also cached.
The cache can be either a SimpleCache recording the last execution or a cache storing all executions, e.g. MemoryFullCache and HDF5Cache. Caching data can be either in-memory, e.g. SimpleCache and MemoryFullCache, or on the disk, e.g. HDF5Cache.
The attribute CacheFactory.caches provides the available cache types.
- Parameters:
cache_type (CacheType) –
The type of cache.
By default it is set to “SimpleCache”.
cache_tolerance (float) –
The maximum relative norm of the difference between two input arrays to consider that two input arrays are equal.
By default it is set to 0.0.
cache_hdf_file (str | Path | None) – The path to the HDF file to store the data; this argument is mandatory when the MDODiscipline.CacheType.HDF5 policy is used.
cache_hdf_node_path (str | None) – The name of the HDF file node to store the discipline data, possibly passed as a path root_name/.../group_name/.../node_name. If None, MDODiscipline.name is used.
is_memory_shared (bool) –
Whether to store the data with a shared memory dictionary, which makes the cache compatible with multiprocessing.
By default it is set to True.
- Return type:
None
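A hedged sketch of switching to a disk-based cache (the file name is arbitrary; cache_hdf_file is mandatory for this policy):

    from gemseo.problems.analytical.rosenbrock import RosenMF

    discipline = RosenMF()
    # Persist every execution to an HDF5 file instead of keeping only the last one.
    discipline.set_cache_policy(
        cache_type=discipline.CacheType.HDF5, cache_hdf_file="rosen_cache.h5"
    )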
- set_disciplines_statuses(status)¶
Set the sub-disciplines statuses.
To be implemented in subclasses.
- Parameters:
status (str) – The status.
- Return type:
None
- set_jacobian_approximation(jac_approx_type=ApproximationMode.FINITE_DIFFERENCES, jax_approx_step=1e-07, jac_approx_n_processes=1, jac_approx_use_threading=False, jac_approx_wait_time=0)¶
Set the Jacobian approximation method.
Sets the linearization mode to approx_method and sets the parameters of the approximation for further use when calling MDODiscipline.linearize().
- Parameters:
jac_approx_type (ApproximationMode) –
The approximation method, either “complex_step” or “finite_differences”.
By default it is set to “finite_differences”.
jax_approx_step (float) –
The differentiation step.
By default it is set to 1e-07.
jac_approx_n_processes (int) –
The maximum simultaneous number of threads, if jac_approx_use_threading is True, or processes otherwise, used to parallelize the execution.
By default it is set to 1.
jac_approx_use_threading (bool) –
Whether to use threads instead of processes to parallelize the execution; multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. This is important to note: if you want to execute the same discipline multiple times, you should use multiprocessing.
By default it is set to False.
jac_approx_wait_time (float) –
The time waited between two forks of the process / thread.
By default it is set to 0.
- Return type:
None
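A minimal sketch (the approximation mode is illustrative):

    from gemseo.problems.analytical.rosenbrock import RosenMF

    discipline = RosenMF()
    # Approximate the Jacobian by complex step in subsequent linearize() calls.
    discipline.set_jacobian_approximation(
        jac_approx_type=discipline.ApproximationMode.COMPLEX_STEP
    )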
- set_linear_relationships(outputs=(), inputs=())¶
Set linear relationships between discipline inputs and outputs.
- Parameters:
outputs (Iterable[str]) –
The discipline output(s) in a linear relation with the input(s). If empty, all discipline outputs are considered.
By default it is set to ().
inputs (Iterable[str]) –
The discipline input(s) in a linear relation with the output(s). If empty, all discipline inputs are considered.
By default it is set to ().
- Return type:
None
- set_optimal_fd_step(outputs=None, inputs=None, compute_all_jacobians=False, print_errors=False, numerical_error=2.220446049250313e-16)¶
Compute the optimal finite-difference step.
Compute the optimal step for a forward first-order finite differences gradient approximation. This requires a first evaluation of the perturbed function values. The optimal step is reached when the truncation error (cut in the Taylor development) and the numerical cancellation error (round-off when computing f(x+step)-f(x)) are approximately equal.
Warning
This calls the discipline execution twice per input variable.
See also
https://en.wikipedia.org/wiki/Numerical_differentiation and “Numerical Algorithms and Digital Representation”, Knut Morken, Chapter 11, “Numerical Differentiation”
- Parameters:
inputs (Iterable[str] | None) – The inputs wrt which the outputs are linearized. If None, use the MDODiscipline._differentiated_inputs.
outputs (Iterable[str] | None) – The outputs to be linearized. If None, use the MDODiscipline._differentiated_outputs.
compute_all_jacobians (bool) –
Whether to compute the Jacobians of all the outputs with respect to all the inputs. Otherwise, set the input variables against which to differentiate the outputs with add_differentiated_inputs() and the output variables to differentiate with add_differentiated_outputs().
By default it is set to False.
print_errors (bool) –
Whether to display the estimated errors.
By default it is set to False.
numerical_error (float) –
The numerical error associated with the calculation of f. By default, this is the machine epsilon (approximately 1e-16), but it can be higher when the calculation of f requires a numerical resolution.
By default it is set to 2.220446049250313e-16.
- Returns:
The estimated truncation and cancellation errors.
- Raises:
ValueError – When the Jacobian approximation method has not been set.
- Return type:
ndarray
- store_local_data(**kwargs)¶
Store discipline data in local data.
- Parameters:
**kwargs (Any) – The data to be stored in MDODiscipline.local_data.
- Return type:
None
- to_pickle(file_path)¶
Serialize the discipline and store it in a file.
- Parameters:
file_path (str | Path) – The path to the file to store the discipline.
- Return type:
None
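A sketch of the serialization round trip (the file name is arbitrary):

    from gemseo.problems.analytical.rosenbrock import RosenMF

    discipline = RosenMF(dimension=4)
    discipline.to_pickle("rosen_mf.pkl")
    same_discipline = RosenMF.from_pickle("rosen_mf.pkl")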
- GRAMMAR_DIRECTORY: ClassVar[str | None] = None¶
The directory in which to search for the grammar files if not the class one.
- activate_counters: ClassVar[bool] = True¶
Whether to activate the counters (execution time, calls and linearizations).
- activate_input_data_check: ClassVar[bool] = True¶
Whether to check that the input data respect the input grammar.
- activate_output_data_check: ClassVar[bool] = True¶
Whether to check that the output data respect the output grammar.
- cache: AbstractCache | None¶
The cache containing one or several executions of the discipline according to the cache policy.
- property cache_tol: float¶
The cache input tolerance.
This is the tolerance for equality of the inputs in the cache. If norm(stored_input_data - input_data) <= cache_tol * norm(stored_input_data), the cached data for stored_input_data is returned when calling self.execute(input_data).
- Raises:
ValueError – When the discipline does not have a cache.
- data_processor: DataProcessor¶
A tool to pre- and post-process discipline data.
- property default_outputs: Defaults¶
The default outputs used when virtual_execution is True.
- property disciplines: list[MDODiscipline]¶
The sub-disciplines, if any.
- property exec_time: float | None¶
The cumulated execution time of the discipline.
This property is multiprocessing safe.
- Raises:
RuntimeError – When the discipline counters are disabled.
- property grammar_type: GrammarType¶
The type of grammar to be used for inputs and outputs declaration.
- input_grammar: BaseGrammar¶
The input grammar.
- jac: MutableMapping[str, MutableMapping[str, ndarray | csr_array | JacobianOperator]]¶
The Jacobians of the outputs wrt inputs.
The structure is {output: {input: matrix}}.
- property linear_relationships: Mapping[str, Iterable[str]]¶
The linear relationships between inputs and outputs.
- property linearization_mode: LinearizationMode¶
The linearization mode among MDODiscipline.LinearizationMode.
- Raises:
ValueError – When the linearization mode is unknown.
- property local_data: DisciplineData¶
The current input and output data.
- property n_calls: int | None¶
The number of times the discipline was executed.
This property is multiprocessing safe.
- Raises:
RuntimeError – When the discipline counters are disabled.
- property n_calls_linearize: int | None¶
The number of times the discipline was linearized.
This property is multiprocessing safe.
- Raises:
RuntimeError – When the discipline counters are disabled.
- output_grammar: BaseGrammar¶
The output grammar.
- re_exec_policy: ReExecutionPolicy¶
The policy to re-execute the same discipline.
- residual_variables: dict[str, str]¶
The output variables mapping to their inputs, to be considered as residuals; they shall be equal to zero.
- property status: ExecutionStatus¶
The status of the discipline.
The status aims at monitoring the process and giving the user a simplified view of the state of the disciplines (executing, linearizing or done). The core part of the execution is _run; the core part of the linearization is _compute_jacobian or the approximated Jacobian computation.
- time_stamps: ClassVar[dict[str, float] | None] = None¶
The mapping from the discipline names to their execution times.
- virtual_execution: ClassVar[bool] = False¶
Whether to skip the _run() method during execution and return the default_outputs, whatever the inputs.
- class gemseo.problems.analytical.rosenbrock.Rosenbrock(n_x=2, l_b=-2.0, u_b=2.0, scalar_var=False, initial_guess=None)[source]¶
Bases:
OptimizationProblem
The Rosenbrock optimization problem.
\[f(x) = \sum_{i=2}^{n_x} 100(x_{i} - x_{i-1}^2)^2 + (1 - x_{i-1})^2\]
with the default DesignSpace \([-2, 2]^{n_x}\), from the default bounds l_b=-2.0 and u_b=2.0.
- Parameters:
n_x (int) –
The dimension of the design space.
By default it is set to 2.
l_b (float) –
The lower bound (common value to all variables).
By default it is set to -2.0.
u_b (float) –
The upper bound (common value to all variables).
By default it is set to 2.0.
scalar_var (bool) –
If True, the design space will contain only scalar variables (as many as the problem dimension); if False, the design space will contain a single multidimensional variable (whose size equals the problem dimension).
By default it is set to False.
initial_guess (ndarray | None) – The initial guess for the optimal solution.
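A hedged solving sketch (assuming the top-level execute_algo() function is available in your GEMSEO version; the algorithm name "L-BFGS-B" and max_iter are illustrative):

    from gemseo import execute_algo

    from gemseo.problems.analytical.rosenbrock import Rosenbrock

    problem = Rosenbrock(n_x=2)
    result = execute_algo(problem, "L-BFGS-B", max_iter=100)
    print(result.x_opt, result.f_opt)  # expected near (1, 1) and 0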
- AggregationFunction¶
alias of EvaluationFunction
- class ApproximationMode(value)¶
Bases:
StrEnum
The approximation derivation modes.
- CENTERED_DIFFERENCES = 'centered_differences'¶
The centered differences method used to approximate the Jacobians by perturbing each variable with a small real number.
- COMPLEX_STEP = 'complex_step'¶
The complex step method used to approximate the Jacobians by perturbing each variable with a small complex number.
- FINITE_DIFFERENCES = 'finite_differences'¶
The finite differences method used to approximate the Jacobians by perturbing each variable with a small real number.
- class DifferentiationMethod(value)¶
Bases:
StrEnum
The differentiation methods.
- CENTERED_DIFFERENCES = 'centered_differences'¶
- COMPLEX_STEP = 'complex_step'¶
- FINITE_DIFFERENCES = 'finite_differences'¶
- NO_DERIVATIVE = 'no_derivative'¶
- USER_GRAD = 'user'¶
- class ProblemType(value)¶
Bases:
StrEnum
The type of problem.
- LINEAR = 'linear'¶
- NON_LINEAR = 'non-linear'¶
- add_callback(callback_func, each_new_iter=True, each_store=False)¶
Add a callback for some events.
The callback functions are attached to the database, which means they are triggered when new values are stored within the database of the optimization problem.
- Parameters:
callback_func (Callable[[ndarray], Any]) – A function to be called after some events, whose argument is a design vector.
each_new_iter (bool) –
Whether to evaluate the callback functions after evaluating all the functions of the optimization problem for a given point and storing their values in the database.
By default it is set to True.
each_store (bool) –
Whether to evaluate the callback functions after storing any new value in the database.
By default it is set to False.
- Return type:
None
- add_constraint(cstr_func, value=None, cstr_type=None, positive=False)¶
Add a constraint (equality and inequality) to the optimization problem.
- Parameters:
cstr_func (MDOFunction) – The constraint.
value (float | None) – The value for which the constraint is active. If None, this value is 0.
cstr_type (MDOFunction.ConstraintType | None) – The type of the constraint.
positive (bool) –
If True, then the inequality constraint is positive.
By default it is set to False.
- Raises:
TypeError – When the constraint of a linear optimization problem is not an MDOLinearFunction.
ValueError – When the type of the constraint is missing.
- Return type:
None
- add_eq_constraint(cstr_func, value=None)¶
Add an equality constraint to the optimization problem.
- Parameters:
cstr_func (MDOFunction) – The constraint.
value (float | None) – The value for which the constraint is active. If None, this value is 0.
- Return type:
None
- add_ineq_constraint(cstr_func, value=None, positive=False)¶
Add an inequality constraint to the optimization problem.
- Parameters:
cstr_func (MDOFunction) – The constraint.
value (float | None) – The value for which the constraint is active. If None, this value is 0.
positive (bool) –
If True, then the inequality constraint is positive.
By default it is set to False.
- Return type:
None
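A hedged sketch of adding the inequality constraint x_0 + x_1 - 1 <= 0 (the MDOFunction import path follows GEMSEO 5; adapt it to your version):

    from gemseo.core.mdofunctions.mdo_function import MDOFunction
    from gemseo.problems.analytical.rosenbrock import Rosenbrock

    problem = Rosenbrock(n_x=2)
    constraint = MDOFunction(lambda x: x[0] + x[1] - 1.0, "g")
    problem.add_ineq_constraint(constraint)  # enforces g(x) <= 0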
- add_observable(obs_func, new_iter=True)¶
Add a function to be observed.
When the OptimizationProblem is executed, the observables are called following this sequence:
- The optimization algorithm calls the objective function with a normalized x_vect.
- The OptimizationProblem.preprocess_functions() wraps the function as a NormDBFunction, which unnormalizes the x_vect before evaluation.
- The unnormalized x_vect and the result of the evaluation are stored in the OptimizationProblem.database.
- The previous step triggers the OptimizationProblem.new_iter_listeners, which call the observables with the unnormalized x_vect.
- The observables themselves are wrapped as a NormDBFunction by OptimizationProblem.preprocess_functions(), but in this case the input is always expected as unnormalized, to avoid an additional normalizing-unnormalizing step.
- Finally, the output is stored in the OptimizationProblem.database.
- Parameters:
obs_func (MDOFunction) – An observable to be observed.
new_iter (bool) –
If True, then the observable will be called at each new iterate.
By default it is set to True.
- Return type:
None
- aggregate_constraint(constraint_index, method=EvaluationFunction.MAX, groups=None, **options)¶
Aggregate a constraint to generate a reduced dimension constraint.
- Parameters:
constraint_index (int) – The index of the constraint in constraints.
method (Callable[[NDArray[float]], float] | AggregationFunction) –
The aggregation method, e.g. “max”, “lower_bound_KS”, “upper_bound_KS” or “IKS”.
By default it is set to “MAX”.
groups (Iterable[Sequence[int]] | None) – The groups of components of the constraint to aggregate to produce one aggregation constraint per group of components; if None, a single aggregation constraint is produced.
**options (Any) – The options of the aggregation method.
- Raises:
ValueError – When the given index is greater than or equal to the number of constraints, or when the constraint aggregation method is unknown.
- Return type:
None
- apply_exterior_penalty(objective_scale=1.0, scale_inequality=1.0, scale_equality=1.0)¶
Reformulate the optimization problem using exterior penalty.
Given the optimization problem with equality and inequality constraints:
\[ \begin{align}\begin{aligned}min_x f(x)\\s.t.\\g(x)\leq 0\\h(x)=0\\l_b\leq x\leq u_b\end{aligned}\end{align} \]The exterior penalty approach consists in building a penalized objective function that takes into account constraints violations:
\[ \begin{align}\begin{aligned}min_x \tilde{f}(x) = \frac{f(x)}{o_s} + s[\sum{H(g(x))g(x)^2}+\sum{h(x)^2}]\\s.t.\\l_b\leq x\leq u_b\end{aligned}\end{align} \]
where \(H(x)\) is the Heaviside function, \(o_s\) is the objective_scale parameter and \(s\) is the scale parameter. The solution of the new problem approximates the one of the original problem. As the values of objective_scale and scale increase, the solutions get closer, but the optimization problem requires more and more iterations to be solved.
- Parameters:
scale_equality (float | ndarray) –
The equality constraint scaling constant.
By default it is set to 1.0.
objective_scale (float) –
The objective scaling constant.
By default it is set to 1.0.
scale_inequality (float | ndarray) –
The inequality constraint scaling constant.
By default it is set to 1.0.
- Return type:
None
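A minimal sketch (the constraint and scaling constants are illustrative; the MDOFunction import path follows GEMSEO 5):

    from gemseo.core.mdofunctions.mdo_function import MDOFunction
    from gemseo.problems.analytical.rosenbrock import Rosenbrock

    problem = Rosenbrock(n_x=2)
    problem.add_ineq_constraint(MDOFunction(lambda x: x[0] + x[1] - 1.0, "g"))
    # Replace the explicit constraint with the penalized objective above;
    # only the bound constraints remain afterwards.
    problem.apply_exterior_penalty(objective_scale=1.0, scale_inequality=10.0)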
- change_objective_sign()¶
Change the objective function sign in order to minimize its opposite.
The OptimizationProblem expresses any optimization problem as a minimization problem. Thus, an objective function originally expressed as a performance function to maximize must be converted into a cost function to minimize, by means of this method.
None
- check()¶
Check if the optimization problem is ready for run.
- Raises:
ValueError – If the objective function is missing.
- Return type:
None
- static check_format(input_function)¶
Check that a function is an instance of MDOFunction.
- Parameters:
input_function (Any) – The function to be tested.
- Raises:
TypeError – If the function is not an MDOFunction.
- Return type:
None
- clear_listeners()¶
Clear all the listeners.
- Return type:
None
- evaluate_functions(x_vect=None, eval_jac=False, eval_obj=True, eval_observables=True, normalize=True, no_db_no_norm=False, constraint_names=None, observable_names=None, jacobian_names=None)¶
Compute the functions of interest, and possibly their derivatives.
These functions of interest are the constraints, and possibly the objective.
Some optimization libraries require the number of constraints as an input parameter, which is unknown by the formulation or the scenario. Evaluating the functions at the initial point provides this mandatory information. This is also used for design of experiments to evaluate samples.
- Parameters:
x_vect (ndarray) – The input vector at which the functions must be evaluated; if None, the initial point x_0 is used.
eval_jac (bool) –
Whether to compute the Jacobian matrices of the functions of interest. If True and jacobian_names is None, then compute the Jacobian matrices (or gradients) of the functions that are selected for evaluation (with eval_obj, constraint_names, eval_observables and observable_names). If False and jacobian_names is None, then no Jacobian matrix is evaluated. If jacobian_names is not None, then the value of eval_jac is ignored.
By default it is set to False.
eval_obj (bool) –
Whether to consider the objective function as a function of interest.
By default it is set to True.
eval_observables (bool) –
Whether to evaluate the observables. If True and observable_names is None, then all the observables are evaluated. If False and observable_names is None, then no observable is evaluated. If observable_names is not None, then the value of eval_observables is ignored.
By default it is set to True.
normalize (bool) –
Whether to consider the input vector x_vect normalized.
By default it is set to True.
no_db_no_norm (bool) –
If True, then do not use the pre-processed functions, so there is neither database nor normalization.
By default it is set to False.
constraint_names (Iterable[str] | None) – The names of the constraints to evaluate. If None, then all the constraints are evaluated.
observable_names (Iterable[str] | None) – The names of the observables to evaluate. If None and eval_observables is True, then all the observables are evaluated. If None and eval_observables is False, then no observable is evaluated.
jacobian_names (Iterable[str] | None) – The names of the functions whose Jacobian matrices (or gradients) to compute. If None and eval_jac is True, then compute the Jacobian matrices (or gradients) of the functions that are selected for evaluation (with eval_obj, constraint_names, eval_observables and observable_names). If None and eval_jac is False, then no Jacobian matrix is computed.
- Returns:
The output values of the functions of interest, as well as their Jacobian matrices if eval_jac is True.
- Raises:
ValueError – If a name in jacobian_names is not the name of a function of the problem.
- Return type:
EvaluationType
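A hedged evaluation sketch (evaluating at an unnormalized point; printing the returned dictionaries avoids assuming the function names):

    from numpy import array

    from gemseo.problems.analytical.rosenbrock import Rosenbrock

    problem = Rosenbrock(n_x=2)
    # Evaluate the functions of interest and their gradients.
    values, jacobians = problem.evaluate_functions(
        x_vect=array([1.0, 1.0]), normalize=False, eval_jac=True
    )
    print(values, jacobians)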
- execute_observables_callback(last_x)¶
The callback function to be passed to the database.
Call all the observables with the last design variables values as argument.
- Parameters:
last_x (ndarray) – The design variables values from the last evaluation.
- Return type:
None
- classmethod from_hdf(file_path, x_tolerance=0.0, hdf_node_path='')¶
Import an optimization history from an HDF file.
- Parameters:
file_path (str | Path) – The file containing the optimization history.
x_tolerance (float) –
The tolerance on the design variables when reading the file.
By default it is set to 0.0.
hdf_node_path (str) –
The path of the HDF node from which the database should be imported. If empty, the root node is considered.
By default it is set to “”.
- Returns:
The read optimization problem.
- Return type:
OptimizationProblem
- get_active_ineq_constraints(x_vect, tol=1e-06)¶
For each constraint, indicate if its different components are active.
- Parameters:
x_vect (ndarray) – The vector of design variables.
tol (float) –
The tolerance for deciding whether a constraint is active.
By default it is set to 1e-06.
- Returns:
For each constraint, a boolean indicator of activation of its different components.
- Return type:
- get_all_function_name()¶
Retrieve the names of all the functions of the optimization problem.
These functions are the constraints, the objective function and the observables.
- get_all_functions(original=False)¶
Retrieve all the functions of the optimization problem.
These functions are the constraints, the objective function and the observables.
- Parameters:
original (bool) –
Whether to return the original functions or the preprocessed ones.
By default it is set to False.
- Returns:
All the functions of the optimization problem.
- Return type:
list[MDOFunction]
- get_best_infeasible_point()¶
Retrieve the best infeasible point within a given tolerance.
- get_constraint_names()¶
Retrieve the names of the constraints.
- get_constraints_number()¶
Retrieve the number of constraints.
- Returns:
The number of constraints.
- Return type:
int
- get_data_by_names(names, as_dict=True, filter_non_feasible=False)¶
Return the data for specific names of variables.
- Parameters:
names (str | Iterable[str]) – The names of the variables.
as_dict (bool) –
Whether to return the data as a dictionary.
By default it is set to True.
filter_non_feasible (bool) –
Whether to filter out the non-feasible points.
By default it is set to False.
- Returns:
The data related to the variables.
- Return type:
- get_design_variable_names()¶
Retrieve the names of the design variables.
- get_dimension()¶
Retrieve the total number of design variables.
- Returns:
The dimension of the design space.
- Return type:
int
- get_eq_constraints()¶
Retrieve all the equality constraints.
- Returns:
The equality constraints.
- Return type:
list[MDOFunction]
- get_eq_constraints_number()¶
Retrieve the number of equality constraints.
- Returns:
The number of equality constraints.
- Return type:
int
- get_eq_cstr_total_dim()¶
Retrieve the total dimension of the equality constraints.
This dimension is the sum of the output dimensions of all the equality constraints.
- Returns:
The total dimension of the equality constraints.
- Return type:
int
- get_feasible_points()¶
Retrieve the feasible points within a given tolerance.
This tolerance is defined by OptimizationProblem.eq_tolerance for equality constraints and OptimizationProblem.ineq_tolerance for inequality ones.
- get_function_dimension(name)¶
Return the dimension of a function of the problem (e.g. a constraint).
- Parameters:
name (str) – The name of the function.
- Returns:
The dimension of the function.
- Raises:
ValueError – If the function name is unknown to the problem.
RuntimeError – If the function dimension is unavailable.
- Return type:
int
- get_function_names(names)¶
Return the names of the functions stored in the database.
- get_functions_dimensions(names=None)¶
Return the dimensions of the outputs of the problem functions.
- Parameters:
names (Iterable[str] | None) – The names of the functions. If None, then the objective and all the constraints are considered.
- Returns:
The dimensions of the outputs of the problem functions. The dictionary keys are the function names and the values are the function dimensions.
- Return type:
dict[str, int]
- get_ineq_constraints()¶
Retrieve all the inequality constraints.
- Returns:
The inequality constraints.
- Return type:
list[MDOFunction]
- get_ineq_constraints_number()¶
Retrieve the number of inequality constraints.
- Returns:
The number of inequality constraints.
- Return type:
int
- get_ineq_cstr_total_dim()¶
Retrieve the total dimension of the inequality constraints.
This dimension is the sum of the output dimensions of all the inequality constraints.
- Returns:
The total dimension of the inequality constraints.
- Return type:
int
- get_last_point()¶
Return the last design point.
- Returns:
The last point result, defined by:
- the value of the objective function,
- the value of the design variables,
- the indicator of feasibility of the last point,
- the value of the constraints,
- the value of the gradients of the constraints.
- Raises:
ValueError – When the optimization database is empty.
- Return type:
tuple[ndarray, ndarray, bool, dict[str, ndarray], dict[str, ndarray]]
- get_nonproc_constraints()¶
Retrieve the non-processed constraints.
- Returns:
The non-processed constraints.
- Return type:
list[MDOFunction]
- get_nonproc_objective()¶
Retrieve the non-processed objective function.
- Return type:
MDOFunction
- get_number_of_unsatisfied_constraints(design_variables, values=mappingproxy({}))¶
Return the number of scalar constraints not satisfied by design variables.
- Parameters:
design_variables (ndarray) – The design variables.
values (Mapping[str, float | ndarray]) –
The known values of the functions.
By default it is set to {}.
- Returns:
The number of unsatisfied scalar constraints.
- Return type:
int
- get_objective_name(standardize=True)¶
Retrieve the name of the objective function.
- get_observable(name)¶
Return an observable of the problem.
- Parameters:
name (str) – The name of the observable.
- Returns:
The pre-processed observable if the functions of the problem have already been pre-processed, otherwise the original one.
- Return type:
MDOFunction
- get_optimum()¶
Return the optimum solution within given feasibility tolerances.
- Returns:
The optimum result, defined by:
- the value of the objective function,
- the value of the design variables,
- the indicator of feasibility of the optimal solution,
- the value of the constraints,
- the value of the gradients of the constraints.
- Raises:
ValueError – When the optimization database is empty.
- Return type:
tuple[ndarray, ndarray, bool, dict[str, ndarray], dict[str, ndarray]]
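A sketch of unpacking the five-element tuple documented above, after solving (assuming the top-level execute_algo() function; the algorithm settings are illustrative):

    from gemseo import execute_algo

    from gemseo.problems.analytical.rosenbrock import Rosenbrock

    problem = Rosenbrock(n_x=2)
    execute_algo(problem, "L-BFGS-B", max_iter=100)
    f_opt, x_opt, is_feasible, c_values, c_gradients = problem.get_optimum()
    print(f_opt, x_opt, is_feasible)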
- get_reformulated_problem_with_slack_variables()¶
Add slack variables and replace inequality constraints with equality ones.
Given the original optimization problem,
\[ \begin{align}\begin{aligned}min_x f(x)\\s.t.\\g(x)\leq 0\\h(x)=0\\l_b\leq x\leq u_b\end{aligned}\end{align} \]Slack variables are introduced for all inequality constraints that are non-positive. An equality constraint for each slack variable is then defined.
\[ \begin{align}\begin{aligned}min_{x,s} F(x,s) = f(x)\\s.t.\\H(x,s) = h(x)=0\\G(x,s) = g(x)-s=0\\l_b\leq x\leq u_b\\s\leq 0\end{aligned}\end{align} \]
- Returns:
An optimization problem without inequality constraints.
- Return type:
OptimizationProblem
- get_scalar_constraint_names()¶
Return the names of the scalar constraints.
- get_violation_criteria(x_vect)¶
Check if a design point is feasible and measure its constraint violation.
The constraint violation measure at a design point \(x\) is
\[\lVert\max(g(x)-\varepsilon_{\text{ineq}},0)\rVert_2^2 + \lVert\max(|h(x)|-\varepsilon_{\text{eq}},0)\rVert_2^2\]
where \(\|.\|_2\) is the Euclidean norm, \(g(x)\) is the inequality constraint vector, \(h(x)\) is the equality constraint vector, \(\varepsilon_{\text{ineq}}\) is the tolerance for the inequality constraints and \(\varepsilon_{\text{eq}}\) is the tolerance for the equality constraints.
If the design point is feasible, the constraint violation measure is 0.
- get_x0_normalized(cast_to_real=False, as_dict=False)¶
Return the initial values of the design variables after normalization.
- Parameters:
cast_to_real (bool) –
Whether to cast the values to real numbers.
By default it is set to False.
as_dict (bool) –
Whether to return the values as a dictionary.
By default it is set to False.
- Returns:
The current values of the design variables normalized between 0 and 1 from their lower and upper bounds.
- Return type:
- has_constraints()¶
Check if the problem has equality or inequality constraints.
- Returns:
True if the problem has equality or inequality constraints.
- Return type:
bool
- has_eq_constraints()¶
Check if the problem has equality constraints.
- Returns:
True if the problem has equality constraints.
- Return type:
bool
- has_ineq_constraints()¶
Check if the problem has inequality constraints.
- Returns:
True if the problem has inequality constraints.
- Return type:
bool
- has_nonlinear_constraints()¶
Check if the problem has non-linear constraints.
- Returns:
True if the problem has non-linear constraints.
- Return type:
bool
- is_max_iter_reached()¶
Check if the maximum number of iterations has been reached.
- Returns:
Whether the maximum number of iterations has been reached.
- Return type:
bool
- is_point_feasible(out_val, constraints=None)¶
Check if a point is feasible.
Notes
If the value of a constraint is absent from this point, then this constraint will be considered satisfied.
- Parameters:
out_val (dict[str, ndarray]) – The values of the objective function, and possibly the constraints.
constraints (Iterable[MDOFunction] | None) – The constraints whose values are to be tested. If None, then all the constraints of the problem are tested.
- Returns:
The feasibility of the point.
- Return type:
bool
- preprocess_functions(is_function_input_normalized=True, use_database=True, round_ints=True, eval_obs_jac=False, support_sparse_jacobian=False)¶
Pre-process all the functions and possibly their gradients.
This is required to wrap the objective function and the constraints with the database, and possibly to approximate the gradients by complex step or finite differences.
- Parameters:
is_function_input_normalized (bool) –
Whether to consider the function input as normalized and unnormalize it before the evaluation takes place.
By default it is set to True.
use_database (bool) –
Whether to wrap the functions in the database.
By default it is set to True.
round_ints (bool) –
Whether to round the integer variables.
By default it is set to True.
eval_obs_jac (bool) –
Whether to evaluate the Jacobian of the observables.
By default it is set to False.
support_sparse_jacobian (bool) –
Whether the driver supports sparse Jacobians.
By default it is set to False.
- Return type:
None
- static repr_constraint(func, cstr_type, value=None, positive=False)¶
Express a constraint as a string expression.
- Parameters:
func (MDOFunction) – The constraint function.
cstr_type (MDOFunction.ConstraintType) – The type of the constraint.
value (float | None) – The value for which the constraint is active. If None, this value is 0.
positive (bool) –
If True, then the inequality constraint is positive.
By default it is set to False.
- Returns:
A string representation of the constraint.
- Return type:
str
- reset(database=True, current_iter=True, design_space=True, function_calls=True, preprocessing=True)¶
Partially or fully reset the optimization problem.
- Parameters:
database (bool) –
Whether to clear the database.
By default it is set to True.
current_iter (bool) –
Whether to reset the current iteration OptimizationProblem.current_iter.
By default it is set to True.
design_space (bool) –
Whether to reset the current point of the OptimizationProblem.design_space to its initial value (possibly none).
By default it is set to True.
function_calls (bool) –
Whether to reset the number of calls of the functions.
By default it is set to True.
preprocessing (bool) –
Whether to mark the functions as no longer pre-processed.
By default it is set to True.
- Return type:
None
- to_dataset(name='', categorize=True, opt_naming=True, export_gradients=False, input_values=())¶
Export the database of the optimization problem to a Dataset.
The variables can be classified into groups: Dataset.DESIGN_GROUP or Dataset.INPUT_GROUP for the design variables and Dataset.FUNCTION_GROUP or Dataset.OUTPUT_GROUP for the functions (objective, constraints and observables).
- Parameters:
name (str) –
The name to be given to the dataset. If empty, use the name of the OptimizationProblem.database.
By default it is set to “”.
categorize (bool) –
Whether to distinguish between the different groups of variables. Otherwise, group all the variables in Dataset.PARAMETER_GROUP.
By default it is set to True.
opt_naming (bool) –
Whether to use Dataset.DESIGN_GROUP and Dataset.FUNCTION_GROUP as groups. Otherwise, use Dataset.INPUT_GROUP and Dataset.OUTPUT_GROUP.
By default it is set to True.
export_gradients (bool) –
Whether to export the gradients of the functions (objective function, constraints and observables) if the latter are available in the database of the optimization problem.
By default it is set to False.
input_values (Iterable[ndarray]) –
The input values to be considered. If empty, consider all the input values of the database.
By default it is set to ().
- Returns:
A dataset built from the database of the optimization problem.
- Return type:
Dataset
- to_hdf(file_path, append=False, hdf_node_path='')¶
Export the optimization problem to an HDF file.
- Parameters:
file_path (str | Path) – The path of the file to store the data.
append (bool) –
If True, then the data are appended to the file if not empty.
By default it is set to False.
hdf_node_path (str) –
The path of the HDF node in which the database should be exported. If empty, the root node is considered.
By default it is set to “”.
- Return type:
None
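A sketch of the HDF round trip (the file name is arbitrary):

    from gemseo.problems.analytical.rosenbrock import Rosenbrock

    problem = Rosenbrock(n_x=2)
    problem.to_hdf("rosenbrock_history.h5")
    same_problem = Rosenbrock.from_hdf("rosenbrock_history.h5")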
- OPTIM_DESCRIPTION: ClassVar[str] = ['minimize_objective', 'fd_step', 'differentiation_method', 'pb_type', 'ineq_tolerance', 'eq_tolerance']¶
- activate_bound_check: ClassVar[bool] = True¶
Whether to check if a point is in the design space before calling functions.
- property constraint_names: dict[str, list[str]]¶
The standardized constraint names bound to the original ones.
- constraints: list[MDOFunction]¶
The constraints.
- design_space: DesignSpace¶
The design space on which the optimization problem is solved.
- property is_mono_objective: bool¶
Whether the optimization problem is mono-objective.
- Raises:
ValueError – When the dimension of the objective cannot be determined.
- new_iter_observables: list[MDOFunction]¶
The observables to be called at each new iterate.
- nonproc_constraints: list[MDOFunction]¶
The non-processed constraints.
- nonproc_new_iter_observables: list[MDOFunction]¶
The non-processed observables to be called at each new iterate.
- nonproc_objective: MDOFunction¶
The non-processed objective function.
- nonproc_observables: list[MDOFunction]¶
The non-processed observables.
- property objective: MDOFunction¶
The objective function.
- observables: list[MDOFunction]¶
The observables.
- property parallel_differentiation_options: dict[str, int | bool]¶
The options to approximate the derivatives in parallel.
- pb_type: ProblemType¶
The type of optimization problem.
- solution: OptimizationResult | None¶
The solution of the optimization problem if solved; otherwise None.
- use_standardized_objective: bool¶
Whether to use standardized objective for logging and post-processing.
The standardized objective corresponds to the original one expressed as a cost function to minimize. A DriverLibrary works with this standardized objective and the Database stores its values. However, for convenience, it may be more relevant to log the expression and the values of the original objective.
Examples using Rosenbrock¶
Convert a database to a dataset