rosenbrock module

The Rosenbrock analytic problem.

Classes:

RosenMF([dimension])

RosenMF, a multi-fidelity Rosenbrock MDODiscipline.

Rosenbrock([n_x, l_b, u_b, scalar_var, …])

The Rosenbrock OptimizationProblem, based on the Rosenbrock objective function.

class gemseo.problems.analytical.rosenbrock.RosenMF(dimension=2)[source]

Bases: gemseo.core.discipline.MDODiscipline

RosenMF, a multi-fidelity Rosenbrock MDODiscipline, returns the value:

\[\mathrm{fidelity} * \mathrm{Rosenbrock}(x)\]

where both \(\mathrm{fidelity}\) and \(x\) are provided as input data.

The constructor defines the default inputs of the MDODiscipline, namely the default design parameter values and the fidelity.

Parameters

dimension (int) – problem dimension
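
A minimal usage sketch; the input names "x" and "fidelity" and the output name "rosen" are assumptions based on this discipline's default grammars:

    from numpy import array

    from gemseo.problems.analytical.rosenbrock import RosenMF

    # Build a 2-dimensional multi-fidelity Rosenbrock discipline.
    discipline = RosenMF(dimension=2)

    # Evaluate it at x = (1, 1) with a fidelity of 0.5;
    # since Rosenbrock((1, 1)) = 0, the output "rosen" should be 0.
    data = discipline.execute({"x": array([1.0, 1.0]), "fidelity": array([0.5])})
    print(data["rosen"])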

Attributes:

APPROX_MODES

AVAILABLE_MODES

COMPLEX_STEP

FINITE_DIFFERENCES

HDF5_CACHE

JSON_GRAMMAR_TYPE

MEMORY_FULL_CACHE

N_CPUS

RE_EXECUTE_DONE_POLICY

RE_EXECUTE_NEVER_POLICY

SIMPLE_CACHE

SIMPLE_GRAMMAR_TYPE

STATUS_DONE

STATUS_FAILED

STATUS_PENDING

STATUS_RUNNING

STATUS_VIRTUAL

cache_tol

Accessor to the cache input tolerance.

default_inputs

Accessor to the default inputs.

exec_time

Return the cumulated execution time.

linearization_mode

Accessor to the linearization mode.

n_calls

Return the number of calls to execute() that triggered the _run() method.

n_calls_linearize

Return the number of calls to linearize() which triggered the _compute_jacobian() method.

status

Status accessor.

time_stamps

Methods:

activate_time_stamps()

Activate the time stamps.

add_differentiated_inputs([inputs])

Add inputs to the differentiation list.

add_differentiated_outputs([outputs])

Add outputs to the differentiation list.

add_status_observer(obs)

Add an observer for the status.

auto_get_grammar_file([is_input, name, comp_dir])

Use a naming convention to associate a grammar file to a discipline.

check_input_data(input_data[, raise_exception])

Check the input data validity.

check_jacobian([input_data, derr_approx, …])

Check if the jacobian provided by the linearize() method is correct.

check_output_data([raise_exception])

Check the output data validity.

deactivate_time_stamps()

Deactivate the time stamps for storing start and end times of execution and linearizations.

deserialize(in_file)

Deserialize the discipline from a file.

execute([input_data])

Execute the discipline.

get_all_inputs()

Accessor for the input data as a list of values.

get_all_outputs()

Accessor for the output data as a list of values.

get_attributes_to_serialize()

Define the attributes to be serialized.

get_data_list_from_dict(keys, data_dict)

Filter the dict from a list of keys or a single key.

get_expected_dataflow()

Return the expected data exchange sequence.

get_expected_workflow()

Return the expected execution sequence.

get_input_data()

Accessor for the input data as a dict of values.

get_input_data_names()

Accessor for the input names as a list.

get_input_output_data_names()

Accessor for the input and output names as a list.

get_inputs_asarray()

Accessor for the inputs as a large numpy array.

get_inputs_by_name(data_names)

Accessor for the inputs as a list.

get_local_data_by_name(data_names)

Accessor for the local data of the discipline as a dict of values.

get_output_data()

Accessor for the output data as a dict of values.

get_output_data_names()

Accessor for the output names as a list.

get_outputs_asarray()

Accessor for the outputs as a large numpy array.

get_outputs_by_name(data_names)

Accessor for the outputs as a list.

get_sub_disciplines()

Get the sub-disciplines of the discipline; empty by default.

is_all_inputs_existing(data_names)

Test if all the names in data_names are inputs of the discipline.

is_all_outputs_existing(data_names)

Test if all the names in data_names are outputs of the discipline.

is_input_existing(data_name)

Test if input named data_name is an input of the discipline.

is_output_existing(data_name)

Test if output named data_name is an output of the discipline.

is_scenario()

Return True if self is a scenario.

linearize([input_data, force_all, force_no_exec])

Execute the linearized version of the code.

notify_status_observers()

Notify all status observers that the status has changed.

remove_status_observer(obs)

Remove an observer for the status.

reset_statuses_for_run()

Set all the statuses to PENDING.

serialize(out_file)

Serialize the discipline.

set_cache_policy([cache_type, …])

Set the type of cache to use and the tolerance level.

set_disciplines_statuses(status)

Set the sub disciplines statuses.

set_jacobian_approximation([…])

Set the jacobian approximation method.

set_optimal_fd_step([outputs, inputs, …])

Compute the optimal finite-difference step.

store_local_data(**kwargs)

Store discipline data in local data.

APPROX_MODES = ['finite_differences', 'complex_step']
AVAILABLE_MODES = ('auto', 'direct', 'adjoint', 'reverse', 'finite_differences', 'complex_step')
COMPLEX_STEP = 'complex_step'
FINITE_DIFFERENCES = 'finite_differences'
HDF5_CACHE = 'HDF5Cache'
JSON_GRAMMAR_TYPE = 'JSON'
MEMORY_FULL_CACHE = 'MemoryFullCache'
N_CPUS = 2
RE_EXECUTE_DONE_POLICY = 'RE_EXEC_DONE'
RE_EXECUTE_NEVER_POLICY = 'RE_EXEC_NEVER'
SIMPLE_CACHE = 'SimpleCache'
SIMPLE_GRAMMAR_TYPE = 'Simple'
STATUS_DONE = 'DONE'
STATUS_FAILED = 'FAILED'
STATUS_PENDING = 'PENDING'
STATUS_RUNNING = 'RUNNING'
STATUS_VIRTUAL = 'VIRTUAL'
classmethod activate_time_stamps()

Activate the time stamps.

For storing start and end times of execution and linearizations.

add_differentiated_inputs(inputs=None)

Add inputs to the differentiation list.

This method updates self._differentiated_inputs with inputs.

Parameters

inputs – the list of input variables to differentiate; if None, all the inputs of the discipline are used (Default value = None)

add_differentiated_outputs(outputs=None)

Add outputs to the differentiation list.

Update self._differentiated_outputs with outputs.

Parameters

outputs – the list of output variables to differentiate; if None, all the outputs of the discipline are used (Default value = None)

add_status_observer(obs)

Add an observer for the status.

Add an observer to be notified when the status of the discipline changes.

Parameters

obs – the observer to add

auto_get_grammar_file(is_input=True, name=None, comp_dir=None)

Use a naming convention to associate a grammar file to a discipline.

This method searches the comp_dir directory, which contains the discipline source file, for files whose base names are self.name + "_input.json" and self.name + "_output.json".

Parameters
  • is_input – if True, searches for _input.json, otherwise _output.json (Default value = True)

  • name – the name of the discipline (Default value = None)

  • comp_dir – the containing directory; if None, use self.comp_dir (Default value = None)

Returns

path to the grammar file

Return type

string

property cache_tol

Accessor to the cache input tolerance.

check_input_data(input_data, raise_exception=True)

Check the input data validity.

Parameters
  • input_data – the input data dict

  • raise_exception – if True, an exception is raised when the data is invalid (Default value = True)

check_jacobian(input_data=None, derr_approx='finite_differences', step=1e-07, threshold=1e-08, linearization_mode='auto', inputs=None, outputs=None, parallel=False, n_processes=2, use_threading=False, wait_time_between_fork=0, auto_set_step=False, plot_result=False, file_path='jacobian_errors.pdf', show=False, figsize_x=10, figsize_y=10)

Check if the jacobian provided by the linearize() method is correct.

Parameters
  • input_data – input data dict (Default value = None)

  • derr_approx – the derivative approximation method, either FINITE_DIFFERENCES or COMPLEX_STEP (Default value = FINITE_DIFFERENCES)

  • threshold – acceptance threshold for the jacobian error (Default value = 1e-8)

  • linearization_mode – the linearization mode: direct, adjoint, or an automatic switch depending on the dimensions of the inputs and outputs (Default value = ‘auto’)

  • inputs – list of inputs wrt which to differentiate (Default value = None)

  • outputs – list of outputs to differentiate (Default value = None)

  • step – the step for finite differences or complex step

  • parallel – if True, executes in parallel

  • n_processes – maximum number of processors on which to run

  • use_threading – if True, use threads instead of processes to parallelize the execution. Multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. Note that to execute the same discipline multiple times, you shall use multiprocessing.

  • wait_time_between_fork – the time to wait between two forks of the process/thread

  • auto_set_step – if True, compute the optimal step for a forward first-order finite-difference gradient approximation

  • plot_result – if True, plot the result of the validation (the computed and approximated Jacobians)

  • file_path – path to the output file if plot_result is True

  • show – if True, open the figure

  • figsize_x – x size of the figure in inches

  • figsize_y – y size of the figure in inches

Returns

True if the check is accepted, False otherwise
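
For instance, the Jacobian of RosenMF can be validated against finite differences; a sketch, assuming RosenMF provides an analytic Jacobian through _compute_jacobian():

    from gemseo.problems.analytical.rosenbrock import RosenMF

    discipline = RosenMF(dimension=2)
    # Compare the analytic Jacobian with a finite-difference approximation.
    ok = discipline.check_jacobian(
        derr_approx="finite_differences", step=1e-7, threshold=1e-8
    )
    print(ok)  # True if the check is accepted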

check_output_data(raise_exception=True)

Check the output data validity.

Parameters

raise_exception – if true, an exception is raised when data is invalid (Default value = True)

classmethod deactivate_time_stamps()

Deactivate the time stamps for storing start and end times of execution and linearizations.

property default_inputs

Accessor to the default inputs.

static deserialize(in_file)

Deserialize the discipline from a file.

Parameters

in_file – the input file from which to deserialize the discipline

Returns

a discipline instance
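
A round-trip sketch combining deserialize() with serialize() (documented further below); the file name is arbitrary:

    from gemseo.problems.analytical.rosenbrock import RosenMF

    discipline = RosenMF(dimension=2)
    discipline.serialize("rosen_mf.pkl")        # write the discipline to a file
    same = RosenMF.deserialize("rosen_mf.pkl")  # rebuild a discipline instance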

property exec_time

Return the cumulated execution time.

Multiprocessing safe.

execute(input_data=None)

Execute the discipline.

This method executes the discipline:

  • Adds the default inputs to input_data for the inputs that are not defined in input_data but exist in self._default_data

  • Checks whether the last execution of the discipline was called with identical inputs, cached in self.cache; if so, directly returns self.cache.get_output_cache(inputs)

  • Caches the inputs

  • Checks the input data against self.input_grammar

  • If self.data_processor is not None, runs the pre-processor

  • Updates the status to RUNNING

  • Calls the _run() method, which shall be defined by subclasses

  • If self.data_processor is not None, runs the post-processor

  • Checks the output data

  • Caches the outputs

  • Updates the status to DONE or FAILED

  • Updates the summed execution time

Parameters

input_data (dict) – the input data dict needed to execute the disciplines according to the discipline input grammar (Default value = None)

Returns

the discipline local data after execution

Return type

dict
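
Because the constructor defines default inputs, execute() may also be called without arguments; the missing inputs are then taken from the defaults. A sketch:

    from gemseo.problems.analytical.rosenbrock import RosenMF

    discipline = RosenMF(dimension=2)
    local_data = discipline.execute()  # missing inputs fall back to the defaults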

get_all_inputs()

Accessor for the input data as a list of values.

The order is given by self.get_input_data_names().

Returns

the data

get_all_outputs()

Accessor for the output data as a list of values.

The order is given by self.get_output_data_names().

Returns

the data

get_attributes_to_serialize()

Define the attributes to be serialized.

Shall be overloaded by disciplines

Returns

the list of attributes names

Return type

list

static get_data_list_from_dict(keys, data_dict)

Filter the dict from a list of keys or a single key.

If keys is a string, the method returns the value associated with that key. If keys is a list of strings, the method returns a generator yielding the values corresponding to the keys.

Parameters
  • keys – a string key or a list of keys

  • data_dict – the dict to get the data from

Returns

a data or a generator of data
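
A sketch of both behaviors; being static, the method can be called on any discipline class:

    from numpy import array

    from gemseo.problems.analytical.rosenbrock import RosenMF

    data = {"x": array([1.0]), "y": array([2.0])}
    x_value = RosenMF.get_data_list_from_dict("x", data)           # a single value
    generator = RosenMF.get_data_list_from_dict(["x", "y"], data)  # a generator
    print(x_value, list(generator))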

get_expected_dataflow()

Return the expected data exchange sequence.

This method is used for the XDSM representation.

Defaults to an empty list; see MDOFormulation.get_expected_dataflow.

Returns

a list representing the data exchange arcs

get_expected_workflow()

Return the expected execution sequence.

This method is used for the XDSM representation. Defaults to the execution of the discipline itself; see MDOFormulation.get_expected_workflow.

get_input_data()

Accessor for the input data as a dict of values.

Returns

the data dict

get_input_data_names()

Accessor for the input names as a list.

Returns

the data names list

get_input_output_data_names()

Accessor for the input and output names as a list.

Returns

the data names list

get_inputs_asarray()

Accessor for the inputs as a large numpy array.

The order is the one of self.get_all_inputs().

Returns

the inputs array

Return type

ndarray

get_inputs_by_name(data_names)

Accessor for the inputs as a list.

Parameters

data_names – the data names list

Returns

the data list

get_local_data_by_name(data_names)

Accessor for the local data of the discipline as a dict of values.

Parameters

data_names – the names of the data which will be the keys of the dictionary

Returns

the data list

get_output_data()

Accessor for the output data as a dict of values.

Returns

the data dict

get_output_data_names()

Accessor for the output names as a list.

Returns

the data names list

get_outputs_asarray()

Accessor for the outputs as a large numpy array.

The order is the one of self.get_all_outputs()

Returns

the outputs array

Return type

ndarray

get_outputs_by_name(data_names)

Accessor for the outputs as a list.

Parameters

data_names – the data names list

Returns

the data list

get_sub_disciplines()

Get the sub-disciplines of the discipline; empty by default.

Returns

the list of disciplines

is_all_inputs_existing(data_names)

Test if all the names in data_names are inputs of the discipline.

Parameters

data_names – the names of the inputs

Returns

True if data_names are all in input grammar

Return type

bool

is_all_outputs_existing(data_names)

Test if all the names in data_names are outputs of the discipline.

Parameters

data_names – the names of the outputs

Returns

True if data_names are all in output grammar

Return type

bool

is_input_existing(data_name)

Test if input named data_name is an input of the discipline.

Parameters

data_name – the name of the input

Returns

True if data_name is in input grammar

Return type

bool

is_output_existing(data_name)

Test if output named data_name is an output of the discipline.

Parameters

data_name – the name of the output

Returns

True if data_name is in output grammar

Return type

bool

static is_scenario()

Return True if self is a scenario.

Returns

True if self is a scenario

property linearization_mode

Accessor to the linearization mode.

linearize(input_data=None, force_all=False, force_no_exec=False)

Execute the linearized version of the code.

Parameters
  • input_data – the input data dict needed to execute the discipline according to the discipline input grammar

  • force_all – if False, self._differentiated_inputs and self._differentiated_outputs are used to filter the differentiated variables; otherwise, all outputs are differentiated wrt all inputs (Default value = False)

  • force_no_exec – if True, the discipline is not re-executed; the cache is loaded anyway
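
A sketch, assuming the RosenMF input name "x" and output name "rosen"; the returned Jacobian is a nested dict indexed by the output name and then the input name:

    from numpy import array

    from gemseo.problems.analytical.rosenbrock import RosenMF

    discipline = RosenMF(dimension=2)
    discipline.add_differentiated_inputs(["x"])
    discipline.add_differentiated_outputs(["rosen"])
    jac = discipline.linearize({"x": array([0.0, 0.0]), "fidelity": array([1.0])})
    print(jac["rosen"]["x"])  # d(rosen)/dx at x = (0, 0)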

property n_calls

Return the number of calls to execute() that triggered the _run() method.

Multiprocessing safe.

property n_calls_linearize

Return the number of calls to linearize() which triggered the _compute_jacobian() method.

Multiprocessing safe.

notify_status_observers()

Notify all status observers that the status has changed.

remove_status_observer(obs)

Remove an observer for the status.

Parameters

obs – the observer to remove

reset_statuses_for_run()

Set all the statuses to PENDING.

serialize(out_file)

Serialize the discipline.

Parameters

out_file – destination file for serialization

set_cache_policy(cache_type='SimpleCache', cache_tolerance=0.0, cache_hdf_file=None, cache_hdf_node_name=None, is_memory_shared=True)

Set the type of cache to use and the tolerance level.

This method sets the cache policy so as to cache data whose inputs are close to inputs whose outputs are already cached. The cache can be either a simple cache recording the last execution or a full cache storing all executions. The cached data can be stored either in memory, e.g. SimpleCache and MemoryFullCache, or on disk, e.g. HDF5Cache. CacheFactory.caches provides the list of available types of caches.

Parameters
  • cache_type (str) – type of cache to use.

  • cache_tolerance (float) – tolerance for the approximate cache maximal relative norm difference to consider that two input arrays are equal

  • cache_hdf_file (str) – the file to store the data, mandatory when HDF caching is used

  • cache_hdf_node_name (str) – name of the HDF dataset to store the discipline data. If None, self.name is used

  • is_memory_shared (bool) – If True, a shared memory dict is used to store the data, which makes the cache compatible with multiprocessing. WARNING: if set to False, and multiple disciplines point to the same cache or the process is multiprocessed, there may be duplicate computations because the cache will not be shared among the processes.
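
For example, a sketch switching to a full in-memory cache with a small input tolerance:

    from gemseo.problems.analytical.rosenbrock import RosenMF

    discipline = RosenMF(dimension=2)
    discipline.set_cache_policy(
        cache_type=RosenMF.MEMORY_FULL_CACHE, cache_tolerance=1e-12
    )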

set_disciplines_statuses(status)

Set the sub disciplines statuses.

To be implemented in subclasses.

Parameters

status – the status

set_jacobian_approximation(jac_approx_type='finite_differences', jax_approx_step=1e-07, jac_approx_n_processes=1, jac_approx_use_threading=False, jac_approx_wait_time=0)

Set the jacobian approximation method.

Sets the linearization mode to the given approximation method and sets the parameters of the approximation for further use when calling self.linearize().

Parameters
  • jac_approx_type – “complex_step” or “finite_differences”

  • jax_approx_step – the step for finite differences or complex step

  • jac_approx_n_processes – maximum number of processors on which to run

  • jac_approx_use_threading – if True, use threads instead of processes to parallelize the execution. Multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. Note that to execute the same discipline multiple times, you shall use multiprocessing.

  • jac_approx_wait_time – the time to wait between two forks of the process/thread
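
A sketch forcing a finite-difference approximation of the Jacobian instead of the analytic one; note that the step argument is spelled jax_approx_step in the signature above:

    from gemseo.problems.analytical.rosenbrock import RosenMF

    discipline = RosenMF(dimension=2)
    discipline.set_jacobian_approximation(
        jac_approx_type=RosenMF.FINITE_DIFFERENCES, jax_approx_step=1e-7
    )
    jac = discipline.linearize(force_all=True)  # approximated, not analytic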

set_optimal_fd_step(outputs=None, inputs=None, force_all=False, print_errors=False, numerical_error=2.220446049250313e-16)

Compute the optimal finite-difference step.

Compute the optimal step for a forward first-order finite-difference gradient approximation. This requires a first evaluation of the perturbed function values. The optimal step is reached when the truncation error (due to cutting the Taylor expansion) and the numerical cancellation error (round-off when computing f(x+step)-f(x)) are approximately equal.

Warning: this calls the discipline execution twice per input variable.

See https://en.wikipedia.org/wiki/Numerical_differentiation and “Numerical Algorithms and Digital Representation”, Knut Mørken, Chapter 11, “Numerical Differentiation”.

Parameters
  • inputs – the inputs with respect to which the linearization is made. If None, use the differentiated inputs

  • outputs – the outputs for which the linearization is made. If None, use the differentiated outputs

  • force_all – if True, all inputs and outputs are used

  • print_errors – if True, display the estimated errors

  • numerical_error – the numerical error associated with the calculation of f. By default, the machine epsilon (approx. 1e-16), but it can be higher when the calculation of f requires a numerical resolution

Returns

the estimated truncation and cancellation errors.

property status

Status accessor.

store_local_data(**kwargs)

Store discipline data in local data.

Parameters

kwargs – the data as key value pairs

time_stamps = None
class gemseo.problems.analytical.rosenbrock.Rosenbrock(n_x=2, l_b=- 2.0, u_b=2.0, scalar_var=False, initial_guess=None)[source]

Bases: gemseo.algos.opt_problem.OptimizationProblem

The Rosenbrock OptimizationProblem uses the Rosenbrock objective function

\[f(x) = \sum_{i=2}^{n_x} 100(x_{i} - x_{i-1}^2)^2 + (1 - x_{i-1})^2\]

with the default DesignSpace \([-2, 2]^{n_x}\).

The constructor initializes the Rosenbrock OptimizationProblem by defining the DesignSpace and the objective function; a usage sketch follows the parameter list.

Parameters
  • n_x (int) – problem dimension

  • l_b (float) – lower bound (common value to all variables)

  • u_b (float) – upper bound (common value to all variables)

  • scalar_var (bool) – if True the design space will contain only scalar variables (as many as the problem dimension); if False the design space will contain a single multidimensional variable (whose size equals the problem dimension)

  • initial_guess (numpy array) – initial guess for optimal solution
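
A typical usage sketch, solving the problem with a gradient-based algorithm (assuming the L-BFGS-B optimizer is available through OptimizersFactory):

    from gemseo.algos.opt.opt_factory import OptimizersFactory
    from gemseo.problems.analytical.rosenbrock import Rosenbrock

    problem = Rosenbrock(n_x=2)
    OptimizersFactory().execute(problem, "L-BFGS-B", max_iter=100)
    f_opt, x_opt, is_feasible, _, _ = problem.get_optimum()
    print(x_opt, f_opt)  # should approach (1, 1) and 0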

Attributes:

AVAILABLE_PB_TYPES

COMPLEX_STEP

CONSTRAINTS_GROUP

DESIGN_SPACE_ATTRS

DESIGN_SPACE_GROUP

DESIGN_VAR_NAMES

DESIGN_VAR_SIZE

DIFFERENTIATION_METHODS

FINITE_DIFFERENCES

FUNCTIONS_ATTRS

GGOBI_FORMAT

HDF5_FORMAT

LINEAR_PB

NON_LINEAR_PB

NO_DERIVATIVES

OBJECTIVE_GROUP

OPTIM_DESCRIPTION

OPT_DESCR_GROUP

SOLUTION_GROUP

USER_GRAD

dimension

The dimension of the design space.

objective

The objective function.

Methods:

add_callback(callback_func[, each_new_iter, …])

Add a callback function after each store operation or new iteration.

add_constraint(cstr_func[, value, …])

Add a constraint (equality and inequality) to the optimization problem.

add_eq_constraint(cstr_func[, value])

Add an equality constraint to the optimization problem.

add_ineq_constraint(cstr_func[, value, positive])

Add an inequality constraint to the optimization problem.

add_new_iter_listener(listener_func)

Add a listener to be called when a new iteration is stored to the database.

add_observable(obs_func[, new_iter])

Add a function to be observed.

add_store_listener(listener_func)

Add a listener to be called when an item is stored to the database.

aggregate_constraint(constr_id[, method, groups])

Aggregate a constraint to generate a reduced-dimension constraint.

change_objective_sign()

Change the objective function sign in order to minimize its opposite.

check()

Check if the optimization problem is ready for run.

check_format(input_function)

Check that a function is an instance of MDOFunction.

clear_listeners()

Clear all the listeners.

evaluate_functions([x_vect, eval_jac, …])

Compute the objective and the constraints.

export_hdf(file_path[, append])

Export the optimization problem to an HDF file.

export_to_dataset(name[, by_group, …])

Export the database of the optimization problem to a Dataset.

get_active_ineq_constraints(x_vect[, tol])

For each constraint, indicate if its different components are active.

get_all_functions()

Retrieve all the functions of the optimization problem.

get_all_functions_names()

Retrieve the names of all the functions of the optimization problem.

get_best_infeasible_point()

Retrieve the best infeasible point within a given tolerance.

get_constraints_names()

Retrieve the names of the constraints.

get_constraints_number()

Retrieve the number of constraints.

get_design_variable_names()

Retrieve the names of the design variables.

get_dimension()

Retrieve the total number of design variables.

get_eq_constraints()

Retrieve all the equality constraints.

get_eq_constraints_number()

Retrieve the number of equality constraints.

get_eq_cstr_total_dim()

Retrieve the total dimension of the equality constraints.

get_feasible_points()

Retrieve the feasible points within a given tolerance.

get_ineq_constraints()

Retrieve all the inequality constraints.

get_ineq_constraints_number()

Retrieve the number of inequality constraints.

get_ineq_cstr_total_dim()

Retrieve the total dimension of the inequality constraints.

get_nonproc_constraints()

Retrieve the non-processed constraints.

get_nonproc_objective()

Retrieve the non-processed objective function.

get_objective_name()

Retrieve the name of the objective function.

get_observable(name)

Retrieve an observable from its name.

get_optimum()

Return the optimum solution within given feasibility tolerances.

get_solution()

Return the theoretical optimal value.

get_violation_criteria(x_vect)

Compute a violation measure associated to an iteration.

get_x0_normalized()

Return the current values of the design variables after normalization.

has_constraints()

Check if the problem has equality or inequality constraints.

has_eq_constraints()

Check if the problem has equality constraints.

has_ineq_constraints()

Check if the problem has inequality constraints.

has_nonlinear_constraints()

Check if the problem has non-linear constraints.

import_hdf(file_path[, x_tolerance])

Import an optimization history from an HDF file.

is_point_feasible(out_val[, constraints])

Check if a point is feasible.

preprocess_functions([normalize, …])

Pre-process all the functions and possibly their gradients.

repr_constraint(func, ctype[, value, positive])

Express a constraint as a string expression.

AVAILABLE_PB_TYPES = ['linear', 'non-linear']
COMPLEX_STEP = 'complex_step'
CONSTRAINTS_GROUP = 'constraints'
DESIGN_SPACE_ATTRS = ['u_bounds', 'l_bounds', 'x_0', 'x_names', 'dimension']
DESIGN_SPACE_GROUP = 'design_space'
DESIGN_VAR_NAMES = 'x_names'
DESIGN_VAR_SIZE = 'x_size'
DIFFERENTIATION_METHODS = ['user', 'complex_step', 'finite_differences', 'no_derivatives']
FINITE_DIFFERENCES = 'finite_differences'
FUNCTIONS_ATTRS = ['objective', 'constraints']
GGOBI_FORMAT = 'ggobi'
HDF5_FORMAT = 'hdf5'
LINEAR_PB = 'linear'
NON_LINEAR_PB = 'non-linear'
NO_DERIVATIVES = 'no_derivatives'
OBJECTIVE_GROUP = 'objective'
OPTIM_DESCRIPTION = ['minimize_objective', 'fd_step', 'differentiation_method', 'pb_type', 'ineq_tolerance', 'eq_tolerance']
OPT_DESCR_GROUP = 'opt_description'
SOLUTION_GROUP = 'solution'
USER_GRAD = 'user'
add_callback(callback_func, each_new_iter=True, each_store=False)

Add a callback function after each store operation or new iteration.

Parameters
  • callback_func (Callable) – A function to be called after some event.

  • each_new_iter (bool) – If True, then callback at every iteration.

  • each_store (bool) – If True, then callback at every call to Database.store.

Return type

None

add_constraint(cstr_func, value=None, cstr_type=None, positive=False)

Add a constraint (equality and inequality) to the optimization problem.

Parameters
  • cstr_func (MDOFunction) – The constraint.

  • value (Optional[float]) – The value for which the constraint is active. If None, this value is 0.

  • cstr_type (Optional[str]) – The type of the constraint. Either equality or inequality.

  • positive (bool) – If True, then the inequality constraint is positive.

Return type

None
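
A sketch adding a hypothetical inequality constraint g(x) = sum(x) - 1 <= 0 to the Rosenbrock problem, wrapped as an MDOFunction:

    from gemseo.core.function import MDOFunction
    from gemseo.problems.analytical.rosenbrock import Rosenbrock

    problem = Rosenbrock(n_x=2)
    # "g" is a hypothetical constraint introduced for illustration only.
    g = MDOFunction(lambda x: x.sum() - 1.0, name="g", f_type="ineq")
    problem.add_constraint(g, cstr_type="ineq")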

add_eq_constraint(cstr_func, value=None)

Add an equality constraint to the optimization problem.

Parameters
  • cstr_func (gemseo.core.function.MDOFunction) – The constraint.

  • value (Optional[float]) – The value for which the constraint is active. If None, this value is 0.

Return type

None

add_ineq_constraint(cstr_func, value=None, positive=False)

Add an inequality constraint to the optimization problem.

Parameters
  • cstr_func (MDOFunction) – The constraint.

  • value (Optional[float]) – The value for which the constraint is active. If None, this value is 0.

  • positive (bool) – If True, then the inequality constraint is positive.

Return type

None

add_new_iter_listener(listener_func)

Add a listener to be called when a new iteration is stored to the database.

Parameters

listener_func (Callable) – The function to be called.

Raises

TypeError – If the argument is not a callable

Return type

None

add_observable(obs_func, new_iter=True)

Add a function to be observed.

Parameters
  • obs_func (gemseo.core.function.MDOFunction) – An observable to be observed.

  • new_iter (bool) – If True, then the observable will be called at each new iterate.

Return type

None

add_store_listener(listener_func)

Add a listener to be called when an item is stored to the database.

Parameters

listener_func (Callable) – The function to be called.

Raises

TypeError – If the argument is not a callable

Return type

None

aggregate_constraint(constr_id, method='max', groups=None, **options)

Aggregate a constraint to generate a reduced-dimension constraint.

Parameters
  • constr_id (int) – index of the constraint in self.constraints

  • method (str or callable that takes a function and returns a function) – the aggregation method, among ‘max’, ‘KS’ and ‘IKS’

  • groups (tuple of ndarray) – if None, a single output constraint is produced; otherwise, one output per group is produced.

change_objective_sign()

Change the objective function sign in order to minimize its opposite.

The OptimizationProblem expresses any optimization problem as a minimization problem. Then, an objective function originally expressed as a performance function to maximize must be converted into a cost function to minimize, by means of this method.

Return type

None

check()

Check if the optimization problem is ready for run.

Raises

ValueError – If the objective function is missing.

Return type

None

static check_format(input_function)

Check that a function is an instance of MDOFunction.

Parameters

input_function – The function to be tested.

Raises

TypeError – If the function is not a MDOFunction.

Return type

None

clear_listeners()

Clear all the listeners.

Return type

None

property dimension

The dimension of the design space.

evaluate_functions(x_vect=None, eval_jac=False, eval_obj=True, normalize=True, no_db_no_norm=False)

Compute the objective and the constraints.

Some optimization libraries require the number of constraints as an input parameter, which is unknown to the formulation or the scenario. Evaluating the initial point provides this mandatory information. This is also used in design of experiments to evaluate samples.

Parameters
  • x_vect (Optional[numpy.ndarray]) – The input vector at which the functions must be evaluated; if None, x_0 is used.

  • eval_jac (bool) – If True, then the Jacobian is evaluated

  • eval_obj (bool) – If True, then the objective function is evaluated

  • normalize (bool) – If True, then the input vector is considered normalized.

  • no_db_no_norm (bool) – If True, then do not use the pre-processed functions, so there is neither database nor normalization.

Returns

The functions values and/or the Jacobian values according to the passed arguments.

Raises

ValueError – If both no_db_no_norm and normalize are True.

Return type

Tuple[Dict[str, Union[float, numpy.ndarray]], Dict[str, numpy.ndarray]]
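
A sketch evaluating the objective and its Jacobian at the unnormalized initial point; the objective name "rosen" is an assumption:

    from gemseo.problems.analytical.rosenbrock import Rosenbrock

    problem = Rosenbrock(n_x=2)
    values, jacobians = problem.evaluate_functions(eval_jac=True, normalize=False)
    print(values["rosen"], jacobians["rosen"])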

export_hdf(file_path, append=False)

Export the optimization problem to an HDF file.

Parameters
  • file_path (str) – The file to store the data.

  • append (bool) – If True, then the data are appended to the file if not empty.

Return type

None

export_to_dataset(name, by_group=True, categorize=True, opt_naming=True, export_gradients=False)

Export the database of the optimization problem to a Dataset.

The variables can be classified into groups, separating the design variables and the functions (objective function and constraints). This classification can use either an optimization naming, with Database.DESIGN_GROUP and Database.FUNCTION_GROUP, or an input-output naming, with Database.INPUT_GROUP and Database.OUTPUT_GROUP.

Parameters
  • name (str) – A name to be given to the dataset.

  • by_group (bool) – If True, then store the data by group. Otherwise, store them by variables.

  • categorize (bool) – If True, then distinguish between the different groups of variables.

  • opt_naming (bool) – If True, then use an optimization naming.

  • export_gradients (bool) – If True, then export also the gradients of the functions (objective function, constraints and observables) if the latter are available in the database of the optimization problem.

Returns

A dataset built from the database of the optimization problem.

Return type

gemseo.core.dataset.Dataset
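
Once the problem has been solved, its database can be converted to a Dataset; a sketch, assuming the L-BFGS-B optimizer and an arbitrary dataset name:

    from gemseo.algos.opt.opt_factory import OptimizersFactory
    from gemseo.problems.analytical.rosenbrock import Rosenbrock

    problem = Rosenbrock(n_x=2)
    OptimizersFactory().execute(problem, "L-BFGS-B", max_iter=100)
    dataset = problem.export_to_dataset("rosenbrock_history")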

get_active_ineq_constraints(x_vect, tol=1e-06)

For each constraint, indicate if its different components are active.

Parameters
  • x_vect (numpy.ndarray) – The vector of design variables.

  • tol (float) – The tolerance for deciding whether a constraint is active.

Returns

For each constraint, a boolean indicator of activation of its different components.

Return type

Dict[str, numpy.ndarray]

get_all_functions()

Retrieve all the functions of the optimization problem.

These functions are the constraints, the objective function and the observables.

Returns

All the functions of the optimization problem.

Return type

List[gemseo.core.function.MDOFunction]

get_all_functions_names()

Retrieve the names of all the functions of the optimization problem.

These functions are the constraints, the objective function and the observables.

Returns

The names of all the functions of the optimization problem.

Return type

List[str]

get_best_infeasible_point()

Retrieve the best infeasible point within a given tolerance.

Returns

The best infeasible point expressed as the design variables values, the objective function value, the feasibility of the point and the functions values.

Return type

Tuple[Optional[numpy.ndarray], Optional[numpy.ndarray], bool, Dict[str, numpy.ndarray]]

get_constraints_names()

Retrieve the names of the constraints.

Returns

The names of the constraints.

Return type

List[str]

get_constraints_number()

Retrieve the number of constraints.

Returns

The number of constraints.

Return type

int

get_design_variable_names()

Retrieve the names of the design variables.

Returns

The names of the design variables.

Return type

List[str]

get_dimension()

Retrieve the total number of design variables.

Returns

The dimension of the design space.

Return type

int

get_eq_constraints()

Retrieve all the equality constraints.

Returns

The equality constraints.

Return type

List[gemseo.core.function.MDOFunction]

get_eq_constraints_number()

Retrieve the number of equality constraints.

Returns

The number of equality constraints.

Return type

int

get_eq_cstr_total_dim()

Retrieve the total dimension of the equality constraints.

This dimension is the sum of all the outputs dimensions of all the equality constraints.

Returns

The total dimension of the equality constraints.

Return type

int

get_feasible_points()

Retrieve the feasible points within a given tolerance.

This tolerance is defined by OptimizationProblem.eq_tolerance for equality constraints and OptimizationProblem.ineq_tolerance for inequality ones.

Returns

The values of the design variables and objective function for the feasible points.

Return type

Tuple[List[numpy.ndarray], List[Dict[str, Union[float, List[int]]]]]

get_ineq_constraints()

Retrieve all the inequality constraints.

Returns

The inequality constraints.

Return type

List[gemseo.core.function.MDOFunction]

get_ineq_constraints_number()

Retrieve the number of inequality constraints.

Returns

The number of inequality constraints.

Return type

int

get_ineq_cstr_total_dim()

Retrieve the total dimension of the inequality constraints.

This dimension is the sum of all the outputs dimensions of all the inequality constraints.

Returns

The total dimension of the inequality constraints.

Return type

int

get_nonproc_constraints()

Retrieve the non-processed constraints.

Returns

The non-processed constraints.

Return type

List[gemseo.core.function.MDOFunction]

get_nonproc_objective()

Retrieve the non-processed objective function.

Return type

gemseo.core.function.MDOFunction

get_objective_name()

Retrieve the name of the objective function.

Returns

The name of the objective function.

Return type

str

get_observable(name)

Retrieve an observable from its name.

Parameters

name (str) – The name of the observable.

Returns

The observable.

Raises

ValueError – If the observable cannot be found.

Return type

gemseo.core.function.MDOFunction

get_optimum()

Return the optimum solution within given feasibility tolerances.

Returns

The optimum result, defined by:

  • the value of the objective function,

  • the value of the design variables,

  • the indicator of feasibility of the optimal solution,

  • the value of the constraints,

  • the value of the gradients of the constraints.

Return type

Tuple[numpy.ndarray, numpy.ndarray, bool, Dict[str, numpy.ndarray], Dict[str, numpy.ndarray]]

get_solution()[source]

Return the theoretical optimal value.

Returns

the design variable values at the theoretical optimum and the objective function value at this optimum

Return type

numpy array
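
A sketch comparing the theoretical optimum with the one found numerically; for the Rosenbrock function, the solution is x* = (1, …, 1) with f(x*) = 0 (the L-BFGS-B optimizer is an assumption):

    from gemseo.algos.opt.opt_factory import OptimizersFactory
    from gemseo.problems.analytical.rosenbrock import Rosenbrock

    problem = Rosenbrock(n_x=2)
    OptimizersFactory().execute(problem, "L-BFGS-B", max_iter=100)
    x_exact, f_exact = problem.get_solution()      # theoretical optimum
    f_opt, x_opt, _, _, _ = problem.get_optimum()  # best point in the database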

get_violation_criteria(x_vect)

Compute a violation measure associated to an iteration.

For each constraint, when it is violated, its absolute violation (in L2 norm) is added to the measure.

If the measure is 0, all the constraints are satisfied.

Parameters

x_vect (numpy.ndarray) – The vector of the design variables values.

Returns

The feasibility of the point and the violation measure.

Return type

Tuple[bool, float]

get_x0_normalized()

Return the current values of the design variables after normalization.

Returns

The current values of the design variables normalized between 0 and 1 from their lower and upper bounds.

Return type

numpy.ndarray

has_constraints()

Check if the problem has equality or inequality constraints.

Returns

True if the problem has equality or inequality constraints.

has_eq_constraints()

Check if the problem has equality constraints.

Returns

True if the problem has equality constraints.

Return type

bool

has_ineq_constraints()

Check if the problem has inequality constraints.

Returns

True if the problem has inequality constraints.

Return type

bool

has_nonlinear_constraints()

Check if the problem has non-linear constraints.

Returns

True if the problem has equality or inequality constraints.

Return type

bool

classmethod import_hdf(file_path, x_tolerance=0.0)

Import an optimization history from an HDF file.

Parameters
  • file_path (str) – The file containing the optimization history.

  • x_tolerance (float) – The tolerance on the design variables when reading the file.

Returns

The read optimization problem.

Return type

gemseo.algos.opt_problem.OptimizationProblem

is_point_feasible(out_val, constraints=None)

Check if a point is feasible.

Note

If the value of a constraint is absent from this point, then this constraint will be considered satisfied.

Parameters
  • out_val (Dict[str, numpy.ndarray]) – The values of the objective function, and possibly of the constraints.

  • constraints (Optional[Iterable[gemseo.core.function.MDOFunction]]) – The constraints whose values are to be tested. If None, then take all constraints of the problem.

Returns

The feasibility of the point.

Return type

bool

property objective

The objective function.

preprocess_functions(normalize=True, use_database=True, round_ints=True)

Pre-process all the functions and possibly their gradients.

This is required to wrap the objective function and the constraints with the database, and possibly to approximate the gradients by complex step or finite differences.

Parameters
  • normalize (bool) – If True, then the functions are normalized.

  • use_database (bool) – If True, then the functions are wrapped in the database.

  • round_ints (bool) – If True, then round the integer variables.

Return type

None

static repr_constraint(func, ctype, value=None, positive=False)

Express a constraint as a string expression.

Parameters
  • func (gemseo.core.function.MDOFunction) – The constraint function.

  • ctype (str) – The type of the constraint. Either equality or inequality.

  • value (Optional[float]) – The value for which the constraint is active. If None, this value is 0.

  • positive (bool) – If True, then the inequality constraint is positive.

Returns

A string representation of the constraint.

Return type

str