gemseo.formulations.base_formulation module#

The base class for all formulations.

class BaseFormulation(disciplines, objective_name, design_space, settings_model=None, **settings)[source]#

Bases: Generic[T]

Base MDO formulation class to be extended in subclasses for use.

This class creates the MDOFunction instances computing the constraints, objective and observables from the disciplines and adds them to the attached optimization_problem.

It defines the multidisciplinary process, i.e. dataflow and workflow, implicitly.

By default,

  • the objective is minimized,

  • the type of a constraint is equality,

  • the activation value of a constraint is 0.

The link between the instances of Discipline, the design variables and the names of the discipline outputs used as constraints, objective and observables is made with the DisciplineAdapterGenerator, which generates instances of MDOFunction from the disciplines.

Parameters:
  • disciplines (Sequence[Discipline]) -- The disciplines.

  • objective_name (str | Sequence[str]) -- The name(s) of the discipline output(s) used as objective. If multiple names are passed, the objective will be a vector.

  • design_space (DesignSpace) -- The design space.

  • settings_model (T | None) -- The settings of the formulation as a Pydantic model. If None, use **settings.

  • **settings (Any) -- The settings of the formulation. This argument is ignored when settings_model is not None.
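
BaseFormulation is abstract, so a formulation is normally built through a concrete subclass. A minimal sketch, assuming the MDF subclass from gemseo.formulations.mdf and the top-level helpers create_discipline and create_design_space; the disciplines, expressions, variable names and keyword arguments (e.g. lower_bound/upper_bound) are illustrative and may differ between GEMSEO versions:

from gemseo import create_design_space, create_discipline
from gemseo.formulations.mdf import MDF  # a concrete BaseFormulation subclass

# Two analytic disciplines coupled through "y_1" and sharing the inputs "x" and "z".
disc_1 = create_discipline(
    "AnalyticDiscipline", name="disc_1", expressions={"y_1": "2*x + z"}
)
disc_2 = create_discipline(
    "AnalyticDiscipline",
    name="disc_2",
    expressions={"obj": "y_1 + x**2", "c_1": "x - 0.5", "c_2": "1 - y_1"},
)

design_space = create_design_space()
design_space.add_variable("x", lower_bound=0.0, upper_bound=1.0, value=0.5)
design_space.add_variable("z", lower_bound=0.0, upper_bound=1.0, value=0.5)

# The formulation builds the MDOFunctions and the attached optimization_problem.
formulation = MDF([disc_1, disc_2], "obj", design_space)
problem = formulation.optimization_problem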

abstract add_constraint(output_name, constraint_type=ConstraintType.EQ, constraint_name='', value=0, positive=False)[source]#

Add an equality or inequality constraint to the optimization problem.

An equality constraint is written as \(c(x)=a\), a positive inequality constraint is written as \(c(x)\geq a\) and a negative inequality constraint is written as \(c(x)\leq a\).

This constraint is in addition to those created by the formulation, e.g. consistency constraints in IDF.

How the constraints are distributed is defined by the formulation.

Parameters:
  • output_name (str | Sequence[str]) -- The name(s) of the outputs computed by \(c(x)\). If several names are given, a single discipline must provide all outputs.

  • constraint_type (MDOFunction.ConstraintType) --

    The type of constraint.

    By default it is set to "eq".

  • constraint_name (str) --

    The name of the constraint to be stored. If empty, the name of the constraint is generated from output_name, constraint_type, value and positive.

    By default it is set to "".

  • value (float) --

    The value \(a\).

    By default it is set to 0.

  • positive (bool) --

    Whether the inequality constraint is positive.

    By default it is set to False.

Return type:

None
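
Continuing the sketch above, and assuming the string values "eq"/"ineq" of MDOFunction.ConstraintType are accepted directly:

# c_1(x) = 0.5: an equality constraint on the output "c_1".
formulation.add_constraint("c_1", value=0.5)

# c_2(x) <= 0: a negative inequality constraint on the output "c_2".
formulation.add_constraint("c_2", constraint_type="ineq", positive=False)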

abstract add_observable(output_names, observable_name='', discipline=None)[source]#

Add an observable to the optimization problem.

How the observable is distributed is defined in the formulation class.

Parameters:
  • output_names (str | Sequence[str]) -- The name(s) of the output(s) to observe.

  • observable_name (str) --

    The name of the observable. If empty, the output name is used by default.

    By default it is set to "".

  • discipline (Discipline | None) -- The discipline computing the observed outputs. If None, the discipline is detected from inner disciplines.

Return type:

None
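
For instance, to record the coupling output "y_1" of the sketch above at each iteration without constraining it:

formulation.add_observable("y_1", observable_name="y_1_observed")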

classmethod get_default_sub_option_values(**options)[source]#

Return the default values of the sub-options of the formulation.

When some options of the formulation depend on higher level options, the default values of these sub-options may be obtained here, mainly for use in the API.

Parameters:

**options (str) -- The options required to deduce the sub-options grammar.

Returns:

Either None or the sub-options default values.

Return type:

StrKeyMapping

get_optim_variable_names()[source]#

Get the names of the optimization unknowns to be provided to the optimizer.

These names differ from the design variable names provided by the user, since they depend on the formulation and can include, for instance, the coupling target values used by IDF.

Returns:

The optimization variable names.

Return type:

list[str]
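
In the MDF sketch above, the optimization variables are expected to coincide with the design variables; with IDF they would also include the coupling targets mentioned in the docstring:

formulation.get_optim_variable_names()
# expected: ["x", "z"]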

classmethod get_sub_options_grammar(**options)[source]#

Get the sub-options grammar.

When some options of the formulation depend on higher level options, the schema of the sub-options may be obtained here, mainly for use in the API.

Parameters:

**options (str) -- The options required to deduce the sub-options grammar.

Returns:

Either None or the sub-options grammar.

Return type:

JSONGrammar

get_sub_scenarios()[source]#

List the disciplines that are actually scenarios.

Returns:

The scenarios.

Return type:

list[BaseScenario]
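
The result is empty unless some of the wrapped disciplines are themselves scenarios, as in a bi-level setup:

# No discipline of the MDF sketch above is a scenario, so the list is empty.
formulation.get_sub_scenarios()
# expected: []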

abstract get_top_level_disciplines()[source]#

Return the disciplines whose inputs are required to run the scenario.

A formulation seeks to compute the objective and constraints from the input variables. It structures the optimization problem into multiple levels of disciplines. The disciplines directly depending on these inputs are called top level disciplines.

By default, this method returns all disciplines. It can be overridden by subclasses.

Returns:

The top level disciplines.

Return type:

tuple[Discipline, ...]
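
A hedged illustration with the sketch above; for MDF the top level is expected to be a single process discipline (an MDA-based chain) wrapping disc_1 and disc_2, but the exact class is an implementation detail:

top_level = formulation.get_top_level_disciplines()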

get_x_mask_x_swap_order(masking_data_names, all_data_names=())[source]#

Mask a vector from a subset of names, with respect to a set of names.

If the order of the data names is inconsistent between these sets, this method also swaps the order of the values accordingly.

Parameters:
  • masking_data_names (Iterable[str]) -- The names of the kept data.

  • all_data_names (Iterable[str]) --

    The set of all names. If empty, use the design variables stored in the design space.

    By default it is set to ().

Returns:

The masked version of the input vector.

Raises:

ValueError -- If the sizes of the variables are inconsistent.

Return type:

ndarray

get_x_names_of_disc(discipline)[source]#

Get the design variables names of a given discipline.

Parameters:

discipline (Discipline) -- The discipline.

Returns:

The names of the design variables.

Return type:

list[str]

mask_x_swap_order(masking_data_names, x_vect, all_data_names=())[source]#

Mask a vector from a subset of names, with respect to a set of names.

If the order of the data names is inconsistent between these sets, this method also swaps the order of the values accordingly.

Parameters:
  • masking_data_names (Iterable[str]) -- The names of the kept data.

  • x_vect (ndarray) -- The vector to mask.

  • all_data_names (Iterable[str]) --

    The set of all names. If empty, use the design variables stored in the design space.

    By default it is set to ().

Returns:

The masked version of the input vector.

Raises:

IndexError -- If the sizes of the variables are inconsistent.

Return type:

ndarray
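
A sketch of the masking semantics with the formulation built above, whose design space is ordered as ("x", "z"); the expected values reflect this reading of the docstring, where the output follows the order of masking_data_names:

from numpy import array

x_vect = array([0.25, 0.75])  # full design vector [x, z]

# Keep "z" then "x": the values are reordered to follow masking_data_names.
formulation.mask_x_swap_order(["z", "x"], x_vect)
# expected: array([0.75, 0.25])

# Keep only "z".
masked_z = formulation.mask_x_swap_order(["z"], x_vect)
# expected: array([0.75])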

unmask_x_swap_order(masking_data_names, x_masked, all_data_names=(), x_full=None)[source]#

Unmask a vector or matrix from names, with respect to other names.

If the order of the data names is inconsistent between these sets, this method also swaps the order of the values accordingly.

Parameters:
  • masking_data_names (Iterable[str]) -- The names of the kept data.

  • x_masked (ndarray) -- The vector or matrix to unmask.

  • all_data_names (Iterable[str]) --

    The set of all names. If empty, use the design variables stored in the design space.

    By default it is set to ().

  • x_full (ndarray | None) -- The default values for the full vector or matrix. If None, use the zero vector or matrix.

Returns:

The vector or matrix related to the input mask.

Raises:

IndexError -- If the sizes of the variables are inconsistent.

Return type:

ndarray
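
Continuing the masking sketch, unmasking scatters the kept values back into a full-size vector, filling the other entries with zeros, or with x_full when given (expected values under the same reading of the docstring):

formulation.unmask_x_swap_order(["z"], masked_z)
# expected: array([0.0, 0.75])

formulation.unmask_x_swap_order(["z"], masked_z, x_full=x_vect)
# expected: array([0.25, 0.75])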

DEFAULT_SCENARIO_RESULT_CLASS_NAME: ClassVar[str] = 'ScenarioResult'#

The name of the ScenarioResult class to be used for post-processing.

Settings: ClassVar[type[T]]#

The Pydantic model class for the settings of the formulation.

property design_space: DesignSpace#

The design space on which the formulation is applied.

property differentiated_input_names_substitute: Sequence[str]#

The names of the inputs against which to differentiate the functions.

If empty, the variables of the functions' input space are considered.

property disciplines: tuple[Discipline, ...]#

The disciplines.

optimization_problem: OptimizationProblem#

The optimization problem generated by the formulation from the disciplines.

variable_sizes: dict[str, int]#

The sizes of the design variables and of the differentiated input substitutes.