..
   Copyright 2021 IRT Saint-Exupéry, https://www.irt-saintexupery.com
   This work is licensed under the Creative Commons Attribution-ShareAlike 4.0
   International License. To view a copy of this license, visit
   http://creativecommons.org/licenses/by-sa/4.0/ or send a letter to
   Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.

.. _gen_formulation_algos:

MDO formulations
================

.. warning::
   Some capabilities may require the installation of |g| with all its features and some others may depend on plugins.

.. warning::
   All the features of the wrapped libraries may not be exposed through |g|.

.. note::
   The algorithm settings can be passed to a function of the form

   .. code-block:: python

      function(..., settings_model: AlgorithmSettings | None = None, **settings: Any)

   either one by one:

   .. code-block:: python

      function(..., setting_name_1=setting_name_1, setting_name_2=setting_name_2, ...)

   or using the argument name ``"settings_model"`` and the Pydantic model associated with the algorithm:

   .. code-block:: python

      settings_model = AlgorithmSettings(setting_name_1=setting_name_1, setting_name_2=setting_name_2, ...)
      function(..., settings_model=settings_model)

.. _BiLevel_options:

BiLevel
-------

Module: :class:`gemseo.formulations.bilevel`

.. code-block:: python
   :caption: Pydantic model of the settings for BiLevel

   from gemseo.settings.formulations import BiLevel_Settings

**Optional settings**

  • **apply_cstr_to_system** : *bool, optional* Whether the :meth:`.add_constraint` method adds the constraint to the optimization problem of the system scenario. By default it is set to True.
  • **apply_cstr_tosub_scenarios** : *bool, optional* Whether the :meth:`.add_constraint` method adds the constraint to the optimization problem of the sub-scenario capable of computing the constraint. By default it is set to True.
  • **differentiated_input_names_substitute** : *collections.abc.Sequence[str], optional* The names of the discipline inputs with respect to which to differentiate the discipline outputs used as objective, constraints and observables. If empty, consider the inputs of these functions. More precisely, for each function, an :class:`.MDOFunction` is built from the ``disciplines``, which depend on input variables :math:`x_1,\ldots,x_d,x_{d+1}`, and over an input space spanned by the input variables :math:`x_1,\ldots,x_d` and depending on both the MDO formulation and the ``design_space``. Then, the methods :meth:`.MDOFunction.evaluate` and :meth:`.MDOFunction.jac` are called at a given point of the input space and return the output value and the Jacobian matrix, i.e. the matrix concatenating the partial derivatives with respect to the inputs :math:`x_1,\ldots,x_d` at this point of the input space. This argument can be used to compute the matrix concatenating the partial derivatives at the same point of the input space but with respect to custom inputs, e.g. :math:`x_{d-1}` and :math:`x_{d+1}`. Mathematically speaking, this matrix returned by :meth:`.MDOFunction.jac` is no longer a Jacobian. By default it is set to ().
  • **keep_opt_history** : *bool, optional* Whether to keep database copies of the sub-scenario adapters after each execution. Depending on the size of the databases and the number of consecutive executions, this can be very memory consuming. If the adapter is executed in parallel, the databases will not be saved to the main process by the sub-processes, so this setting should be set to ``False`` to avoid unnecessary memory use in the sub-processes. By default it is set to True.
  • **main_mda_name** : *str, optional* The name of the class of the main MDA. Typically the :class:`.MDAChain`, but one can force the use of :class:`.MDAGaussSeidel`, for instance. By default it is set to MDAChain.
  • **main_mda_settings** : *collections.abc.Mapping[str, typing.Any] | gemseo.mda.base_mda_settings.BaseMDASettings, optional* The settings of the main MDA. These settings may include those of the inner-MDA. By default it is set to {}.
  • **multithread_scenarios** : *bool, optional* If ``True`` and ``parallel_scenarios=True``, the sub-scenarios are run in parallel using multi-threading; if ``False`` and ``parallel_scenarios=True``, multiprocessing is used. By default it is set to True.
  • **naming** : *, optional* The way of naming the database files. When the adapter is executed in parallel, this setting shall be set to ``UUID``, which is multiprocess-safe. By default it is set to NUMBERED.
  • **parallel_scenarios** : *bool, optional* Whether to run the sub-scenarios in parallel. By default it is set to False.
  • **reset_x0_before_opt** : *bool, optional* Whether to restart the sub-optimizations from the initial guesses; otherwise, warm-start them. By default it is set to False.
  • **save_opt_history** : *bool, optional* Whether to save the optimization history to an HDF5 file after each execution. By default it is set to False.
  • **sub_scenarios_log_level** : *int | None, optional* The level of the root logger during the sub-scenarios executions. If ``None``, do not change the level of the root logger. By default it is set to None.
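
As an illustration of the settings above, here is a minimal sketch building the Pydantic settings model for BiLevel with a few non-default values. The string value for ``naming`` is assumed to be accepted in place of the corresponding enumeration member, and the commented line only recalls the generic ``settings_model`` pattern from the note at the top of this page; the exact argument name of the scenario-creating function is an assumption.

.. code-block:: python
   :caption: Minimal sketch of a BiLevel settings model

   from gemseo.settings.formulations import BiLevel_Settings

   # Only the settings that differ from their defaults need to be passed.
   settings_model = BiLevel_Settings(
       parallel_scenarios=True,  # run the sub-scenarios in parallel
       keep_opt_history=False,  # do not keep sub-scenario database copies in memory
       save_opt_history=True,  # save each optimization history to an HDF5 file instead
       naming="UUID",  # multiprocess-safe naming of the database files
   )

   # The model is then passed to the function creating the scenario, e.g.
   # create_scenario(..., formulation_settings_model=settings_model)  # assumed argument name
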
.. _BiLevelBCD_options:

BiLevelBCD
----------

Module: :class:`gemseo.formulations.bilevel_bcd`

.. code-block:: python
   :caption: Pydantic model of the settings for BiLevelBCD

   from gemseo.settings.formulations import BiLevel_BCD_Settings

**Optional settings**

  • **apply_cstr_to_system** : *bool, optional* Whether the :meth:`.add_constraint` method adds the constraint to the optimization problem of the system scenario. By default it is set to True.
  • **apply_cstr_tosub_scenarios** : *bool, optional* Whether the :meth:`.add_constraint` method adds the constraint to the optimization problem of the sub-scenario capable of computing the constraint. By default it is set to True.
  • **bcd_mda_settings** : *, optional* The settings for the MDA used in the BCD method. By default it is set to ``coupling_structure=None``, ``linear_solver_settings={}``, ``linear_solver_tolerance=1e-12``, ``log_convergence=False``, ``max_mda_iter=20``, ``max_consecutive_unsuccessful_iterations=8``, ``name=''``, ``tolerance=1e-06``, ``use_lu_fact=False``, ``warm_start=True`` and ``over_relaxation_factor=1.0``, with the default ``linear_solver`` and ``acceleration_method``.
  • **differentiated_input_names_substitute** : *collections.abc.Sequence[str], optional* The names of the discipline inputs with respect to which to differentiate the discipline outputs used as objective, constraints and observables. If empty, consider the inputs of these functions. More precisely, for each function, an :class:`.MDOFunction` is built from the ``disciplines``, which depend on input variables :math:`x_1,\ldots,x_d,x_{d+1}`, and over an input space spanned by the input variables :math:`x_1,\ldots,x_d` and depending on both the MDO formulation and the ``design_space``. Then, the methods :meth:`.MDOFunction.evaluate` and :meth:`.MDOFunction.jac` are called at a given point of the input space and return the output value and the Jacobian matrix, i.e. the matrix concatenating the partial derivatives with respect to the inputs :math:`x_1,\ldots,x_d` at this point of the input space. This argument can be used to compute the matrix concatenating the partial derivatives at the same point of the input space but with respect to custom inputs, e.g. :math:`x_{d-1}` and :math:`x_{d+1}`. Mathematically speaking, this matrix returned by :meth:`.MDOFunction.jac` is no longer a Jacobian. By default it is set to ().
  • **keep_opt_history** : *bool, optional* Whether to keep database copies of the sub-scenario adapters after each execution. Depending on the size of the databases and the number of consecutive executions, this can be very memory consuming. If the adapter is executed in parallel, the databases will not be saved to the main process by the sub-processes, so this setting should be set to ``False`` to avoid unnecessary memory use in the sub-processes. By default it is set to True.
  • **main_mda_name** : *str, optional* The name of the class of the main MDA. Typically the :class:`.MDAChain`, but one can force the use of :class:`.MDAGaussSeidel`, for instance. By default it is set to MDAChain.
  • **main_mda_settings** : *collections.abc.Mapping[str, typing.Any] | gemseo.mda.base_mda_settings.BaseMDASettings, optional* The settings of the main MDA. These settings may include those of the inner-MDA. By default it is set to {}.
  • **multithread_scenarios** : *bool, optional* If ``True`` and ``parallel_scenarios=True``, the sub-scenarios are run in parallel using multi-threading; if ``False`` and ``parallel_scenarios=True``, multiprocessing is used. By default it is set to True.
  • **naming** : *, optional* The way of naming the database files. When the adapter is executed in parallel, this setting shall be set to ``UUID``, which is multiprocess-safe. By default it is set to NUMBERED.
  • **parallel_scenarios** : *bool, optional* Whether to run the sub-scenarios in parallel. By default it is set to False.
  • **reset_x0_before_opt** : *bool, optional* Whether to restart the sub-optimizations from the initial guesses; otherwise, warm-start them. By default it is set to False.
  • **save_opt_history** : *bool, optional* Whether to save the optimization history to an HDF5 file after each execution. By default it is set to False.
  • **sub_scenarios_log_level** : *int | None, optional* The level of the root logger during the sub-scenarios executions. If ``None``, do not change the level of the root logger. By default it is set to None.
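
The BCD variant shares most of its settings with BiLevel. A minimal sketch, assuming that ``main_mda_settings`` is given as a plain mapping (as allowed by its documented type), with setting names taken from the MDA defaults listed above:

.. code-block:: python
   :caption: Minimal sketch of a BiLevelBCD settings model

   from gemseo.settings.formulations import BiLevel_BCD_Settings

   # Only non-default values are given; the other settings keep the defaults listed above.
   settings_model = BiLevel_BCD_Settings(
       main_mda_settings={"tolerance": 1e-8, "max_mda_iter": 30},  # tighten the main MDA
       reset_x0_before_opt=True,  # restart each sub-optimization from its initial guess
   )
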
.. _DisciplinaryOpt_options:

DisciplinaryOpt
---------------

Module: :class:`gemseo.formulations.disciplinary_opt`

.. code-block:: python
   :caption: Pydantic model of the settings for DisciplinaryOpt

   from gemseo.settings.formulations import DisciplinaryOpt_Settings

**Optional settings**

  • **differentiated_input_names_substitute** : *collections.abc.Sequence[str], optional* The names of the discipline inputs with respect to which to differentiate the discipline outputs used as objective, constraints and observables. If empty, consider the inputs of these functions. More precisely, for each function, an :class:`.MDOFunction` is built from the ``disciplines``, which depend on input variables :math:`x_1,\ldots,x_d,x_{d+1}`, and over an input space spanned by the input variables :math:`x_1,\ldots,x_d` and depending on both the MDO formulation and the ``design_space``. Then, the methods :meth:`.MDOFunction.evaluate` and :meth:`.MDOFunction.jac` are called at a given point of the input space and return the output value and the Jacobian matrix, i.e. the matrix concatenating the partial derivatives with respect to the inputs :math:`x_1,\ldots,x_d` at this point of the input space. This argument can be used to compute the matrix concatenating the partial derivatives at the same point of the input space but with respect to custom inputs, e.g. :math:`x_{d-1}` and :math:`x_{d+1}`. Mathematically speaking, this matrix returned by :meth:`.MDOFunction.jac` is no longer a Jacobian. By default it is set to ().
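
For instance, to differentiate the objective, constraints and observables with respect to custom inputs instead of the design variables, the substitute input names can be given as follows; this is a minimal sketch, and the names ``"a"`` and ``"b"`` are hypothetical placeholders for inputs of the disciplines at hand.

.. code-block:: python
   :caption: Minimal sketch of a DisciplinaryOpt settings model

   from gemseo.settings.formulations import DisciplinaryOpt_Settings

   # "a" and "b" are placeholders for discipline input names.
   settings_model = DisciplinaryOpt_Settings(
       differentiated_input_names_substitute=["a", "b"],
   )
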
.. _IDF_options:

IDF
---

Module: :class:`gemseo.formulations.idf`

.. code-block:: python
   :caption: Pydantic model of the settings for IDF

   from gemseo.settings.formulations import IDF_Settings

**Optional settings**

  • **differentiated_input_names_substitute** : *collections.abc.Sequence[str], optional* The names of the discipline inputs with respect to which to differentiate the discipline outputs used as objective, constraints and observables. If empty, consider the inputs of these functions. More precisely, for each function, an :class:`.MDOFunction` is built from the ``disciplines``, which depend on input variables :math:`x_1,\ldots,x_d,x_{d+1}`, and over an input space spanned by the input variables :math:`x_1,\ldots,x_d` and depending on both the MDO formulation and the ``design_space``. Then, the methods :meth:`.MDOFunction.evaluate` and :meth:`.MDOFunction.jac` are called at a given point of the input space and return the output value and the Jacobian matrix, i.e. the matrix concatenating the partial derivatives with respect to the inputs :math:`x_1,\ldots,x_d` at this point of the input space. This argument can be used to compute the matrix concatenating the partial derivatives at the same point of the input space but with respect to custom inputs, e.g. :math:`x_{d-1}` and :math:`x_{d+1}`. Mathematically speaking, this matrix returned by :meth:`.MDOFunction.jac` is no longer a Jacobian. By default it is set to ().
  • **mda_chain_settings_for_start_at_equilibrium** : *collections.abc.Mapping[str, typing.Any] | gemseo.mda.mda_chain_settings.MDAChain_Settings, optional* The settings for the MDA when ``start_at_equilibrium=True``. See detailed settings in :class:`.MDAChain`. By default it is set to {}.
  • **n_processes** : *int, optional* The maximum number of simultaneous threads, if ``use_threading`` is ``True``, or processes otherwise, used to parallelize the execution. By default it is set to 1.
  • **normalize_constraints** : *bool, optional* Whether the outputs of the coupling consistency constraints are scaled. By default it is set to True.
  • **start_at_equilibrium** : *bool, optional* Whether an MDA is used to initialize the coupling variables. By default it is set to False.
  • **use_threading** : *bool, optional* Whether to use threads instead of processes to parallelize the execution; multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. Note that if the same discipline is to be executed multiple times, multiprocessing shall be used. By default it is set to True.
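
A minimal sketch of an IDF settings model that initializes the coupling variables with an MDA and parallelizes the discipline executions with processes rather than threads; the setting passed through ``mda_chain_settings_for_start_at_equilibrium`` is assumed to follow the setting names of :class:`.MDAChain`.

.. code-block:: python
   :caption: Minimal sketch of an IDF settings model

   from gemseo.settings.formulations import IDF_Settings

   settings_model = IDF_Settings(
       start_at_equilibrium=True,  # initialize the coupling variables with an MDA
       mda_chain_settings_for_start_at_equilibrium={"tolerance": 1e-8},
       n_processes=4,  # parallelize the discipline executions
       use_threading=False,  # use processes, e.g. when the same discipline is executed several times
   )
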
.. _MDF_options:

MDF
---

Module: :class:`gemseo.formulations.mdf`

.. code-block:: python
   :caption: Pydantic model of the settings for MDF

   from gemseo.settings.formulations import MDF_Settings

**Optional settings**

  • **differentiated_input_names_substitute** : *collections.abc.Sequence[str], optional* The names of the discipline inputs with respect to which to differentiate the discipline outputs used as objective, constraints and observables. If empty, consider the inputs of these functions. More precisely, for each function, an :class:`.MDOFunction` is built from the ``disciplines``, which depend on input variables :math:`x_1,\ldots,x_d,x_{d+1}`, and over an input space spanned by the input variables :math:`x_1,\ldots,x_d` and depending on both the MDO formulation and the ``design_space``. Then, the methods :meth:`.MDOFunction.evaluate` and :meth:`.MDOFunction.jac` are called at a given point of the input space and return the output value and the Jacobian matrix, i.e. the matrix concatenating the partial derivatives with respect to the inputs :math:`x_1,\ldots,x_d` at this point of the input space. This argument can be used to compute the matrix concatenating the partial derivatives at the same point of the input space but with respect to custom inputs, e.g. :math:`x_{d-1}` and :math:`x_{d+1}`. Mathematically speaking, this matrix returned by :meth:`.MDOFunction.jac` is no longer a Jacobian. By default it is set to ().
  • **main_mda_name** : *str, optional* The name of the class of the main MDA. Typically the :class:`.MDAChain`, but one can force the use of :class:`.MDAGaussSeidel`, for instance. By default it is set to MDAChain.
  • **main_mda_settings** : *collections.abc.Mapping[str, typing.Any] | gemseo.mda.base_mda_settings.BaseMDASettings, optional* The settings of the main MDA. These settings may include those of the inner-MDA. By default it is set to {}.
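
A minimal sketch of an MDF settings model replacing the default main MDA by :class:`.MDAGaussSeidel` and tuning it through ``main_mda_settings`` given as a plain mapping; as in the note at the top of this page, the commented line only illustrates how the model would then be passed to the scenario-creating function, whose argument name is an assumption.

.. code-block:: python
   :caption: Minimal sketch of an MDF settings model

   from gemseo.settings.formulations import MDF_Settings

   settings_model = MDF_Settings(
       main_mda_name="MDAGaussSeidel",  # use a Gauss-Seidel MDA instead of the default MDAChain
       main_mda_settings={"tolerance": 1e-8, "max_mda_iter": 50},
   )

   # e.g. create_scenario(..., formulation_settings_model=settings_model)  # assumed argument name
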