.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "examples/mdo/plot_sobieski_use_case.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_examples_mdo_plot_sobieski_use_case.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_examples_mdo_plot_sobieski_use_case.py:

Application: Sobieski's Super-Sonic Business Jet (MDO)
======================================================

.. GENERATED FROM PYTHON SOURCE LINES 26-59

This section describes how to set up and solve the MDO problem relative to the
:ref:`Sobieski test case ` with |g|.

.. seealso::

   To begin with a simpler MDO problem, and for a detailed description of how
   to plug a test case into |g|, see :ref:`sellar_mdo`.

.. _sobieski_use_case:

Solving with an :ref:`MDF formulation `
--------------------------------------------------------

In this example, we solve the range optimization using the following
:ref:`MDF formulation `:

- The :ref:`MDF formulation ` couples all the disciplines during the
  :ref:`mda` at each optimization iteration.
- All the :term:`design variables` are treated equally, concatenated in a
  single vector and given to a single :term:`optimization algorithm` as the
  unknowns of the problem.
- There is no specific :term:`constraint` due to the
  :ref:`MDF formulation `.
- Only the design :term:`constraints` :math:`g\_1`, :math:`g\_2` and
  :math:`g\_3` are added to the problem.
- The :term:`objective function` is the range (the :math:`y\_4` variable in
  the model), computed after the :ref:`mda`.

Imports
-------

All the imports needed for the tutorial are performed here.
Note that ``from __future__ import annotations`` is imported to postpone the
evaluation of type annotations.

.. GENERATED FROM PYTHON SOURCE LINES 59-72

.. code-block:: Python

    from __future__ import annotations

    from gemseo import configure_logger
    from gemseo import create_discipline
    from gemseo import create_scenario
    from gemseo import get_available_formulations
    from gemseo.core.derivatives.jacobian_assembly import JacobianAssembly
    from gemseo.disciplines.utils import get_all_inputs
    from gemseo.disciplines.utils import get_all_outputs
    from gemseo.problems.mdo.sobieski.core.design_space import SobieskiDesignSpace

    configure_logger()

.. GENERATED FROM PYTHON SOURCE LINES 73-79

Step 1: Creation of :class:`.MDODiscipline`
-------------------------------------------

To build the scenario, we first instantiate the disciplines. Here, the
disciplines themselves have already been developed and interfaced with |g|
(see :ref:`benchmark_problems`).

.. GENERATED FROM PYTHON SOURCE LINES 79-87

.. code-block:: Python

    disciplines = create_discipline([
        "SobieskiPropulsion",
        "SobieskiAerodynamics",
        "SobieskiMission",
        "SobieskiStructure",
    ])

.. GENERATED FROM PYTHON SOURCE LINES 88-95

.. tip::

   For disciplines that are not interfaced with |g|, the :mod:`~gemseo` API
   eases their creation without having to import them. See :ref:`api`.

.. GENERATED FROM PYTHON SOURCE LINES 97-113

Step 2: Creation of :class:`.Scenario`
--------------------------------------

The scenario delegates the creation of the optimization problem to the
:ref:`MDO formulation `. Therefore, it needs the list of ``disciplines``, the
name of the formulation, the name of the objective function and the design
space.

- The ``design_space`` (shown below for reference, as ``design_space.csv``)
  defines the unknowns of the optimization problem and their bounds. It
  contains all the design variables needed by the :ref:`MDF formulation `.
  It can be imported from a text file, or created from scratch with the
  methods :func:`.create_design_space` and
  :meth:`~gemseo.algos.design_space.DesignSpace.add_variable`.
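For illustration, a design space could also be built from scratch with these
two functions. The following sketch (not part of the generated example; the
keyword names ``lower_bound``/``upper_bound`` assume a recent |g| version,
older versions use ``l_b``/``u_b``) declares the local design variables of
the Sobieski problem:

.. code-block:: Python

    from gemseo import create_design_space

    design_space = create_design_space()
    # Bounds and initial values taken from the design_space.csv table below.
    design_space.add_variable(
        "x_1", size=2, lower_bound=[0.1, 0.75], upper_bound=[0.4, 1.25], value=[0.25, 1.0]
    )
    design_space.add_variable("x_2", lower_bound=0.75, upper_bound=1.25, value=1.0)
    design_space.add_variable("x_3", lower_bound=0.1, upper_bound=1.0, value=0.5)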
  In this case, we will create it directly from the API.

.. GENERATED FROM PYTHON SOURCE LINES 113-114

.. code-block:: Python

    design_space = SobieskiDesignSpace()

.. GENERATED FROM PYTHON SOURCE LINES 115-148

.. code::

    vi design_space.csv

    name      lower_bound  value          upper_bound  type
    x_shared  0.01         0.05           0.09         float
    x_shared  30000.0      45000.0        60000.0      float
    x_shared  1.4          1.6            1.8          float
    x_shared  2.5          5.5            8.5          float
    x_shared  40.0         55.0           70.0         float
    x_shared  500.0        1000.0         1500.0       float
    x_1       0.1          0.25           0.4          float
    x_1       0.75         1.0            1.25         float
    x_2       0.75         1.0            1.25         float
    x_3       0.1          0.5            1.0          float
    y_14      24850.0      50606.9741711  77100.0      float
    y_14      -7700.0      7306.20262124  45000.0      float
    y_32      0.235        0.50279625     0.795        float
    y_31      2960.0       6354.32430691  10185.0      float
    y_24      0.44         4.15006276     11.13        float
    y_34      0.44         1.10754577     1.98         float
    y_23      3365.0       12194.2671934  26400.0      float
    y_21      24850.0      50606.9741711  77250.0      float
    y_12      24850.0      50606.9742     77250.0      float
    y_12      0.45         0.95           1.5          float

- The available :ref:`MDO formulations ` are located in the
  **gemseo.formulations** package; see :ref:`extending-gemseo` for extending
  GEMSEO with other formulations.
- The ``formulation`` class name (here, ``"MDF"``) shall be passed to the
  scenario to select it.
- The list of available formulations can be obtained by using
  :func:`.get_available_formulations`.

.. GENERATED FROM PYTHON SOURCE LINES 148-149

.. code-block:: Python

    get_available_formulations()

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    ['BiLevel', 'DisciplinaryOpt', 'IDF', 'MDF']

.. GENERATED FROM PYTHON SOURCE LINES 150-154

- :math:`y\_4` corresponds to the ``objective_name``. This name must be one
  of the disciplines' outputs, here an output of the "SobieskiMission"
  discipline. The list of all outputs (resp. inputs) of the disciplines can
  be obtained by using :meth:`~gemseo.disciplines.utils.get_all_outputs`
  (resp. :meth:`~gemseo.disciplines.utils.get_all_inputs`):

.. GENERATED FROM PYTHON SOURCE LINES 154-156

.. code-block:: Python

    get_all_outputs(disciplines)
    get_all_inputs(disciplines)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    ['c_0', 'c_1', 'c_2', 'c_3', 'c_4', 'x_1', 'x_2', 'x_3', 'x_shared', 'y_12', 'y_14', 'y_21', 'y_23', 'y_24', 'y_31', 'y_32', 'y_34']

.. GENERATED FROM PYTHON SOURCE LINES 157-160

From these :class:`~gemseo.core.discipline.MDODiscipline` instances, the
design space, the :ref:`MDO formulation ` name and the objective function
name, we build the scenario:

.. GENERATED FROM PYTHON SOURCE LINES 160-167

.. code-block:: Python

    scenario = create_scenario(
        disciplines,
        "MDF",
        "y_4",
        design_space,
        maximize_objective=True,
    )

.. GENERATED FROM PYTHON SOURCE LINES 168-182

The range function (:math:`y\_4`) should be maximized. However, optimizers
minimize functions by default, which is why, when creating the scenario, the
argument ``maximize_objective`` shall be set to ``True``.

Scenario options
~~~~~~~~~~~~~~~~

We may provide additional options to the scenario:

**Function derivatives.** As analytical disciplinary derivatives are
available for the Sobieski test case, they can be used instead of computing
the derivatives with finite differences or with the complex-step method. The
easiest way to set a method is to let the optimizer determine it:

.. GENERATED FROM PYTHON SOURCE LINES 182-183

.. code-block:: Python

    scenario.set_differentiation_method()

.. GENERATED FROM PYTHON SOURCE LINES 184-209

The default behavior of the optimizer triggers :term:`finite differences`.
It corresponds to:

.. code::

    scenario.set_differentiation_method("finite_differences", 1e-7)

It is also possible to differentiate functions by means of the
:term:`complex step` method:

.. code::

    scenario.set_differentiation_method("complex_step", 1e-30j)

Constraints
~~~~~~~~~~~

Similarly to the objective function, the constraint names are a subset of the
disciplines' outputs. They can be obtained by using
:meth:`~gemseo.disciplines.utils.get_all_outputs`.
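As a quick sanity check (an illustrative snippet, not part of the generated
example), one can verify that the intended constraint names are indeed
computed by at least one discipline:

.. code-block:: Python

    from gemseo import create_discipline
    from gemseo.disciplines.utils import get_all_outputs

    disciplines = create_discipline([
        "SobieskiPropulsion",
        "SobieskiAerodynamics",
        "SobieskiMission",
        "SobieskiStructure",
    ])
    # The design constraints g_1, g_2 and g_3 must appear among the outputs.
    assert {"g_1", "g_2", "g_3"}.issubset(get_all_outputs(disciplines))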
The formulation has a powerful feature to automatically dispatch the
constraints (:math:`g\_1, g\_2, g\_3`) and plug them to the optimizers,
depending on the formulation. To do that, we use the method
:meth:`~gemseo.core.scenario.Scenario.add_constraint`:

.. GENERATED FROM PYTHON SOURCE LINES 210-212

.. code-block:: Python

    for constraint in ["g_1", "g_2", "g_3"]:
        scenario.add_constraint(constraint, constraint_type="ineq")

.. GENERATED FROM PYTHON SOURCE LINES 213-218

Step 3: Execution and visualization of the results
--------------------------------------------------

The algorithm arguments are provided as a dictionary to the execution method
of the scenario:

.. GENERATED FROM PYTHON SOURCE LINES 218-219

.. code-block:: Python

    algo_args = {"max_iter": 10, "algo": "SLSQP"}

.. GENERATED FROM PYTHON SOURCE LINES 220-230

.. warning::

   The mandatory arguments are the maximum number of iterations and the
   algorithm name. Any other option of the optimization algorithm can be
   prescribed through the argument ``algo_options`` with a dictionary, e.g.
   :code:`algo_args = {"max_iter": 10, "algo": "SLSQP", "algo_options": {"ftol_rel": 1e-6}}`.
   The list of available algorithm options is detailed here:
   :ref:`gen_opt_algos`.

The scenario is executed by means of the line:

.. GENERATED FROM PYTHON SOURCE LINES 230-231

.. code-block:: Python

    scenario.execute(algo_args)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    INFO - 08:56:19:
    INFO - 08:56:19: *** Start MDOScenario execution ***
    INFO - 08:56:19: MDOScenario
    INFO - 08:56:19:    Disciplines: SobieskiAerodynamics SobieskiMission SobieskiPropulsion SobieskiStructure
    INFO - 08:56:19:    MDO formulation: MDF
    INFO - 08:56:19: Optimization problem:
    INFO - 08:56:19:    minimize -y_4(x_shared, x_1, x_2, x_3)
    INFO - 08:56:19:    with respect to x_1, x_2, x_3, x_shared
    INFO - 08:56:19:    subject to constraints:
    INFO - 08:56:19:       g_1(x_shared, x_1, x_2, x_3) <= 0.0
    INFO - 08:56:19:       g_2(x_shared, x_1, x_2, x_3) <= 0.0
    INFO - 08:56:19:       g_3(x_shared, x_1, x_2, x_3) <= 0.0
    INFO - 08:56:19:    over the design space:
    INFO - 08:56:19:       +-------------+-------------+-------+-------------+-------+
    INFO - 08:56:19:       |    Name     | Lower bound | Value | Upper bound | Type  |
    INFO - 08:56:19:       +-------------+-------------+-------+-------------+-------+
    INFO - 08:56:19:       | x_shared[0] | 0.01        | 0.05  | 0.09        | float |
    INFO - 08:56:19:       | x_shared[1] | 30000       | 45000 | 60000       | float |
    INFO - 08:56:19:       | x_shared[2] | 1.4         | 1.6   | 1.8         | float |
    INFO - 08:56:19:       | x_shared[3] | 2.5         | 5.5   | 8.5         | float |
    INFO - 08:56:19:       | x_shared[4] | 40          | 55    | 70          | float |
    INFO - 08:56:19:       | x_shared[5] | 500         | 1000  | 1500        | float |
    INFO - 08:56:19:       | x_1[0]      | 0.1         | 0.25  | 0.4         | float |
    INFO - 08:56:19:       | x_1[1]      | 0.75        | 1     | 1.25        | float |
    INFO - 08:56:19:       | x_2         | 0.75        | 1     | 1.25        | float |
    INFO - 08:56:19:       | x_3         | 0.1         | 0.5   | 1           | float |
    INFO - 08:56:19:       +-------------+-------------+-------+-------------+-------+
    INFO - 08:56:19: Solving optimization problem with algorithm SLSQP:
    INFO - 08:56:19:     10%|█         | 1/10 [00:00<00:00, 10.95 it/sec, obj=-536]
    INFO - 08:56:19:     20%|██        | 2/10 [00:00<00:01, 7.84 it/sec, obj=-2.12e+3]
    WARNING - 08:56:19: MDAJacobi has reached its maximum number of iterations but the normed residual 1.7130677857005655e-05 is still above the tolerance 1e-06.
    INFO - 08:56:19:     30%|███       | 3/10 [00:00<00:01, 6.57 it/sec, obj=-3.75e+3]
    INFO - 08:56:19:     40%|████      | 4/10 [00:00<00:00, 6.29 it/sec, obj=-3.96e+3]
    INFO - 08:56:20:     50%|█████     | 5/10 [00:00<00:00, 6.14 it/sec, obj=-3.96e+3]
    INFO - 08:56:20: Optimization result:
    INFO - 08:56:20:    Optimizer info:
    INFO - 08:56:20:       Status: 8
    INFO - 08:56:20:       Message: Positive directional derivative for linesearch
    INFO - 08:56:20:       Number of calls to the objective function by the optimizer: 6
    INFO - 08:56:20:    Solution:
    INFO - 08:56:20:       The solution is feasible.
    INFO - 08:56:20:       Objective: -3963.408265187933
    INFO - 08:56:20:       Standardized constraints:
    INFO - 08:56:20:          g_1 = [-0.01806104 -0.03334642 -0.04424946 -0.0518346  -0.05732607 -0.13720865
    INFO - 08:56:20:           -0.10279135]
    INFO - 08:56:20:          g_2 = 3.333278582928756e-06
    INFO - 08:56:20:          g_3 = [-7.67181773e-01 -2.32818227e-01  8.30379541e-07 -1.83255000e-01]
    INFO - 08:56:20:       Design space:
    INFO - 08:56:20:          +-------------+-------------+---------------------+-------------+-------+
    INFO - 08:56:20:          |    Name     | Lower bound |        Value        | Upper bound | Type  |
    INFO - 08:56:20:          +-------------+-------------+---------------------+-------------+-------+
    INFO - 08:56:20:          | x_shared[0] | 0.01        | 0.06000083331964572 | 0.09        | float |
    INFO - 08:56:20:          | x_shared[1] | 30000       | 60000               | 60000       | float |
    INFO - 08:56:20:          | x_shared[2] | 1.4         | 1.4                 | 1.8         | float |
    INFO - 08:56:20:          | x_shared[3] | 2.5         | 2.5                 | 8.5         | float |
    INFO - 08:56:20:          | x_shared[4] | 40          | 70                  | 70          | float |
    INFO - 08:56:20:          | x_shared[5] | 500         | 1500                | 1500        | float |
    INFO - 08:56:20:          | x_1[0]      | 0.1         | 0.4                 | 0.4         | float |
    INFO - 08:56:20:          | x_1[1]      | 0.75        | 0.75                | 1.25        | float |
    INFO - 08:56:20:          | x_2         | 0.75        | 0.75                | 1.25        | float |
    INFO - 08:56:20:          | x_3         | 0.1         | 0.1562448753887276  | 1           | float |
    INFO - 08:56:20:          +-------------+-------------+---------------------+-------------+-------+
    INFO - 08:56:20: *** End MDOScenario execution (time: 0:00:00.930844) ***

    {'max_iter': 10, 'algo': 'SLSQP'}

.. GENERATED FROM PYTHON SOURCE LINES 232-239

Post-processing options
~~~~~~~~~~~~~~~~~~~~~~~

A whole variety of visualizations may be displayed for both MDO and DOE
scenarios. These features are illustrated on the SSBJ use case in
:ref:`post_processing`.

To visualize the optimization history:

.. GENERATED FROM PYTHON SOURCE LINES 239-241

.. code-block:: Python

    scenario.post_process("OptHistoryView", save=False, show=True)

.. rst-class:: sphx-glr-horizontal

    *

      .. image-sg:: /examples/mdo/images/sphx_glr_plot_sobieski_use_case_001.png
         :alt: Evolution of the optimization variables
         :srcset: /examples/mdo/images/sphx_glr_plot_sobieski_use_case_001.png
         :class: sphx-glr-multi-img

    *

      .. image-sg:: /examples/mdo/images/sphx_glr_plot_sobieski_use_case_002.png
         :alt: Evolution of the objective value
         :srcset: /examples/mdo/images/sphx_glr_plot_sobieski_use_case_002.png
         :class: sphx-glr-multi-img

    *

      .. image-sg:: /examples/mdo/images/sphx_glr_plot_sobieski_use_case_003.png
         :alt: Distance to the optimum
         :srcset: /examples/mdo/images/sphx_glr_plot_sobieski_use_case_003.png
         :class: sphx-glr-multi-img

    *

      .. image-sg:: /examples/mdo/images/sphx_glr_plot_sobieski_use_case_004.png
         :alt: Hessian diagonal approximation
         :srcset: /examples/mdo/images/sphx_glr_plot_sobieski_use_case_004.png
         :class: sphx-glr-multi-img

    *

      .. image-sg:: /examples/mdo/images/sphx_glr_plot_sobieski_use_case_005.png
         :alt: Evolution of the inequality constraints
         :srcset: /examples/mdo/images/sphx_glr_plot_sobieski_use_case_005.png
         :class: sphx-glr-multi-img

.. GENERATED FROM PYTHON SOURCE LINES 242-250

Influence of gradient computation method on performance
-------------------------------------------------------

As mentioned in :ref:`jacobian_assembly`, several methods are available to
compute the gradients: classical finite differences, complex step, and
:ref:`mda` linearization in direct or adjoint mode.
These modes are automatically selected by |g| to minimize the CPU time. Yet,
they can be forced on demand in each :ref:`mda`:

.. GENERATED FROM PYTHON SOURCE LINES 250-252

.. code-block:: Python

    scenario.formulation.mda.linearization_mode = JacobianAssembly.DerivationMode.DIRECT
    scenario.formulation.mda.matrix_type = JacobianAssembly.JacobianType.LINEAR_OPERATOR

.. GENERATED FROM PYTHON SOURCE LINES 253-258

The method used to solve the adjoint or direct linear problem may also be
selected. |g| can either assemble a sparse residual Jacobian matrix of the
:ref:`mda` from the disciplines' matrices. This has the advantage that LU
factorizations may be stored to solve multiple right-hand-side problems
cheaply, but it requires extra memory.

.. GENERATED FROM PYTHON SOURCE LINES 258-260

.. code-block:: Python

    scenario.formulation.mda.matrix_type = JacobianAssembly.JacobianType.MATRIX
    scenario.formulation.mda.use_lu_fact = True

.. GENERATED FROM PYTHON SOURCE LINES 261-265

Alternatively, |g| can implicitly create a matrix-vector product operator,
which is sufficient for GMRES-like solvers. It avoids creating an additional
data structure, and can also be mandatory if the disciplines do not provide
full Jacobian matrices but only matrix-vector product operators.

.. GENERATED FROM PYTHON SOURCE LINES 265-266

.. code-block:: Python

    scenario.formulation.mda.matrix_type = JacobianAssembly.JacobianType.LINEAR_OPERATOR

.. GENERATED FROM PYTHON SOURCE LINES 267-288

The next table shows the performance of each method for solving the Sobieski
use case with the :ref:`MDF ` and :ref:`IDF ` formulations. The efficiency of
linearization is clearly visible, as it takes 10 to 20 times less CPU time to
compute the analytic derivatives of an :ref:`mda` than with finite
differences or the complex step. For :ref:`IDF `, the improvement is smaller,
but direct linearization is still more than 2.5 times faster than the other
methods.

.. tabularcolumns:: |l|c|c|

+-----------------------+------------------------------+------------------------------+
|                       |                      Execution time (s)                     |
+   Derivation Method   +------------------------------+------------------------------+
|                       | :ref:`MDF `                  | :ref:`IDF `                  |
+=======================+==============================+==============================+
| Finite differences    | 8.22                         | 1.93                         |
+-----------------------+------------------------------+------------------------------+
| Complex step          | 18.11                        | 2.07                         |
+-----------------------+------------------------------+------------------------------+
| Linearized (direct)   | 0.90                         | 0.68                         |
+-----------------------+------------------------------+------------------------------+

.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (0 minutes 2.009 seconds)

.. _sphx_glr_download_examples_mdo_plot_sobieski_use_case.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_sobieski_use_case.ipynb <plot_sobieski_use_case.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_sobieski_use_case.py <plot_sobieski_use_case.py>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_