Application: Sobieski's SuperSonic Business Jet (MDO)#
This section describes how to set up and solve the MDO problem relative to the Sobieski test case with GEMSEO.
See also
To begin with a simpler MDO problem and a detailed description of how to plug a test case into GEMSEO, see Tutorial: How to solve an MDO problem.
Solving with an MDF formulation#
In this example, we solve the range optimization using the following MDF formulation:
The MDF formulation couples all the disciplines during the Multi Disciplinary Analyses at each optimization iteration.
All the design variables are equally treated, concatenated in a single vector and given to a single optimization algorithm as the unknowns of the problem.
There is no specific constraint due to the MDF formulation.
Only the design constraints \(g_1\), \(g_2\) and \(g_3\) are added to the problem.
The objective function is the range (the \(y_4\) variable in the model), computed after the Multi Disciplinary Analyses.
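The coupling loop at the heart of MDF can be sketched in plain Python (this is an illustration with a hypothetical pair of toy disciplines, not the Sobieski model; the coupling names y_12 and y_21 are borrowed from the Sobieski variable naming): for each design point, the MDA resolves the coupling variables by fixed-point iteration before the objective is evaluated.

```python
# Plain-Python sketch of an MDF-style MDA (toy disciplines, not Sobieski):
# the coupling variables y_12 and y_21 are resolved by fixed-point
# (Gauss-Seidel) iteration for a given design variable x.
def discipline_1(x, y_21):
    """Hypothetical toy discipline computing the coupling y_12."""
    return 0.5 * y_21 + x

def discipline_2(x, y_12):
    """Hypothetical toy discipline computing the coupling y_21."""
    return 0.5 * y_12 - x

def mda(x, tol=1e-12, max_iter=100):
    """Solve the couplings for the design point x by fixed-point iteration."""
    y_12 = y_21 = 0.0
    for _ in range(max_iter):
        y_12_new = discipline_1(x, y_21)
        y_21_new = discipline_2(x, y_12_new)
        if abs(y_12_new - y_12) + abs(y_21_new - y_21) < tol:
            break
        y_12, y_21 = y_12_new, y_21_new
    return y_12_new, y_21_new

y_12, y_21 = mda(1.0)  # converges to the analytic fixed point (2/3, -2/3)
```

An MDF-based optimizer would call such an `mda` at every iteration, so all disciplines stay consistent at each evaluated design point.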
Imports#
All the imports needed for the tutorial are performed here. Note that the from __future__ import annotations import enables the postponed evaluation of type annotations.
from __future__ import annotations
from gemseo import configure_logger
from gemseo import create_discipline
from gemseo import create_scenario
from gemseo import get_available_formulations
from gemseo.core.derivatives.jacobian_assembly import JacobianAssembly
from gemseo.disciplines.utils import get_all_inputs
from gemseo.disciplines.utils import get_all_outputs
from gemseo.problems.mdo.sobieski.core.design_space import SobieskiDesignSpace
configure_logger()
<RootLogger root (INFO)>
Step 1: Creation of the disciplines#
To build the scenario, we first instantiate the disciplines. Here, the disciplines themselves have already been developed and interfaced with GEMSEO (see Benchmark problems).
disciplines = create_discipline([
"SobieskiPropulsion",
"SobieskiAerodynamics",
"SobieskiMission",
"SobieskiStructure",
])
Tip
For the disciplines that are not interfaced with GEMSEO, the high-level functions of the gemseo package ease the creation of disciplines without having to import them.
See API reference.
Step 2: Creation of the scenario#
The scenario delegates the creation of the optimization problem to the MDO formulation. Therefore, it needs the list of disciplines, the name of the formulation, the name of the objective function and the design space.
The design_space (shown below for reference as design_space.csv) defines the unknowns of the optimization problem and their bounds. It contains all the design variables needed by the MDF formulation. It can be imported from a text file, or created from scratch with the methods create_design_space() and add_variable(). In this case, we will create it directly from the API.
design_space = SobieskiDesignSpace()
design_space.csv
name lower_bound value upper_bound type
x_shared 0.01 0.05 0.09 float
x_shared 30000.0 45000.0 60000.0 float
x_shared 1.4 1.6 1.8 float
x_shared 2.5 5.5 8.5 float
x_shared 40.0 55.0 70.0 float
x_shared 500.0 1000.0 1500.0 float
x_1 0.1 0.25 0.4 float
x_1 0.75 1.0 1.25 float
x_2 0.75 1.0 1.25 float
x_3 0.1 0.5 1.0 float
y_14 24850.0 50606.9741711 77100.0 float
y_14 -7700.0 7306.20262124 45000.0 float
y_32 0.235 0.50279625 0.795 float
y_31 2960.0 6354.32430691 10185.0 float
y_24 0.44 4.15006276 11.13 float
y_34 0.44 1.10754577 1.98 float
y_23 3365.0 12194.2671934 26400.0 float
y_21 24850.0 50606.9741711 77250.0 float
y_12 24850.0 50606.9742 77250.0 float
y_12 0.45 0.95 1.5 float
The available MDO formulations are located in the gemseo.formulations package, see Extend GEMSEO features for extending GEMSEO with other formulations.
The formulation class name (here, "MDF") shall be passed to the scenario to select it. The list of available formulations can be obtained by using get_available_formulations():
get_available_formulations()
['BiLevel', 'DisciplinaryOpt', 'IDF', 'MDF']
\(y_4\) corresponds to the objective_name. This name must be one of the disciplines' outputs, here an output of the "SobieskiMission" discipline. The lists of all outputs and inputs of the disciplines can be obtained by using get_all_outputs() and get_all_inputs():
get_all_outputs(disciplines)
get_all_inputs(disciplines)
['c_0', 'c_1', 'c_2', 'c_3', 'c_4', 'x_1', 'x_2', 'x_3', 'x_shared', 'y_12', 'y_14', 'y_21', 'y_23', 'y_24', 'y_31', 'y_32', 'y_34']
From these disciplines, the design space, the MDO formulation name and the objective function name, we build the scenario:
scenario = create_scenario(
disciplines,
"y_4",
design_space,
formulation_name="MDF",
maximize_objective=True,
)
WARNING  08:34:17: Unsupported feature 'minItems' in JSONGrammar 'SobieskiMission_discipline_output' for property 'y_4' in conversion to SimpleGrammar.
WARNING  08:34:17: Unsupported feature 'maxItems' in JSONGrammar 'SobieskiMission_discipline_output' for property 'y_4' in conversion to SimpleGrammar.
The range function (\(y_4\)) should be maximized. However, optimizers minimize functions by default, which is why the argument maximize_objective shall be set to True when creating the scenario.
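This convention can be illustrated with a plain-Python sketch (not GEMSEO code): minimizing the negated function yields the same solution as maximizing the function itself.

```python
# Plain-Python illustration (not GEMSEO code): the maximizer of f is
# the minimizer of -f, which is conceptually how maximize_objective=True
# is handled by optimizers that only minimize.
def f(x):
    return -((x - 3.0) ** 2) + 5.0  # toy function with maximum at x = 3

grid = [i / 100.0 for i in range(601)]  # coarse search grid on [0, 6]
x_max = max(grid, key=f)
x_min_of_neg = min(grid, key=lambda x: -f(x))
# both searches land on the same point, x = 3.0
```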
Scenario options#
We may provide additional options to the scenario:
Function derivatives. As analytical disciplinary derivatives are available for the Sobieski test case, they can be used instead of computing the derivatives with finite differences or with the complex-step method. The easiest way to set a method is to let the optimizer determine it:
scenario.set_differentiation_method()
The default behavior of the optimizer triggers finite differences. It corresponds to:
scenario.set_differentiation_method("finite_differences", 1e-7)
It is also possible to differentiate functions by means of the complex-step method:
scenario.set_differentiation_method("complex_step", 1e-30j)
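The idea behind the complex-step method can be sketched in plain Python (an illustration, not GEMSEO internals): evaluating \(f(x + ih)\) and taking the imaginary part divided by \(h\) gives the derivative without the subtractive cancellation that limits the accuracy of finite differences.

```python
import cmath
import math

# Complex-step differentiation demo on a smooth test function
# (illustration only, not GEMSEO internals).
def f(x):
    return cmath.exp(x) / cmath.sqrt(x)

x0 = 1.5
exact = math.exp(x0) * (x0 ** -0.5 - 0.5 * x0 ** -1.5)  # analytic derivative

fd = (f(x0 + 1e-7).real - f(x0).real) / 1e-7  # finite differences, step 1e-7
cs = f(x0 + 1e-30j).imag / 1e-30              # complex step, step 1e-30
```

Because no difference of nearly equal numbers is taken, the complex step can use a tiny step (here 1e-30) and reach machine precision, whereas the finite-difference step must balance truncation against round-off error.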
Constraints#
Similarly to the objective function, the constraint names are a subset of the disciplines' outputs. They can be obtained by using get_all_outputs().
The formulation has a powerful feature to automatically dispatch the constraints (\(g_1\), \(g_2\), \(g_3\)) and plug them to the optimizers depending on the formulation. To do that, we use the method add_constraint():
for constraint in ["g_1", "g_2", "g_3"]:
    scenario.add_constraint(constraint, constraint_type="ineq")
Step 3: Execution and visualization of the results#
The scenario is executed from an optimization algorithm name (see Optimization algorithms), a maximum number of iterations and possibly a few options. The maximum number of iterations and the options can be passed either as keyword arguments, e.g. scenario.execute(algo_name="SLSQP", max_iter=10, ftol_rel=1e-6), or as a Pydantic model of settings, e.g. scenario.execute(NLOPTSLSQPSettings(max_iter=10, ftol_rel=1e-6)), where the Pydantic model NLOPTSLSQPSettings is imported from gemseo.settings.opt.
In this example, we do not use any option:
scenario.execute(algo_name="SLSQP", max_iter=10)
INFO  08:34:17:
INFO  08:34:17: *** Start MDOScenario execution ***
INFO  08:34:17: MDOScenario
INFO  08:34:17: Disciplines: SobieskiAerodynamics SobieskiMission SobieskiPropulsion SobieskiStructure
INFO  08:34:17: MDO formulation: MDF
INFO  08:34:17: Optimization problem:
INFO  08:34:17: minimize y_4(x_shared, x_1, x_2, x_3)
INFO  08:34:17: with respect to x_1, x_2, x_3, x_shared
INFO  08:34:17: subject to constraints:
INFO  08:34:17: g_1(x_shared, x_1, x_2, x_3) <= 0
INFO  08:34:17: g_2(x_shared, x_1, x_2, x_3) <= 0
INFO  08:34:17: g_3(x_shared, x_1, x_2, x_3) <= 0
INFO  08:34:17: over the design space:
INFO  08:34:17: ++++++
INFO  08:34:17:  Name  Lower bound  Value  Upper bound  Type 
INFO  08:34:17: ++++++
INFO  08:34:17:  x_shared[0]  0.01  0.05  0.09  float 
INFO  08:34:17:  x_shared[1]  30000  45000  60000  float 
INFO  08:34:17:  x_shared[2]  1.4  1.6  1.8  float 
INFO  08:34:17:  x_shared[3]  2.5  5.5  8.5  float 
INFO  08:34:17:  x_shared[4]  40  55  70  float 
INFO  08:34:17:  x_shared[5]  500  1000  1500  float 
INFO  08:34:17:  x_1[0]  0.1  0.25  0.4  float 
INFO  08:34:17:  x_1[1]  0.75  1  1.25  float 
INFO  08:34:17:  x_2  0.75  1  1.25  float 
INFO  08:34:17:  x_3  0.1  0.5  1  float 
INFO  08:34:17: ++++++
INFO  08:34:17: Solving optimization problem with algorithm SLSQP:
INFO  08:34:17: 10%█  1/10 [00:00<00:00, 17.21 it/sec, obj=536]
INFO  08:34:17: 20%██  2/10 [00:00<00:00, 12.59 it/sec, obj=2.12e+3]
WARNING  08:34:18: MDAJacobi has reached its maximum number of iterations but the normed residual 5.741449586530469e-06 is still above the tolerance 1e-06.
INFO  08:34:18: 30%███  3/10 [00:00<00:00, 10.20 it/sec, obj=3.46e+3]
INFO  08:34:18: 40%████  4/10 [00:00<00:00, 9.72 it/sec, obj=3.96e+3]
INFO  08:34:18: 50%█████  5/10 [00:00<00:00, 9.90 it/sec, obj=4.61e+3]
INFO  08:34:18: 60%██████  6/10 [00:00<00:00, 10.56 it/sec, obj=4.5e+3]
INFO  08:34:18: 70%███████  7/10 [00:00<00:00, 10.93 it/sec, obj=4.26e+3]
INFO  08:34:18: 80%████████  8/10 [00:00<00:00, 11.22 it/sec, obj=4.11e+3]
INFO  08:34:18: 90%█████████  9/10 [00:00<00:00, 11.48 it/sec, obj=4.02e+3]
INFO  08:34:18: 100%██████████ 10/10 [00:00<00:00, 11.69 it/sec, obj=3.99e+3]
INFO  08:34:18: Optimization result:
INFO  08:34:18: Optimizer info:
INFO  08:34:18: Status: None
INFO  08:34:18: Message: Maximum number of iterations reached. GEMSEO stopped the driver.
INFO  08:34:18: Number of calls to the objective function by the optimizer: 12
INFO  08:34:18: Solution:
INFO  08:34:18: The solution is feasible.
INFO  08:34:18: Objective: 3463.120411437138
INFO  08:34:18: Standardized constraints:
INFO  08:34:18: g_1 = [-0.01112145 -0.02847064 -0.04049911 -0.04878943 -0.05476349 -0.14014207
INFO  08:34:18:  -0.09985793]
INFO  08:34:18: g_2 = -0.0020925663903177405
INFO  08:34:18: g_3 = [-0.71359843 -0.28640157 -0.05926796 -0.183255  ]
INFO  08:34:18: Design space:
INFO  08:34:18: ++++++
INFO  08:34:18:  Name  Lower bound  Value  Upper bound  Type 
INFO  08:34:18: ++++++
INFO  08:34:18:  x_shared[0]  0.01  0.05947685840242058  0.09  float 
INFO  08:34:18:  x_shared[1]  30000  59246.692998739  60000  float 
INFO  08:34:18:  x_shared[2]  1.4  1.4  1.8  float 
INFO  08:34:18:  x_shared[3]  2.5  2.64097355362077  8.5  float 
INFO  08:34:18:  x_shared[4]  40  69.32144380869019  70  float 
INFO  08:34:18:  x_shared[5]  500  1478.031626737187  1500  float 
INFO  08:34:18:  x_1[0]  0.1  0.4  0.4  float 
INFO  08:34:18:  x_1[1]  0.75  0.7608797907508461  1.25  float 
INFO  08:34:18:  x_2  0.75  0.7607584987262048  1.25  float 
INFO  08:34:18:  x_3  0.1  0.1514057659459843  1  float 
INFO  08:34:18: ++++++
INFO  08:34:18: *** End MDOScenario execution (time: 0:00:00.866657) ***
Postprocessing options#
A whole variety of visualizations may be displayed for both MDO and DOE scenarios. These features are illustrated on the SSBJ use case in How to deal with postprocessing.
To visualize the optimization history:
scenario.post_process(post_name="OptHistoryView", save=False, show=True)
<gemseo.post.opt_history_view.OptHistoryView object at 0x7f18ab19af70>
Influence of gradient computation method on performance#
As mentioned in Coupled derivatives and gradients computation, several methods are available in order to perform the gradient computations: classical finite differences, complex step and Multi Disciplinary Analyses linearization in direct or adjoint mode. These modes are automatically selected by GEMSEO to minimize the CPU time. Yet, they can be forced on demand in each Multi Disciplinary Analysis:
scenario.formulation.mda.linearization_mode = JacobianAssembly.DerivationMode.DIRECT
scenario.formulation.mda.matrix_type = JacobianAssembly.JacobianType.LINEAR_OPERATOR
The method used to solve the adjoint or direct linear problem may also be selected. GEMSEO can assemble a sparse residual Jacobian matrix of the Multi Disciplinary Analyses from the disciplines' matrices. This has the advantage that LU factorizations may be stored to solve multiple right-hand-side problems cheaply, but it requires extra memory.
scenario.formulation.mda.matrix_type = JacobianAssembly.JacobianType.MATRIX
scenario.formulation.mda.use_lu_fact = True
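The memory-for-speed trade-off behind use_lu_fact can be sketched in plain Python on a toy system (an illustration, not GEMSEO internals): the matrix is factored once, and the stored L and U factors are reused for every right-hand side.

```python
# Plain-Python sketch of LU reuse (Doolittle factorization, no pivoting,
# illustration only): factor A once, then solve several right-hand sides
# cheaply with forward/backward substitution.
def lu_factor(A):
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):             # forward substitution: L y = b
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):   # backward substitution: U x = y
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
L, U = lu_factor(A)                # factor once (extra memory for L and U)...
x1 = lu_solve(L, U, [1.0, 2.0])    # ...then each new right-hand side
x2 = lu_solve(L, U, [5.0, 4.0])    # costs only two triangular solves
```

Each additional solve is only a pair of cheap triangular sweeps, which is why storing the factorization pays off when many right-hand sides share the same matrix.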
Alternatively, GEMSEO can implicitly create a matrix-vector product operator, which is sufficient for GMRES-like solvers. It avoids creating an additional data structure. This can also be mandatory if the disciplines do not provide full Jacobian matrices but only matrix-vector product operators.
scenario.formulation.mda.matrix_type = JacobianAssembly.JacobianType.LINEAR_OPERATOR
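A minimal matrix-free sketch in plain Python (not GEMSEO internals): here a conjugate-gradient solver, standing in for a GMRES-like Krylov method on a symmetric positive-definite toy system, consumes only a matrix-vector product operator and never assembles the matrix.

```python
# Plain-Python sketch of a matrix-free Krylov solve (illustration only):
# the solver sees the matrix only through a matvec closure.
def matvec(v):
    """Stand-in for an implicitly defined Jacobian-vector product."""
    return [4.0 * v[0] + 1.0 * v[1], 1.0 * v[0] + 3.0 * v[1]]

def conjugate_gradient(matvec, b, tol=1e-12, max_iter=50):
    """Solve A x = b for symmetric positive-definite A, given only matvec."""
    x = [0.0] * len(b)
    r = b[:]                 # residual b - A x (x starts at zero)
    p = r[:]                 # search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(p)       # the only access to the matrix
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol ** 2:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

x = conjugate_gradient(matvec, [5.0, 4.0])  # solves A x = b without forming A
```

Nothing but `matvec` is ever required, which is exactly why this mode works when disciplines only expose matrix-vector product operators.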
The next table shows the performance of each method for solving the Sobieski use case with the MDF and IDF formulations. The efficiency of linearization is clearly visible, as it takes 10 to 20 times less CPU time to compute the analytic derivatives of a Multi Disciplinary Analysis than with finite differences or the complex step. For IDF, the improvements are smaller, but direct linearization is still more than 2.5 times faster than the other methods.
Derivation method     Execution time (s)
                      MDF      IDF
Finite differences    8.22     1.93
Complex step          18.11    2.07
Linearized (direct)   0.90     0.68
Total running time of the script: (0 minutes 2.031 seconds)