MDF-based DOE on the Sobieski SSBJ test case¶
from __future__ import division, unicode_literals
from matplotlib import pyplot as plt
from gemseo.api import configure_logger, create_discipline, create_scenario
from gemseo.problems.sobieski.core import SobieskiProblem
configure_logger()
Out:
<RootLogger root (INFO)>
Instantiate the disciplines¶
First, we instantiate the four disciplines of the use case: SobieskiPropulsion, SobieskiAerodynamics, SobieskiMission and SobieskiStructure.
disciplines = create_discipline(
[
"SobieskiPropulsion",
"SobieskiAerodynamics",
"SobieskiMission",
"SobieskiStructure",
]
)
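Before building the scenario, you can quickly check which variables each discipline consumes and produces. The snippet below is a minimal sketch, assuming the disciplines expose the usual get_input_data_names() and get_output_data_names() accessors:
# Sketch: list the inputs and outputs of each instantiated discipline
for discipline in disciplines:
    print(discipline.name)
    print("  inputs: ", discipline.get_input_data_names())
    print("  outputs:", discipline.get_output_data_names())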
Build, execute and post-process the scenario¶
Then, we build the scenario, which links the disciplines with the formulation and the DOE algorithm. Here, we use the MDF formulation. By default, the scenario minimizes the objective y_4 (the range); we tell it to maximize y_4 instead, which is done internally by minimizing -y_4.
We need to define the design space.
design_space = SobieskiProblem().read_design_space()
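The design space gathers the bounds, types and current values of the design variables x_shared, x_1, x_2 and x_3. A minimal sketch to inspect it, assuming the DesignSpace object provides a tabular string representation:
# Sketch: display the design variables, their bounds and current values
print(design_space)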
Instantiate the scenario¶
scenario = create_scenario(
disciplines,
formulation="MDF",
objective_name="y_4",
design_space=design_space,
maximize_objective=True,
scenario_type="DOE",
)
Set the design constraints¶
for constraint in ["g_1", "g_2", "g_3"]:
scenario.add_constraint(constraint, "ineq")
Execute the scenario¶
Use the analytic derivatives provided by the disciplines
scenario.set_differentiation_method("user")
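If a discipline did not provide analytic derivatives, the gradients could be approximated instead. The line below is a minimal sketch, assuming set_differentiation_method() also accepts a finite-difference mode with an optional step; it is shown commented out since this example relies on the analytic derivatives:
# Sketch: approximate the derivatives by finite differences instead of "user"
# scenario.set_differentiation_method("finite_differences", 1e-7)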
Multiprocessing¶
It is possible to run a DOE in parallel using multiprocessing. To do so, we specify the number of processes to be used to compute the samples.
Warning
The multiprocessing option has some limitations on Windows. For Python versions < 3.7 and NumPy < 1.20.0, subprocesses may hang randomly during execution; it is strongly recommended to update your environment to avoid this problem. The MemoryFullCache and HDF5Cache features are not available for multiprocessing on Windows. As an alternative, we recommend the method DOEScenario.set_optimization_history_backup().
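A minimal sketch of that backup mechanism, assuming set_optimization_history_backup() takes the path of an HDF5 file and an erase flag; it is shown commented out since it is only needed as a fallback on Windows:
# Sketch: store the evaluation history in an HDF5 file as the DOE runs
# scenario.set_optimization_history_backup("doe_history.h5", erase=True)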
n_processes = 4
We define the algorithm options. Here, the criterion="center" option of pyDOE centers the points within the sampling intervals.
algo_options = {
"criterion": "center",
# Evaluate gradient of the MDA
# with coupled adjoint
"eval_jac": True,
# Run in parallel on 4 processors
"n_processes": n_processes,
}
run_inputs = {"n_samples": 30, "algo": "lhs", "algo_options": algo_options}
Warning
When running a parallel DOE on Windows, the execution must be protected to avoid recursive calls:
if __name__ == "__main__":
scenario.execute(run_inputs)
Out:
INFO - 21:52:33:
INFO - 21:52:33: *** Start DOE Scenario execution ***
INFO - 21:52:33: DOEScenario
INFO - 21:52:33: Disciplines: SobieskiPropulsion SobieskiAerodynamics SobieskiMission SobieskiStructure
INFO - 21:52:33: MDOFormulation: MDF
INFO - 21:52:33: Algorithm: lhs
INFO - 21:52:33: Optimization problem:
INFO - 21:52:33: Minimize: -y_4(x_shared, x_1, x_2, x_3)
INFO - 21:52:33: With respect to: x_shared, x_1, x_2, x_3
INFO - 21:52:33: Subject to constraints:
INFO - 21:52:33: g_1(x_shared, x_1, x_2, x_3) <= 0.0
INFO - 21:52:33: g_2(x_shared, x_1, x_2, x_3) <= 0.0
INFO - 21:52:33: g_3(x_shared, x_1, x_2, x_3) <= 0.0
INFO - 21:52:33: DOE sampling: 0%| | 0/30 [00:00<?, ?it]
INFO - 21:52:33: Running DOE in parallel on n_processes = 4
INFO - 21:52:33: DOE sampling: 3%|▎ | 1/30 [00:00<00:00, 62.68 it/sec, obj=222]
INFO - 21:52:34: DOE sampling: 17%|█▋ | 5/30 [00:00<00:00, 36.93 it/sec, obj=429]
INFO - 21:52:34: DOE sampling: 23%|██▎ | 7/30 [00:00<00:00, 32.04 it/sec, obj=481]
INFO - 21:52:34: DOE sampling: 30%|███ | 9/30 [00:01<00:00, 25.87 it/sec, obj=419]
INFO - 21:52:34: DOE sampling: 37%|███▋ | 11/30 [00:01<00:00, 23.54 it/sec, obj=306]
INFO - 21:52:34: DOE sampling: 43%|████▎ | 13/30 [00:01<00:00, 19.95 it/sec, obj=495]
INFO - 21:52:35: DOE sampling: 57%|█████▋ | 17/30 [00:01<00:00, 16.29 it/sec, obj=1.07e+3]
INFO - 21:52:35: DOE sampling: 67%|██████▋ | 20/30 [00:01<00:00, 15.31 it/sec, obj=602]
INFO - 21:52:35: DOE sampling: 73%|███████▎ | 22/30 [00:02<00:00, 13.17 it/sec, obj=1.18e+3]
INFO - 21:52:35: DOE sampling: 83%|████████▎ | 25/30 [00:02<00:00, 11.72 it/sec, obj=624]
INFO - 21:52:36: DOE sampling: 97%|█████████▋| 29/30 [00:02<00:00, 10.79 it/sec, obj=405]
INFO - 21:52:36: DOE sampling: 100%|██████████| 30/30 [00:02<00:00, 10.55 it/sec, obj=485]
INFO - 21:52:36: Optimization result:
INFO - 21:52:36: Objective value = 485.49220045924955
INFO - 21:52:36: The result is feasible.
INFO - 21:52:36: Status: None
INFO - 21:52:36: Optimizer message: None
INFO - 21:52:36: Number of calls to the objective function by the optimizer: 30
INFO - 21:52:36: Constraints values:
INFO - 21:52:36: g_1 = [-0.11350951 -0.10812292 -0.1045109 -0.10204971 -0.10028641 -0.01838903
INFO - 21:52:36: -0.22161097]
INFO - 21:52:36: g_2 = -0.02400000000000002
INFO - 21:52:36: g_3 = [-0.33063157 -0.66936843 -0.73821755 -0.07789536]
INFO - 21:52:36: Design space:
INFO - 21:52:36: +----------+-------------+---------------------+-------------+-------+
INFO - 21:52:36: | name | lower_bound | value | upper_bound | type |
INFO - 21:52:36: +----------+-------------+---------------------+-------------+-------+
INFO - 21:52:36: | x_shared | 0.01 | 0.05400000000000001 | 0.09 | float |
INFO - 21:52:36: | x_shared | 30000 | 46500 | 60000 | float |
INFO - 21:52:36: | x_shared | 1.4 | 1.686666666666667 | 1.8 | float |
INFO - 21:52:36: | x_shared | 2.5 | 5.2 | 8.5 | float |
INFO - 21:52:36: | x_shared | 40 | 66.5 | 70 | float |
INFO - 21:52:36: | x_shared | 500 | 583.3333333333334 | 1500 | float |
INFO - 21:52:36: | x_1 | 0.1 | 0.185 | 0.4 | float |
INFO - 21:52:36: | x_1 | 0.75 | 0.9416666666666667 | 1.25 | float |
INFO - 21:52:36: | x_2 | 0.75 | 0.775 | 1.25 | float |
INFO - 21:52:36: | x_3 | 0.1 | 0.115 | 1 | float |
INFO - 21:52:36: +----------+-------------+---------------------+-------------+-------+
INFO - 21:52:36: *** DOE Scenario run terminated ***
Warning
On Windows, the progress bar may show duplicated instances during the initialization of each subprocess. In some cases it may also report the completion of an iteration ahead of another one that actually finished first. This is a consequence of the pickling process and does not affect the computations of the scenario.
Plot the optimization history view¶
scenario.post_process("OptHistoryView", show=False, save=False)
Out:
<gemseo.post.opt_history_view.OptHistoryView object at 0x7f6183d0ac10>
Tip
Each post-processing method requires different inputs and offers a variety of customization options. Use the API function get_post_processing_options_schema() to print a table with the attributes of any post-processing algorithm, or refer to the dedicated page: Options for Post-processing algorithms.
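For instance, a minimal sketch, assuming get_post_processing_options_schema() is importable from gemseo.api and takes the name of the post-processing method:
# Sketch: print the options accepted by the OptHistoryView post-processing
# from gemseo.api import get_post_processing_options_schema
# get_post_processing_options_schema("OptHistoryView", pretty_print=True)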
Plot the scatter matrix¶
scenario.post_process(
"ScatterPlotMatrix", show=False, save=False, variables_list=["y_4", "x_shared"]
)

Out:
<gemseo.post.scatter_mat.ScatterPlotMatrix object at 0x7f6183c21880>
Plot correlations¶
scenario.post_process("Correlations", show=False, save=False)
# Workaround for HTML rendering, instead of ``show=True``
plt.show()

Out:
INFO - 21:52:39: Detected 10 correlations > 0.95
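To keep the figures instead of only displaying them, the post-processing calls can also write them to disk. A minimal sketch, assuming post_process() forwards a file_path option to the post-processor when save=True:
# Sketch: save the correlation plots to files instead of showing them
# scenario.post_process("Correlations", save=True, show=False, file_path="correlations")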
Total running time of the script: (0 minutes 6.638 seconds)