Self-Organizing Map
In this example, we illustrate the use of the SOM plot on Sobieski's SSBJ problem.
from __future__ import annotations
from gemseo import configure_logger
from gemseo import create_discipline
from gemseo import create_scenario
from gemseo.problems.sobieski.core.problem import SobieskiProblem
Import
The first step is to import some high-level functions and a method to get the design space.
configure_logger()
<RootLogger root (INFO)>
Description
The SOM post-processing performs a Self-Organizing Map clustering on the optimization history. A SOM is a 2D representation of a design of experiments which requires dimensionality reduction, since the design space may be of very high dimension. A SOM is built by using an unsupervised artificial neural network [KSH01]. A map of size n_x × n_y is generated, where n_x is the number of neurons in the \(x\) direction and n_y is the number of neurons in the \(y\) direction. The design space (whatever its dimension) is reduced to a 2D representation based on n_x × n_y neurons. Samples are clustered to a neuron when their design variables are close in terms of the L2 norm. A neuron is always located at the same place on the map. Each neuron is colored according to the average value of a given criterion. This helps to qualitatively analyze whether parts of the design space are good according to some criteria and not according to others, and where compromises should be made. A white neuron has no sample associated with it: not enough evaluations were provided to train the SOM.
SOMs provide a qualitative view of the objective function, the constraints, and their relative behaviors.
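To make the clustering idea concrete, here is a minimal, self-contained NumPy sketch of a SOM. It is purely illustrative and is not GEMSEO's implementation; the sample data and the training schedule are arbitrary assumptions. A small grid of neurons is trained on random design vectors, each sample is assigned to its closest neuron in the L2 sense, and each neuron is colored by the average value of a criterion, NaN standing for a white, empty neuron.
# Minimal NumPy-only SOM sketch (illustrative; not GEMSEO's implementation).
import numpy as np

rng = np.random.default_rng(0)
n_x, n_y, dim = 4, 4, 10            # 4 x 4 map of neurons, 10 design variables
samples = rng.random((30, dim))     # stand-in for the optimization history
criterion = samples.sum(axis=1)     # stand-in for an objective or constraint

# Neuron weights live in the design space; their (i, j) grid positions are fixed.
weights = rng.random((n_x * n_y, dim))
grid = np.array([(i, j) for i in range(n_x) for j in range(n_y)], dtype=float)

# Online training: move the best-matching unit and its neighbors toward each sample.
n_epochs = 50
for epoch in range(n_epochs):
    learning_rate = 0.5 * (1.0 - epoch / n_epochs)
    radius = 0.5 + 0.5 * max(n_x, n_y) * (1.0 - epoch / n_epochs)
    for x in samples:
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # closest neuron (L2)
        distance = np.linalg.norm(grid - grid[bmu], axis=1)
        influence = np.exp(-(distance**2) / (2 * radius**2))
        weights += learning_rate * influence[:, None] * (x - weights)

# Cluster the samples and color each neuron by the average criterion value.
bmus = np.argmin(np.linalg.norm(samples[:, None] - weights[None], axis=2), axis=1)
colors = np.full(n_x * n_y, np.nan)  # NaN stands for a white neuron (no sample)
for k in range(n_x * n_y):
    if (bmus == k).any():
        colors[k] = criterion[bmus == k].mean()
print(colors.reshape(n_x, n_y))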
Create disciplines
At this point, we instantiate the disciplines of Sobieski’s SSBJ problem: Propulsion, Aerodynamics, Structure and Mission.
disciplines = create_discipline(
    [
        "SobieskiPropulsion",
        "SobieskiAerodynamics",
        "SobieskiStructure",
        "SobieskiMission",
    ]
)
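As a quick check (a minimal sketch; it only assumes that each discipline object exposes a name attribute, as in recent GEMSEO versions), the instantiated disciplines can be listed:
# Print the names of the four instantiated disciplines.
for discipline in disciplines:
    print(discipline.name)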
Create design space
We also read the design space from the SobieskiProblem.
design_space = SobieskiProblem().design_space
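The design space can be displayed as a table of variables, bounds and current values, similar to the one in the execution log below (a minimal check relying only on its string representation):
# Display the design variables with their bounds and current values.
print(design_space)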
Create and execute scenario
The next step is to build an MDO scenario in order to maximize the range, encoded as ‘y_4’, with respect to the design parameters, while satisfying the inequality constraints ‘g_1’, ‘g_2’ and ‘g_3’. We use the MDF formulation, the Monte Carlo DOE algorithm and 30 samples.
scenario = create_scenario(
    disciplines,
    formulation="MDF",
    objective_name="y_4",
    maximize_objective=True,
    design_space=design_space,
    scenario_type="DOE",
)
scenario.set_differentiation_method()
for constraint in ["g_1", "g_2", "g_3"]:
    scenario.add_constraint(constraint, "ineq")
scenario.execute({"algo": "OT_MONTE_CARLO", "n_samples": 30})
INFO - 16:12:32:
INFO - 16:12:32: *** Start DOEScenario execution ***
INFO - 16:12:32: DOEScenario
INFO - 16:12:32: Disciplines: SobieskiAerodynamics SobieskiMission SobieskiPropulsion SobieskiStructure
INFO - 16:12:32: MDO formulation: MDF
INFO - 16:12:32: Optimization problem:
INFO - 16:12:32: minimize -y_4(x_shared, x_1, x_2, x_3)
INFO - 16:12:32: with respect to x_1, x_2, x_3, x_shared
INFO - 16:12:32: subject to constraints:
INFO - 16:12:32: g_1(x_shared, x_1, x_2, x_3) <= 0.0
INFO - 16:12:32: g_2(x_shared, x_1, x_2, x_3) <= 0.0
INFO - 16:12:32: g_3(x_shared, x_1, x_2, x_3) <= 0.0
INFO - 16:12:32: over the design space:
INFO - 16:12:32: +-------------+-------------+-------+-------------+-------+
INFO - 16:12:32: | name | lower_bound | value | upper_bound | type |
INFO - 16:12:32: +-------------+-------------+-------+-------------+-------+
INFO - 16:12:32: | x_shared[0] | 0.01 | 0.05 | 0.09 | float |
INFO - 16:12:32: | x_shared[1] | 30000 | 45000 | 60000 | float |
INFO - 16:12:32: | x_shared[2] | 1.4 | 1.6 | 1.8 | float |
INFO - 16:12:32: | x_shared[3] | 2.5 | 5.5 | 8.5 | float |
INFO - 16:12:32: | x_shared[4] | 40 | 55 | 70 | float |
INFO - 16:12:32: | x_shared[5] | 500 | 1000 | 1500 | float |
INFO - 16:12:32: | x_1[0] | 0.1 | 0.25 | 0.4 | float |
INFO - 16:12:32: | x_1[1] | 0.75 | 1 | 1.25 | float |
INFO - 16:12:32: | x_2 | 0.75 | 1 | 1.25 | float |
INFO - 16:12:32: | x_3 | 0.1 | 0.5 | 1 | float |
INFO - 16:12:32: +-------------+-------------+-------+-------------+-------+
INFO - 16:12:32: Solving optimization problem with algorithm OT_MONTE_CARLO:
INFO - 16:12:32: ... 0%| | 0/30 [00:00<?, ?it]
INFO - 16:12:33: ... 3%|▎ | 1/30 [00:00<00:03, 9.31 it/sec, obj=-166]
INFO - 16:12:33: ... 7%|▋ | 2/30 [00:00<00:02, 13.83 it/sec, obj=-484]
INFO - 16:12:33: ... 10%|█ | 3/30 [00:00<00:01, 16.52 it/sec, obj=-481]
INFO - 16:12:33: ... 13%|█▎ | 4/30 [00:00<00:01, 18.33 it/sec, obj=-384]
INFO - 16:12:33: ... 17%|█▋ | 5/30 [00:00<00:01, 19.60 it/sec, obj=-1.14e+3]
INFO - 16:12:33: ... 20%|██ | 6/30 [00:00<00:01, 20.58 it/sec, obj=-290]
INFO - 16:12:33: ... 23%|██▎ | 7/30 [00:00<00:01, 21.05 it/sec, obj=-630]
INFO - 16:12:33: ... 27%|██▋ | 8/30 [00:00<00:01, 21.21 it/sec, obj=-346]
INFO - 16:12:33: ... 30%|███ | 9/30 [00:00<00:00, 21.55 it/sec, obj=-626]
INFO - 16:12:33: ... 33%|███▎ | 10/30 [00:00<00:00, 21.80 it/sec, obj=-621]
INFO - 16:12:33: ... 37%|███▋ | 11/30 [00:00<00:00, 21.84 it/sec, obj=-280]
INFO - 16:12:33: ... 40%|████ | 12/30 [00:00<00:00, 21.39 it/sec, obj=-288]
INFO - 16:12:33: ... 43%|████▎ | 13/30 [00:00<00:00, 21.17 it/sec, obj=-257]
INFO - 16:12:33: ... 47%|████▋ | 14/30 [00:00<00:00, 20.98 it/sec, obj=-367]
INFO - 16:12:33: ... 50%|█████ | 15/30 [00:00<00:00, 20.95 it/sec, obj=-1.08e+3]
INFO - 16:12:33: ... 53%|█████▎ | 16/30 [00:00<00:00, 21.13 it/sec, obj=-344]
INFO - 16:12:33: ... 57%|█████▋ | 17/30 [00:00<00:00, 20.95 it/sec, obj=-368]
INFO - 16:12:33: ... 60%|██████ | 18/30 [00:00<00:00, 20.93 it/sec, obj=-253]
INFO - 16:12:33: ... 63%|██████▎ | 19/30 [00:00<00:00, 20.79 it/sec, obj=-129]
INFO - 16:12:33: ... 67%|██████▋ | 20/30 [00:00<00:00, 20.76 it/sec, obj=-1.07e+3]
INFO - 16:12:33: ... 70%|███████ | 21/30 [00:01<00:00, 20.99 it/sec, obj=-341]
INFO - 16:12:33: ... 73%|███████▎ | 22/30 [00:01<00:00, 21.12 it/sec, obj=-1e+3]
INFO - 16:12:34: ... 77%|███████▋ | 23/30 [00:01<00:00, 20.85 it/sec, obj=-586]
INFO - 16:12:34: ... 80%|████████ | 24/30 [00:01<00:00, 20.98 it/sec, obj=-483]
INFO - 16:12:34: ... 83%|████████▎ | 25/30 [00:01<00:00, 21.10 it/sec, obj=-392]
INFO - 16:12:34: ... 87%|████████▋ | 26/30 [00:01<00:00, 21.28 it/sec, obj=-406]
INFO - 16:12:34: ... 90%|█████████ | 27/30 [00:01<00:00, 21.19 it/sec, obj=-207]
INFO - 16:12:34: ... 93%|█████████▎| 28/30 [00:01<00:00, 21.36 it/sec, obj=-702]
INFO - 16:12:34: ... 97%|█████████▋| 29/30 [00:01<00:00, 21.58 it/sec, obj=-423]
INFO - 16:12:34: ... 100%|██████████| 30/30 [00:01<00:00, 21.65 it/sec, obj=-664]
INFO - 16:12:34: Optimization result:
INFO - 16:12:34: Optimizer info:
INFO - 16:12:34: Status: None
INFO - 16:12:34: Message: None
INFO - 16:12:34: Number of calls to the objective function by the optimizer: 30
INFO - 16:12:34: Solution:
INFO - 16:12:34: The solution is feasible.
INFO - 16:12:34: Objective: -367.45728393799953
INFO - 16:12:34: Standardized constraints:
INFO - 16:12:34: g_1 = [-0.02478574 -0.00310924 -0.00855146 -0.01702654 -0.02484732 -0.04764585
INFO - 16:12:34: -0.19235415]
INFO - 16:12:34: g_2 = -0.09000000000000008
INFO - 16:12:34: g_3 = [-0.98722984 -0.01277016 -0.60760341 -0.0557087 ]
INFO - 16:12:34: Design space:
INFO - 16:12:34: +-------------+-------------+---------------------+-------------+-------+
INFO - 16:12:34: | name | lower_bound | value | upper_bound | type |
INFO - 16:12:34: +-------------+-------------+---------------------+-------------+-------+
INFO - 16:12:34: | x_shared[0] | 0.01 | 0.01230934749207792 | 0.09 | float |
INFO - 16:12:34: | x_shared[1] | 30000 | 43456.87364611478 | 60000 | float |
INFO - 16:12:34: | x_shared[2] | 1.4 | 1.731884935123487 | 1.8 | float |
INFO - 16:12:34: | x_shared[3] | 2.5 | 3.894765253193514 | 8.5 | float |
INFO - 16:12:34: | x_shared[4] | 40 | 57.92631048228255 | 70 | float |
INFO - 16:12:34: | x_shared[5] | 500 | 520.4048463450415 | 1500 | float |
INFO - 16:12:34: | x_1[0] | 0.1 | 0.3994784918586811 | 0.4 | float |
INFO - 16:12:34: | x_1[1] | 0.75 | 0.9500312867674923 | 1.25 | float |
INFO - 16:12:34: | x_2 | 0.75 | 1.205851870260564 | 1.25 | float |
INFO - 16:12:34: | x_3 | 0.1 | 0.2108042391973412 | 1 | float |
INFO - 16:12:34: +-------------+-------------+---------------------+-------------+-------+
INFO - 16:12:34: *** End DOEScenario execution (time: 0:00:01.403930) ***
{'eval_jac': False, 'n_samples': 30, 'algo': 'OT_MONTE_CARLO'}
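At this point, the DOE history could also be saved to disk and post-processed later without re-running the scenario; this is a hedged sketch assuming the save_optimization_history() method of Scenario (file formats may vary across GEMSEO versions):
# Save the optimization history to an HDF5 file for later post-processing
# (method name assumed from the GEMSEO Scenario API).
scenario.save_optimization_history("doe_history.h5")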
Post-process scenario
Lastly, we post-process the scenario by means of the SOM plot, which performs a Self-Organizing Map clustering on the optimization history.
Tip
Each post-processing method requires different inputs and offers a variety of customization options. Use the high-level function get_post_processing_options_schema() to print a table with the options for any post-processing algorithm, or refer to our dedicated page: Post-processing algorithms.
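For instance, the available options of the SOM post-processing can be retrieved as follows (a short sketch; it assumes get_post_processing_options_schema() accepts the post-processing name as its first argument):
from gemseo import get_post_processing_options_schema

# Retrieve and display the options schema of the SOM post-processing.
print(get_post_processing_options_schema("SOM"))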
scenario.post_process("SOM", save=False, show=True)

INFO - 16:12:34: Building Self Organizing Map from optimization history:
INFO - 16:12:34: Number of neurons in x direction = 4
INFO - 16:12:34: Number of neurons in y direction = 4
<gemseo.post.som.SOM object at 0x7f1764a07790>
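The default 4 x 4 map can be refined by passing map size options to the post-processing; a hedged example, assuming n_x and n_y are valid SOM options (check the options schema above):
# Request a denser 5 x 5 map of neurons (option names assumed; see the schema).
scenario.post_process("SOM", n_x=5, n_y=5, save=False, show=True)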
Figure SOM example on the Sobieski problem illustrates another SOM on the Sobieski use case. The optimization method is a (costly) derivative-free algorithm (NLOPT_COBYLA): all the relevant information for the optimization is obtained at the cost of numerous evaluations of the functions. For more details, please read the paper by [KJO+06] on wing MDO post-processing using SOM.

SOM example on the Sobieski problem.
A DOE may also be a good way to produce SOM maps. Figure SOM example on the Sobieski problem with a 10,000-sample DOE shows an example with 10,000 points on the same test case, which produces more relevant SOM plots; a way to run such a DOE is sketched after the figure.

SOM example on the Sobieski problem with a 10,000-sample DOE.
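To reproduce such a plot, the same scenario can simply be re-executed with a much larger sample budget before building the SOM again (a sketch; the runtime grows accordingly):
# Re-run the DOE with 10,000 Monte Carlo samples and rebuild the SOM.
scenario.execute({"algo": "OT_MONTE_CARLO", "n_samples": 10000})
scenario.post_process("SOM", save=False, show=True)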
Total running time of the script: (0 minutes 2.490 seconds)