Self-Organizing Map

In this example, we illustrate the use of the SOM plot on Sobieski’s SSBJ problem.

from __future__ import annotations

from gemseo import configure_logger
from gemseo import create_discipline
from gemseo import create_scenario
from gemseo.problems.sobieski.core.design_space import SobieskiDesignSpace

Import

The first step is to import some high-level functions and the class providing the design space.

configure_logger()
<RootLogger root (INFO)>

Description

The SOM post-processing performs a Self-Organizing Map clustering on the optimization history. A SOM is a 2D representation of a design of experiments; it requires dimensionality reduction since the design space may be of very high dimension.

A SOM is built by using an unsupervised artificial neural network [KSH01]. A map of size \(n_x \times n_y\) is generated, where \(n_x\) is the number of neurons in the \(x\) direction and \(n_y\) is the number of neurons in the \(y\) direction. The design space (whatever its dimension) is reduced to a 2D representation based on \(n_x \times n_y\) neurons. Samples are clustered to a neuron when their design variables are close in terms of the L2 norm. A neuron is always located at the same place on a map. Each neuron is colored according to the average value of a given criterion. This helps to qualitatively analyze whether parts of the design space are good according to some criteria and not to others, and where compromises should be made. A white neuron has no sample associated with it: not enough evaluations were provided to train the SOM.

SOMs provide a qualitative view of the objective function, the constraints, and their relative behaviors.
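
To make the clustering rule concrete, here is a minimal NumPy sketch, an illustration only and not GEMSEO's actual SOM implementation: the samples, the toy criterion and the random neuron prototypes are all made up. It assigns each sample to the nearest of the \(n_x \times n_y\) neurons in the L2 sense and colors each neuron by the average criterion value of its samples; neurons without samples stay NaN, which corresponds to the white neurons described above.

import numpy as np

rng = np.random.default_rng(0)
n_x, n_y, dim = 4, 4, 10                      # 4 x 4 map over a 10-dimensional design space
samples = rng.uniform(size=(30, dim))         # 30 design points standing in for the DOE samples
objective = samples.sum(axis=1)               # toy criterion evaluated at each sample
neurons = rng.uniform(size=(n_x * n_y, dim))  # neuron prototypes (a trained SOM would learn these)

# Cluster each sample to its closest neuron in terms of the L2 norm.
distances = np.linalg.norm(samples[:, None, :] - neurons[None, :, :], axis=2)
closest = distances.argmin(axis=1)

# Color each neuron by the average criterion value of its samples;
# neurons with no associated sample keep NaN (white on the map).
colors = np.full(n_x * n_y, np.nan)
for k in range(n_x * n_y):
    mask = closest == k
    if mask.any():
        colors[k] = objective[mask].mean()
print(colors.reshape(n_x, n_y))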

Create disciplines

At this point, we instantiate the disciplines of Sobieski’s SSBJ problem: Propulsion, Aerodynamics, Structure and Mission.

disciplines = create_discipline([
    "SobieskiPropulsion",
    "SobieskiAerodynamics",
    "SobieskiStructure",
    "SobieskiMission",
])
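
As an optional sanity check, not part of the original example, the names of the instantiated disciplines can be listed:

# Print the name of each instantiated discipline.
for discipline in disciplines:
    print(discipline.name)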

Create design space

We also create the SobieskiDesignSpace.

design_space = SobieskiDesignSpace()
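
Printing the design space is an optional step, not in the original example; it displays the design variables with their bounds and current values, matching the table logged during the scenario execution below:

# Display the design variables, their bounds and their current values.
print(design_space)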

Create and execute scenario

The next step is to build an MDO scenario in order to maximize the range, encoded as ‘y_4’, with respect to the design parameters, while satisfying the inequality constraints ‘g_1’, ‘g_2’ and ‘g_3’. We use the MDF formulation, the Monte Carlo DOE algorithm and 30 samples.

scenario = create_scenario(
    disciplines,
    formulation="MDF",
    objective_name="y_4",
    maximize_objective=True,
    design_space=design_space,
    scenario_type="DOE",
)
scenario.set_differentiation_method()
for constraint in ["g_1", "g_2", "g_3"]:
    scenario.add_constraint(constraint, "ineq")
scenario.execute({"algo": "OT_MONTE_CARLO", "n_samples": 30})
    INFO - 10:55:52:
    INFO - 10:55:52: *** Start DOEScenario execution ***
    INFO - 10:55:52: DOEScenario
    INFO - 10:55:52:    Disciplines: SobieskiAerodynamics SobieskiMission SobieskiPropulsion SobieskiStructure
    INFO - 10:55:52:    MDO formulation: MDF
    INFO - 10:55:52: Optimization problem:
    INFO - 10:55:52:    minimize -y_4(x_shared, x_1, x_2, x_3)
    INFO - 10:55:52:    with respect to x_1, x_2, x_3, x_shared
    INFO - 10:55:52:    subject to constraints:
    INFO - 10:55:52:       g_1(x_shared, x_1, x_2, x_3) <= 0.0
    INFO - 10:55:52:       g_2(x_shared, x_1, x_2, x_3) <= 0.0
    INFO - 10:55:52:       g_3(x_shared, x_1, x_2, x_3) <= 0.0
    INFO - 10:55:52:    over the design space:
    INFO - 10:55:52:       +-------------+-------------+-------+-------------+-------+
    INFO - 10:55:52:       | Name        | Lower bound | Value | Upper bound | Type  |
    INFO - 10:55:52:       +-------------+-------------+-------+-------------+-------+
    INFO - 10:55:52:       | x_shared[0] |     0.01    |  0.05 |     0.09    | float |
    INFO - 10:55:52:       | x_shared[1] |    30000    | 45000 |    60000    | float |
    INFO - 10:55:52:       | x_shared[2] |     1.4     |  1.6  |     1.8     | float |
    INFO - 10:55:52:       | x_shared[3] |     2.5     |  5.5  |     8.5     | float |
    INFO - 10:55:52:       | x_shared[4] |      40     |   55  |      70     | float |
    INFO - 10:55:52:       | x_shared[5] |     500     |  1000 |     1500    | float |
    INFO - 10:55:52:       | x_1[0]      |     0.1     |  0.25 |     0.4     | float |
    INFO - 10:55:52:       | x_1[1]      |     0.75    |   1   |     1.25    | float |
    INFO - 10:55:52:       | x_2         |     0.75    |   1   |     1.25    | float |
    INFO - 10:55:52:       | x_3         |     0.1     |  0.5  |      1      | float |
    INFO - 10:55:52:       +-------------+-------------+-------+-------------+-------+
    INFO - 10:55:52: Solving optimization problem with algorithm OT_MONTE_CARLO:
    INFO - 10:55:52:      3%|▎         | 1/30 [00:00<00:03,  7.54 it/sec, obj=-166]
    INFO - 10:55:52:      7%|▋         | 2/30 [00:00<00:02, 11.01 it/sec, obj=-484]
    INFO - 10:55:52:     10%|█         | 3/30 [00:00<00:02, 13.08 it/sec, obj=-481]
    INFO - 10:55:52:     13%|█▎        | 4/30 [00:00<00:01, 14.41 it/sec, obj=-384]
    INFO - 10:55:52:     17%|█▋        | 5/30 [00:00<00:01, 15.34 it/sec, obj=-1.14e+3]
    INFO - 10:55:52:     20%|██        | 6/30 [00:00<00:01, 16.04 it/sec, obj=-290]
    INFO - 10:55:52:     23%|██▎       | 7/30 [00:00<00:01, 16.37 it/sec, obj=-630]
    INFO - 10:55:52:     27%|██▋       | 8/30 [00:00<00:01, 16.42 it/sec, obj=-346]
    INFO - 10:55:53:     30%|███       | 9/30 [00:00<00:01, 16.62 it/sec, obj=-626]
    INFO - 10:55:53:     33%|███▎      | 10/30 [00:00<00:01, 16.80 it/sec, obj=-621]
    INFO - 10:55:53:     37%|███▋      | 11/30 [00:00<00:01, 16.80 it/sec, obj=-280]
    INFO - 10:55:53:     40%|████      | 12/30 [00:00<00:01, 16.36 it/sec, obj=-288]
    INFO - 10:55:53:     43%|████▎     | 13/30 [00:00<00:01, 16.15 it/sec, obj=-257]
    INFO - 10:55:53:     47%|████▋     | 14/30 [00:00<00:01, 15.99 it/sec, obj=-367]
    INFO - 10:55:53:     50%|█████     | 15/30 [00:00<00:00, 15.95 it/sec, obj=-1.08e+3]
    INFO - 10:55:53:     53%|█████▎    | 16/30 [00:00<00:00, 16.09 it/sec, obj=-344]
    INFO - 10:55:53:     57%|█████▋    | 17/30 [00:01<00:00, 15.94 it/sec, obj=-368]
    INFO - 10:55:53:     60%|██████    | 18/30 [00:01<00:00, 15.91 it/sec, obj=-253]
    INFO - 10:55:53:     63%|██████▎   | 19/30 [00:01<00:00, 15.82 it/sec, obj=-129]
    INFO - 10:55:53:     67%|██████▋   | 20/30 [00:01<00:00, 15.80 it/sec, obj=-1.07e+3]
    INFO - 10:55:53:     70%|███████   | 21/30 [00:01<00:00, 15.98 it/sec, obj=-341]
    INFO - 10:55:53:     73%|███████▎  | 22/30 [00:01<00:00, 16.09 it/sec, obj=-1e+3]
    INFO - 10:55:53:     77%|███████▋  | 23/30 [00:01<00:00, 15.87 it/sec, obj=-586]
    INFO - 10:55:53:     80%|████████  | 24/30 [00:01<00:00, 15.96 it/sec, obj=-483]
    INFO - 10:55:54:     83%|████████▎ | 25/30 [00:01<00:00, 16.04 it/sec, obj=-392]
    INFO - 10:55:54:     87%|████████▋ | 26/30 [00:01<00:00, 16.19 it/sec, obj=-406]
    INFO - 10:55:54:     90%|█████████ | 27/30 [00:01<00:00, 16.11 it/sec, obj=-207]
    INFO - 10:55:54:     93%|█████████▎| 28/30 [00:01<00:00, 16.24 it/sec, obj=-702]
    INFO - 10:55:54:     97%|█████████▋| 29/30 [00:01<00:00, 16.41 it/sec, obj=-423]
    INFO - 10:55:54:    100%|██████████| 30/30 [00:01<00:00, 16.48 it/sec, obj=-664]
    INFO - 10:55:54: Optimization result:
    INFO - 10:55:54:    Optimizer info:
    INFO - 10:55:54:       Status: None
    INFO - 10:55:54:       Message: None
    INFO - 10:55:54:       Number of calls to the objective function by the optimizer: 30
    INFO - 10:55:54:    Solution:
    INFO - 10:55:54:       The solution is feasible.
    INFO - 10:55:54:       Objective: -367.45728393799953
    INFO - 10:55:54:       Standardized constraints:
    INFO - 10:55:54:          g_1 = [-0.02478574 -0.00310924 -0.00855146 -0.01702654 -0.02484732 -0.04764585
    INFO - 10:55:54:  -0.19235415]
    INFO - 10:55:54:          g_2 = -0.09000000000000008
    INFO - 10:55:54:          g_3 = [-0.98722984 -0.01277016 -0.60760341 -0.0557087 ]
    INFO - 10:55:54:       Design space:
    INFO - 10:55:54:          +-------------+-------------+---------------------+-------------+-------+
    INFO - 10:55:54:          | Name        | Lower bound |        Value        | Upper bound | Type  |
    INFO - 10:55:54:          +-------------+-------------+---------------------+-------------+-------+
    INFO - 10:55:54:          | x_shared[0] |     0.01    | 0.01230934749207792 |     0.09    | float |
    INFO - 10:55:54:          | x_shared[1] |    30000    |  43456.87364611478  |    60000    | float |
    INFO - 10:55:54:          | x_shared[2] |     1.4     |  1.731884935123487  |     1.8     | float |
    INFO - 10:55:54:          | x_shared[3] |     2.5     |  3.894765253193514  |     8.5     | float |
    INFO - 10:55:54:          | x_shared[4] |      40     |  57.92631048228255  |      70     | float |
    INFO - 10:55:54:          | x_shared[5] |     500     |  520.4048463450415  |     1500    | float |
    INFO - 10:55:54:          | x_1[0]      |     0.1     |  0.3994784918586811 |     0.4     | float |
    INFO - 10:55:54:          | x_1[1]      |     0.75    |  0.9500312867674923 |     1.25    | float |
    INFO - 10:55:54:          | x_2         |     0.75    |  1.205851870260564  |     1.25    | float |
    INFO - 10:55:54:          | x_3         |     0.1     |  0.2108042391973412 |      1      | float |
    INFO - 10:55:54:          +-------------+-------------+---------------------+-------------+-------+
    INFO - 10:55:54: *** End DOEScenario execution (time: 0:00:01.841564) ***

{'eval_jac': False, 'n_samples': 30, 'algo': 'OT_MONTE_CARLO'}
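
If the DOE history is to be reused later, for instance to rebuild the SOM without re-running the 30 evaluations, it can be exported to a file. The snippet below is a sketch: the file name is arbitrary and the exact method signature may vary across GEMSEO versions.

# Export the optimization database for later post-processing
# (hypothetical file name; signature may differ between GEMSEO versions).
scenario.save_optimization_history("som_doe_history.h5")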

Post-process scenario

Lastly, we post-process the scenario by means of the SOM plot, which performs a Self-Organizing Map clustering on the optimization history.

Tip

Each post-processing method requires different inputs and offers a variety of customization options. Use the high-level function get_post_processing_options_schema() to print a table with the options for any post-processing algorithm. Or refer to our dedicated page: Post-processing algorithms.
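
For instance, the options of the SOM post-processing can be printed as follows; this is a sketch, and the keyword arguments of this high-level function may differ between GEMSEO versions.

from gemseo import get_post_processing_options_schema

# Print the available options of the "SOM" post-processing in a readable form.
get_post_processing_options_schema("SOM", pretty_print=True)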

scenario.post_process("SOM", save=False, show=True)
Self Organizing Maps of the design space, -y_4, g_1[0], g_1[1], g_1[2], g_1[3], g_1[4], g_1[5], g_1[6], g_2, g_3[0], g_3[1], g_3[2], g_3[3]
    INFO - 10:55:54: Building Self Organizing Map from optimization history:
    INFO - 10:55:54:     Number of neurons in x direction = 4
    INFO - 10:55:54:     Number of neurons in y direction = 4

<gemseo.post.som.SOM object at 0x7f1dd02a4bb0>
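
To write the figure to disk instead of displaying it, the same call can be made with save=True and, optionally, a file path; the file name below is arbitrary.

# Save the SOM figure to disk instead of displaying it (arbitrary file name).
scenario.post_process("SOM", save=True, show=False, file_path="sobieski_som")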

The figure SOM example on the Sobieski problem illustrates another SOM on the Sobieski use case. The optimization method is a (costly) derivative-free algorithm (NLOPT_COBYLA): all the relevant information for the optimization is obtained at the cost of numerous evaluations of the functions. For more details, please read the paper by [KJO+06] on wing MDO post-processing using SOM.

Figure: SOM example on the Sobieski problem.

A DOE may also be a good way to produce SOM maps. The figure SOM example on the Sobieski problem with a 10,000-sample DOE shows an example with 10,000 points on the same test case, which produces more relevant SOM plots.

Figure: SOM example on the Sobieski problem with a 10,000-sample DOE.

Total running time of the script: (0 minutes 2.804 seconds)
