.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "examples/post_process/algorithms/plot_som.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_examples_post_process_algorithms_plot_som.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_examples_post_process_algorithms_plot_som.py:

Self-Organizing Map
===================

In this example, we illustrate the use of the :class:`.SOM` plot
on Sobieski's SSBJ problem.

.. GENERATED FROM PYTHON SOURCE LINES 28-36

.. code-block:: Python

    from __future__ import annotations

    from gemseo import configure_logger
    from gemseo import create_discipline
    from gemseo import create_scenario
    from gemseo.problems.sobieski.core.design_space import SobieskiDesignSpace

.. GENERATED FROM PYTHON SOURCE LINES 37-41

Import
------
The first step is to import some high-level functions
and the class defining the design space.

.. GENERATED FROM PYTHON SOURCE LINES 41-44

.. code-block:: Python

    configure_logger()

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    <RootLogger root (INFO)>

.. GENERATED FROM PYTHON SOURCE LINES 45-69

Description
-----------

The :class:`.SOM` post-processing performs a self-organizing map (SOM)
clustering of the optimization history.
A :class:`.SOM` is a 2D representation of a design of experiments;
this dimensionality reduction is required because the design space
may have a very high dimension.

A :term:`SOM` is built using an unsupervised artificial neural network
:cite:`Kohonen:2001`.
A map of size ``n_x.n_y`` is generated, where ``n_x`` is the number of
neurons in the :math:`x` direction and ``n_y`` is the number of neurons
in the :math:`y` direction.
The design space (whatever its dimension) is reduced to a 2D representation
based on ``n_x.n_y`` neurons.
Samples are clustered to a neuron when their design variables are close
in terms of the L2 norm.
A given neuron is always located at the same place on the map.
Each neuron is colored according to the average value of a given criterion
over the samples clustered to it.
This helps to qualitatively analyze whether parts of the design space are
good with respect to some criteria but not others,
and where compromises should be made.
A white neuron has no sample associated with it:
not enough evaluations were provided to train the SOM.

SOMs provide a qualitative view of the :term:`objective function`,
of the :term:`constraints`, and of their relative behaviors.
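
To make this clustering principle concrete, here is a minimal, self-contained
sketch of it in plain NumPy on synthetic data: each sample is assigned to the
neuron whose weight vector is closest in the L2 norm, and each neuron is
colored by the average criterion value of its samples. This is only an
illustration of the mechanism described above, not the implementation used by
:class:`.SOM`; the data, map size, learning rate and iteration count are
arbitrary, and the neighborhood update of a real SOM is omitted for brevity.

.. code-block:: Python

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in for an optimization history:
    # 30 samples of a 10-dimensional design space and one criterion value each.
    samples = rng.uniform(size=(30, 10))
    criterion = samples.sum(axis=1)

    # One weight vector per neuron of an n_x x n_y map.
    n_x, n_y = 4, 4
    weights = rng.uniform(size=(n_x * n_y, 10))

    # Crude training: pull the best-matching neuron towards each sample
    # (a real SOM also updates the winner's grid neighbors).
    for _ in range(100):
        for sample in samples:
            winner = np.argmin(np.linalg.norm(weights - sample, axis=1))
            weights[winner] += 0.1 * (sample - weights[winner])

    # Cluster each sample to its closest neuron in terms of the L2 norm.
    distances = np.linalg.norm(samples[:, None, :] - weights[None, :, :], axis=2)
    clusters = distances.argmin(axis=1)

    # Color each neuron with the average criterion value of its samples;
    # neurons without any sample (NaN here) would be left white on the map.
    colors = np.full(n_x * n_y, np.nan)
    for neuron in range(n_x * n_y):
        mask = clusters == neuron
        if mask.any():
            colors[neuron] = criterion[mask].mean()
    print(colors.reshape(n_x, n_y))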

.. GENERATED FROM PYTHON SOURCE LINES 71-75

Create disciplines
------------------
At this point, we instantiate the disciplines of Sobieski's SSBJ problem:
Propulsion, Aerodynamics, Structure and Mission.

.. GENERATED FROM PYTHON SOURCE LINES 75-82

.. code-block:: Python

    disciplines = create_discipline([
        "SobieskiPropulsion",
        "SobieskiAerodynamics",
        "SobieskiStructure",
        "SobieskiMission",
    ])

.. GENERATED FROM PYTHON SOURCE LINES 83-86

Create design space
-------------------
We also create the :class:`.SobieskiDesignSpace`.

.. GENERATED FROM PYTHON SOURCE LINES 86-88

.. code-block:: Python

    design_space = SobieskiDesignSpace()

.. GENERATED FROM PYTHON SOURCE LINES 89-95

Create and execute scenario
---------------------------
The next step is to build an MDO scenario in order to maximize the range,
encoded 'y_4', with respect to the design parameters, while satisfying the
inequality constraints 'g_1', 'g_2' and 'g_3'.
We can use the MDF formulation, the Monte Carlo DOE algorithm and 30 samples.

.. GENERATED FROM PYTHON SOURCE LINES 95-107

.. code-block:: Python

    scenario = create_scenario(
        disciplines,
        "MDF",
        "y_4",
        design_space,
        maximize_objective=True,
        scenario_type="DOE",
    )
    for constraint in ["g_1", "g_2", "g_3"]:
        scenario.add_constraint(constraint, constraint_type="ineq")
    scenario.execute({"algo": "OT_MONTE_CARLO", "n_samples": 30})

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    INFO - 13:57:36:
    INFO - 13:57:36: *** Start DOEScenario execution ***
    INFO - 13:57:36: DOEScenario
    INFO - 13:57:36:    Disciplines: SobieskiAerodynamics SobieskiMission SobieskiPropulsion SobieskiStructure
    INFO - 13:57:36:    MDO formulation: MDF
    INFO - 13:57:36: Optimization problem:
    INFO - 13:57:36:    minimize -y_4(x_shared, x_1, x_2, x_3)
    INFO - 13:57:36:    with respect to x_1, x_2, x_3, x_shared
    INFO - 13:57:36:    subject to constraints:
    INFO - 13:57:36:       g_1(x_shared, x_1, x_2, x_3) <= 0.0
    INFO - 13:57:36:       g_2(x_shared, x_1, x_2, x_3) <= 0.0
    INFO - 13:57:36:       g_3(x_shared, x_1, x_2, x_3) <= 0.0
    INFO - 13:57:36:    over the design space:
    INFO - 13:57:36:       +-------------+-------------+-------+-------------+-------+
    INFO - 13:57:36:       | Name        | Lower bound | Value | Upper bound | Type  |
    INFO - 13:57:36:       +-------------+-------------+-------+-------------+-------+
    INFO - 13:57:36:       | x_shared[0] | 0.01        | 0.05  | 0.09        | float |
    INFO - 13:57:36:       | x_shared[1] | 30000       | 45000 | 60000       | float |
    INFO - 13:57:36:       | x_shared[2] | 1.4         | 1.6   | 1.8         | float |
    INFO - 13:57:36:       | x_shared[3] | 2.5         | 5.5   | 8.5         | float |
    INFO - 13:57:36:       | x_shared[4] | 40          | 55    | 70          | float |
    INFO - 13:57:36:       | x_shared[5] | 500         | 1000  | 1500        | float |
    INFO - 13:57:36:       | x_1[0]      | 0.1         | 0.25  | 0.4         | float |
    INFO - 13:57:36:       | x_1[1]      | 0.75        | 1     | 1.25        | float |
    INFO - 13:57:36:       | x_2         | 0.75        | 1     | 1.25        | float |
    INFO - 13:57:36:       | x_3         | 0.1         | 0.5   | 1           | float |
    INFO - 13:57:36:       +-------------+-------------+-------+-------------+-------+
    INFO - 13:57:36: Solving optimization problem with algorithm OT_MONTE_CARLO:
    INFO - 13:57:36:      3%|▎         | 1/30 [00:00<00:03,  8.76 it/sec, obj=-166]
    INFO - 13:57:36:      7%|▋         | 2/30 [00:00<00:02, 12.58 it/sec, obj=-484]
    INFO - 13:57:36:     10%|█         | 3/30 [00:00<00:01, 14.77 it/sec, obj=-481]
    INFO - 13:57:36:     13%|█▎        | 4/30 [00:00<00:01, 16.18 it/sec, obj=-384]
    INFO - 13:57:37:     17%|█▋        | 5/30 [00:00<00:01, 17.13 it/sec, obj=-1.14e+3]
    INFO - 13:57:37:     20%|██        | 6/30 [00:00<00:01, 17.83 it/sec, obj=-290]
    INFO - 13:57:37:     23%|██▎       | 7/30 [00:00<00:01, 18.15 it/sec, obj=-630]
    INFO - 13:57:37:     27%|██▋       | 8/30 [00:00<00:01, 18.16 it/sec, obj=-346]
    INFO - 13:57:37:     30%|███       | 9/30 [00:00<00:01, 18.37 it/sec, obj=-626]
    INFO - 13:57:37:     33%|███▎      | 10/30 [00:00<00:01, 18.51 it/sec, obj=-621]
    INFO - 13:57:37:     37%|███▋      | 11/30 [00:00<00:01, 18.47 it/sec, obj=-280]
    INFO - 13:57:37:     40%|████      | 12/30 [00:00<00:00, 18.01 it/sec, obj=-288]
    INFO - 13:57:37:     43%|████▎     | 13/30 [00:00<00:00, 17.75 it/sec, obj=-257]
    INFO - 13:57:37:     47%|████▋     | 14/30 [00:00<00:00, 17.54 it/sec, obj=-367]
    INFO - 13:57:37:     50%|█████     | 15/30 [00:00<00:00, 17.47 it/sec, obj=-1.08e+3]
    INFO - 13:57:37:     53%|█████▎    | 16/30 [00:00<00:00, 17.61 it/sec, obj=-344]
    INFO - 13:57:37:     57%|█████▋    | 17/30 [00:00<00:00, 17.44 it/sec, obj=-368]
    INFO - 13:57:37:     60%|██████    | 18/30 [00:01<00:00, 17.38 it/sec, obj=-253]
    INFO - 13:57:37:     63%|██████▎   | 19/30 [00:01<00:00, 17.25 it/sec, obj=-129]
    INFO - 13:57:37:     67%|██████▋   | 20/30 [00:01<00:00, 17.22 it/sec, obj=-1.07e+3]
    INFO - 13:57:37:     70%|███████   | 21/30 [00:01<00:00, 17.41 it/sec, obj=-341]
    INFO - 13:57:37:     73%|███████▎  | 22/30 [00:01<00:00, 17.51 it/sec, obj=-1e+3]
    INFO - 13:57:38:     77%|███████▋  | 23/30 [00:01<00:00, 17.26 it/sec, obj=-586]
    INFO - 13:57:38:     80%|████████  | 24/30 [00:01<00:00, 17.36 it/sec, obj=-483]
    INFO - 13:57:38:     83%|████████▎ | 25/30 [00:01<00:00, 17.45 it/sec, obj=-392]
    INFO - 13:57:38:     87%|████████▋ | 26/30 [00:01<00:00, 17.60 it/sec, obj=-406]
    INFO - 13:57:38:     90%|█████████ | 27/30 [00:01<00:00, 17.50 it/sec, obj=-207]
    INFO - 13:57:38:     93%|█████████▎| 28/30 [00:01<00:00, 17.64 it/sec, obj=-702]
    INFO - 13:57:38:     97%|█████████▋| 29/30 [00:01<00:00, 17.83 it/sec, obj=-423]
    INFO - 13:57:38:    100%|██████████| 30/30 [00:01<00:00, 17.89 it/sec, obj=-664]
    INFO - 13:57:38: Optimization result:
    INFO - 13:57:38:    Optimizer info:
    INFO - 13:57:38:       Status: None
    INFO - 13:57:38:       Message: None
    INFO - 13:57:38:       Number of calls to the objective function by the optimizer: 30
    INFO - 13:57:38:    Solution:
    INFO - 13:57:38:       The solution is feasible.
    INFO - 13:57:38:       Objective: -367.45728393799953
    INFO - 13:57:38:       Standardized constraints:
    INFO - 13:57:38:          g_1 = [-0.02478574 -0.00310924 -0.00855146 -0.01702654 -0.02484732 -0.04764585
    INFO - 13:57:38:  -0.19235415]
    INFO - 13:57:38:          g_2 = -0.09000000000000008
    INFO - 13:57:38:          g_3 = [-0.98722984 -0.01277016 -0.60760341 -0.0557087 ]
    INFO - 13:57:38:    Design space:
    INFO - 13:57:38:       +-------------+-------------+---------------------+-------------+-------+
    INFO - 13:57:38:       | Name        | Lower bound | Value               | Upper bound | Type  |
    INFO - 13:57:38:       +-------------+-------------+---------------------+-------------+-------+
    INFO - 13:57:38:       | x_shared[0] | 0.01        | 0.01230934749207792 | 0.09        | float |
    INFO - 13:57:38:       | x_shared[1] | 30000       | 43456.87364611478   | 60000       | float |
    INFO - 13:57:38:       | x_shared[2] | 1.4         | 1.731884935123487   | 1.8         | float |
    INFO - 13:57:38:       | x_shared[3] | 2.5         | 3.894765253193514   | 8.5         | float |
    INFO - 13:57:38:       | x_shared[4] | 40          | 57.92631048228255   | 70          | float |
    INFO - 13:57:38:       | x_shared[5] | 500         | 520.4048463450415   | 1500        | float |
    INFO - 13:57:38:       | x_1[0]      | 0.1         | 0.3994784918586811  | 0.4         | float |
    INFO - 13:57:38:       | x_1[1]      | 0.75        | 0.9500312867674923  | 1.25        | float |
    INFO - 13:57:38:       | x_2         | 0.75        | 1.205851870260564   | 1.25        | float |
    INFO - 13:57:38:       | x_3         | 0.1         | 0.2108042391973412  | 1           | float |
    INFO - 13:57:38:       +-------------+-------------+---------------------+-------------+-------+
    INFO - 13:57:38: *** End DOEScenario execution (time: 0:00:01.697310) ***

    {'eval_jac': False, 'n_samples': 30, 'algo': 'OT_MONTE_CARLO'}
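
The SOM will be trained from these 30 samples only, so many neurons may end up
with few or no associated samples. As illustrated at the end of this example,
a larger DOE gives a better-populated map; with this scenario, that only means
requesting more samples in the execution options. The call below is
illustrative and is not executed in this example; the sample count is
arbitrary and would multiply the number of discipline evaluations accordingly.

.. code-block:: Python

    # Illustrative only: a larger Monte Carlo DOE (here 1000 samples instead
    # of 30) populates more neurons of the SOM, at the cost of roughly 33
    # times more discipline evaluations.
    scenario.execute({"algo": "OT_MONTE_CARLO", "n_samples": 1000})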

.. GENERATED FROM PYTHON SOURCE LINES 108-113

Post-process scenario
---------------------
Lastly, we post-process the scenario by means of the :class:`.SOM` plot,
which performs a self-organizing map clustering of the optimization history.

.. GENERATED FROM PYTHON SOURCE LINES 115-123

.. tip::

   Each post-processing method requires different inputs and offers a variety
   of customization options.
   Use the high-level function :func:`.get_post_processing_options_schema`
   to print a table with the options for any post-processing algorithm.
   Or refer to our dedicated page: :ref:`gen_post_algos`.

.. GENERATED FROM PYTHON SOURCE LINES 123-126

.. code-block:: Python

    scenario.post_process("SOM", save=False, show=True)

.. image-sg:: /examples/post_process/algorithms/images/sphx_glr_plot_som_001.png
   :alt: Self Organizing Maps of the design space, -y_4, g_1[0], g_1[1], g_1[2], g_1[3], g_1[4], g_1[5], g_1[6], g_2, g_3[0], g_3[1], g_3[2], g_3[3]
   :srcset: /examples/post_process/algorithms/images/sphx_glr_plot_som_001.png
   :class: sphx-glr-single-img

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    INFO - 13:57:38: Building Self Organizing Map from optimization history:
    INFO - 13:57:38:     Number of neurons in x direction = 4
    INFO - 13:57:38:     Number of neurons in y direction = 4
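
The map above uses the resolution reported in the log (4 neurons in each
direction). If a finer map is wanted, the options accepted by the SOM
post-processor can be inspected as suggested in the tip above. The snippet
below is a sketch: it assumes that the post-processor exposes ``n_x`` and
``n_y`` options matching the neuron counts in the log, and that the schema
helper accepts a ``pretty_print`` argument; check the printed schema of your
GEMSEO version before relying on these names.

.. code-block:: Python

    from gemseo import get_post_processing_options_schema

    # Print the options accepted by the SOM post-processor
    # (pretty_print is assumed to behave as for the other *_options_schema helpers).
    get_post_processing_options_schema("SOM", pretty_print=True)

    # Assuming n_x and n_y are among these options, request a finer 5 x 5 map.
    scenario.post_process("SOM", n_x=5, n_y=5, save=False, show=True)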

.. GENERATED FROM PYTHON SOURCE LINES 127-146

The following figure illustrates another :term:`SOM` on the Sobieski use case.
The optimization method is a (costly) derivative-free algorithm
(``NLOPT_COBYLA``): all the information relevant to the optimization is
obtained at the cost of numerous evaluations of the functions.
For more details, please read the paper by
:cite:`kumano2006multidisciplinary` on wing MDO post-processing using SOM.

.. figure:: /tutorials/ssbj/figs/MDOScenario_SOM_v100.png

   SOM example on the Sobieski problem.

A DOE may also be a good way to produce SOMs.
The following figure shows an example with 10,000 points on the same
test case, which produces more relevant SOM plots.

.. figure:: /tutorials/ssbj/figs/som_fine.png

   SOM example on the Sobieski problem with a 10,000-sample DOE.

.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (0 minutes 2.605 seconds)


.. _sphx_glr_download_examples_post_process_algorithms_plot_som.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_som.ipynb <plot_som.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_som.py <plot_som.py>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_