Quadratic approximations

In this example, we illustrate the use of the QuadApprox plot on Sobieski's SSBJ problem.

from __future__ import annotations

from gemseo import configure_logger
from gemseo import create_discipline
from gemseo import create_scenario
from gemseo.problems.sobieski.core.design_space import SobieskiDesignSpace

Import

The first step is to import some high-level functions and a method to get the design space.

configure_logger()
<RootLogger root (INFO)>

Description

The QuadApprox post-processing performs a quadratic approximation of a given function from an optimization history and plots the results as cuts of the approximation.
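To make the idea concrete, here is a minimal, hypothetical one-dimensional sketch (not GEMSEO's actual implementation, which works in n dimensions with an SR1 Hessian approximation): given an "optimization history" of points and function values, fit a second-order polynomial to it with NumPy.

```python
import numpy as np


def fit_quadratic(x_history, f_history):
    """Fit a*t**2 + b*t + c to a 1D history by least squares.

    Illustrative only: QuadApprox builds its quadratic model
    differently, from gradients stored in the history.
    """
    return np.polyfit(x_history, f_history, deg=2)


# A history sampled from an exactly quadratic function is
# recovered exactly by the fit.
x_hist = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
f_hist = 3.0 * x_hist**2 - 2.0 * x_hist + 1.0
a, b, c = fit_quadratic(x_hist, f_hist)
print(round(a, 6), round(b, 6), round(c, 6))  # 3.0 -2.0 1.0
```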

Create disciplines

Then, we instantiate the disciplines of Sobieski's SSBJ problem: Propulsion, Aerodynamics, Structure and Mission.

disciplines = create_discipline([
    "SobieskiPropulsion",
    "SobieskiAerodynamics",
    "SobieskiStructure",
    "SobieskiMission",
])

Create design space

We also create the SobieskiDesignSpace.

design_space = SobieskiDesignSpace()

Create and execute scenario

The next step is to build an MDO scenario in order to maximize the range, encoded as 'y_4', with respect to the design parameters, while satisfying the inequality constraints 'g_1', 'g_2' and 'g_3'. We use the MDF formulation, the SLSQP optimization algorithm and a maximum number of iterations equal to 10.

scenario = create_scenario(
    disciplines,
    "MDF",
    "y_4",
    design_space,
    maximize_objective=True,
)
scenario.set_differentiation_method()
for constraint in ["g_1", "g_2", "g_3"]:
    scenario.add_constraint(constraint, constraint_type="ineq")
scenario.execute({"algo": "SLSQP", "max_iter": 10})
    INFO - 13:12:00:
    INFO - 13:12:00: *** Start MDOScenario execution ***
    INFO - 13:12:00: MDOScenario
    INFO - 13:12:00:    Disciplines: SobieskiAerodynamics SobieskiMission SobieskiPropulsion SobieskiStructure
    INFO - 13:12:00:    MDO formulation: MDF
    INFO - 13:12:00: Optimization problem:
    INFO - 13:12:00:    minimize -y_4(x_shared, x_1, x_2, x_3)
    INFO - 13:12:00:    with respect to x_1, x_2, x_3, x_shared
    INFO - 13:12:00:    subject to constraints:
    INFO - 13:12:00:       g_1(x_shared, x_1, x_2, x_3) <= 0.0
    INFO - 13:12:00:       g_2(x_shared, x_1, x_2, x_3) <= 0.0
    INFO - 13:12:00:       g_3(x_shared, x_1, x_2, x_3) <= 0.0
    INFO - 13:12:00:    over the design space:
    INFO - 13:12:00:       +-------------+-------------+-------+-------------+-------+
    INFO - 13:12:00:       | Name        | Lower bound | Value | Upper bound | Type  |
    INFO - 13:12:00:       +-------------+-------------+-------+-------------+-------+
    INFO - 13:12:00:       | x_shared[0] |     0.01    |  0.05 |     0.09    | float |
    INFO - 13:12:00:       | x_shared[1] |    30000    | 45000 |    60000    | float |
    INFO - 13:12:00:       | x_shared[2] |     1.4     |  1.6  |     1.8     | float |
    INFO - 13:12:00:       | x_shared[3] |     2.5     |  5.5  |     8.5     | float |
    INFO - 13:12:00:       | x_shared[4] |      40     |   55  |      70     | float |
    INFO - 13:12:00:       | x_shared[5] |     500     |  1000 |     1500    | float |
    INFO - 13:12:00:       | x_1[0]      |     0.1     |  0.25 |     0.4     | float |
    INFO - 13:12:00:       | x_1[1]      |     0.75    |   1   |     1.25    | float |
    INFO - 13:12:00:       | x_2         |     0.75    |   1   |     1.25    | float |
    INFO - 13:12:00:       | x_3         |     0.1     |  0.5  |      1      | float |
    INFO - 13:12:00:       +-------------+-------------+-------+-------------+-------+
    INFO - 13:12:00: Solving optimization problem with algorithm SLSQP:
    INFO - 13:12:00:     10%|█         | 1/10 [00:00<00:00, 10.09 it/sec, obj=-536]
    INFO - 13:12:00:     20%|██        | 2/10 [00:00<00:01,  7.11 it/sec, obj=-2.12e+3]
 WARNING - 13:12:01: MDAJacobi has reached its maximum number of iterations but the normed residual 2.338273970736908e-06 is still above the tolerance 1e-06.
    INFO - 13:12:01:     30%|███       | 3/10 [00:00<00:01,  5.95 it/sec, obj=-3.56e+3]
    INFO - 13:12:01:     40%|████      | 4/10 [00:00<00:01,  5.66 it/sec, obj=-3.96e+3]
    INFO - 13:12:01:     50%|█████     | 5/10 [00:00<00:00,  5.54 it/sec, obj=-3.96e+3]
    INFO - 13:12:01: Optimization result:
    INFO - 13:12:01:    Optimizer info:
    INFO - 13:12:01:       Status: 8
    INFO - 13:12:01:       Message: Positive directional derivative for linesearch
    INFO - 13:12:01:       Number of calls to the objective function by the optimizer: 6
    INFO - 13:12:01:    Solution:
    INFO - 13:12:01:       The solution is feasible.
    INFO - 13:12:01:       Objective: -3963.403105287515
    INFO - 13:12:01:       Standardized constraints:
    INFO - 13:12:01:          g_1 = [-0.01806054 -0.03334606 -0.04424918 -0.05183437 -0.05732588 -0.13720864
    INFO - 13:12:01:  -0.10279136]
    INFO - 13:12:01:          g_2 = 3.1658077606078194e-06
    INFO - 13:12:01:          g_3 = [-7.67177346e-01 -2.32822654e-01 -5.57051011e-06 -1.83255000e-01]
    INFO - 13:12:01:       Design space:
    INFO - 13:12:01:          +-------------+-------------+---------------------+-------------+-------+
    INFO - 13:12:01:          | Name        | Lower bound |        Value        | Upper bound | Type  |
    INFO - 13:12:01:          +-------------+-------------+---------------------+-------------+-------+
    INFO - 13:12:01:          | x_shared[0] |     0.01    | 0.06000079145194018 |     0.09    | float |
    INFO - 13:12:01:          | x_shared[1] |    30000    |        60000        |    60000    | float |
    INFO - 13:12:01:          | x_shared[2] |     1.4     |         1.4         |     1.8     | float |
    INFO - 13:12:01:          | x_shared[3] |     2.5     |         2.5         |     8.5     | float |
    INFO - 13:12:01:          | x_shared[4] |      40     |          70         |      70     | float |
    INFO - 13:12:01:          | x_shared[5] |     500     |         1500        |     1500    | float |
    INFO - 13:12:01:          | x_1[0]      |     0.1     |  0.3999999322608766 |     0.4     | float |
    INFO - 13:12:01:          | x_1[1]      |     0.75    |         0.75        |     1.25    | float |
    INFO - 13:12:01:          | x_2         |     0.75    |         0.75        |     1.25    | float |
    INFO - 13:12:01:          | x_3         |     0.1     |  0.1562438752833519 |      1      | float |
    INFO - 13:12:01:          +-------------+-------------+---------------------+-------------+-------+
    INFO - 13:12:01: *** End MDOScenario execution (time: 0:00:01.027900) ***

{'max_iter': 10, 'algo': 'SLSQP'}

Post-process scenario

Lastly, we post-process the scenario by means of the QuadApprox plot, which performs a quadratic approximation of a given function from an optimization history and plots the results as cuts of the approximation.

Tip

Each post-processing method requires different inputs and offers a variety of customization options. Use the high-level function get_post_processing_options_schema() to print a table with the options for any post-processing algorithm. Or refer to our dedicated page: Post-processing algorithms.

The first plot shows an approximation of the Hessian matrix \(\frac{\partial^2 f}{\partial x_i \partial x_j}\) based on the Symmetric Rank 1 (SR1) method [NW06]. The color map uses a symmetric logarithmic (symlog) scale. This plot shows the cross influence of the design variables on the objective function or constraints. For instance, in this figure, the maximal second-order sensitivity is \(\frac{\partial^2 (-y_4)}{\partial x_0^2} = 2\cdot 10^5\), which means that \(x_0\) is the most influential variable. The cross derivative \(\frac{\partial^2 (-y_4)}{\partial x_0 \partial x_2} = 5\cdot 10^4\) is smaller than this diagonal term, but it indicates that the combined effect of \(x_0\) and \(x_2\) on the objective is non-negligible.

scenario.post_process("QuadApprox", function="-y_4", save=False, show=True)
[Figures: "Hessian matrix SR1 approximation of -y_4"; quadratic approximation cuts]
<gemseo.post.quad_approx.QuadApprox object at 0x7f6b9bcf5a90>
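As a rough, self-contained sketch of the SR1 update mentioned above (assumed helper names, not GEMSEO's internal code): the approximation is built from secant pairs, i.e. steps between iterates and the corresponding gradient differences.

```python
import numpy as np


def sr1_update(B, s, y, eps=1e-8):
    """One SR1 update of the Hessian approximation B.

    s: step between two iterates; y: difference of gradients.
    The update is skipped when the denominator is too small,
    a standard safeguard for SR1.
    """
    r = y - B @ s
    denom = r @ s
    if abs(denom) < eps * np.linalg.norm(r) * np.linalg.norm(s):
        return B
    return B + np.outer(r, r) / denom


# For a quadratic function with Hessian H, the secant pairs satisfy
# y = H s, and SR1 recovers H after n linearly independent steps.
H = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.eye(2)
for s in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    B = sr1_update(B, s, H @ s)
print(np.allclose(B, H))  # True
```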

The second plot represents the quadratic approximation of the objective around the optimal solution: \(a_{i}(t)=0.5 (t-x^*_i)^2 \frac{\partial^2 f}{\partial x_i^2} + (t-x^*_i) \frac{\partial f}{\partial x_i} + f(x^*)\), where \(x^*\) is the optimal solution. This approximation highlights the sensitivity of the objective function with respect to the design variables: we notice that the design variables \(x_1, x_5, x_6\) have little influence, whereas \(x_0, x_2, x_9\) have a huge influence on the objective. This trend is also visible in the diagonal terms of the Hessian matrix \(\frac{\partial^2 f}{\partial x_i^2}\).
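The one-dimensional cuts \(a_i(t)\) above are straightforward to evaluate; a small illustrative helper (assumed names and values, not part of the GEMSEO API) built from the value, gradient and Hessian diagonal at the optimum:

```python
import numpy as np


def quadratic_cut(t, x_star_i, f_star, grad_i, hess_ii):
    """Evaluate a_i(t), the quadratic cut of f along variable i."""
    dt = t - x_star_i
    return 0.5 * dt**2 * hess_ii + dt * grad_i + f_star


# At the optimum the cut returns f(x*); the diagonal curvature
# hess_ii controls how fast the approximation grows around it.
t = np.linspace(-1.0, 1.0, 5)
a = quadratic_cut(t, x_star_i=0.0, f_star=2.0, grad_i=0.0, hess_ii=4.0)
print(a.tolist())  # [4.0, 2.5, 2.0, 2.5, 4.0]
```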

Total running time of the script: (0 minutes 1.942 seconds)
