Optimization History View

In this example, we illustrate the use of the OptHistoryView plot on the Sobieski’s SSBJ problem.

Import

The first step is to import some high-level functions and a method to get the design space.

from __future__ import annotations

from gemseo import configure_logger
from gemseo import create_discipline
from gemseo import create_scenario
from gemseo.problems.sobieski.core.problem import SobieskiProblem

configure_logger()
<RootLogger root (INFO)>

Description

The OptHistoryView post-processing creates a series of plots:

  • The design variables history - This graph shows the normalized values of the design variables; the \(y\) axis is the index of each input in the design vector and the \(x\) axis represents the iterations.

  • The objective function history - It shows the evolution of the objective value during the optimization.

  • The distance to the best design variables - Plots the distance \(\|x-x^*\|\) to the best design point \(x^*\) on a logarithmic scale.

  • The history of the Hessian approximation of the objective - Plots an approximation of the second-order derivatives of the objective function \(\frac{\partial^2 f(x)}{\partial x^2}\), which is a measure of the sensitivity of the function with respect to the design variables, and of the anisotropy of the problem (differences of curvature in the design space).

  • The inequality constraint history - Portrays the evolution of the values of the constraints. Since the inequality constraints must be non-positive, satisfied constraints appear in green or white (white = active), while violated constraints appear in red. For an IDF formulation, an additional plot is created to track the equality constraint history.
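To make the quantities behind these plots concrete, here is a small GEMSEO-independent NumPy sketch. The iterate history, best point and bounds below are made-up illustration data, not values from the Sobieski problem; the sketch computes the normalized design-variable values shown in the first plot and the distances to the best design point drawn on a log scale in the third plot:

```python
import numpy as np

# Hypothetical optimization history: one design vector per iteration
# (purely illustrative data, two design variables).
x_history = np.array([[0.05, 45000.0], [0.07, 52000.0], [0.06, 60000.0]])
x_best = np.array([0.06, 60000.0])  # best design point found
lower = np.array([0.01, 30000.0])   # lower bounds of the design space
upper = np.array([0.09, 60000.0])   # upper bounds of the design space

# First plot: each design variable is normalized to [0, 1] using its bounds.
normalized = (x_history - lower) / (upper - lower)

# Third plot: Euclidean distance of each iterate to the best design point;
# the post-processing then draws these values on a logarithmic axis.
distances = np.linalg.norm(x_history - x_best, axis=1)
```

The normalization makes variables of very different magnitudes (here, a ratio near 0.05 and an altitude near 45000) comparable on a single color map.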

Create disciplines

At this point, we instantiate the disciplines of Sobieski’s SSBJ problem: Propulsion, Aerodynamics, Structure and Mission.

disciplines = create_discipline(
    [
        "SobieskiPropulsion",
        "SobieskiAerodynamics",
        "SobieskiStructure",
        "SobieskiMission",
    ]
)

Create design space

We also read the design space from the SobieskiProblem.

design_space = SobieskiProblem().design_space

Create and execute scenario

The next step is to build an MDO scenario in order to maximize the range, encoded as ‘y_4’, with respect to the design parameters, while satisfying the inequality constraints ‘g_1’, ‘g_2’ and ‘g_3’. We use the MDF formulation, the SLSQP optimization algorithm and a maximum of 100 iterations.
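As the execution log reports, maximization is handled internally by minimizing the negated objective, so the problem actually solved is:

\[
\begin{aligned}
\min_{x_1,\,x_2,\,x_3,\,x_{\mathrm{shared}}} \quad & -y_4(x_{\mathrm{shared}}, x_1, x_2, x_3) \\
\text{subject to} \quad & g_i(x_{\mathrm{shared}}, x_1, x_2, x_3) \le 0, \quad i = 1, 2, 3,
\end{aligned}
\]

with the design variables constrained to the bounds of the design space.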

scenario = create_scenario(
    disciplines,
    formulation="MDF",
    objective_name="y_4",
    maximize_objective=True,
    design_space=design_space,
)
scenario.set_differentiation_method()
for constraint in ["g_1", "g_2", "g_3"]:
    scenario.add_constraint(constraint, "ineq")
scenario.execute({"algo": "SLSQP", "max_iter": 100})
    INFO - 13:56:04:
    INFO - 13:56:04: *** Start MDOScenario execution ***
    INFO - 13:56:04: MDOScenario
    INFO - 13:56:04:    Disciplines: SobieskiAerodynamics SobieskiMission SobieskiPropulsion SobieskiStructure
    INFO - 13:56:04:    MDO formulation: MDF
    INFO - 13:56:04: Optimization problem:
    INFO - 13:56:04:    minimize -y_4(x_shared, x_1, x_2, x_3)
    INFO - 13:56:04:    with respect to x_1, x_2, x_3, x_shared
    INFO - 13:56:04:    subject to constraints:
    INFO - 13:56:04:       g_1(x_shared, x_1, x_2, x_3) <= 0.0
    INFO - 13:56:04:       g_2(x_shared, x_1, x_2, x_3) <= 0.0
    INFO - 13:56:04:       g_3(x_shared, x_1, x_2, x_3) <= 0.0
    INFO - 13:56:04:    over the design space:
    INFO - 13:56:04:    +-------------+-------------+-------+-------------+-------+
    INFO - 13:56:04:    | name        | lower_bound | value | upper_bound | type  |
    INFO - 13:56:04:    +-------------+-------------+-------+-------------+-------+
    INFO - 13:56:04:    | x_shared[0] |     0.01    |  0.05 |     0.09    | float |
    INFO - 13:56:04:    | x_shared[1] |    30000    | 45000 |    60000    | float |
    INFO - 13:56:04:    | x_shared[2] |     1.4     |  1.6  |     1.8     | float |
    INFO - 13:56:04:    | x_shared[3] |     2.5     |  5.5  |     8.5     | float |
    INFO - 13:56:04:    | x_shared[4] |      40     |   55  |      70     | float |
    INFO - 13:56:04:    | x_shared[5] |     500     |  1000 |     1500    | float |
    INFO - 13:56:04:    | x_1[0]      |     0.1     |  0.25 |     0.4     | float |
    INFO - 13:56:04:    | x_1[1]      |     0.75    |   1   |     1.25    | float |
    INFO - 13:56:04:    | x_2         |     0.75    |   1   |     1.25    | float |
    INFO - 13:56:04:    | x_3         |     0.1     |  0.5  |      1      | float |
    INFO - 13:56:04:    +-------------+-------------+-------+-------------+-------+
    INFO - 13:56:04: Solving optimization problem with algorithm SLSQP:
    INFO - 13:56:04: ...   0%|          | 0/100 [00:00<?, ?it]
    INFO - 13:56:04: ...   1%|          | 1/100 [00:00<00:11,  8.82 it/sec, obj=-536]
    INFO - 13:56:04: ...   2%|▏         | 2/100 [00:00<00:15,  6.47 it/sec, obj=-2.12e+3]
 WARNING - 13:56:05: MDAJacobi has reached its maximum number of iterations but the normed residual 1.7130677857005655e-05 is still above the tolerance 1e-06.
    INFO - 13:56:05: ...   3%|▎         | 3/100 [00:00<00:17,  5.48 it/sec, obj=-3.75e+3]
    INFO - 13:56:05: ...   4%|▍         | 4/100 [00:00<00:18,  5.26 it/sec, obj=-3.96e+3]
    INFO - 13:56:05: ...   5%|▌         | 5/100 [00:00<00:18,  5.13 it/sec, obj=-3.96e+3]
    INFO - 13:56:05: Optimization result:
    INFO - 13:56:05:    Optimizer info:
    INFO - 13:56:05:       Status: 8
    INFO - 13:56:05:       Message: Positive directional derivative for linesearch
    INFO - 13:56:05:       Number of calls to the objective function by the optimizer: 6
    INFO - 13:56:05:    Solution:
    INFO - 13:56:05:       The solution is feasible.
    INFO - 13:56:05:       Objective: -3963.408265187933
    INFO - 13:56:05:       Standardized constraints:
    INFO - 13:56:05:          g_1 = [-0.01806104 -0.03334642 -0.04424946 -0.0518346  -0.05732607 -0.13720865
    INFO - 13:56:05:  -0.10279135]
    INFO - 13:56:05:          g_2 = 3.333278582928756e-06
    INFO - 13:56:05:          g_3 = [-7.67181773e-01 -2.32818227e-01  8.30379541e-07 -1.83255000e-01]
    INFO - 13:56:05:       Design space:
    INFO - 13:56:05:       +-------------+-------------+---------------------+-------------+-------+
    INFO - 13:56:05:       | name        | lower_bound |        value        | upper_bound | type  |
    INFO - 13:56:05:       +-------------+-------------+---------------------+-------------+-------+
    INFO - 13:56:05:       | x_shared[0] |     0.01    | 0.06000083331964572 |     0.09    | float |
    INFO - 13:56:05:       | x_shared[1] |    30000    |        60000        |    60000    | float |
    INFO - 13:56:05:       | x_shared[2] |     1.4     |         1.4         |     1.8     | float |
    INFO - 13:56:05:       | x_shared[3] |     2.5     |         2.5         |     8.5     | float |
    INFO - 13:56:05:       | x_shared[4] |      40     |          70         |      70     | float |
    INFO - 13:56:05:       | x_shared[5] |     500     |         1500        |     1500    | float |
    INFO - 13:56:05:       | x_1[0]      |     0.1     |         0.4         |     0.4     | float |
    INFO - 13:56:05:       | x_1[1]      |     0.75    |         0.75        |     1.25    | float |
    INFO - 13:56:05:       | x_2         |     0.75    |         0.75        |     1.25    | float |
    INFO - 13:56:05:       | x_3         |     0.1     |  0.1562448753887276 |      1      | float |
    INFO - 13:56:05:       +-------------+-------------+---------------------+-------------+-------+
    INFO - 13:56:05: *** End MDOScenario execution (time: 0:00:01.110488) ***

{'max_iter': 100, 'algo': 'SLSQP'}

Post-process scenario

Lastly, we post-process the scenario by means of the OptHistoryView plot, which plots the optimization history of the objective function, the constraints, the design variables and the distance to the optimum.

Tip

Each post-processing method requires different inputs and offers a variety of customization options. Use the high-level function get_post_processing_options_schema() to print a table with the options for any post-processing algorithm. Or refer to our dedicated page: Post-processing algorithms.

scenario.post_process(
    "OptHistoryView", save=False, show=True, variable_names=["x_2", "x_1"]
)
  • Evolution of the optimization variables
  • Evolution of the objective value
  • Distance to the optimum
  • Hessian diagonal approximation
  • Evolution of the inequality constraints
<gemseo.post.opt_history_view.OptHistoryView object at 0x7f006c322910>

Total running time of the script: (0 minutes 2.301 seconds)
