
# Optimization History View

In this example, we illustrate the use of the `OptHistoryView` plot on the Sobieski's SSBJ problem.

```
from __future__ import annotations
from gemseo import configure_logger
from gemseo import create_discipline
from gemseo import create_scenario
from gemseo.problems.sobieski.core.problem import SobieskiProblem
```

## Import

The first step is to import some high-level functions and a method to get the design space, then configure the logger.

```
configure_logger()
```

```
<RootLogger root (INFO)>
```

## Description

The **OptHistoryView** post-processing creates a series of plots:

- The design variables history - This graph shows the normalized values of the design variables; the \(y\) axis is the index of the inputs in the vector and the \(x\) axis represents the iterations.
- The objective function history - It shows the evolution of the objective value during the optimization.
- The distance to the best design variables - Plots the vector \(\log(\|x-x^*\|)\) in log scale.
- The history of the Hessian approximation of the objective - Plots an approximation of the second-order derivatives of the objective function \(\frac{\partial^2 f(x)}{\partial x^2}\), which is a measure of the sensitivity of the function with respect to the design variables, and of the anisotropy of the problem (differences of curvatures in the design space).
- The inequality constraint history - Portrays the evolution of the values of the constraints. The inequality constraints must be non-positive, which is why the plot must be green or white for satisfied constraints (white = active, red = violated). For an IDF formulation, an additional plot is created to track the equality constraint history.
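As a minimal illustration of the distance-to-optimum curve, the quantity \(\log(\|x-x^*\|)\) can be computed by hand for a hypothetical two-variable iteration trace (illustrative values only, not taken from the Sobieski problem):

```python
import math

# Hypothetical iteration history of a two-variable design vector.
history = [[0.5, 0.5], [0.3, 0.8], [0.21, 0.99]]
x_star = [0.2, 1.0]  # assumed best design found

# log10 of the Euclidean distance to the best design, one value per
# iteration: this is what the distance-to-optimum plot shows on a log scale.
distances = [math.log10(math.dist(x, x_star)) for x in history]

assert distances[0] > distances[-1]  # the trace converges toward x*
```

The curve decreases as the optimizer approaches the best design, which is why a log scale is used: it makes the final orders of magnitude of convergence visible.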

## Create disciplines

At this point, we instantiate the disciplines of Sobieski's SSBJ problem: Propulsion, Aerodynamics, Structure and Mission.

```
disciplines = create_discipline(
    [
        "SobieskiPropulsion",
        "SobieskiAerodynamics",
        "SobieskiStructure",
        "SobieskiMission",
    ]
)
```

## Create design space

We also read the design space from the `SobieskiProblem`.

```
design_space = SobieskiProblem().design_space
```
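The design-variables history plot described earlier shows values normalized by their bounds. A minimal sketch of that normalization (a standalone min-max formula, not GEMSEO code), using the `x_shared[0]` bounds from this design space (lower 0.01, initial value 0.05, upper 0.09):

```python
def normalize(value, lower, upper):
    """Map a design value to [0, 1] using its bounds (min-max normalization)."""
    return (value - lower) / (upper - lower)

# x_shared[0] from the Sobieski design space: lower 0.01, value 0.05, upper 0.09,
# so the initial point sits exactly in the middle of its interval.
assert abs(normalize(0.05, 0.01, 0.09) - 0.5) < 1e-9
```

Normalizing this way lets variables with very different scales (e.g. 0.01-0.09 versus 30000-60000) share one plot.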

## Create and execute scenario

The next step is to build an MDO scenario in order to maximize the range, encoded `y_4`, with respect to the design parameters, while satisfying the inequality constraints `g_1`, `g_2` and `g_3`. We use the MDF formulation, the SLSQP optimization algorithm and a maximum number of iterations equal to 100.

```
scenario = create_scenario(
    disciplines,
    formulation="MDF",
    objective_name="y_4",
    maximize_objective=True,
    design_space=design_space,
)
scenario.set_differentiation_method()
for constraint in ["g_1", "g_2", "g_3"]:
    scenario.add_constraint(constraint, "ineq")
scenario.execute({"algo": "SLSQP", "max_iter": 100})
```

```
INFO - 08:27:44:
INFO - 08:27:44: *** Start MDOScenario execution ***
INFO - 08:27:44: MDOScenario
INFO - 08:27:44: Disciplines: SobieskiAerodynamics SobieskiMission SobieskiPropulsion SobieskiStructure
INFO - 08:27:44: MDO formulation: MDF
INFO - 08:27:44: Optimization problem:
INFO - 08:27:44: minimize -y_4(x_shared, x_1, x_2, x_3)
INFO - 08:27:44: with respect to x_1, x_2, x_3, x_shared
INFO - 08:27:44: subject to constraints:
INFO - 08:27:44: g_1(x_shared, x_1, x_2, x_3) <= 0.0
INFO - 08:27:44: g_2(x_shared, x_1, x_2, x_3) <= 0.0
INFO - 08:27:44: g_3(x_shared, x_1, x_2, x_3) <= 0.0
INFO - 08:27:44: over the design space:
INFO - 08:27:44: +-------------+-------------+-------+-------------+-------+
INFO - 08:27:44: | name | lower_bound | value | upper_bound | type |
INFO - 08:27:44: +-------------+-------------+-------+-------------+-------+
INFO - 08:27:44: | x_shared[0] | 0.01 | 0.05 | 0.09 | float |
INFO - 08:27:44: | x_shared[1] | 30000 | 45000 | 60000 | float |
INFO - 08:27:44: | x_shared[2] | 1.4 | 1.6 | 1.8 | float |
INFO - 08:27:44: | x_shared[3] | 2.5 | 5.5 | 8.5 | float |
INFO - 08:27:44: | x_shared[4] | 40 | 55 | 70 | float |
INFO - 08:27:44: | x_shared[5] | 500 | 1000 | 1500 | float |
INFO - 08:27:44: | x_1[0] | 0.1 | 0.25 | 0.4 | float |
INFO - 08:27:44: | x_1[1] | 0.75 | 1 | 1.25 | float |
INFO - 08:27:44: | x_2 | 0.75 | 1 | 1.25 | float |
INFO - 08:27:44: | x_3 | 0.1 | 0.5 | 1 | float |
INFO - 08:27:44: +-------------+-------------+-------+-------------+-------+
INFO - 08:27:44: Solving optimization problem with algorithm SLSQP:
INFO - 08:27:44: ... 0%| | 0/100 [00:00<?, ?it]
INFO - 08:27:44: ... 1%| | 1/100 [00:00<00:09, 10.16 it/sec, obj=-536]
INFO - 08:27:44: ... 2%|▏ | 2/100 [00:00<00:13, 7.49 it/sec, obj=-2.12e+3]
WARNING - 08:27:45: MDAJacobi has reached its maximum number of iterations but the normed residual 1.7130677857005655e-05 is still above the tolerance 1e-06.
INFO - 08:27:45: ... 3%|▎ | 3/100 [00:00<00:15, 6.28 it/sec, obj=-3.75e+3]
INFO - 08:27:45: ... 4%|▍ | 4/100 [00:00<00:16, 6.00 it/sec, obj=-3.96e+3]
INFO - 08:27:45: ... 5%|▌ | 5/100 [00:00<00:16, 5.84 it/sec, obj=-3.96e+3]
INFO - 08:27:45: Optimization result:
INFO - 08:27:45: Optimizer info:
INFO - 08:27:45: Status: 8
INFO - 08:27:45: Message: Positive directional derivative for linesearch
INFO - 08:27:45: Number of calls to the objective function by the optimizer: 6
INFO - 08:27:45: Solution:
INFO - 08:27:45: The solution is feasible.
INFO - 08:27:45: Objective: -3963.408265187933
INFO - 08:27:45: Standardized constraints:
INFO - 08:27:45: g_1 = [-0.01806104 -0.03334642 -0.04424946 -0.0518346 -0.05732607 -0.13720865
INFO - 08:27:45: -0.10279135]
INFO - 08:27:45: g_2 = 3.333278582928756e-06
INFO - 08:27:45: g_3 = [-7.67181773e-01 -2.32818227e-01 8.30379541e-07 -1.83255000e-01]
INFO - 08:27:45: Design space:
INFO - 08:27:45: +-------------+-------------+---------------------+-------------+-------+
INFO - 08:27:45: | name | lower_bound | value | upper_bound | type |
INFO - 08:27:45: +-------------+-------------+---------------------+-------------+-------+
INFO - 08:27:45: | x_shared[0] | 0.01 | 0.06000083331964572 | 0.09 | float |
INFO - 08:27:45: | x_shared[1] | 30000 | 60000 | 60000 | float |
INFO - 08:27:45: | x_shared[2] | 1.4 | 1.4 | 1.8 | float |
INFO - 08:27:45: | x_shared[3] | 2.5 | 2.5 | 8.5 | float |
INFO - 08:27:45: | x_shared[4] | 40 | 70 | 70 | float |
INFO - 08:27:45: | x_shared[5] | 500 | 1500 | 1500 | float |
INFO - 08:27:45: | x_1[0] | 0.1 | 0.4 | 0.4 | float |
INFO - 08:27:45: | x_1[1] | 0.75 | 0.75 | 1.25 | float |
INFO - 08:27:45: | x_2 | 0.75 | 0.75 | 1.25 | float |
INFO - 08:27:45: | x_3 | 0.1 | 0.1562448753887276 | 1 | float |
INFO - 08:27:45: +-------------+-------------+---------------------+-------------+-------+
INFO - 08:27:45: *** End MDOScenario execution (time: 0:00:00.972598) ***
{'max_iter': 100, 'algo': 'SLSQP'}
```
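Note that the log reports a feasible solution even though `g_2` is slightly positive; feasibility is judged against a small tolerance rather than a strict `g <= 0` test. A minimal sketch of such a check, using constraint values copied from the log above and a hypothetical tolerance value (GEMSEO exposes a similar setting, but the exact default used here is an assumption):

```python
# Constraint values copied from the optimization log above.
g_2 = 3.333278582928756e-06
g_3 = [-7.67181773e-01, -2.32818227e-01, 8.30379541e-07, -1.83255000e-01]

# Hypothetical feasibility tolerance; the real solver setting may differ.
ineq_tolerance = 1e-4

# An inequality constraint g <= 0 is considered satisfied when g <= tolerance,
# so tiny positive residuals such as g_2 above do not flag infeasibility.
feasible = g_2 <= ineq_tolerance and all(g <= ineq_tolerance for g in g_3)
assert feasible  # consistent with "The solution is feasible" in the log
```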

## Post-process scenario

Lastly, we post-process the scenario by means of the `OptHistoryView` plot, which displays the histories of the objective function, the constraints, the design parameters and the distance to the optimum.

Tip

Each post-processing method requires different inputs and offers a variety of customization options. Use the high-level function `get_post_processing_options_schema()` to print a table with the options for any post-processing algorithm, or refer to our dedicated page: Post-processing algorithms.

```
scenario.post_process(
    "OptHistoryView", save=False, show=True, variable_names=["x_2", "x_1"]
)
```

```
<gemseo.post.opt_history_view.OptHistoryView object at 0x7f0cb47c2be0>
```

**Total running time of the script:** (0 minutes 2.040 seconds)