Gradient Sensitivity
In this example, we illustrate the use of the GradientSensitivity
plot on the Sobieski’s SSBJ problem.
from __future__ import annotations
from gemseo import configure_logger
from gemseo import create_discipline
from gemseo import create_scenario
from gemseo.problems.sobieski.core.design_space import SobieskiDesignSpace
Import
The first step is to import some high-level functions and a method to get the design space.
configure_logger()
<RootLogger root (INFO)>
Description
The GradientSensitivity post-processor builds histograms of the derivatives of the objective and the constraints.
Create disciplines
At this point, we instantiate the disciplines of Sobieski's SSBJ problem: Propulsion, Aerodynamics, Structure and Mission.
disciplines = create_discipline([
    "SobieskiPropulsion",
    "SobieskiAerodynamics",
    "SobieskiStructure",
    "SobieskiMission",
])
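Before assembling these disciplines into a scenario, it can be helpful to check what each one consumes and produces. A minimal sketch, assuming the get_input_data_names() and get_output_data_names() methods of MDODiscipline:

# Assumed API: list the input and output variable names of each discipline.
for discipline in disciplines:
    print(discipline.name)
    print("  inputs:", discipline.get_input_data_names())
    print("  outputs:", discipline.get_output_data_names())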
Create design space
We also create the SobieskiDesignSpace.
design_space = SobieskiDesignSpace()
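As a quick sanity check, the design variables, their bounds and their default values can be displayed before building the scenario; printing a DesignSpace renders it as a table similar to the one appearing in the optimization log below:

print(design_space)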
Create and execute scenario
The next step is to build an MDO scenario in order to maximize the range, encoded "y_4", with respect to the design parameters, while satisfying the inequality constraints "g_1", "g_2" and "g_3". We use the MDF formulation, the SLSQP optimization algorithm and a maximum number of iterations equal to 10.
scenario = create_scenario(
    disciplines,
    "MDF",
    "y_4",
    design_space,
    maximize_objective=True,
)
The differentiation method used by default is "user", which means that the gradient will be evaluated from the Jacobian defined in each discipline. However, some disciplines may not provide one; in that case, the gradient may be approximated with the techniques "finite_differences" or "complex_step" using the method set_differentiation_method(). The following line is shown as an example; it has no effect because it does not change the default method.
scenario.set_differentiation_method()
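# A hedged sketch (commented out, not part of this example): if a discipline
# did not provide an analytic Jacobian, the gradient could be approximated
# instead, e.g. with finite differences:
# scenario.set_differentiation_method("finite_differences")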
for constraint in ["g_1", "g_2", "g_3"]:
    scenario.add_constraint(constraint, constraint_type="ineq")
scenario.execute({"algo": "SLSQP", "max_iter": 10})
INFO - 09:03:14:
INFO - 09:03:14: *** Start MDOScenario execution ***
INFO - 09:03:14: MDOScenario
INFO - 09:03:14: Disciplines: SobieskiAerodynamics SobieskiMission SobieskiPropulsion SobieskiStructure
INFO - 09:03:14: MDO formulation: MDF
INFO - 09:03:14: Optimization problem:
INFO - 09:03:14: minimize -y_4(x_shared, x_1, x_2, x_3)
INFO - 09:03:14: with respect to x_1, x_2, x_3, x_shared
INFO - 09:03:14: subject to constraints:
INFO - 09:03:14: g_1(x_shared, x_1, x_2, x_3) <= 0.0
INFO - 09:03:14: g_2(x_shared, x_1, x_2, x_3) <= 0.0
INFO - 09:03:14: g_3(x_shared, x_1, x_2, x_3) <= 0.0
INFO - 09:03:14: over the design space:
INFO - 09:03:14: +-------------+-------------+-------+-------------+-------+
INFO - 09:03:14: | Name | Lower bound | Value | Upper bound | Type |
INFO - 09:03:14: +-------------+-------------+-------+-------------+-------+
INFO - 09:03:14: | x_shared[0] | 0.01 | 0.05 | 0.09 | float |
INFO - 09:03:14: | x_shared[1] | 30000 | 45000 | 60000 | float |
INFO - 09:03:14: | x_shared[2] | 1.4 | 1.6 | 1.8 | float |
INFO - 09:03:14: | x_shared[3] | 2.5 | 5.5 | 8.5 | float |
INFO - 09:03:14: | x_shared[4] | 40 | 55 | 70 | float |
INFO - 09:03:14: | x_shared[5] | 500 | 1000 | 1500 | float |
INFO - 09:03:14: | x_1[0] | 0.1 | 0.25 | 0.4 | float |
INFO - 09:03:14: | x_1[1] | 0.75 | 1 | 1.25 | float |
INFO - 09:03:14: | x_2 | 0.75 | 1 | 1.25 | float |
INFO - 09:03:14: | x_3 | 0.1 | 0.5 | 1 | float |
INFO - 09:03:14: +-------------+-------------+-------+-------------+-------+
INFO - 09:03:14: Solving optimization problem with algorithm SLSQP:
INFO - 09:03:15: 10%|█ | 1/10 [00:00<00:00, 9.06 it/sec, obj=-536]
INFO - 09:03:15: 20%|██ | 2/10 [00:00<00:01, 6.41 it/sec, obj=-2.12e+3]
WARNING - 09:03:15: MDAJacobi has reached its maximum number of iterations but the normed residual 1.7130677857005655e-05 is still above the tolerance 1e-06.
INFO - 09:03:15: 30%|███ | 3/10 [00:00<00:01, 5.41 it/sec, obj=-3.75e+3]
INFO - 09:03:15: 40%|████ | 4/10 [00:00<00:01, 5.17 it/sec, obj=-3.96e+3]
INFO - 09:03:15: 50%|█████ | 5/10 [00:00<00:00, 5.04 it/sec, obj=-3.96e+3]
INFO - 09:03:16: Optimization result:
INFO - 09:03:16: Optimizer info:
INFO - 09:03:16: Status: 8
INFO - 09:03:16: Message: Positive directional derivative for linesearch
INFO - 09:03:16: Number of calls to the objective function by the optimizer: 6
INFO - 09:03:16: Solution:
INFO - 09:03:16: The solution is feasible.
INFO - 09:03:16: Objective: -3963.408265187933
INFO - 09:03:16: Standardized constraints:
INFO - 09:03:16: g_1 = [-0.01806104 -0.03334642 -0.04424946 -0.0518346 -0.05732607 -0.13720865
INFO - 09:03:16: -0.10279135]
INFO - 09:03:16: g_2 = 3.333278582928756e-06
INFO - 09:03:16: g_3 = [-7.67181773e-01 -2.32818227e-01 8.30379541e-07 -1.83255000e-01]
INFO - 09:03:16: Design space:
INFO - 09:03:16: +-------------+-------------+---------------------+-------------+-------+
INFO - 09:03:16: | Name | Lower bound | Value | Upper bound | Type |
INFO - 09:03:16: +-------------+-------------+---------------------+-------------+-------+
INFO - 09:03:16: | x_shared[0] | 0.01 | 0.06000083331964572 | 0.09 | float |
INFO - 09:03:16: | x_shared[1] | 30000 | 60000 | 60000 | float |
INFO - 09:03:16: | x_shared[2] | 1.4 | 1.4 | 1.8 | float |
INFO - 09:03:16: | x_shared[3] | 2.5 | 2.5 | 8.5 | float |
INFO - 09:03:16: | x_shared[4] | 40 | 70 | 70 | float |
INFO - 09:03:16: | x_shared[5] | 500 | 1500 | 1500 | float |
INFO - 09:03:16: | x_1[0] | 0.1 | 0.4 | 0.4 | float |
INFO - 09:03:16: | x_1[1] | 0.75 | 0.75 | 1.25 | float |
INFO - 09:03:16: | x_2 | 0.75 | 0.75 | 1.25 | float |
INFO - 09:03:16: | x_3 | 0.1 | 0.1562448753887276 | 1 | float |
INFO - 09:03:16: +-------------+-------------+---------------------+-------------+-------+
INFO - 09:03:16: *** End MDOScenario execution (time: 0:00:01.134051) ***
{'max_iter': 10, 'algo': 'SLSQP'}
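Rather than reading the log, the solution can also be retrieved programmatically. A minimal sketch, assuming the get_optimum() method of the scenario and the f_opt and x_opt attributes of the returned optimization result:

# Assumed API: get_optimum() returns the optimization result.
optimum = scenario.get_optimum()
print(optimum.f_opt)  # objective value at the solution
print(optimum.x_opt)  # design variable values at the solution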
Post-process scenario
Lastly, we post-process the scenario by means of the GradientSensitivity post-processor, which builds histograms of the derivatives of the objective and the constraints. The sensitivities shown in the plot are computed with the gradient at the optimum, or at the least-infeasible point when the result is not feasible. One may choose any other iteration instead.
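For instance, a minimal sketch of plotting the sensitivities at the second iteration, assuming the iteration option of GradientSensitivity (shown for illustration, not executed here):

scenario.post_process(
    "GradientSensitivity",
    iteration=2,  # assumed option: the iteration whose gradients are plotted
    save=False,
    show=True,
)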
Note
In some cases, the iteration used to compute the sensitivities corresponds to a point for which the algorithm did not request the evaluation of the gradients, and a ValueError is raised. A way to avoid this issue is to set the option compute_missing_gradients of GradientSensitivity to True; this way, GEMSEO will compute the gradients for the requested iteration if they are not available.
Warning
Please note that this extra computation may be expensive, depending on the OptimizationProblem defined by the user. Additionally, keep in mind that GEMSEO cannot compute missing gradients for an OptimizationProblem that was imported from an HDF5 file.
Tip
Each post-processing method requires different inputs and offers a variety of customization options. Use the high-level function get_post_processing_options_schema() to print a table with the options for any post-processing algorithm. Or refer to our dedicated page: Post-processing algorithms.
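For example, a minimal sketch of printing the available options of GradientSensitivity, assuming the pretty_print argument of get_post_processing_options_schema():

from gemseo import get_post_processing_options_schema

# Assumed usage: print the options of the GradientSensitivity post-processor.
get_post_processing_options_schema("GradientSensitivity", pretty_print=True)

We now post-process our scenario: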
scenario.post_process(
    "GradientSensitivity",
    compute_missing_gradients=True,
    save=False,
    show=True,
)
<gemseo.post.gradient_sensitivity.GradientSensitivity object at 0x7f8adc33e520>
Total running time of the script: (0 minutes 2.170 seconds)