Create an MDO Scenario

from __future__ import annotations

from gemseo import configure_logger
from gemseo import create_design_space
from gemseo import create_discipline
from gemseo import create_scenario
from gemseo import get_available_opt_algorithms
from gemseo import get_available_post_processings
from numpy import ones

configure_logger()
<RootLogger root (INFO)>

Let \((P)\) be a simple optimization problem:

\[\begin{split}(P) = \left\{ \begin{aligned} & \underset{x}{\text{minimize}} & & f(x) = \sin(x) - \exp(x) \\ & \text{subject to} & & -2 \leq x \leq 2 \end{aligned} \right.\end{split}\]

In this subsection, we will see how to use GEMSEO to solve this problem \((P)\) by means of an optimization algorithm.
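Since \(f\) is smooth, any local minimizer lying strictly inside \([-2, 2]\) must satisfy the first-order optimality condition

\[f'(x) = \cos(x) - \exp(x) = 0.\]

We will use this condition at the end of this subsection to check the solution returned by the optimizer.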

Define the discipline

Firstly, by means of the high-level function create_discipline(), we create an MDODiscipline of AnalyticDiscipline type from an analytic expression given as a string:

expressions = {"y": "sin(x)-exp(x)"}
discipline = create_discipline("AnalyticDiscipline", expressions=expressions)

We can quickly access the most relevant information about any discipline (name, inputs, and outputs) with Python's print() function. Moreover, we can get the default input values of a discipline with the MDODiscipline.default_inputs attribute:

print(discipline)
print(f"Default inputs: {discipline.default_inputs}")
AnalyticDiscipline
Default inputs: {'x': array([0.])}

Now we want to minimize this MDODiscipline over a design space, by means of a quasi-Newton method starting from the initial point \(-0.5\).

Define the design space

For that, by means of the high-level function create_design_space(), we define the DesignSpace \([-2, 2]\) with initial value \(-0.5\), using its DesignSpace.add_variable() method.

design_space = create_design_space()
design_space.add_variable("x", l_b=-2.0, u_b=2.0, value=-0.5 * ones(1))
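
As with the discipline, Python's print() function gives a quick summary of the design space, displaying each variable with its bounds and current value:

print(design_space)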

Define the MDO scenario

Then, by means of the create_scenario() API function, we define an MDOScenario from the MDODiscipline and the DesignSpace defined above:

scenario = create_scenario(discipline, "DisciplinaryOpt", "y", design_space)

What about the differentiation method?

The AnalyticDiscipline automatically differentiates its expressions to obtain the Jacobian matrices, so there is no need to define a differentiation method in this case. Keep in mind that, for a generic discipline with no Jacobian function defined, you can use the Scenario.set_differentiation_method() method to set up a numerical approximation of the gradients, e.g.:

scenario.set_differentiation_method("finite_differences")
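
Conversely, when a discipline does provide an analytic Jacobian, you can verify it against a finite-difference reference with MDODiscipline.check_jacobian(). A minimal sketch, relying on its default finite-difference settings (the exact signature may vary between GEMSEO versions):

# Compare the analytic Jacobian with a finite-difference approximation;
# check_jacobian() returns True if both match within a tolerance.
assert discipline.check_jacobian(input_data={"x": ones(1)})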

Execute the MDO scenario

Lastly, we solve the OptimizationProblem included in the MDOScenario defined above by minimizing the objective function over the DesignSpace. Specifically, we choose the L-BFGS-B algorithm, implemented in the function scipy.optimize.fmin_l_bfgs_b.

scenario.execute({"algo": "L-BFGS-B", "max_iter": 100})
    INFO - 16:25:50:
    INFO - 16:25:50: *** Start MDOScenario execution ***
    INFO - 16:25:50: MDOScenario
    INFO - 16:25:50:    Disciplines: AnalyticDiscipline
    INFO - 16:25:50:    MDO formulation: DisciplinaryOpt
    INFO - 16:25:50: Optimization problem:
    INFO - 16:25:50:    minimize y(x)
    INFO - 16:25:50:    with respect to x
    INFO - 16:25:50:    over the design space:
    INFO - 16:25:50:    +------+-------------+-------+-------------+-------+
    INFO - 16:25:50:    | name | lower_bound | value | upper_bound | type  |
    INFO - 16:25:50:    +------+-------------+-------+-------------+-------+
    INFO - 16:25:50:    | x    |      -2     |  -0.5 |      2      | float |
    INFO - 16:25:50:    +------+-------------+-------+-------------+-------+
    INFO - 16:25:50: Solving optimization problem with algorithm L-BFGS-B:
    INFO - 16:25:50: ...   0%|          | 0/100 [00:00<?, ?it]
    INFO - 16:25:50: ...   1%|          | 1/100 [00:00<00:00, 359.62 it/sec, obj=-1.09]
    INFO - 16:25:50: ...   2%|▏         | 2/100 [00:00<00:00, 441.83 it/sec, obj=-1.04]
    INFO - 16:25:50: ...   3%|▎         | 3/100 [00:00<00:00, 526.33 it/sec, obj=-1.24]
    INFO - 16:25:50: ...   4%|▍         | 4/100 [00:00<00:00, 543.85 it/sec, obj=-1.23]
    INFO - 16:25:50: ...   5%|▌         | 5/100 [00:00<00:00, 545.22 it/sec, obj=-1.24]
    INFO - 16:25:50: ...   6%|▌         | 6/100 [00:00<00:00, 521.07 it/sec, obj=-1.24]
    INFO - 16:25:50: ...   7%|▋         | 7/100 [00:00<00:00, 506.19 it/sec, obj=-1.24]
    INFO - 16:25:50: Optimization result:
    INFO - 16:25:50:    Optimizer info:
    INFO - 16:25:50:       Status: 0
    INFO - 16:25:50:       Message: CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL
    INFO - 16:25:50:       Number of calls to the objective function by the optimizer: 8
    INFO - 16:25:50:    Solution:
    INFO - 16:25:50:       Objective: -1.236108341859242
    INFO - 16:25:50:       Design space:
    INFO - 16:25:50:       +------+-------------+--------------------+-------------+-------+
    INFO - 16:25:50:       | name | lower_bound |       value        | upper_bound | type  |
    INFO - 16:25:50:       +------+-------------+--------------------+-------------+-------+
    INFO - 16:25:50:       | x    |      -2     | -1.292695718944152 |      2      | float |
    INFO - 16:25:50:       +------+-------------+--------------------+-------------+-------+
    INFO - 16:25:50: *** End MDOScenario execution (time: 0:00:00.027434) ***

{'max_iter': 100, 'algo': 'L-BFGS-B'}

The optimization results can be found in the execution log. It is also possible to access them with the Scenario.optimization_result attribute:

optimization_result = scenario.optimization_result
print(
    "The solution of P is "
    f"(x*, f(x*)) = ({optimization_result.x_opt}, {optimization_result.f_opt})"
)
The solution of P is (x*, f(x*)) = ([-1.29269572], -1.236108341859242)
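
As a sanity check, we can verify the first-order optimality condition stated at the beginning of this subsection: the derivative \(f'(x) = \cos(x) - \exp(x)\) should vanish at the returned solution. This only requires NumPy:

from numpy import cos, exp

# f'(x) = cos(x) - exp(x) must be (numerically) zero at a stationary point.
x_opt = optimization_result.x_opt
print(f"f'(x*) = {cos(x_opt) - exp(x_opt)}")  # expected to be close to zero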

See also

You can find the SciPy implementation of the L-BFGS-B algorithm in the documentation of scipy.optimize.fmin_l_bfgs_b.

Available algorithms

In order to get the list of available optimization algorithms, use:

algo_list = get_available_opt_algorithms()
print(f"Available algorithms: {algo_list}")
Available algorithms: ['MMA', 'NLOPT_MMA', 'NLOPT_COBYLA', 'NLOPT_SLSQP', 'NLOPT_BOBYQA', 'NLOPT_BFGS', 'NLOPT_NEWUOA', 'PDFO_COBYLA', 'PDFO_BOBYQA', 'PDFO_NEWUOA', 'PSEVEN', 'PSEVEN_FD', 'PSEVEN_MOM', 'PSEVEN_NCG', 'PSEVEN_NLS', 'PSEVEN_POWELL', 'PSEVEN_QP', 'PSEVEN_SQP', 'PSEVEN_SQ2P', 'PYMOO_GA', 'PYMOO_NSGA2', 'PYMOO_NSGA3', 'PYMOO_UNSGA3', 'PYMOO_RNSGA3', 'DUAL_ANNEALING', 'SHGO', 'DIFFERENTIAL_EVOLUTION', 'LINEAR_INTERIOR_POINT', 'REVISED_SIMPLEX', 'SIMPLEX', 'SLSQP', 'L-BFGS-B', 'TNC', 'SBO']

Available post-processings

In order to get the list of available post-processing algorithms, use:

post_list = get_available_post_processings()
print(f"Available algorithms: {post_list}")
Available algorithms: ['Animation', 'BasicHistory', 'Compromise', 'ConstraintsHistory', 'Correlations', 'GradientSensitivity', 'HighTradeOff', 'MultiObjectiveDiagram', 'ObjConstrHist', 'OptHistoryView', 'ParallelCoordinates', 'ParetoFront', 'Petal', 'QuadApprox', 'Radar', 'RadarChart', 'Robustness', 'SOM', 'ScatterPareto', 'ScatterPlotMatrix', 'TopologyView', 'VariableInfluence']

Export the problem data

After the execution of the scenario, you may want to export your data to use it elsewhere. The Scenario.to_dataset() method allows you to export the results to a Dataset, the basic GEMSEO class for storing data.

dataset = scenario.to_dataset("a_name_for_my_dataset")
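
For instance, you can convert this dataset to a pandas DataFrame for further analysis. A minimal sketch, assuming a GEMSEO version whose Dataset class provides the export_to_dataframe() method (in more recent versions, Dataset directly subclasses pandas.DataFrame):

# Assumption: Dataset.export_to_dataframe() is available in this GEMSEO version.
dataframe = dataset.export_to_dataframe()
print(dataframe)  # design variable and objective values along the iterations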

