Create an MDO Scenario

from gemseo.api import configure_logger
from gemseo.api import create_design_space
from gemseo.api import create_discipline
from gemseo.api import create_scenario
from gemseo.api import get_available_opt_algorithms
from gemseo.api import get_available_post_processings
from numpy import ones

configure_logger()

Out:

<RootLogger root (INFO)>

Let \((P)\) be a simple optimization problem:

\[\begin{split}(P) = \left\{ \begin{aligned} & \underset{x}{\text{minimize}} & & f(x) = \sin(x) - \exp(x) \\ & \text{subject to} & & -2 \leq x \leq 2 \end{aligned} \right.\end{split}\]
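
Since the problem is one-dimensional and its solution turns out to lie strictly inside the bounds, a minimizer \(x^*\) satisfies the stationarity condition

\[f'(x^*) = \cos(x^*) - \exp(x^*) = 0,\]

whose root in \([-2, 2]\) is \(x^* \approx -1.2927\), with \(f(x^*) \approx -1.2361\); this is the value the optimizer is expected to recover below.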

In this subsection, we will see how to use GEMSEO to solve this problem \((P)\) by means of an optimization algorithm.

Define the discipline

First, by means of the create_discipline() API function, we create an MDODiscipline of AnalyticDiscipline type from a dictionary of analytic expressions:

expressions = {"y": "sin(x)-exp(x)"}
discipline = create_discipline("AnalyticDiscipline", expressions=expressions)
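
Before going further, the discipline can optionally be executed on its own as a quick sanity check, by passing the input data as a dictionary of NumPy arrays:

from numpy import array

output_data = discipline.execute({"x": array([0.0])})
print(output_data["y"])  # sin(0) - exp(0) = -1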

We now want to minimize this MDODiscipline over a design space, by means of a quasi-Newton method starting from the initial point \(-0.5\).

Define the design space

For that, by means of the create_design_space() API function, we define the DesignSpace \([-2, 2]\) with initial value \(-0.5\) by using its DesignSpace.add_variable() method.

design_space = create_design_space()
design_space.add_variable("x", 1, l_b=-2.0, u_b=2.0, value=-0.5 * ones(1))

Define the MDO scenario

Then, by means of the create_scenario() API function, we define an MDOScenario from the MDODiscipline and the DesignSpace defined above:

scenario = create_scenario(
    discipline, "DisciplinaryOpt", "y", design_space, scenario_type="MDO"
)

What about the differentiation method?

The AnalyticDiscipline automatically differentiates the expressions to obtain the Jacobian matrices. Therefore, there is no need to define a differentiation method in this case. Keep in mind that for a generic discipline with no defined Jacobian function, you can use the Scenario.set_differentiation_method() method to define a numerical approximation of the gradients, e.g.:

scenario.set_differentiation_method("finite_differences")

Execute the MDO scenario

Lastly, we solve the OptimizationProblem included in the MDOScenario defined above by minimizing the objective function over the DesignSpace. Specifically, we choose the L-BFGS-B algorithm implemented in the function scipy.optimize.fmin_l_bfgs_b.

scenario.execute({"algo": "L-BFGS-B", "max_iter": 100})

Out:

    INFO - 10:05:18:
    INFO - 10:05:18: *** Start MDOScenario execution ***
    INFO - 10:05:18: MDOScenario
    INFO - 10:05:18:    Disciplines: AnalyticDiscipline
    INFO - 10:05:18:    MDO formulation: DisciplinaryOpt
    INFO - 10:05:18: Optimization problem:
    INFO - 10:05:18:    minimize y(x)
    INFO - 10:05:18:    with respect to x
    INFO - 10:05:18:    over the design space:
    INFO - 10:05:18:    +------+-------------+-------+-------------+-------+
    INFO - 10:05:18:    | name | lower_bound | value | upper_bound | type  |
    INFO - 10:05:18:    +------+-------------+-------+-------------+-------+
    INFO - 10:05:18:    | x    |      -2     |  -0.5 |      2      | float |
    INFO - 10:05:18:    +------+-------------+-------+-------------+-------+
    INFO - 10:05:18: Solving optimization problem with algorithm L-BFGS-B:
    INFO - 10:05:18: ...   0%|          | 0/100 [00:00<?, ?it]
    INFO - 10:05:18: ...   7%|▋         | 7/100 [00:00<00:00, 9003.36 it/sec, obj=-1.24]
    INFO - 10:05:18: Optimization result:
    INFO - 10:05:18:    Optimizer info:
    INFO - 10:05:18:       Status: 0
    INFO - 10:05:18:       Message: CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL
    INFO - 10:05:18:       Number of calls to the objective function by the optimizer: 8
    INFO - 10:05:18:    Solution:
    INFO - 10:05:18:       Objective: -1.236108341859242
    INFO - 10:05:18:       Design space:
    INFO - 10:05:18:       +------+-------------+--------------------+-------------+-------+
    INFO - 10:05:18:       | name | lower_bound |       value        | upper_bound | type  |
    INFO - 10:05:18:       +------+-------------+--------------------+-------------+-------+
    INFO - 10:05:18:       | x    |      -2     | -1.292695718944152 |      2      | float |
    INFO - 10:05:18:       +------+-------------+--------------------+-------------+-------+
    INFO - 10:05:18: *** End MDOScenario execution (time: 0:00:00.018846) ***

{'max_iter': 100, 'algo': 'L-BFGS-B'}
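
Note that algorithm-specific options can also be passed to the optimizer through the algo_options key of the input data. The sketch below assumes that the SciPy wrapper exposes a relative objective tolerance named ftol_rel; check the options of the chosen algorithm in your GEMSEO version:

scenario.execute(
    {
        "algo": "L-BFGS-B",
        "max_iter": 100,
        # "ftol_rel" is assumed to be a valid option of the L-BFGS-B wrapper
        "algo_options": {"ftol_rel": 1e-9},
    }
)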

The optimum results can be found in the execution log. It is also possible to extract them by invoking the Scenario.get_optimum() method, which returns an OptimizationResult containing the optimum results for the scenario under consideration:

opt_results = scenario.get_optimum()
print(
    "The solution of P is (x*,f(x*)) = ({}, {})".format(
        opt_results.x_opt, opt_results.f_opt
    ),
)

Out:

The solution of P is (x*,f(x*)) = ([-1.29269572], -1.236108341859242)

See also

You can find the SciPy implementation of the L-BFGS-B algorithm in the documentation of scipy.optimize.fmin_l_bfgs_b.
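
For comparison, here is a minimal sketch solving \((P)\) directly with scipy.optimize.fmin_l_bfgs_b, using the analytic gradient \(f'(x) = \cos(x) - \exp(x)\):

from numpy import array, cos, exp, sin
from scipy.optimize import fmin_l_bfgs_b


def objective(x):
    return float(sin(x[0]) - exp(x[0]))


def gradient(x):
    return array([cos(x[0]) - exp(x[0])])


x_opt, f_opt, info = fmin_l_bfgs_b(
    objective, x0=[-0.5], fprime=gradient, bounds=[(-2.0, 2.0)]
)
print(x_opt, f_opt)  # approximately [-1.2927] and -1.2361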

Available algorithms

In order to get the list of available optimization algorithms, use:

algo_list = get_available_opt_algorithms()
print(f"Available algorithms: {algo_list}")

Out:

Available algorithms: ['NLOPT_MMA', 'NLOPT_COBYLA', 'NLOPT_SLSQP', 'NLOPT_BOBYQA', 'NLOPT_BFGS', 'NLOPT_NEWUOA', 'PDFO_COBYLA', 'PDFO_BOBYQA', 'PDFO_NEWUOA', 'PSEVEN', 'PSEVEN_FD', 'PSEVEN_MOM', 'PSEVEN_NCG', 'PSEVEN_NLS', 'PSEVEN_POWELL', 'PSEVEN_QP', 'PSEVEN_SQP', 'PSEVEN_SQ2P', 'PYMOO_GA', 'PYMOO_NSGA2', 'PYMOO_NSGA3', 'PYMOO_UNSGA3', 'PYMOO_RNSGA3', 'DUAL_ANNEALING', 'SHGO', 'DIFFERENTIAL_EVOLUTION', 'LINEAR_INTERIOR_POINT', 'REVISED_SIMPLEX', 'SIMPLEX', 'SLSQP', 'L-BFGS-B', 'TNC', 'SNOPTB']
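
The options accepted by a given algorithm can be inspected with the get_algorithm_options_schema() API function, assuming it is available in your GEMSEO version:

from gemseo.api import get_algorithm_options_schema

# prints the option names, types and defaults of the L-BFGS-B wrapper (assumed signature)
get_algorithm_options_schema("L-BFGS-B", pretty_print=True)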

Available post-processing

In order to get the list of available post-processing algorithms, use:

post_list = get_available_post_processings()
print(f"Available algorithms: {post_list}")

Out:

Available post-processings: ['BasicHistory', 'Compromise', 'ConstraintsHistory', 'Correlations', 'GradientSensitivity', 'HighTradeOff', 'KMeans', 'MultiObjectiveDiagram', 'ObjConstrHist', 'OptHistoryView', 'ParallelCoordinates', 'ParetoFront', 'Petal', 'QuadApprox', 'Radar', 'RadarChart', 'Robustness', 'SOM', 'ScatterPareto', 'ScatterPlotMatrix', 'VariableInfluence']
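
Any of these post-processings can be applied to the executed scenario through the Scenario.post_process() method; for instance, a minimal sketch plotting the optimization history:

# save=True writes the figures to disk; show=True displays them interactively
scenario.post_process("OptHistoryView", save=True, show=False)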

Total running time of the script: ( 0 minutes 0.040 seconds)

Gallery generated by Sphinx-Gallery