Create an MDO Scenario

from __future__ import division, unicode_literals

from numpy import ones

from gemseo.api import (
    configure_logger,
    create_design_space,
    create_discipline,
    create_scenario,
    get_available_opt_algorithms,
    get_available_post_processings,
)

configure_logger()

Out:

<RootLogger root (INFO)>

Let \((P)\) be a simple optimization problem:

\[\begin{split}(P) = \left\{ \begin{aligned} & \underset{x}{\text{minimize}} & & f(x) = \sin(x) - \exp(x) \\ & \text{subject to} & & -2 \leq x \leq 2 \end{aligned} \right.\end{split}\]
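Since \(f\) is differentiable, its stationary points are the solutions of

\[f'(x) = \cos(x) - \exp(x) = 0.\]

On \([-2, 2]\), this equation has two roots: \(x = 0\), where \(f''(0) = -1 < 0\) (a local maximum), and \(x^* \approx -1.293\), where \(f''(x^*) > 0\); the latter is the minimizer an optimizer should return.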

In this subsection, we will see how to use GEMSEO to solve this problem \((P)\) by means of an optimization algorithm.

Define the discipline

Firstly, by means of the create_discipline() API function, we create an MDODiscipline of AnalyticDiscipline type from an analytic expression:

expressions_dict = {"y": "sin(x)-exp(x)"}
discipline = create_discipline("AnalyticDiscipline", expressions_dict=expressions_dict)
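For a quick sanity check, the discipline can be executed directly at a given point: MDODiscipline.execute() takes and returns a dictionary of NumPy arrays.

from numpy import array

# Evaluate y = sin(x) - exp(x) at x = 0; the expected value is sin(0) - exp(0) = -1.
output_data = discipline.execute({"x": array([0.0])})
print(output_data["y"])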

Now, we want to minimize this MDODiscipline over a design space, by means of a quasi-Newton method starting from the initial point \(-0.5\).

Define the design space

For that, by means of the create_design_space() API function, we define the DesignSpace \([-2, 2]\) with the initial value \(-0.5\), by using its DesignSpace.add_variable() method.

design_space = create_design_space()
design_space.add_variable("x", 1, l_b=-2.0, u_b=2.0, value=-0.5 * ones(1))
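At this stage, printing the design space is an easy way to check the variable, its bounds and its current value:

print(design_space)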

Define the MDO scenario

Then, by means of the create_scenario() API function, we define an MDOScenario from the MDODiscipline and the DesignSpace defined above:

scenario = create_scenario(
    discipline, "DisciplinaryOpt", "y", design_space, scenario_type="MDO"
)

Execute the MDO scenario

Lastly, we solve the OptimizationProblem included in the MDOScenario defined above by minimizing the objective function over the DesignSpace. More precisely, we use the L-BFGS-B algorithm, implemented in the function scipy.optimize.fmin_l_bfgs_b, with at most 100 iterations.

scenario.execute({"algo": "L-BFGS-B", "max_iter": 100})

Out:

    INFO - 09:25:57:
    INFO - 09:25:57: *** Start MDO Scenario execution ***
    INFO - 09:25:57: MDOScenario
    INFO - 09:25:57:    Disciplines: AnalyticDiscipline
    INFO - 09:25:57:    MDOFormulation: DisciplinaryOpt
    INFO - 09:25:57:    Algorithm: L-BFGS-B
    INFO - 09:25:57: Optimization problem:
    INFO - 09:25:57:    Minimize: y(x)
    INFO - 09:25:57:    With respect to: x
    INFO - 09:25:57: Design Space:
    INFO - 09:25:57: +------+-------------+-------+-------------+-------+
    INFO - 09:25:57: | name | lower_bound | value | upper_bound | type  |
    INFO - 09:25:57: +------+-------------+-------+-------------+-------+
    INFO - 09:25:57: | x    |      -2     |  -0.5 |      2      | float |
    INFO - 09:25:57: +------+-------------+-------+-------------+-------+
    INFO - 09:25:57: Optimization:   0%|          | 0/100 [00:00<?, ?it]
    INFO - 09:25:57: Optimization:   7%|▋         | 7/100 [00:00<00:00, 8478.99 it/sec, obj=-1.24]
    INFO - 09:25:57: Optimization result:
    INFO - 09:25:57: Objective value = -1.2361083418592416
    INFO - 09:25:57: The result is feasible.
    INFO - 09:25:57: Status: 0
    INFO - 09:25:57: Optimizer message: CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL
    INFO - 09:25:57: Number of calls to the objective function by the optimizer: 8
    INFO - 09:25:57: Design Space:
    INFO - 09:25:57: +------+-------------+--------------------+-------------+-------+
    INFO - 09:25:57: | name | lower_bound |       value        | upper_bound | type  |
    INFO - 09:25:57: +------+-------------+--------------------+-------------+-------+
    INFO - 09:25:57: | x    |      -2     | -1.292695718944152 |      2      | float |
    INFO - 09:25:57: +------+-------------+--------------------+-------------+-------+
    INFO - 09:25:57: *** MDO Scenario run terminated in 0:00:00.020182 ***

{'algo': 'L-BFGS-B', 'max_iter': 100}

The optimum results can be found in the execution log. It is also possible to extract them by invoking the Scenario.get_optimum() method. It returns a dictionary containing the optimum results for the scenario under consideration:

opt_results = scenario.get_optimum()
print(
    "The solution of P is (x*,f(x*)) = ({}, {})".format(
        opt_results.x_opt, opt_results.f_opt
    ),
)

Out:

The solution of P is (x*,f(x*)) = ([-1.29269572], -1.2361083418592416)
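We can also verify that this point satisfies the stationarity condition \(\cos(x^*) = \exp(x^*)\) stated above:

from numpy import cos, exp

x_star = opt_results.x_opt[0]
# For f(x) = sin(x) - exp(x), a stationary point satisfies cos(x) = exp(x);
# the residual should be close to zero, up to the optimizer's gradient tolerance.
print(cos(x_star) - exp(x_star))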

See also

You can find the SciPy implementation of the L-BFGS-B algorithm in the documentation of scipy.optimize.fmin_l_bfgs_b.

Available algorithms

In order to get the list of available optimization algorithms, use:

algo_list = get_available_opt_algorithms()
print("Available algorithms: {}".format(algo_list))

Out:

Available algorithms: ['NLOPT_MMA', 'NLOPT_COBYLA', 'NLOPT_SLSQP', 'NLOPT_BOBYQA', 'NLOPT_BFGS', 'NLOPT_NEWUOA', 'PDFO_COBYLA', 'PDFO_BOBYQA', 'PDFO_NEWUOA', 'PSEVEN', 'PSEVEN_FD', 'PSEVEN_MOM', 'PSEVEN_NCG', 'PSEVEN_NLS', 'PSEVEN_POWELL', 'PSEVEN_QP', 'PSEVEN_SQP', 'PSEVEN_SQ2P', 'DUAL_ANNEALING', 'SHGO', 'DIFFERENTIAL_EVOLUTION', 'LINEAR_INTERIOR_POINT', 'REVISED_SIMPLEX', 'SIMPLEX', 'SLSQP', 'L-BFGS-B', 'TNC', 'SNOPTB']
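To inspect the options accepted by a given algorithm, a minimal sketch using the get_algorithm_options_schema() API function (assuming it is available in your GEMSEO version):

from gemseo.api import get_algorithm_options_schema

# Display the options accepted by the L-BFGS-B wrapper.
get_algorithm_options_schema("L-BFGS-B", pretty_print=True)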

Available post-processings

In order to get the list of available post-processing algorithms, use:

post_list = get_available_post_processings()
print("Available algorithms: {}".format(post_list))

Out:

Available post-processings: ['BasicHistory', 'ConstraintsHistory', 'Correlations', 'GradientSensitivity', 'KMeans', 'ObjConstrHist', 'OptHistoryView', 'ParallelCoordinates', 'ParetoFront', 'QuadApprox', 'RadarChart', 'Robustness', 'SOM', 'ScatterPlotMatrix', 'VariableInfluence']
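Any of these post-processings can be applied to the scenario once it has been executed, by means of the Scenario.post_process() method; for instance, a minimal sketch plotting the optimization history:

# Generate the OptHistoryView plots and save them to disk instead of displaying them.
scenario.post_process("OptHistoryView", save=True, show=False)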

You can also look at the other examples of the gallery.

Total running time of the script: ( 0 minutes 0.044 seconds)

Gallery generated by Sphinx-Gallery