Create a DOE Scenario

from __future__ import annotations

from gemseo.api import configure_logger
from gemseo.api import create_design_space
from gemseo.api import create_discipline
from gemseo.api import create_scenario
from gemseo.api import get_available_doe_algorithms
from gemseo.api import get_available_post_processings

configure_logger()
<RootLogger root (INFO)>

Let \((P)\) be a simple optimization problem:

\[(P) = \left\{ \begin{aligned} & \underset{x\in\mathbb{Z}^2}{\text{minimize}} & & f(x) = x_1 + x_2 \\ & \text{subject to} & & -5 \leq x \leq 5 \end{aligned} \right.\]
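Since \(f\) increases with both \(x_1\) and \(x_2\), its minimum over this box is reached at the lower bounds:

\[x^* = (-5, -5), \qquad f(x^*) = -10,\]

which is the value the DOE should recover.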

In this example, we will see how to use GEMSEO to solve this problem \((P)\) by means of a design of experiments (DOE).

Define the discipline

Firstly, by means of the create_discipline() API function, we create an MDODiscipline of AnalyticDiscipline type from analytic expressions:

expressions = {"y": "x1+x2"}
discipline = create_discipline("AnalyticDiscipline", expressions=expressions)
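Before going further, the discipline can be executed on its own as a quick sanity check (a minimal sketch, not part of the original example): MDODiscipline.execute() takes a dictionary mapping input names to NumPy arrays and returns the updated local data.

from numpy import array

# Evaluate y = x1 + x2 at (1, 2); the expected output is [3.]
out = discipline.execute({"x1": array([1.0]), "x2": array([2.0])})
print(out["y"])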

Now, we want to minimize the output of this MDODiscipline over a design of experiments (DOE).

Define the design space

For that, by means of the create_design_space() API function, we define the DesignSpace \([-5, 5]\times[-5, 5]\), with integer variables, by using its DesignSpace.add_variable() method.

design_space = create_design_space()
design_space.add_variable("x1", 1, l_b=-5, u_b=5, var_type="integer")
design_space.add_variable("x2", 1, l_b=-5, u_b=5, var_type="integer")

Define the DOE scenario

Then, by means of the create_scenario() API function, we define a DOEScenario from the MDODiscipline and the DesignSpace defined above:

scenario = create_scenario(
    discipline, "DisciplinaryOpt", "y", design_space, scenario_type="DOE"
)

Execute the DOE scenario

Lastly, we solve the OptimizationProblem included in the DOEScenario defined above by evaluating the objective function over a design of experiments sampled in the DesignSpace. Here, we choose a full-factorial design of size \(11^2\):

scenario.execute({"algo": "fullfact", "n_samples": 11**2})
    INFO - 14:46:11:
    INFO - 14:46:11: *** Start DOEScenario execution ***
    INFO - 14:46:11: DOEScenario
    INFO - 14:46:11:    Disciplines: AnalyticDiscipline
    INFO - 14:46:11:    MDO formulation: DisciplinaryOpt
    INFO - 14:46:11: Optimization problem:
    INFO - 14:46:11:    minimize y(x1, x2)
    INFO - 14:46:11:    with respect to x1, x2
    INFO - 14:46:11:    over the design space:
    INFO - 14:46:11:    +------+-------------+-------+-------------+---------+
    INFO - 14:46:11:    | name | lower_bound | value | upper_bound | type    |
    INFO - 14:46:11:    +------+-------------+-------+-------------+---------+
    INFO - 14:46:11:    | x1   |      -5     |  None |      5      | integer |
    INFO - 14:46:11:    | x2   |      -5     |  None |      5      | integer |
    INFO - 14:46:11:    +------+-------------+-------+-------------+---------+
    INFO - 14:46:11: Solving optimization problem with algorithm fullfact:
    INFO - 14:46:11: ...   0%|          | 0/121 [00:00<?, ?it]
    INFO - 14:46:11: ... 100%|██████████| 121/121 [00:00<00:00, 3063.90 it/sec, obj=10]
    INFO - 14:46:11: Optimization result:
    INFO - 14:46:11:    Optimizer info:
    INFO - 14:46:11:       Status: None
    INFO - 14:46:11:       Message: None
    INFO - 14:46:11:       Number of calls to the objective function by the optimizer: 121
    INFO - 14:46:11:    Solution:
    INFO - 14:46:11:       Objective: -10.0
    INFO - 14:46:11:       Design space:
    INFO - 14:46:11:       +------+-------------+-------+-------------+---------+
    INFO - 14:46:11:       | name | lower_bound | value | upper_bound | type    |
    INFO - 14:46:11:       +------+-------------+-------+-------------+---------+
    INFO - 14:46:11:       | x1   |      -5     |   -5  |      5      | integer |
    INFO - 14:46:11:       | x2   |      -5     |   -5  |      5      | integer |
    INFO - 14:46:11:       +------+-------------+-------+-------------+---------+
    INFO - 14:46:11: *** End DOEScenario execution (time: 0:00:00.048401) ***

{'eval_jac': False, 'algo': 'fullfact', 'n_samples': 121}

The optimum results can be found in the execution log. They can also be extracted by invoking the Scenario.get_optimum() method, which returns an OptimizationResult object containing the optimum results for the scenario under consideration:

opt_results = scenario.get_optimum()
print(
    "The solution of P is (x*,f(x*)) = ({}, {})".format(
        opt_results.x_opt, opt_results.f_opt
    ),
)
The solution of P is (x*,f(x*)) = ([-5. -5.], -10.0)
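Beyond the optimum, all the evaluations performed by the DOE can be exported for further analysis (a minimal sketch, assuming a GEMSEO version from the gemseo.api era where Scenario.export_to_dataset() is available):

dataset = scenario.export_to_dataset("doe_results")
print(dataset)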

Available DOE algorithms

In order to get the list of available DOE algorithms, use:

algo_list = get_available_doe_algorithms()
print(f"Available algorithms: {algo_list}")
Available algorithms: ['CustomDOE', 'DiagonalDOE', 'OT_SOBOL', 'OT_RANDOM', 'OT_HASELGROVE', 'OT_REVERSE_HALTON', 'OT_HALTON', 'OT_FAURE', 'OT_MONTE_CARLO', 'OT_FACTORIAL', 'OT_COMPOSITE', 'OT_AXIAL', 'OT_OPT_LHS', 'OT_LHS', 'OT_LHSC', 'OT_FULLFACT', 'OT_SOBOL_INDICES', 'fullfact', 'ff2n', 'pbdesign', 'bbdesign', 'ccdesign', 'lhs']
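The options accepted by a given DOE algorithm can also be queried (a minimal sketch, assuming the installed version exposes the get_algorithm_options_schema() API function):

from gemseo.api import get_algorithm_options_schema

# The returned JSON schema lists the options of the "fullfact" algorithm.
schema = get_algorithm_options_schema("fullfact")
print(sorted(schema["properties"]))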

Available post-processing

In order to get the list of available post-processing algorithms, use:

post_list = get_available_post_processings()
print(f"Available algorithms: {post_list}")
Available algorithms: ['BasicHistory', 'Compromise', 'ConstraintsHistory', 'Correlations', 'GradientSensitivity', 'HighTradeOff', 'KMeans', 'MultiObjectiveDiagram', 'ObjConstrHist', 'OptHistoryView', 'ParallelCoordinates', 'ParetoFront', 'Petal', 'QuadApprox', 'Radar', 'RadarChart', 'Robustness', 'SOM', 'ScatterPareto', 'ScatterPlotMatrix', 'VariableInfluence']
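Any of these post-processings can be applied to the scenario once it has been executed, for instance (a minimal sketch; save=True writes the figures to disk instead of displaying them):

scenario.post_process("OptHistoryView", save=True, show=False)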

