# Create a DOE Scenario

```python
from gemseo.api import (
    configure_logger,
    create_design_space,
    create_discipline,
    create_scenario,
    get_available_doe_algorithms,
    get_available_post_processings,
)

configure_logger()
```

Let $(P)$ be a simple optimization problem:

$$
(P) = \left\{
\begin{aligned}
& \underset{x\in\mathbb{R}^2}{\text{minimize}} & & f(x) = x_1 + x_2 \\
& \text{subject to} & & -5 \leq x \leq 5
\end{aligned}
\right.
$$

In this example, we will see how to use GEMSEO to solve this problem $(P)$ by means of a Design Of Experiments (DOE).
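Before turning to GEMSEO, note that the objective is linear, so its minimum over the box $[-5, 5]\times[-5, 5]$ is attained at a vertex. A quick brute-force check over the four corners (plain Python, independent of GEMSEO) confirms what the DOE should find:

```python
from itertools import product

def f(x1, x2):
    """Objective of (P): a linear function of the two design variables."""
    return x1 + x2

# A linear objective over a box attains its minimum at a vertex,
# so checking the four corners of [-5, 5] x [-5, 5] is enough.
corners = list(product([-5.0, 5.0], repeat=2))
x_opt = min(corners, key=lambda x: f(*x))
f_opt = f(*x_opt)
print(x_opt, f_opt)  # (-5.0, -5.0) -10.0
```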

## Define the discipline

Firstly, by means of the create_discipline() API function, we create an MDODiscipline of AnalyticDiscipline type from analytic expressions given as strings:

```python
expressions_dict = {"y": "x1+x2"}
discipline = create_discipline("AnalyticDiscipline", expressions_dict=expressions_dict)
```
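Conceptually, an analytic discipline maps a dictionary of named inputs to a dictionary of named outputs by evaluating each expression. As a rough illustration of the idea only (not GEMSEO's actual implementation, which also derives analytic Jacobians), one could picture it as:

```python
# Illustration only: a toy stand-in for an analytic discipline,
# evaluating each output expression over a dictionary of named inputs.
expressions = {"y": "x1+x2"}

def evaluate(expressions, inputs):
    """Evaluate every output expression with the input values in scope."""
    return {name: eval(expr, {}, dict(inputs)) for name, expr in expressions.items()}

print(evaluate(expressions, {"x1": 1.0, "x2": 2.0}))  # {'y': 3.0}
```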

Now, we want to minimize the output of this MDODiscipline over a design of experiments (DOE).

## Define the design space

For that, by means of the create_design_space() API function, we define the DesignSpace $[-5, 5]\times[-5, 5]$ by using its DesignSpace.add_variable() method:

```python
design_space = create_design_space()
design_space.add_variable("x1", 1, l_b=-5.0, u_b=5.0)
design_space.add_variable("x2", 1, l_b=-5.0, u_b=5.0)
```
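In essence, a design space records each variable's bounds and lets the driver test whether a candidate point is admissible. A minimal sketch of this bookkeeping (not the GEMSEO class itself) looks like:

```python
# A minimal sketch (not GEMSEO's DesignSpace) of what a design space records:
# per-variable bounds, plus a membership test for candidate points.
bounds = {"x1": (-5.0, 5.0), "x2": (-5.0, 5.0)}

def contains(bounds, point):
    """Return True if every coordinate lies within its variable's bounds."""
    return all(lo <= point[name] <= hi for name, (lo, hi) in bounds.items())

print(contains(bounds, {"x1": 0.0, "x2": 4.5}))   # True
print(contains(bounds, {"x1": -6.0, "x2": 0.0}))  # False
```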

## Define the DOE scenario

Then, by means of the create_scenario() API function, we define a DOEScenario from the MDODiscipline and the DesignSpace defined above:

```python
scenario = create_scenario(
    discipline, "DisciplinaryOpt", "y", design_space, scenario_type="DOE"
)
```

## Execute the DOE scenario

Lastly, we solve the OptimizationProblem included in the DOEScenario defined above by minimizing the objective function over a design of experiments included in the DesignSpace. Precisely, we choose a full factorial design of size $11^2$:

```python
scenario.execute({"algo": "fullfact", "n_samples": 11 ** 2})
```

Out:

```
{'eval_jac': False, 'algo': 'fullfact', 'n_samples': 121}
```
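A full factorial design with 11 levels per variable discretizes each axis uniformly and takes the Cartesian product of the level sets, which is where the $11^2 = 121$ samples come from. The construction can be sketched with the standard library alone:

```python
from itertools import product

n_levels = 11
# 11 evenly spaced levels on [-5, 5] for each of the two variables.
levels = [-5.0 + 10.0 * i / (n_levels - 1) for i in range(n_levels)]

# Cartesian product of the level sets: the full factorial grid.
samples = list(product(levels, levels))
print(len(samples))  # 121

# Evaluating the objective on every sample recovers the DOE optimum of (P).
best = min(samples, key=lambda x: x[0] + x[1])
print(best)  # (-5.0, -5.0)
```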

The optimum results can be found in the execution log. It is also possible to extract them by invoking the Scenario.get_optimum() method. It returns an object containing the optimum results for the scenario under consideration:

```python
opt_results = scenario.get_optimum()
print(
    "The solution of P is (x*,f(x*)) = ({}, {})".format(
        opt_results.x_opt, opt_results.f_opt
    ),
)
```

Out:

```
The solution of P is (x*,f(x*)) = ([-5. -5.], -10.0)
```

## Available DOE algorithms

In order to get the list of available DOE algorithms, use:

```python
algo_list = get_available_doe_algorithms()
print("Available algorithms: {}".format(algo_list))
```

Out:

```
Available algorithms: ['CustomDOE', 'DiagonalDOE', 'OT_SOBOL', 'OT_HASELGROVE', 'OT_REVERSE_HALTON', 'OT_HALTON', 'OT_FAURE', 'OT_AXIAL', 'OT_FACTORIAL', 'OT_MONTE_CARLO', 'OT_LHS', 'OT_LHSC', 'OT_RANDOM', 'OT_FULLFACT', 'OT_COMPOSITE', 'OT_SOBOL_INDICES', 'fullfact', 'ff2n', 'pbdesign', 'bbdesign', 'ccdesign', 'lhs']
```

## Available post-processing

In order to get the list of available post-processing algorithms, use:

```python
post_list = get_available_post_processings()
print("Available algorithms: {}".format(post_list))
```

Out:

```
Available algorithms: ['BasicHistory', 'ConstraintsHistory', 'Correlations', 'GradientSensitivity', 'KMeans', 'ObjConstrHist', 'OptHistoryView', 'ParallelCoordinates', 'ParetoFront', 'QuadApprox', 'RadarChart', 'Robustness', 'SOM', 'ScatterPlotMatrix', 'VariableInfluence']
```

