Create an MDO Scenario
from __future__ import absolute_import, division, print_function, unicode_literals
from future import standard_library
from numpy import ones
from gemseo.api import (
configure_logger,
create_design_space,
create_discipline,
create_scenario,
get_available_opt_algorithms,
get_available_post_processings,
)
configure_logger()
standard_library.install_aliases()
Let \((P)\) be a simple optimization problem:

\[(P):\quad \min_{x\in[-2,2]}\ \sin(x)-\exp(x)\]

In this subsection, we will see how to use GEMSEO to solve this problem \((P)\) by means of an optimization algorithm.
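For reference, the same problem can be solved directly with SciPy, outside GEMSEO. This is only an independent sketch of the underlying optimization, using the same expression, bounds, and initial point as the example below:

```python
# Independent check: solve (P) with SciPy's L-BFGS-B, without GEMSEO.
import numpy as np
from scipy.optimize import minimize


def objective(x):
    # y = sin(x) - exp(x), the expression used by the analytic discipline
    return np.sin(x[0]) - np.exp(x[0])


result = minimize(
    objective,
    x0=np.array([-0.5]),   # same initial point as the design space
    method="L-BFGS-B",
    bounds=[(-2.0, 2.0)],  # same bounds as the design space
)
print(result.x, result.fun)
```

The solver should converge to the same optimum as the GEMSEO scenario reported further down.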
Define the discipline
Firstly, by means of the create_discipline() API function, we create an MDODiscipline of AnalyticDiscipline type from an analytic expression:
expressions_dict = {"y": "sin(x)-exp(x)"}
discipline = create_discipline("AnalyticDiscipline", expressions_dict=expressions_dict)
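As a quick sanity check, the same expression can be evaluated by hand with plain NumPy at the initial design point. This reproduces the discipline's formula directly and does not use the GEMSEO API:

```python
# Hand evaluation of y = sin(x) - exp(x) at the initial design point x = -0.5.
import numpy as np

x = -0.5
y = np.sin(x) - np.exp(x)
print(y)  # roughly -1.086
```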
Now, we want to minimize this MDODiscipline over a design space, by means of a quasi-Newton method starting from the initial point \(-0.5\).
Define the design space

For that, by means of the create_design_space() API function, we define the DesignSpace by adding the design variable \(x\) over \([-2, 2]\) with initial value \(-0.5\), using its DesignSpace.add_variable() method.
design_space = create_design_space()
design_space.add_variable("x", 1, l_b=-2.0, u_b=2.0, value=-0.5 * ones(1))
Define the MDO scenario

Then, by means of the create_scenario() API function, we define an MDOScenario from the MDODiscipline and the DesignSpace defined above:
scenario = create_scenario(
discipline, "DisciplinaryOpt", "y", design_space, scenario_type="MDO"
)
Execute the MDO scenario

Lastly, we solve the OptimizationProblem included in the MDOScenario defined above by minimizing the objective function over the DesignSpace. Precisely, we choose the L-BFGS-B algorithm implemented in the function scipy.optimize.fmin_l_bfgs_b.
scenario.execute({"algo": "L-BFGS-B", "max_iter": 100})
Out:
{'algo': 'L-BFGS-B', 'max_iter': 100}
The optimum results can be found in the execution log. It is also possible to extract them by invoking the Scenario.get_optimum() method, which returns a dictionary containing the optimum results for the scenario under consideration:
opt_results = scenario.get_optimum()
print(
"The solution of P is (x*,f(x*)) = ({}, {})".format(
opt_results.x_opt, opt_results.f_opt
),
)
Out:
The solution of P is (x*,f(x*)) = ([-1.29269572], -1.2361083418592418)
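As an independent sanity check (not part of the GEMSEO workflow), the reported optimum should satisfy the first-order optimality condition: the derivative of \(y=\sin(x)-\exp(x)\) is \(\cos(x)-\exp(x)\), which should vanish at \(x^*\):

```python
# Verify that x* ≈ -1.2927 is a stationary point of y = sin(x) - exp(x):
# the derivative cos(x) - exp(x) should vanish there (up to solver tolerance).
import numpy as np

x_opt = -1.29269572  # value reported in the log above
gradient = np.cos(x_opt) - np.exp(x_opt)
print(gradient)  # close to zero
```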
See also

You can find the SciPy implementation of the L-BFGS-B algorithm in the function scipy.optimize.fmin_l_bfgs_b.
Available algorithms
In order to get the list of available optimization algorithms, use:
algo_list = get_available_opt_algorithms()
print("Available algorithms: {}".format(algo_list))
Out:
Available algorithms: ['NLOPT_MMA', 'NLOPT_COBYLA', 'NLOPT_SLSQP', 'NLOPT_BOBYQA', 'NLOPT_BFGS', 'NLOPT_NEWUOA', 'SLSQP', 'L-BFGS-B', 'TNC']
Available post-processings
In order to get the list of available post-processing algorithms, use:
post_list = get_available_post_processings()
print("Available post-processings: {}".format(post_list))
Out:
Available post-processings: ['BasicHistory', 'ConstraintsHistory', 'Correlations', 'GradientSensitivity', 'KMeans', 'ObjConstrHist', 'OptHistoryView', 'ParallelCoordinates', 'ParetoFront', 'QuadApprox', 'RadarChart', 'Robustness', 'SOM', 'ScatterPlotMatrix', 'VariableInfluence']
Total running time of the script: (0 minutes 0.042 seconds)