Create an MDO Scenario
from __future__ import annotations
from gemseo.api import configure_logger
from gemseo.api import create_design_space
from gemseo.api import create_discipline
from gemseo.api import create_scenario
from gemseo.api import get_available_opt_algorithms
from gemseo.api import get_available_post_processings
from numpy import ones
configure_logger()
<RootLogger root (INFO)>
Let \((P)\) be a simple optimization problem:

\[(P):\quad \min_{x \in [-2, 2]} \; y(x) = \sin(x) - \exp(x)\]
In this subsection, we will see how to use GEMSEO to solve this problem \((P)\) by means of an optimization algorithm.
Define the discipline
Firstly, by means of the create_discipline() API function, we create an MDODiscipline of AnalyticDiscipline type from a dictionary of analytic expressions:
expressions = {"y": "sin(x)-exp(x)"}
discipline = create_discipline("AnalyticDiscipline", expressions=expressions)
We can quickly access the most relevant information of any discipline (name, inputs and outputs) with Python's print() function. Moreover, we can get the default input values of a discipline with the attribute MDODiscipline.default_inputs:
print(discipline)
print(f"Default inputs: {discipline.default_inputs}")
AnalyticDiscipline
Default inputs: {'x': array([0.])}
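As a quick check (a minimal sketch, not part of the original example), a discipline can also be executed directly by passing a dictionary of input NumPy arrays to MDODiscipline.execute():

from numpy import array

# Execute the discipline at x = 1.0 and read the output y = sin(x) - exp(x).
output_data = discipline.execute({"x": array([1.0])})
print(output_data["y"])  # approximately -1.877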
Now, we want to minimize this MDODiscipline over a design space, by means of a quasi-Newton method from the initial point \(-0.5\).
Define the design space
For that, by means of the create_design_space() API function, we define the DesignSpace \([-2, 2]\) with initial value \(-0.5\) by using its DesignSpace.add_variable() method.
design_space = create_design_space()
design_space.add_variable("x", l_b=-2.0, u_b=2.0, value=-0.5 * ones(1))
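As a sanity check (a minimal sketch, not in the original example), the design space can be displayed with Python's print() function, which lists its variables with their bounds and current values:

print(design_space)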
Define the MDO scenario
Then, by means of the create_scenario() API function, we define an MDOScenario from the MDODiscipline and the DesignSpace defined above:
scenario = create_scenario(discipline, "DisciplinaryOpt", "y", design_space)
What about the differentiation method?
The AnalyticDiscipline automatically differentiates the expressions to obtain the Jacobian matrices. Therefore, there is no need to define a differentiation method in this case. Keep in mind that for a generic discipline with no defined Jacobian function, you can use the Scenario.set_differentiation_method() method to define a numerical approximation of the gradients.
scenario.set_differentiation_method("finite_differences")
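To illustrate (a hypothetical sketch, not part of the original example), consider a generic discipline wrapping a plain Python function via the AutoPyDiscipline type; it defines no analytic Jacobian, so its gradients would be approximated by finite differences:

def compute_z(x=0.0):
    # A plain Python function with no analytic Jacobian.
    z = x**2
    return z

# Hypothetical generic discipline whose gradients are approximated
# by finite differences when used in a scenario.
generic_discipline = create_discipline("AutoPyDiscipline", py_func=compute_z)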
Execute the MDO scenario
Lastly, we solve the OptimizationProblem included in the MDOScenario defined above by minimizing the objective function over the DesignSpace. Precisely, we choose the L-BFGS-B algorithm implemented in the function scipy.optimize.fmin_l_bfgs_b.
scenario.execute({"algo": "L-BFGS-B", "max_iter": 100})
INFO - 17:23:06:
INFO - 17:23:06: *** Start MDOScenario execution ***
INFO - 17:23:06: MDOScenario
INFO - 17:23:06: Disciplines: AnalyticDiscipline
INFO - 17:23:06: MDO formulation: DisciplinaryOpt
INFO - 17:23:06: Optimization problem:
INFO - 17:23:06: minimize y(x)
INFO - 17:23:06: with respect to x
INFO - 17:23:06: over the design space:
INFO - 17:23:06: +------+-------------+-------+-------------+-------+
INFO - 17:23:06: | name | lower_bound | value | upper_bound | type |
INFO - 17:23:06: +------+-------------+-------+-------------+-------+
INFO - 17:23:06: | x | -2 | -0.5 | 2 | float |
INFO - 17:23:06: +------+-------------+-------+-------------+-------+
INFO - 17:23:06: Solving optimization problem with algorithm L-BFGS-B:
INFO - 17:23:06: ... 0%| | 0/100 [00:00<?, ?it]
INFO - 17:23:06: ... 1%| | 1/100 [00:00<00:00, 238.42 it/sec, obj=-1.09]
INFO - 17:23:06: ... 2%|▏ | 2/100 [00:00<00:00, 284.70 it/sec, obj=-1.04]
INFO - 17:23:06: ... 3%|▎ | 3/100 [00:00<00:00, 333.43 it/sec, obj=-1.24]
INFO - 17:23:06: ... 4%|▍ | 4/100 [00:00<00:00, 340.86 it/sec, obj=-1.23]
INFO - 17:23:06: ... 5%|▌ | 5/100 [00:00<00:00, 347.94 it/sec, obj=-1.24]
INFO - 17:23:06: ... 6%|▌ | 6/100 [00:00<00:00, 351.21 it/sec, obj=-1.24]
INFO - 17:23:06: ... 7%|▋ | 7/100 [00:00<00:00, 354.95 it/sec, obj=-1.24]
INFO - 17:23:06: Optimization result:
INFO - 17:23:06: Optimizer info:
INFO - 17:23:06: Status: 0
INFO - 17:23:06: Message: CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL
INFO - 17:23:06: Number of calls to the objective function by the optimizer: 8
INFO - 17:23:06: Solution:
INFO - 17:23:06: Objective: -1.236108341859242
INFO - 17:23:06: Design space:
INFO - 17:23:06: +------+-------------+--------------------+-------------+-------+
INFO - 17:23:06: | name | lower_bound | value | upper_bound | type |
INFO - 17:23:06: +------+-------------+--------------------+-------------+-------+
INFO - 17:23:06: | x | -2 | -1.292695718944152 | 2 | float |
INFO - 17:23:06: +------+-------------+--------------------+-------------+-------+
INFO - 17:23:06: *** End MDOScenario execution (time: 0:00:00.038988) ***
{'max_iter': 100, 'algo': 'L-BFGS-B'}
The optimum results can be found in the execution log. It is also possible to extract them by invoking the Scenario.get_optimum() method. It returns a dictionary containing the optimum results for the scenario under consideration:
opt_results = scenario.get_optimum()
print(
"The solution of P is (x*,f(x*)) = ({}, {})".format(
opt_results.x_opt, opt_results.f_opt
),
)
The solution of P is (x*,f(x*)) = ([-1.29269572], -1.236108341859242)
See also
You can find the SciPy implementation of the L-BFGS-B algorithm in the SciPy documentation (scipy.optimize.fmin_l_bfgs_b).
Available algorithms
In order to get the list of available optimization algorithms, use:
algo_list = get_available_opt_algorithms()
print(f"Available algorithms: {algo_list}")
Available algorithms: ['NLOPT_MMA', 'NLOPT_COBYLA', 'NLOPT_SLSQP', 'NLOPT_BOBYQA', 'NLOPT_BFGS', 'NLOPT_NEWUOA', 'PDFO_COBYLA', 'PDFO_BOBYQA', 'PDFO_NEWUOA', 'PSEVEN', 'PSEVEN_FD', 'PSEVEN_MOM', 'PSEVEN_NCG', 'PSEVEN_NLS', 'PSEVEN_POWELL', 'PSEVEN_QP', 'PSEVEN_SQP', 'PSEVEN_SQ2P', 'PYMOO_GA', 'PYMOO_NSGA2', 'PYMOO_NSGA3', 'PYMOO_UNSGA3', 'PYMOO_RNSGA3', 'DUAL_ANNEALING', 'SHGO', 'DIFFERENTIAL_EVOLUTION', 'LINEAR_INTERIOR_POINT', 'REVISED_SIMPLEX', 'SIMPLEX', 'SLSQP', 'L-BFGS-B', 'TNC', 'SNOPTB']
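Any of these names can be passed as the "algo" value of Scenario.execute(); algorithm-specific settings go through the "algo_options" key. The sketch below is a minimal illustration, and the "ftol_rel" option name is an assumption to be checked against the documentation of the chosen algorithm:

scenario.execute(
    {
        "algo": "SLSQP",
        "max_iter": 50,
        # Assumed option name; see the algorithm documentation
        # for the exact available settings.
        "algo_options": {"ftol_rel": 1e-9},
    }
)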
Available post-processing
In order to get the list of available post-processing algorithms, use:
post_list = get_available_post_processings()
print(f"Available post-processings: {post_list}")
Available post-processings: ['BasicHistory', 'Compromise', 'ConstraintsHistory', 'Correlations', 'GradientSensitivity', 'HighTradeOff', 'KMeans', 'MultiObjectiveDiagram', 'ObjConstrHist', 'OptHistoryView', 'ParallelCoordinates', 'ParetoFront', 'Petal', 'QuadApprox', 'Radar', 'RadarChart', 'Robustness', 'SOM', 'ScatterPareto', 'ScatterPlotMatrix', 'VariableInfluence']
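Each of these post-processings can be applied to an executed scenario with the Scenario.post_process() method. For instance (a minimal sketch, not executed in this example):

# Plot the optimization history of the objective and design variables;
# show=True displays the figure, save=True would write it to disk.
scenario.post_process("OptHistoryView", show=True, save=False)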
Exporting the problem data
After the execution of the scenario, you may want to export your data to use it elsewhere. The Scenario.export_to_dataset() method will allow you to export your results to a Dataset, the basic GEMSEO class to store data. From a dataset, you can even obtain a Pandas dataframe with its export_to_dataframe() method:
dataset = scenario.export_to_dataset("a_name_for_my_dataset")
dataframe = dataset.export_to_dataframe()
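From there (a minimal sketch using standard pandas calls), the dataframe can be inspected or written to disk like any other pandas object:

# Display the first rows of the stored data
# and export them to a CSV file.
print(dataframe.head())
dataframe.to_csv("my_results.csv")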