Plug a surrogate discipline in a Scenario#
In this section, we describe the use of surrogate models in GEMSEO, which are implemented in the SurrogateDiscipline class.
A SurrogateDiscipline can be used to substitute a Discipline within a Scenario. This SurrogateDiscipline is an approximation of the Discipline and is faster to compute than the original discipline. It relies on a BaseRegressor. This comes at the price of computing a DOE on the original Discipline and validating the approximation. The computations from which the approximation is built can already be available, or can be generated using GEMSEO's DOE capabilities. See Tutorial: How to carry out a trade-off study and Tutorial: How to solve an MDO problem.
In GEMSEO, the data used to build the surrogate model are taken from a Dataset containing both the inputs and outputs of the DOE. This Dataset may have been generated by GEMSEO from a cache, using the BaseFullCache.to_dataset() method, from a database, using the OptimizationProblem.to_dataset() method, or from a NumPy array or a text file, using the Dataset.from_array() and Dataset.from_txt() methods.
Then, the surrogate discipline can be used as any other discipline in an MDOScenario, a DOEScenario, or a BaseMDA.
from __future__ import annotations
from numpy import array
from numpy import hstack
from numpy import vstack
from gemseo import configure_logger
from gemseo import create_discipline
from gemseo import create_scenario
from gemseo import create_surrogate
from gemseo.datasets.io_dataset import IODataset
from gemseo.problems.mdo.sobieski.core.design_space import SobieskiDesignSpace
configure_logger()
<RootLogger root (INFO)>
Create a surrogate discipline#
Create the learning dataset#
If you already have data available from a DOE produced externally, it is possible to create a Dataset directly and Step 1 ends here. For example, let us consider a synthetic dataset, with \(x\) as input and \(y\) as output, described as a NumPy array. Then, we store these data in a Dataset:
variables = ["x", "y"]
sizes = {"x": 1, "y": 1}
groups = {"x": "inputs", "y": "outputs"}
data = vstack((
hstack((array([1.0]), array([1.0]))),
hstack((array([2.0]), array([2.0]))),
))
synthetic_dataset = IODataset.from_array(data, variables, sizes, groups)
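Optionally, the learning data can be inspected before training. In recent GEMSEO versions, a Dataset is a pandas DataFrame subclass, so printing it displays the variable groups, names and values (a quick sanity check, not required for the rest of the example):
# Optional sanity check: display the two learning samples of x and y.
print(synthetic_dataset)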
If you do not have data already available, the following paragraphs of Step 1 concern you.
Here, we illustrate the generation of the training data using a DOEScenario, similarly to Tutorial: How to carry out a trade-off study, where more details are given.
In this basic example, a Discipline computing the mission performance (range) in the SSBJ test case is sampled with a DOEScenario. Then, the generated database is used to build a SurrogateDiscipline.
More complex scenarios can be used in the same way: complete optimization processes or MDAs can be replaced by their surrogate counterparts. The right cache or database shall then be used to build the SurrogateDiscipline, but the main logic does not differ from this example.
First, we create the Discipline by means of the API function create_discipline():
discipline = create_discipline("SobieskiMission")
Then, we read the DesignSpace of the Sobieski problem and keep only the inputs of the Sobieski Mission, "x_shared", "y_24" and "y_34", as inputs of the DOE:
design_space = SobieskiDesignSpace()
design_space = design_space.filter(["x_shared", "y_24", "y_34"])
From this Discipline and this DesignSpace, we build a DOEScenario by means of the API function create_scenario():
scenario = create_scenario(
[discipline],
"y_4",
design_space,
formulation_name="DisciplinaryOpt",
scenario_type="DOE",
)
Lastly, we execute the process with the LHS algorithm and 30 samples.
scenario.execute(algo_name="PYDOE_LHS", n_samples=30)
mission_dataset = scenario.to_dataset(opt_naming=False)
INFO - 08:37:53:
INFO - 08:37:53: *** Start DOEScenario execution ***
INFO - 08:37:53: DOEScenario
INFO - 08:37:53: Disciplines: SobieskiMission
INFO - 08:37:53: MDO formulation: DisciplinaryOpt
INFO - 08:37:53: Optimization problem:
INFO - 08:37:53: minimize y_4(x_shared, y_24, y_34)
INFO - 08:37:53: with respect to x_shared, y_24, y_34
INFO - 08:37:53: over the design space:
INFO - 08:37:53: +-------------+-------------+------------+-------------+-------+
INFO - 08:37:53: | Name | Lower bound | Value | Upper bound | Type |
INFO - 08:37:53: +-------------+-------------+------------+-------------+-------+
INFO - 08:37:53: | x_shared[0] | 0.01 | 0.05 | 0.09 | float |
INFO - 08:37:53: | x_shared[1] | 30000 | 45000 | 60000 | float |
INFO - 08:37:53: | x_shared[2] | 1.4 | 1.6 | 1.8 | float |
INFO - 08:37:53: | x_shared[3] | 2.5 | 5.5 | 8.5 | float |
INFO - 08:37:53: | x_shared[4] | 40 | 55 | 70 | float |
INFO - 08:37:53: | x_shared[5] | 500 | 1000 | 1500 | float |
INFO - 08:37:53: | y_24 | 0.44 | 4.15006276 | 11.13 | float |
INFO - 08:37:53: | y_34 | 0.44 | 1.10754577 | 1.98 | float |
INFO - 08:37:53: +-------------+-------------+------------+-------------+-------+
INFO - 08:37:53: Solving optimization problem with algorithm PYDOE_LHS:
INFO - 08:37:53: 3%|▎ | 1/30 [00:00<00:00, 302.97 it/sec, obj=1.23e+3]
INFO - 08:37:53: 7%|▋ | 2/30 [00:00<00:00, 512.56 it/sec, obj=2.09e+3]
INFO - 08:37:53: 10%|█ | 3/30 [00:00<00:00, 679.13 it/sec, obj=792]
INFO - 08:37:53: 13%|█▎ | 4/30 [00:00<00:00, 813.09 it/sec, obj=387]
INFO - 08:37:53: 17%|█▋ | 5/30 [00:00<00:00, 922.23 it/sec, obj=510]
INFO - 08:37:53: 20%|██ | 6/30 [00:00<00:00, 1012.34 it/sec, obj=1.27e+3]
INFO - 08:37:53: 23%|██▎ | 7/30 [00:00<00:00, 1084.04 it/sec, obj=2.56e+3]
INFO - 08:37:53: 27%|██▋ | 8/30 [00:00<00:00, 1141.27 it/sec, obj=1.88e+3]
INFO - 08:37:53: 30%|███ | 9/30 [00:00<00:00, 1196.21 it/sec, obj=720]
INFO - 08:37:53: 33%|███▎ | 10/30 [00:00<00:00, 1245.75 it/sec, obj=1.33e+3]
INFO - 08:37:53: 37%|███▋ | 11/30 [00:00<00:00, 1289.80 it/sec, obj=436]
INFO - 08:37:53: 40%|████ | 12/30 [00:00<00:00, 1329.20 it/sec, obj=254]
INFO - 08:37:53: 43%|████▎ | 13/30 [00:00<00:00, 1364.10 it/sec, obj=420]
INFO - 08:37:53: 47%|████▋ | 14/30 [00:00<00:00, 1391.74 it/sec, obj=655]
INFO - 08:37:53: 50%|█████ | 15/30 [00:00<00:00, 1415.97 it/sec, obj=93.2]
INFO - 08:37:53: 53%|█████▎ | 16/30 [00:00<00:00, 1436.99 it/sec, obj=1.33e+3]
INFO - 08:37:53: 57%|█████▋ | 17/30 [00:00<00:00, 1460.59 it/sec, obj=690]
INFO - 08:37:53: 60%|██████ | 18/30 [00:00<00:00, 1482.23 it/sec, obj=107]
INFO - 08:37:53: 63%|██████▎ | 19/30 [00:00<00:00, 1502.74 it/sec, obj=213]
INFO - 08:37:53: 67%|██████▋ | 20/30 [00:00<00:00, 1521.11 it/sec, obj=2.24e+3]
INFO - 08:37:53: 70%|███████ | 21/30 [00:00<00:00, 1538.55 it/sec, obj=860]
INFO - 08:37:53: 73%|███████▎ | 22/30 [00:00<00:00, 1553.03 it/sec, obj=71.2]
INFO - 08:37:53: 77%|███████▋ | 23/30 [00:00<00:00, 1565.67 it/sec, obj=861]
INFO - 08:37:53: 80%|████████ | 24/30 [00:00<00:00, 1578.71 it/sec, obj=719]
INFO - 08:37:53: 83%|████████▎ | 25/30 [00:00<00:00, 1592.01 it/sec, obj=153]
INFO - 08:37:53: 87%|████████▋ | 26/30 [00:00<00:00, 1604.79 it/sec, obj=517]
INFO - 08:37:53: 90%|█████████ | 27/30 [00:00<00:00, 1616.56 it/sec, obj=716]
INFO - 08:37:53: 93%|█████████▎| 28/30 [00:00<00:00, 1627.73 it/sec, obj=324]
INFO - 08:37:53: 97%|█████████▋| 29/30 [00:00<00:00, 1638.22 it/sec, obj=432]
INFO - 08:37:53: 100%|██████████| 30/30 [00:00<00:00, 1646.27 it/sec, obj=1.27e+3]
INFO - 08:37:53: Optimization result:
INFO - 08:37:53: Optimizer info:
INFO - 08:37:53: Status: None
INFO - 08:37:53: Message: None
INFO - 08:37:53: Number of calls to the objective function by the optimizer: 30
INFO - 08:37:53: Solution:
INFO - 08:37:53: Objective: 71.16601799429675
INFO - 08:37:53: Design space:
INFO - 08:37:53: +-------------+-------------+---------------------+-------------+-------+
INFO - 08:37:53: | Name | Lower bound | Value | Upper bound | Type |
INFO - 08:37:53: +-------------+-------------+---------------------+-------------+-------+
INFO - 08:37:53: | x_shared[0] | 0.01 | 0.04440901205483268 | 0.09 | float |
INFO - 08:37:53: | x_shared[1] | 30000 | 58940.10748233336 | 60000 | float |
INFO - 08:37:53: | x_shared[2] | 1.4 | 1.441133922818264 | 1.8 | float |
INFO - 08:37:53: | x_shared[3] | 2.5 | 5.893919149663935 | 8.5 | float |
INFO - 08:37:53: | x_shared[4] | 40 | 58.55971698205414 | 70 | float |
INFO - 08:37:53: | x_shared[5] | 500 | 598.9420525239799 | 1500 | float |
INFO - 08:37:53: | y_24 | 0.44 | 0.8060924457095278 | 11.13 | float |
INFO - 08:37:53: | y_34 | 0.44 | 1.458803878476488 | 1.98 | float |
INFO - 08:37:53: +-------------+-------------+---------------------+-------------+-------+
INFO - 08:37:53: *** End DOEScenario execution (time: 0:00:00.026275) ***
See also
In this tutorial, the DOE is based on pyDOE; however, several other designs are available, based on the pyDOE or OpenTURNS packages. Some examples of these designs are plotted in DOE algorithms. To list the DOE algorithms available in the current GEMSEO configuration, use gemseo.get_available_doe_algorithms().
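For instance, the available algorithm names can be printed directly; the exact list depends on the optional DOE packages installed in your environment:
from gemseo import get_available_doe_algorithms

# The returned names depend on the installed plugins (pyDOE, OpenTURNS, ...).
print(get_available_doe_algorithms())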
Create the SurrogateDiscipline#
From this Dataset, we can build a SurrogateDiscipline of the Discipline.
By means of the API function create_surrogate(), we create a SurrogateDiscipline from the dataset, relying on a LinearRegressor and inheriting from Discipline, which can be executed as any other discipline:
synthetic_surrogate = create_surrogate("LinearRegressor", synthetic_dataset)
See also
Note that a subset of the inputs and outputs to be used to build the SurrogateDiscipline may be specified by the user if needed, mainly to avoid unnecessary computations.
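As an illustration, here is a hypothetical sketch restricting the learning data of the mission surrogate to "y_24" as the only input and "y_4" as the only output; the keyword argument names input_names and output_names are assumptions to be checked against the create_surrogate() signature of your GEMSEO version:
# Hypothetical sketch: build a surrogate from a subset of the dataset variables.
restricted_surrogate = create_surrogate(
    "RBFRegressor",
    mission_dataset,
    input_names=["y_24"],
    output_names=["y_4"],
)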
Then, we execute the synthetic surrogate as any other Discipline:
input_data = {"x": array([2.0])}
out = synthetic_surrogate.execute(input_data)
out["y"]
array([2.])
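Since the two learning points, (1, 1) and (2, 2), lie on the line \(y = x\), the linear model should also return a value close to 1.5 for \(x = 1.5\):
# Query the linear surrogate at a point outside the learning data.
print(synthetic_surrogate.execute({"x": array([1.5])})["y"])  # expected close to 1.5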
In our study case, from the DOE built at Step 1, we build an RBFRegressor of \(y_4\), representing the range as a function of L/D:
range_surrogate = create_surrogate("RBFRegressor", mission_dataset)
Use the SurrogateDiscipline in MDO#
The obtained SurrogateDiscipline can be used in any Scenario, such as a DOEScenario or an MDOScenario. We see here that the Discipline.execute() method can be used as in any other discipline to compute the outputs for given inputs:
for i in range(5):
lod = i * 2.0
y_4_pred = range_surrogate.execute({"y_24": array([lod])})["y_4"]
print(f"Surrogate range (L/D = {lod}) = {y_4_pred}")
Surrogate range (L/D = 0.0) = [-97.86844673]
Surrogate range (L/D = 2.0) = [184.60105962]
Surrogate range (L/D = 4.0) = [505.37518268]
Surrogate range (L/D = 6.0) = [840.33241658]
Surrogate range (L/D = 8.0) = [1161.49215263]
And we can build and execute an optimization scenario from it. The design variable is "y_24". The Jacobian matrix is computed by finite differences by default for surrogates, except for the SurrogateDiscipline relying on a LinearRegressor, which has an analytical (and constant) Jacobian.
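As a hedged sketch of this behavior, the finite-difference Jacobian of the RBF surrogate can be inspected with the standard linearize() method; the compute_all_jacobians keyword is an assumption that may differ between GEMSEO versions:
# Sketch only: approximate dy_4/dy_24 by finite differences at a given point.
jacobian = range_surrogate.linearize({"y_24": array([4.0])}, compute_all_jacobians=True)
print(jacobian["y_4"]["y_24"])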
design_space = design_space.filter(["y_24"])
scenario = create_scenario(
range_surrogate,
"y_4",
design_space,
formulation_name="DisciplinaryOpt",
maximize_objective=True,
)
scenario.execute(algo_name="L-BFGS-B", max_iter=30)
INFO - 08:37:53:
INFO - 08:37:53: *** Start MDOScenario execution ***
INFO - 08:37:53: MDOScenario
INFO - 08:37:53: Disciplines: RBF_DOEScenario
INFO - 08:37:53: MDO formulation: DisciplinaryOpt
INFO - 08:37:53: Optimization problem:
INFO - 08:37:53: minimize -y_4(y_24)
INFO - 08:37:53: with respect to y_24
INFO - 08:37:53: over the design space:
INFO - 08:37:53: +------+-------------+--------------------+-------------+-------+
INFO - 08:37:53: | Name | Lower bound | Value | Upper bound | Type |
INFO - 08:37:53: +------+-------------+--------------------+-------------+-------+
INFO - 08:37:53: | y_24 | 0.44 | 0.8060924457095278 | 11.13 | float |
INFO - 08:37:53: +------+-------------+--------------------+-------------+-------+
INFO - 08:37:53: Solving optimization problem with algorithm L-BFGS-B:
INFO - 08:37:53: 3%|▎ | 1/30 [00:00<00:00, 375.67 it/sec, obj=-10.3]
INFO - 08:37:53: 7%|▋ | 2/30 [00:00<00:00, 247.12 it/sec, obj=-1.59e+3]
INFO - 08:37:53: Optimization result:
INFO - 08:37:53: Optimizer info:
INFO - 08:37:53: Status: 0
INFO - 08:37:53: Message: CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL
INFO - 08:37:53: Number of calls to the objective function by the optimizer: 3
INFO - 08:37:53: Solution:
INFO - 08:37:53: Objective: -1589.7138353791008
INFO - 08:37:53: Design space:
INFO - 08:37:53: +------+-------------+-------+-------------+-------+
INFO - 08:37:53: | Name | Lower bound | Value | Upper bound | Type |
INFO - 08:37:53: +------+-------------+-------+-------------+-------+
INFO - 08:37:53: | y_24 | 0.44 | 11.13 | 11.13 | float |
INFO - 08:37:53: +------+-------------+-------+-------------+-------+
INFO - 08:37:53: *** End MDOScenario execution (time: 0:00:00.014106) ***
Available surrogate models#
Currently, the following surrogate models are available:

Linear regression, based on the scikit-learn library; use the LinearRegressor class.
Polynomial regression, based on the scikit-learn library; use the PolynomialRegressor class.
Gaussian processes (also known as Kriging), based on the scikit-learn library; use the GaussianProcessRegressor class.
Mixture of experts; use the MOERegressor class.
Random forest models, based on the scikit-learn library; use the RandomForestRegressor class.
RBF models (radial basis functions), based on the SciPy library; use the RBFRegressor class.
PCE models (polynomial chaos expansion), based on the OpenTURNS library; use the PCERegressor class.

To understand the detailed behavior of the models, please refer to the documentation of the underlying packages.
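For example, sticking to the API already used above, the RBF surrogate of the range could be replaced by a Gaussian process surrogate just by changing the class name passed to create_surrogate() (a sketch; the default hyperparameters may need tuning for your data):
# Same learning dataset, different regressor: a Kriging-like surrogate of the range.
gp_range_surrogate = create_surrogate("GaussianProcessRegressor", mission_dataset)
print(gp_range_surrogate.execute({"y_24": array([4.0])})["y_4"])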
Extending surrogate models#
All surrogate models work the same way: the BaseRegressor base class shall be extended. See Extend GEMSEO features to learn how to run GEMSEO with external Python modules. Then, the RegressorFactory can build the new BaseRegressor automatically from its regression algorithm name and options. This factory is called by the constructor of SurrogateDiscipline.
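As a purely illustrative skeleton (the import path and the protected method names _fit and _predict are assumptions to be checked against the BaseRegressor API of your GEMSEO version), a new regressor could look like this:
from numpy import ndarray
from numpy import tile

# Assumed import path, based on recent GEMSEO versions.
from gemseo.mlearning.regression.algos.base_regressor import BaseRegressor


class ConstantRegressor(BaseRegressor):
    """A toy regressor predicting the mean of the learning outputs."""

    def _fit(self, input_data: ndarray, output_data: ndarray) -> None:
        # Learn a single statistic: the mean of the training outputs.
        self._mean = output_data.mean(axis=0)

    def _predict(self, input_data: ndarray) -> ndarray:
        # Predict that mean for every input sample.
        return tile(self._mean, (len(input_data), 1))

Once such a class is declared in a module known to GEMSEO (see Extend GEMSEO features), it could be instantiated by name, e.g. create_surrogate("ConstantRegressor", mission_dataset).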
See also
More generally, GEMSEO provides extension mechanisms to integrate external DOE and optimization algorithms, disciplines, MDAs and surrogate models.
Total running time of the script: (0 minutes 0.114 seconds)