Parameter space
In this example, we will see the basics of ParameterSpace.
from gemseo.algos.parameter_space import ParameterSpace
from gemseo.api import configure_logger
from gemseo.api import create_discipline
from gemseo.api import create_scenario
configure_logger()
Out:
<RootLogger root (INFO)>
Create a parameter space
First of all, a ParameterSpace does not require any mandatory argument:
parameter_space = ParameterSpace()
Then, we can add either deterministic variables from their lower and upper bounds (using DesignSpace.add_variable()) or uncertain variables from their distribution names and parameters (using add_random_variable()):
parameter_space.add_variable("x", l_b=-2.0, u_b=2.0)
parameter_space.add_random_variable("y", "SPNormalDistribution", mu=0.0, sigma=1.0)
print(parameter_space)
Out:
+----------------------------------------------------------------------------+
| Parameter space |
+------+-------------+-------+-------------+-------+-------------------------+
| name | lower_bound | value | upper_bound | type | Initial distribution |
+------+-------------+-------+-------------+-------+-------------------------+
| x | -2 | None | 2 | float | |
| y | -inf | 0 | inf | float | norm(mu=0.0, sigma=1.0) |
+------+-------------+-------+-------------+-------+-------------------------+
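Both methods handle more than the scalar cases above. Here is a minimal sketch on a separate space, assuming that add_variable() accepts a size argument (as in DesignSpace) and that SPUniformDistribution is parameterized by minimum and maximum:
# A separate space, so that the parameter_space above is left untouched.
other_space = ParameterSpace()
# Assumed: "size" declares a two-component deterministic variable.
other_space.add_variable("a", size=2, l_b=0.0, u_b=1.0)
# Assumed parameter names for the uniform distribution: "minimum" and "maximum".
other_space.add_random_variable("b", "SPUniformDistribution", minimum=-1.0, maximum=1.0)
print(other_space)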
We can check that the deterministic and uncertain variables are correctly identified as deterministic and uncertain respectively:
print("x is deterministic: ", parameter_space.is_deterministic("x"))
print("y is deterministic: ", parameter_space.is_deterministic("y"))
print("x is uncertain: ", parameter_space.is_uncertain("x"))
print("y is uncertain: ", parameter_space.is_uncertain("y"))
Out:
x is deterministic: True
y is deterministic: False
x is uncertain: False
y is uncertain: True
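The variable names can also be retrieved as lists; uncertain_variables is used later in this example, while deterministic_variables is assumed here to be its deterministic counterpart:
print(parameter_space.uncertain_variables)  # expected: ["y"]
print(parameter_space.deterministic_variables)  # assumed property; expected: ["x"]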
Sample from the parameter space
We can sample the uncertain variables from the ParameterSpace and get the values either as an array (the default) or as a dictionary:
sample = parameter_space.compute_samples(n_samples=2, as_dict=True)
print(sample)
sample = parameter_space.compute_samples(n_samples=4)
print(sample)
Out:
[{'y': array([-0.22455908])}, {'y': array([0.96599831])}]
[[-0.94119273]
[ 0.71294261]
[ 0.82955208]
[ 1.40263977]]
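As compute_samples() returns a plain NumPy array by default, we can sanity-check the sampler with standard NumPy statistics. A minimal sketch, using a larger sample size so that the estimates are stable:
samples = parameter_space.compute_samples(n_samples=1000)
# The empirical moments should be close to mu=0.0 and sigma=1.0.
print(samples.mean(), samples.std())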
Sample a discipline over the parameter space
We can also sample a discipline over the parameter space. For simplicity, we instantiate an AnalyticDiscipline from a dictionary of expressions:
discipline = create_discipline("AnalyticDiscipline", expressions={"z": "x+y"})
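Before sampling this discipline, we can execute it once on its own by passing the inputs as a dictionary of NumPy arrays, which is the usual discipline calling convention; a minimal sketch:
from numpy import array

# z = x + y, so x=1 and y=2 should give z=3.
discipline.execute({"x": array([1.0]), "y": array([2.0])})
print(discipline.local_data["z"])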
From this parameter space and discipline, we build a DOEScenario and execute it with a Latin Hypercube Sampling (LHS) algorithm and 100 samples.
Warning
A Scenario deals with all the variables available in its DesignSpace. By inheritance, a DOEScenario deals with all the variables available in its ParameterSpace. Thus, if we do not filter the uncertain variables, the DOEScenario will consider all the variables. In particular, the deterministic variables will be considered as uniformly distributed.
scenario = create_scenario(
    [discipline], "DisciplinaryOpt", "z", parameter_space, scenario_type="DOE"
)
scenario.execute({"algo": "lhs", "n_samples": 100})
Out:
INFO - 07:19:14:
INFO - 07:19:14: *** Start DOEScenario execution ***
INFO - 07:19:14: DOEScenario
INFO - 07:19:14: Disciplines: AnalyticDiscipline
INFO - 07:19:14: MDO formulation: DisciplinaryOpt
INFO - 07:19:14: Optimization problem:
INFO - 07:19:14: minimize z(x, y)
INFO - 07:19:14: with respect to x, y
INFO - 07:19:14: over the design space:
INFO - 07:19:14: | Parameter space |
INFO - 07:19:14: +------+-------------+-------+-------------+-------+-------------------------+
INFO - 07:19:14: | name | lower_bound | value | upper_bound | type | Initial distribution |
INFO - 07:19:14: +------+-------------+-------+-------------+-------+-------------------------+
INFO - 07:19:14: | x | -2 | None | 2 | float | |
INFO - 07:19:14: | y | -inf | 0 | inf | float | norm(mu=0.0, sigma=1.0) |
INFO - 07:19:14: +------+-------------+-------+-------------+-------+-------------------------+
INFO - 07:19:14: Solving optimization problem with algorithm lhs:
INFO - 07:19:14: ... 0%| | 0/100 [00:00<?, ?it]
INFO - 07:19:14: ... 100%|██████████| 100/100 [00:00<00:00, 2602.70 it/sec, obj=-2.21]
INFO - 07:19:14: Optimization result:
INFO - 07:19:14: Optimizer info:
INFO - 07:19:14: Status: None
INFO - 07:19:14: Message: None
INFO - 07:19:14: Number of calls to the objective function by the optimizer: 100
INFO - 07:19:14: Solution:
INFO - 07:19:14: Objective: -3.3284373246961634
INFO - 07:19:14: +-----------------------------------------------------------------------------------------+
INFO - 07:19:14: | Parameter space |
INFO - 07:19:14: +------+-------------+--------------------+-------------+-------+-------------------------+
INFO - 07:19:14: | name | lower_bound | value | upper_bound | type | Initial distribution |
INFO - 07:19:14: +------+-------------+--------------------+-------------+-------+-------------------------+
INFO - 07:19:14: | x | -2 | -1.959995425007306 | 2 | float | |
INFO - 07:19:14: | y | -inf | -1.368441899688857 | inf | float | norm(mu=0.0, sigma=1.0) |
INFO - 07:19:14: +------+-------------+--------------------+-------------+-------+-------------------------+
INFO - 07:19:14: *** End DOEScenario execution (time: 0:00:00.058581) ***
{'eval_jac': False, 'algo': 'lhs', 'n_samples': 100}
We can visualize the result by encapsulating the database in a Dataset:
dataset = scenario.export_to_dataset(opt_naming=False)
This visualization can be tabular, for example:
print(dataset.export_to_dataframe())
Out:
inputs outputs
x y z
0 0 0
0 1.869403 1.246453 3.115855
1 -1.567970 3.285041 1.717071
2 0.282640 -0.101706 0.180934
3 1.916313 1.848317 3.764630
4 1.562653 0.586038 2.148691
.. ... ... ...
95 0.120633 -0.327477 -0.206844
96 -0.999225 1.461403 0.462178
97 -1.396066 -0.972779 -2.368845
98 1.090093 0.225565 1.315658
99 -1.433207 -0.779330 -2.212536
[100 rows x 3 columns]
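Since export_to_dataframe() returns a standard pandas DataFrame, the samples can be persisted with the usual pandas tooling; for instance (the file name is arbitrary):
dataframe = dataset.export_to_dataframe()
# Write the 100 evaluations to disk for later post-processing.
dataframe.to_csv("doe_results.csv")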
or graphical, by means of a scatter plot matrix for example:
dataset.plot("ScatterMatrix")
Out:
/home/docs/checkouts/readthedocs.org/user_builds/gemseo/envs/4.0.0/lib/python3.9/site-packages/gemseo/post/dataset/scatter_plot_matrix.py:135: UserWarning: To output multiple subplots, the figure containing the passed axes is being cleared.
sub_axes = scatter_matrix(
<gemseo.post.dataset.scatter_plot_matrix.ScatterMatrix object at 0x7f28d8a32220>
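In a non-interactive run, it is often preferable to write the figure to disk instead of displaying it. The save, show and file_path options below are assumptions about what Dataset.plot() forwards to the underlying DatasetPlot:
# Assumed options: save the figure to a file instead of opening a window.
dataset.plot("ScatterMatrix", save=True, show=False, file_path="scatter_matrix")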
Sample a discipline over the uncertain space
If we want to sample a discipline over the uncertain space only, we need to filter the parameter space so as to keep only the uncertain variables:
parameter_space.filter(parameter_space.uncertain_variables)
Out:
<gemseo.algos.parameter_space.ParameterSpace object at 0x7f28d8bf0550>
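We can check the effect of the filtering by printing the parameter space again; only the uncertain variable y should remain:
print(parameter_space)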
Then, we create a new scenario from this parameter space, which now contains only the uncertain variables, and execute it.
scenario = create_scenario(
    [discipline], "DisciplinaryOpt", "z", parameter_space, scenario_type="DOE"
)
scenario.execute({"algo": "lhs", "n_samples": 100})
Out:
INFO - 07:19:14:
INFO - 07:19:14: *** Start DOEScenario execution ***
INFO - 07:19:14: DOEScenario
INFO - 07:19:14: Disciplines: AnalyticDiscipline
INFO - 07:19:14: MDO formulation: DisciplinaryOpt
INFO - 07:19:14: Optimization problem:
INFO - 07:19:14: minimize z(y)
INFO - 07:19:14: with respect to y
INFO - 07:19:14: over the design space:
INFO - 07:19:14: | Parameter space |
INFO - 07:19:14: +------+-------------+--------------------+-------------+-------+-------------------------+
INFO - 07:19:14: | name | lower_bound | value | upper_bound | type | Initial distribution |
INFO - 07:19:14: +------+-------------+--------------------+-------------+-------+-------------------------+
INFO - 07:19:14: | y | -inf | -1.368441899688857 | inf | float | norm(mu=0.0, sigma=1.0) |
INFO - 07:19:14: +------+-------------+--------------------+-------------+-------+-------------------------+
INFO - 07:19:14: Solving optimization problem with algorithm lhs:
INFO - 07:19:14: ... 0%| | 0/100 [00:00<?, ?it]
INFO - 07:19:14: ... 100%|██████████| 100/100 [00:00<00:00, 2690.88 it/sec, obj=-2.25]
INFO - 07:19:14: Optimization result:
INFO - 07:19:14: Optimizer info:
INFO - 07:19:14: Status: None
INFO - 07:19:14: Message: None
INFO - 07:19:14: Number of calls to the objective function by the optimizer: 100
INFO - 07:19:14: Solution:
INFO - 07:19:14: Objective: -4.071174990042083
INFO - 07:19:14: +-----------------------------------------------------------------------------------------+
INFO - 07:19:14: | Parameter space |
INFO - 07:19:14: +------+-------------+--------------------+-------------+-------+-------------------------+
INFO - 07:19:14: | name | lower_bound | value | upper_bound | type | Initial distribution |
INFO - 07:19:14: +------+-------------+--------------------+-------------+-------+-------------------------+
INFO - 07:19:14: | y | -inf | -2.637968206824666 | inf | float | norm(mu=0.0, sigma=1.0) |
INFO - 07:19:14: +------+-------------+--------------------+-------------+-------+-------------------------+
INFO - 07:19:14: *** End DOEScenario execution (time: 0:00:00.056016) ***
{'eval_jac': False, 'algo': 'lhs', 'n_samples': 100}
Finally, we build a dataset from the disciplinary cache and visualize it. We can see that the deterministic variable ‘x’ is set to its default value for all the evaluations, unlike the previous case where the whole parameter space was considered.
dataset = scenario.export_to_dataset(opt_naming=False)
print(dataset.export_to_dataframe())
Out:
inputs outputs
y z
0 0
0 -0.640726 -2.073933
1 -0.393653 -1.826859
2 0.550565 -0.882642
3 0.944369 -0.488838
4 -2.115275 -3.548482
.. ... ...
95 0.081947 -1.351260
96 -1.085812 -2.519018
97 -0.761651 -2.194858
98 -0.042932 -1.476139
99 -0.813354 -2.246561
[100 rows x 2 columns]
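To see which value ‘x’ was frozen at, we can inspect the discipline's default inputs, which a discipline falls back on for any input that is not provided:
# "x" no longer varies; the DOE used this default value for every evaluation.
print(discipline.default_inputs["x"])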
Total running time of the script: (0 minutes 0.510 seconds)