Parameter space

In this example, we will see the basics of ParameterSpace.

from __future__ import annotations

from gemseo.algos.parameter_space import ParameterSpace
from gemseo.api import configure_logger
from gemseo.api import create_discipline
from gemseo.api import create_scenario

configure_logger()
<RootLogger root (INFO)>

Firstly, a ParameterSpace does not require any mandatory argument.

Create a parameter space

parameter_space = ParameterSpace()

Then, we can add either deterministic variables, defined by their lower and upper bounds (using DesignSpace.add_variable()), or uncertain variables, defined by their distribution names and parameters (using add_random_variable()).

parameter_space.add_variable("x", l_b=-2.0, u_b=2.0)
parameter_space.add_random_variable("y", "SPNormalDistribution", mu=0.0, sigma=1.0)
print(parameter_space)
+----------------------------------------------------------------------------+
|                              Parameter space                               |
+------+-------------+-------+-------------+-------+-------------------------+
| name | lower_bound | value | upper_bound | type  |   Initial distribution  |
+------+-------------+-------+-------------+-------+-------------------------+
| x    |      -2     |  None |      2      | float |                         |
| y    |     -inf    |   0   |     inf     | float | norm(mu=0.0, sigma=1.0) |
+------+-------------+-------+-------------+-------+-------------------------+
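
Other distributions can be attached in the same way. Here is a hedged sketch (the SPUniformDistribution name and its minimum/maximum parameters are assumptions based on the SciPy-based distributions shipped with GEMSEO), using a throwaway space so as not to alter the rest of the example:

other_space = ParameterSpace()
# Assumed distribution name and parameters; adjust to the installed version.
other_space.add_random_variable("u", "SPUniformDistribution", minimum=0.0, maximum=1.0)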

We can check that the deterministic and uncertain variables are indeed identified as deterministic and uncertain variables respectively:

print("x is deterministic: ", parameter_space.is_deterministic("x"))
print("y is deterministic: ", parameter_space.is_deterministic("y"))
print("x is uncertain: ", parameter_space.is_uncertain("x"))
print("y is uncertain: ", parameter_space.is_uncertain("y"))
x is deterministic:  True
y is deterministic:  False
x is uncertain:  False
y is uncertain:  True
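
The names of the uncertain variables are also exposed as a property, which we will use later to filter the space:

# List the names of the uncertain variables; here only 'y'.
print(parameter_space.uncertain_variables)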

Sample from the parameter space

We can sample the uncertain variables from the ParameterSpace and get the values either as an array (the default) or as a dictionary:

sample = parameter_space.compute_samples(n_samples=2, as_dict=True)
print(sample)
sample = parameter_space.compute_samples(n_samples=4)
print(sample)
[{'y': array([-0.22455908])}, {'y': array([0.96599831])}]
[[-0.94119273]
 [ 0.71294261]
 [ 0.82955208]
 [ 1.40263977]]

Sample a discipline over the parameter space

We can also sample a discipline over the parameter space. For simplicity, we instantiate an AnalyticDiscipline from a dictionary of expressions.

discipline = create_discipline("AnalyticDiscipline", expressions={"z": "x+y"})
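
As a quick sanity check, we can execute the discipline once with hypothetical input values; since z = x + y, executing it with x = 1 and y = 2 must return z = 3:

from numpy import array

# Execute the discipline once; the returned local data contain "z".
print(discipline.execute({"x": array([1.0]), "y": array([2.0])})["z"])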

From this parameter space and this discipline, we build a DOEScenario and execute it with a Latin hypercube sampling (LHS) algorithm and 100 samples.

Warning

A Scenario deals with all the variables available in its DesignSpace. By inheritance, a DOEScenario deals with all the variables available in its ParameterSpace. Thus, if we do not filter the uncertain variables, the DOEScenario will consider all of them; in particular, the deterministic variables will be considered as uniformly distributed between their bounds.

scenario = create_scenario(
    [discipline], "DisciplinaryOpt", "z", parameter_space, scenario_type="DOE"
)
scenario.execute({"algo": "lhs", "n_samples": 100})
    INFO - 14:48:21:
    INFO - 14:48:21: *** Start DOEScenario execution ***
    INFO - 14:48:21: DOEScenario
    INFO - 14:48:21:    Disciplines: AnalyticDiscipline
    INFO - 14:48:21:    MDO formulation: DisciplinaryOpt
    INFO - 14:48:21: Optimization problem:
    INFO - 14:48:21:    minimize z(x, y)
    INFO - 14:48:21:    with respect to x, y
    INFO - 14:48:21:    over the design space:
    INFO - 14:48:21:    |                              Parameter space                               |
    INFO - 14:48:21:    +------+-------------+-------+-------------+-------+-------------------------+
    INFO - 14:48:21:    | name | lower_bound | value | upper_bound | type  |   Initial distribution  |
    INFO - 14:48:21:    +------+-------------+-------+-------------+-------+-------------------------+
    INFO - 14:48:21:    | x    |      -2     |  None |      2      | float |                         |
    INFO - 14:48:21:    | y    |     -inf    |   0   |     inf     | float | norm(mu=0.0, sigma=1.0) |
    INFO - 14:48:21:    +------+-------------+-------+-------------+-------+-------------------------+
    INFO - 14:48:21: Solving optimization problem with algorithm lhs:
    INFO - 14:48:21: ...   0%|          | 0/100 [00:00<?, ?it]
    INFO - 14:48:21: ... 100%|██████████| 100/100 [00:00<00:00, 2957.67 it/sec, obj=-2.21]
    INFO - 14:48:21: Optimization result:
    INFO - 14:48:21:    Optimizer info:
    INFO - 14:48:21:       Status: None
    INFO - 14:48:21:       Message: None
    INFO - 14:48:21:       Number of calls to the objective function by the optimizer: 100
    INFO - 14:48:21:    Solution:
    INFO - 14:48:21:       Objective: -3.3284373246961634
    INFO - 14:48:21:       +-----------------------------------------------------------------------------------------+
    INFO - 14:48:21:       |                                     Parameter space                                     |
    INFO - 14:48:21:       +------+-------------+--------------------+-------------+-------+-------------------------+
    INFO - 14:48:21:       | name | lower_bound |       value        | upper_bound | type  |   Initial distribution  |
    INFO - 14:48:21:       +------+-------------+--------------------+-------------+-------+-------------------------+
    INFO - 14:48:21:       | x    |      -2     | -1.959995425007306 |      2      | float |                         |
    INFO - 14:48:21:       | y    |     -inf    | -1.368441899688857 |     inf     | float | norm(mu=0.0, sigma=1.0) |
    INFO - 14:48:21:       +------+-------------+--------------------+-------------+-------+-------------------------+
    INFO - 14:48:21: *** End DOEScenario execution (time: 0:00:00.058304) ***

{'eval_jac': False, 'algo': 'lhs', 'n_samples': 100}
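
The solution reported in the log can also be retrieved programmatically; a minimal sketch, assuming the Scenario.get_optimum() accessor and the f_opt attribute of the returned optimization result:

# Sketch (assumed accessor): get the best objective value found by the DOE.
optimum = scenario.get_optimum()
print(optimum.f_opt)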

We can visualize the result by encapsulating the database in a Dataset:

dataset = scenario.export_to_dataset(opt_naming=False)

This visualization can be tabular, for example:

print(dataset.export_to_dataframe())
      inputs             outputs
           x         y         z
           0         0         0
0   1.869403  1.246453  3.115855
1  -1.567970  3.285041  1.717071
2   0.282640 -0.101706  0.180934
3   1.916313  1.848317  3.764630
4   1.562653  0.586038  2.148691
..       ...       ...       ...
95  0.120633 -0.327477 -0.206844
96 -0.999225  1.461403  0.462178
97 -1.396066 -0.972779 -2.368845
98  1.090093  0.225565  1.315658
99 -1.433207 -0.779330 -2.212536

[100 rows x 3 columns]

or graphical, for example by means of a scatter plot matrix:

dataset.plot("ScatterMatrix")
/home/docs/checkouts/readthedocs.org/user_builds/gemseo/envs/4.1.0/lib/python3.9/site-packages/gemseo/post/dataset/scatter_plot_matrix.py:135: UserWarning: To output multiple subplots, the figure containing the passed axes is being cleared.
  sub_axes = scatter_matrix(

<gemseo.post.dataset.scatter_plot_matrix.ScatterMatrix object at 0x7f3c38d94c40>
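
To save the figure to disk instead of displaying it, plot options can be passed through; a sketch assuming that Dataset.plot forwards the usual save, show and file_path options of the underlying DatasetPlot:

# Sketch (assumed option forwarding): save the scatter plot matrix to a file.
dataset.plot("ScatterMatrix", save=True, show=False, file_path="scatter_matrix")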

Sample a discipline over the uncertain space

If we want to sample a discipline over the uncertain space, we need to filter the uncertain variables:

parameter_space.filter(parameter_space.uncertain_variables)
<gemseo.algos.parameter_space.ParameterSpace object at 0x7f3bf9137e50>
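
As the returned object shows, filter() modifies the parameter space in place and returns it. Printing the space confirms that only the uncertain variable y remains:

# Only the random variable "y" should be left in the space.
print(parameter_space)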

Then, we create a new scenario from this parameter space containing only the uncertain variables and execute it.

scenario = create_scenario(
    [discipline], "DisciplinaryOpt", "z", parameter_space, scenario_type="DOE"
)
scenario.execute({"algo": "lhs", "n_samples": 100})
    INFO - 14:48:21:
    INFO - 14:48:21: *** Start DOEScenario execution ***
    INFO - 14:48:21: DOEScenario
    INFO - 14:48:21:    Disciplines: AnalyticDiscipline
    INFO - 14:48:21:    MDO formulation: DisciplinaryOpt
    INFO - 14:48:21: Optimization problem:
    INFO - 14:48:21:    minimize z(y)
    INFO - 14:48:21:    with respect to y
    INFO - 14:48:21:    over the design space:
    INFO - 14:48:21:    |                                     Parameter space                                     |
    INFO - 14:48:21:    +------+-------------+--------------------+-------------+-------+-------------------------+
    INFO - 14:48:21:    | name | lower_bound |       value        | upper_bound | type  |   Initial distribution  |
    INFO - 14:48:21:    +------+-------------+--------------------+-------------+-------+-------------------------+
    INFO - 14:48:21:    | y    |     -inf    | -1.368441899688857 |     inf     | float | norm(mu=0.0, sigma=1.0) |
    INFO - 14:48:21:    +------+-------------+--------------------+-------------+-------+-------------------------+
    INFO - 14:48:21: Solving optimization problem with algorithm lhs:
    INFO - 14:48:21: ...   0%|          | 0/100 [00:00<?, ?it]
    INFO - 14:48:21: ... 100%|██████████| 100/100 [00:00<00:00, 3022.33 it/sec, obj=-.813]
    INFO - 14:48:21: Optimization result:
    INFO - 14:48:21:    Optimizer info:
    INFO - 14:48:21:       Status: None
    INFO - 14:48:21:       Message: None
    INFO - 14:48:21:       Number of calls to the objective function by the optimizer: 100
    INFO - 14:48:21:    Solution:
    INFO - 14:48:21:       Objective: -2.6379682068246657
    INFO - 14:48:21:       +-----------------------------------------------------------------------------------------+
    INFO - 14:48:21:       |                                     Parameter space                                     |
    INFO - 14:48:21:       +------+-------------+--------------------+-------------+-------+-------------------------+
    INFO - 14:48:21:       | name | lower_bound |       value        | upper_bound | type  |   Initial distribution  |
    INFO - 14:48:21:       +------+-------------+--------------------+-------------+-------+-------------------------+
    INFO - 14:48:21:       | y    |     -inf    | -2.637968206824666 |     inf     | float | norm(mu=0.0, sigma=1.0) |
    INFO - 14:48:21:       +------+-------------+--------------------+-------------+-------+-------------------------+
    INFO - 14:48:21: *** End DOEScenario execution (time: 0:00:00.056268) ***

{'eval_jac': False, 'algo': 'lhs', 'n_samples': 100}

Finally, we build a dataset from the disciplinary cache and visualize it. We can see that the deterministic variable 'x' is set to its default value for all the evaluations, contrary to the previous case where the whole parameter space was considered.

dataset = scenario.export_to_dataset(opt_naming=False)
print(dataset.export_to_dataframe())
      inputs   outputs
           y         z
           0         0
0  -0.640726 -0.640726
1  -0.393653 -0.393653
2   0.550565  0.550565
3   0.944369  0.944369
4  -2.115275 -2.115275
..       ...       ...
95  0.081947  0.081947
96 -1.085812 -1.085812
97 -0.761651 -0.761651
98 -0.042932 -0.042932
99 -0.813354 -0.813354

[100 rows x 2 columns]
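
We can confirm this programmatically: since z = x + y and x is frozen at its default value, the y and z columns coincide. A minimal check on the exported dataframe, comparing the two columns positionally:

df = dataset.export_to_dataframe()
# The first column is the input y, the second the output z; they are equal
# because x keeps its default value of 0 during the whole sampling.
print((df.iloc[:, 0] == df.iloc[:, 1]).all())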
