Parameter space

In this example, we will see the basics of ParameterSpace.

from gemseo.algos.parameter_space import ParameterSpace
from gemseo.api import configure_logger
from gemseo.api import create_discipline
from gemseo.api import create_scenario
from matplotlib import pyplot as plt

configure_logger()

Out:

<RootLogger root (INFO)>

Create a parameter space

Firstly, the creation of a ParameterSpace does not require any mandatory argument:

parameter_space = ParameterSpace()

Then, we can add either deterministic variables from their lower and upper bounds (using ParameterSpace.add_variable()) or uncertain variables from their distribution names and parameters (using ParameterSpace.add_random_variable()).

parameter_space.add_variable("x", l_b=-2.0, u_b=2.0)
parameter_space.add_random_variable("y", "SPNormalDistribution", mu=0.0, sigma=1.0)
print(parameter_space)

Out:

+----------------------------------------------------------------------------+
|                              Parameter space                               |
+------+-------------+-------+-------------+-------+-------------------------+
| name | lower_bound | value | upper_bound | type  |   Initial distribution  |
+------+-------------+-------+-------------+-------+-------------------------+
| x    |      -2     |  None |      2      | float |                         |
| y    |     -inf    |   0   |     inf     | float | norm(mu=0.0, sigma=1.0) |
+------+-------------+-------+-------------+-------+-------------------------+
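Other distributions can be declared in the same way. Below is a small sketch with a uniform variable; the SciPy-based "SPUniformDistribution" name and its minimum/maximum parameters are assumptions to be checked against the distributions available in your GEMSEO version. We build it in a separate space so as not to alter the one used in the rest of this example:

other_space = ParameterSpace()
# Assumed distribution name and parameter names; adapt if your version differs.
other_space.add_random_variable("u", "SPUniformDistribution", minimum=0.0, maximum=1.0)
print(other_space)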

We can check that the deterministic and uncertain variables are implemented as deterministic and uncertain variables respectively:

print("x is deterministic: ", parameter_space.is_deterministic("x"))
print("y is deterministic: ", parameter_space.is_deterministic("y"))
print("x is uncertain: ", parameter_space.is_uncertain("x"))
print("y is uncertain: ", parameter_space.is_uncertain("y"))

Out:

x is deterministic:  True
y is deterministic:  False
x is uncertain:  False
y is uncertain:  True
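The names of the uncertain variables are also available as a list, which is what the checks above rely on (a quick sketch assuming the uncertain_variables attribute of ParameterSpace):

# Names of the variables declared as random; "x" is absent from this list.
print(parameter_space.uncertain_variables)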

Sample from the parameter space

We can sample the uncertain variables from the ParameterSpace and get the values either as a NumPy array (by default) or as a list of dictionaries of NumPy arrays indexed by the names of the variables:

sample = parameter_space.compute_samples(n_samples=2, as_dict=True)
print(sample)
sample = parameter_space.compute_samples(n_samples=4)
print(sample)

Out:

[{'y': array([-0.29202523])}, {'y': array([0.62106936])}]
[[-0.05989239]
 [-0.31236041]
 [ 0.48944185]
 [-1.33652064]]

Sample a discipline over the parameter space

We can also sample a discipline over the parameter space. For simplicity, we instantiate an AnalyticDiscipline from a dictionary of expressions:

discipline = create_discipline("AnalyticDiscipline", expressions={"z": "x+y"})
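Before sampling it, we can sanity-check the discipline with a single evaluation; execute() returns the discipline's local data as a dictionary of NumPy arrays:

from numpy import array

# z = x + y, so we expect [1.5] here.
output_data = discipline.execute({"x": array([1.0]), "y": array([0.5])})
print(output_data["z"])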

From this parameter space and this discipline, we build a DOEScenario and execute it with a Latin Hypercube Sampling algorithm and 100 samples.

Warning

A DOEScenario considers all the variables available in its DesignSpace. By inheritance, in the special case of a ParameterSpace, a DOEScenario considers all the variables available in this ParameterSpace. Thus, if we do not extract the uncertain variables beforehand, the DOEScenario will sample the deterministic variables as if they were uniformly distributed over their bounds, and the uncertain variables according to their specified probability distributions.

scenario = create_scenario(
    [discipline], "DisciplinaryOpt", "z", parameter_space, scenario_type="DOE"
)
scenario.execute({"algo": "lhs", "n_samples": 100})

Out:

    INFO - 07:19:15:
    INFO - 07:19:15: *** Start DOEScenario execution ***
    INFO - 07:19:15: DOEScenario
    INFO - 07:19:15:    Disciplines: AnalyticDiscipline
    INFO - 07:19:15:    MDO formulation: DisciplinaryOpt
    INFO - 07:19:15: Optimization problem:
    INFO - 07:19:15:    minimize z(x, y)
    INFO - 07:19:15:    with respect to x, y
    INFO - 07:19:15:    over the design space:
    INFO - 07:19:15:    |                              Parameter space                               |
    INFO - 07:19:15:    +------+-------------+-------+-------------+-------+-------------------------+
    INFO - 07:19:15:    | name | lower_bound | value | upper_bound | type  |   Initial distribution  |
    INFO - 07:19:15:    +------+-------------+-------+-------------+-------+-------------------------+
    INFO - 07:19:15:    | x    |      -2     |  None |      2      | float |                         |
    INFO - 07:19:15:    | y    |     -inf    |   0   |     inf     | float | norm(mu=0.0, sigma=1.0) |
    INFO - 07:19:15:    +------+-------------+-------+-------------+-------+-------------------------+
    INFO - 07:19:15: Solving optimization problem with algorithm lhs:
    INFO - 07:19:15: ...   0%|          | 0/100 [00:00<?, ?it]
    INFO - 07:19:15: ... 100%|██████████| 100/100 [00:00<00:00, 2585.06 it/sec, obj=-2.21]
    INFO - 07:19:15: Optimization result:
    INFO - 07:19:15:    Optimizer info:
    INFO - 07:19:15:       Status: None
    INFO - 07:19:15:       Message: None
    INFO - 07:19:15:       Number of calls to the objective function by the optimizer: 100
    INFO - 07:19:15:    Solution:
    INFO - 07:19:15:       Objective: -3.3284373246961634
    INFO - 07:19:15:       +-----------------------------------------------------------------------------------------+
    INFO - 07:19:15:       |                                     Parameter space                                     |
    INFO - 07:19:15:       +------+-------------+--------------------+-------------+-------+-------------------------+
    INFO - 07:19:15:       | name | lower_bound |       value        | upper_bound | type  |   Initial distribution  |
    INFO - 07:19:15:       +------+-------------+--------------------+-------------+-------+-------------------------+
    INFO - 07:19:15:       | x    |      -2     | -1.959995425007306 |      2      | float |                         |
    INFO - 07:19:15:       | y    |     -inf    | -1.368441899688857 |     inf     | float | norm(mu=0.0, sigma=1.0) |
    INFO - 07:19:15:       +------+-------------+--------------------+-------------+-------+-------------------------+
    INFO - 07:19:15: *** End DOEScenario execution (time: 0:00:00.057681) ***

{'eval_jac': False, 'algo': 'lhs', 'n_samples': 100}

We can export the optimization problem to a Dataset:

dataset = scenario.export_to_dataset(name="samples")

and visualize it in a tabular way:

print(dataset.export_to_dataframe())

Out:

   design_parameters           functions
                   x         y         z
                   0         0         0
0           1.869403  1.246453  3.115855
1          -1.567970  3.285041  1.717071
2           0.282640 -0.101706  0.180934
3           1.916313  1.848317  3.764630
4           1.562653  0.586038  2.148691
..               ...       ...       ...
95          0.120633 -0.327477 -0.206844
96         -0.999225  1.461403  0.462178
97         -1.396066 -0.972779 -2.368845
98          1.090093  0.225565  1.315658
99         -1.433207 -0.779330 -2.212536

[100 rows x 3 columns]

or with a graphical post-processing, e.g. a scatter plot matrix:

dataset.plot("ScatterMatrix", show=False)
# Workaround for HTML rendering, instead of ``show=True``
plt.show()
[Figure: scatter plot matrix of the x, y and z samples]

Out:

/home/docs/checkouts/readthedocs.org/user_builds/gemseo/envs/4.0.0/lib/python3.9/site-packages/gemseo/post/dataset/scatter_plot_matrix.py:135: UserWarning: To output multiple subplots, the figure containing the passed axes is being cleared.
  sub_axes = scatter_matrix(

Sample a discipline over the uncertain space

If we want to sample a discipline over the uncertain space, we need to extract it:

uncertain_space = parameter_space.extract_uncertain_space()
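As a quick check, we can print this new space and verify that only the uncertain variable remains:

print(uncertain_space)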

Then, we clear the discipline's cache, create a new scenario from this uncertain space, and execute it.

discipline.cache.clear()
scenario = create_scenario(
    [discipline], "DisciplinaryOpt", "z", uncertain_space, scenario_type="DOE"
)
scenario.execute({"algo": "lhs", "n_samples": 100})

Out:

    INFO - 07:19:16:
    INFO - 07:19:16: *** Start DOEScenario execution ***
    INFO - 07:19:16: DOEScenario
    INFO - 07:19:16:    Disciplines: AnalyticDiscipline
    INFO - 07:19:16:    MDO formulation: DisciplinaryOpt
    INFO - 07:19:16: Optimization problem:
    INFO - 07:19:16:    minimize z(y)
    INFO - 07:19:16:    with respect to y
    INFO - 07:19:16:    over the design space:
    INFO - 07:19:16:    |                                     Parameter space                                     |
    INFO - 07:19:16:    +------+-------------+--------------------+-------------+-------+-------------------------+
    INFO - 07:19:16:    | name | lower_bound |       value        | upper_bound | type  |   Initial distribution  |
    INFO - 07:19:16:    +------+-------------+--------------------+-------------+-------+-------------------------+
    INFO - 07:19:16:    | y    |     -inf    | -1.368441899688857 |     inf     | float | norm(mu=0.0, sigma=1.0) |
    INFO - 07:19:16:    +------+-------------+--------------------+-------------+-------+-------------------------+
    INFO - 07:19:16: Solving optimization problem with algorithm lhs:
    INFO - 07:19:16: ...   0%|          | 0/100 [00:00<?, ?it]
    INFO - 07:19:16: ... 100%|██████████| 100/100 [00:00<00:00, 2741.05 it/sec, obj=-2.25]
    INFO - 07:19:16: Optimization result:
    INFO - 07:19:16:    Optimizer info:
    INFO - 07:19:16:       Status: None
    INFO - 07:19:16:       Message: None
    INFO - 07:19:16:       Number of calls to the objective function by the optimizer: 100
    INFO - 07:19:16:    Solution:
    INFO - 07:19:16:       Objective: -4.071174990042083
    INFO - 07:19:16:       +-----------------------------------------------------------------------------------------+
    INFO - 07:19:16:       |                                     Parameter space                                     |
    INFO - 07:19:16:       +------+-------------+--------------------+-------------+-------+-------------------------+
    INFO - 07:19:16:       | name | lower_bound |       value        | upper_bound | type  |   Initial distribution  |
    INFO - 07:19:16:       +------+-------------+--------------------+-------------+-------+-------------------------+
    INFO - 07:19:16:       | y    |     -inf    | -2.637968206824666 |     inf     | float | norm(mu=0.0, sigma=1.0) |
    INFO - 07:19:16:       +------+-------------+--------------------+-------------+-------+-------------------------+
    INFO - 07:19:16: *** End DOEScenario execution (time: 0:00:00.055011) ***

{'eval_jac': False, 'algo': 'lhs', 'n_samples': 100}

Finally, we build a dataset from the scenario and visualize it. Since only the uncertain variable 'y' was sampled, the deterministic variable 'x' no longer appears in the dataset: the discipline simply used its default value of 'x' for every evaluation, contrary to the previous case where the whole parameter space was considered:

dataset = scenario.export_to_dataset(name="samples")
print(dataset.export_to_dataframe())

Out:

   design_parameters functions
                   y         z
                   0         0
0          -0.640726 -2.073933
1          -0.393653 -1.826859
2           0.550565 -0.882642
3           0.944369 -0.488838
4          -2.115275 -3.548482
..               ...       ...
95          0.081947 -1.351260
96         -1.085812 -2.519018
97         -0.761651 -2.194858
98         -0.042932 -1.476139
99         -0.813354 -2.246561

[100 rows x 2 columns]
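To see which value was used for 'x', we can inspect the discipline's default inputs, i.e. the values a discipline falls back on when an input is not provided (a minimal check based on the default_inputs dictionary):

# "x" was not part of the DOE, so every evaluation used this default value.
print(discipline.default_inputs["x"])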
