# Parameter space¶

In this example, we will see the basics of ParameterSpace.

from __future__ import division, unicode_literals

from gemseo.algos.parameter_space import ParameterSpace
from gemseo.api import configure_logger, create_discipline, create_scenario

configure_logger()


Out:

<RootLogger root (INFO)>


Firstly, a ParameterSpace does not require any mandatory argument.

## Create a parameter space¶

parameter_space = ParameterSpace()


Out:

INFO - 09:23:51: *** Create a new parameter space ***


Then, we can add either deterministic variables, defined by their lower and upper bounds (use DesignSpace.add_variable()), or uncertain variables, defined by their distribution names and parameters (use add_random_variable()).

parameter_space.add_variable("x", l_b=-2.0, u_b=2.0)
parameter_space.add_random_variable("y", "OTNormalDistribution")
print(parameter_space)


Out:

INFO - 09:23:51: Define the random variable: y
INFO - 09:23:51:    Distribution: norm(mu=0.0, sigma=1.0)
INFO - 09:23:51:    Dimension: 1
INFO - 09:23:51: |_ Mathematical support: [array([-inf,  inf])]
INFO - 09:23:51: |_ Numerical range: [array([-7.03448383,  7.03448691])]
INFO - 09:23:51: Add the random variable: y
INFO - 09:23:51: Define the random variable: y
INFO - 09:23:51:    Distribution: Composed(independent_copula)
INFO - 09:23:51:    Dimension: 1
INFO - 09:23:51:    Marginals:
INFO - 09:23:51:       y(1): norm(mu=0.0, sigma=1.0)
+----------------------------------------------------------------------------+
|                              Parameter space                               |
+------+-------------+-------+-------------+-------+-------------------------+
| name | lower_bound | value | upper_bound | type  |   Initial distribution  |
+------+-------------+-------+-------------+-------+-------------------------+
| x    |      -2     |  None |      2      | float |                         |
| y    |     -inf    |   0   |     inf     | float | norm(mu=0.0, sigma=1.0) |
+------+-------------+-------+-------------+-------+-------------------------+


We can check that the variables "x" and "y" are respectively implemented as deterministic and uncertain variables:

print("x is deterministic: ", parameter_space.is_deterministic("x"))
print("y is deterministic: ", parameter_space.is_deterministic("y"))
print("x is uncertain: ", parameter_space.is_uncertain("x"))
print("y is uncertain: ", parameter_space.is_uncertain("y"))


Out:

x is deterministic:  True
y is deterministic:  False
x is uncertain:  False
y is uncertain:  True


## Sample from the parameter space¶

We can sample the uncertain variables from the ParameterSpace and get the values either as an array (the default) or as a dictionary:

sample = parameter_space.compute_samples(n_samples=2, as_dict=True)
print(sample)
sample = parameter_space.compute_samples(n_samples=4)
print(sample)


Out:

[{'y': array([-0.80217284])}, {'y': array([-0.44887781])}]
[[-1.10593508]
 [-1.65451545]
 [-2.3634686 ]
 [ 1.13534535]]
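The relation between the two return formats can be illustrated with plain Python (this sketch is not GEMSEO code; the variables "u" and "v" and their values are hypothetical): each row of the array form concatenates the components of the uncertain variables, in the order they were added to the space.

```python
# Plain-Python illustration (not GEMSEO): each row of the array form
# concatenates the components of the uncertain variables, in the order
# they were added to the space. "u" and "v" are hypothetical variables.
samples_as_dict = [
    {"u": [0.5], "v": [-0.8, 0.1]},
    {"u": [1.2], "v": [0.3, -1.4]},
]
variable_order = ["u", "v"]

samples_as_array = [
    [c for name in variable_order for c in sample[name]]
    for sample in samples_as_dict
]
print(samples_as_array)  # [[0.5, -0.8, 0.1], [1.2, 0.3, -1.4]]
```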


## Sample a discipline over the parameter space¶

We can also sample a discipline over the parameter space. For simplicity, we instantiate an AnalyticDiscipline from a dictionary of expressions and set its cache policy so as to cache all data in memory.

discipline = create_discipline("AnalyticDiscipline", expressions_dict={"z": "x+y"})
discipline.set_cache_policy(discipline.MEMORY_FULL_CACHE)


From this parameter space and this discipline, we build a DOEScenario and execute it with a Latin Hypercube Sampling algorithm and 100 samples.

Warning

A Scenario deals with all the variables available in its DesignSpace. By inheritance, a DOEScenario deals with all the variables available in its ParameterSpace. Thus, if we do not filter the uncertain variables, the DOEScenario will consider all of them. In particular, the deterministic variables will be considered as uniformly distributed.
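A stdlib-only sketch of what this means for "x" (an illustration of the warning, not GEMSEO code): an unfiltered DOE would draw "x" uniformly between its bounds -2 and 2 instead of holding it fixed.

```python
import random

# Illustration (stdlib only, not GEMSEO): an unfiltered DOE treats the
# deterministic variable x as uniformly distributed over its bounds.
random.seed(0)
x_samples = [random.uniform(-2.0, 2.0) for _ in range(100)]

# Every draw stays within the design bounds of x.
assert all(-2.0 <= x <= 2.0 for x in x_samples)
```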

scenario = create_scenario(
    [discipline], "DisciplinaryOpt", "z", parameter_space, scenario_type="DOE"
)
scenario.execute({"algo": "lhs", "n_samples": 100})


Out:

INFO - 09:23:51:
INFO - 09:23:51: *** Start DOE Scenario execution ***
INFO - 09:23:51: DOEScenario
INFO - 09:23:51:    Disciplines: AnalyticDiscipline
INFO - 09:23:51:    MDOFormulation: DisciplinaryOpt
INFO - 09:23:51:    Algorithm: lhs
INFO - 09:23:51: Optimization problem:
INFO - 09:23:51:    Minimize: z(x, y)
INFO - 09:23:51:    With respect to: x, y
INFO - 09:23:51: DOE sampling:   0%|          | 0/100 [00:00<?, ?it]
INFO - 09:23:51: DOE sampling:  39%|███▉      | 39/100 [00:00<00:00, 982.90 it/sec, obj=-.913]
INFO - 09:23:51: DOE sampling:  79%|███████▉  | 79/100 [00:00<00:00, 491.28 it/sec, obj=-.377]
INFO - 09:23:51: DOE sampling: 100%|██████████| 100/100 [00:00<00:00, 384.88 it/sec, obj=-2.21]
INFO - 09:23:51: Optimization result:
INFO - 09:23:51: Objective value = -3.3284373246961634
INFO - 09:23:51: The result is feasible.
INFO - 09:23:51: Status: None
INFO - 09:23:51: Optimizer message: None
INFO - 09:23:51: Number of calls to the objective function by the optimizer: 100
INFO - 09:23:51: +-----------------------------------------------------------------------------------------+
INFO - 09:23:51: |                                     Parameter space                                     |
INFO - 09:23:51: +------+-------------+--------------------+-------------+-------+-------------------------+
INFO - 09:23:51: | name | lower_bound |       value        | upper_bound | type  |   Initial distribution  |
INFO - 09:23:51: +------+-------------+--------------------+-------------+-------+-------------------------+
INFO - 09:23:51: | x    |      -2     | -1.959995425007306 |      2      | float |                         |
INFO - 09:23:51: | y    |     -inf    | -1.368441899688857 |     inf     | float | norm(mu=0.0, sigma=1.0) |
INFO - 09:23:51: +------+-------------+--------------------+-------------+-------+-------------------------+
INFO - 09:23:51: *** DOE Scenario run terminated ***

{'eval_jac': False, 'algo': 'lhs', 'n_samples': 100}


We can visualize the result by encapsulating the disciplinary cache in a Dataset:

dataset = discipline.cache.export_to_dataset()


This visualization can be tabular for example:

print(dataset.export_to_dataframe())


Out:

      inputs             outputs
           x         y         z
           0         0         0
0   1.869403  1.246453  3.115855
1  -1.567970  3.285041  1.717071
2   0.282640 -0.101706  0.180934
3   1.916313  1.848317  3.764630
4   1.562653  0.586038  2.148691
..       ...       ...       ...
95  0.120633 -0.327477 -0.206844
96 -0.999225  1.461403  0.462178
97 -1.396066 -0.972779 -2.368845
98  1.090093  0.225565  1.315658
99 -1.433207 -0.779330 -2.212536

[100 rows x 3 columns]
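Since the discipline computes z = x + y, a few rows of the table can be sanity-checked with plain Python (values copied from the printout, compared up to the 6-digit display rounding):

```python
# Rows (x, y, z) copied from the dataframe printed above.
rows = [
    (1.869403, 1.246453, 3.115855),
    (-1.567970, 3.285041, 1.717071),
    (0.282640, -0.101706, 0.180934),
]

# z should equal x + y up to the display rounding.
for x, y, z in rows:
    assert abs((x + y) - z) < 1e-5
```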


or graphical by means of a scatter plot matrix for example:

dataset.plot("ScatterMatrix")


Out:

<gemseo.post.dataset.scatter_plot_matrix.ScatterMatrix object at 0x7f61b85a5a60>


## Sample a discipline over the uncertain space¶

If we want to sample a discipline over the uncertain space, we need to filter the uncertain variables:

parameter_space.filter(parameter_space.uncertain_variables)


Out:

<gemseo.algos.parameter_space.ParameterSpace object at 0x7f61b8b35040>


Then, we clear the cache, create a new scenario from this parameter space containing only the uncertain variables and execute it.

discipline.cache.clear()
scenario = create_scenario(
[discipline], "DisciplinaryOpt", "z", parameter_space, scenario_type="DOE"
)
scenario.execute({"algo": "lhs", "n_samples": 100})


Out:

INFO - 09:23:52:
INFO - 09:23:52: *** Start DOE Scenario execution ***
INFO - 09:23:52: DOEScenario
INFO - 09:23:52:    Disciplines: AnalyticDiscipline
INFO - 09:23:52:    MDOFormulation: DisciplinaryOpt
INFO - 09:23:52:    Algorithm: lhs
INFO - 09:23:52: Optimization problem:
INFO - 09:23:52:    Minimize: z(y)
INFO - 09:23:52:    With respect to: y
INFO - 09:23:52: DOE sampling:   0%|          | 0/100 [00:00<?, ?it]
INFO - 09:23:52: DOE sampling:  41%|████      | 41/100 [00:00<00:00, 999.61 it/sec, obj=-1.38]
INFO - 09:23:52: DOE sampling:  82%|████████▏ | 82/100 [00:00<00:00, 494.93 it/sec, obj=1.29]
INFO - 09:23:52: DOE sampling: 100%|██████████| 100/100 [00:00<00:00, 402.17 it/sec, obj=-.813]
INFO - 09:23:52: Optimization result:
INFO - 09:23:52: Objective value = -2.6379682068246657
INFO - 09:23:52: The result is feasible.
INFO - 09:23:52: Status: None
INFO - 09:23:52: Optimizer message: None
INFO - 09:23:52: Number of calls to the objective function by the optimizer: 100
INFO - 09:23:52: +-----------------------------------------------------------------------------------------+
INFO - 09:23:52: |                                     Parameter space                                     |
INFO - 09:23:52: +------+-------------+--------------------+-------------+-------+-------------------------+
INFO - 09:23:52: | name | lower_bound |       value        | upper_bound | type  |   Initial distribution  |
INFO - 09:23:52: +------+-------------+--------------------+-------------+-------+-------------------------+
INFO - 09:23:52: | y    |     -inf    | -2.637968206824666 |     inf     | float | norm(mu=0.0, sigma=1.0) |
INFO - 09:23:52: +------+-------------+--------------------+-------------+-------+-------------------------+
INFO - 09:23:52: *** DOE Scenario run terminated ***

{'eval_jac': False, 'algo': 'lhs', 'n_samples': 100}


Finally, we build a dataset from the disciplinary cache and visualize it. We can see that the deterministic variable "x" is set to its default value for all evaluations, unlike the previous case where the whole parameter space was considered.

dataset = discipline.cache.export_to_dataset()
print(dataset.export_to_dataframe())


Out:

   inputs             outputs
        x         y         z
        0         0         0
0     0.0 -0.640726 -0.640726
1     0.0 -0.393653 -0.393653
2     0.0  0.550565  0.550565
3     0.0  0.944369  0.944369
4     0.0 -2.115275 -2.115275
..    ...       ...       ...
95    0.0  0.081947  0.081947
96    0.0 -1.085812 -1.085812
97    0.0 -0.761651 -0.761651
98    0.0 -0.042932 -0.042932
99    0.0 -0.813354 -0.813354

[100 rows x 3 columns]

