robustness_quantifier module¶
Quantification of the robustness of the optimum to perturbations of the variables.
Classes:

RobustnessQuantifier: quantifies the robustness of the optimum to perturbations of the variables.
Functions:

Draw random samples from a multivariate normal distribution. 
 class gemseo.post.core.robustness_quantifier.RobustnessQuantifier(history, approximation_method='SR1')[source]¶
Bases:
object
Quantifies the robustness of the optimum to perturbations of the variables.
Constructor.
 Parameters
history – an approximation history.
approximation_method –
an approximation method for the Hessian.
By default it is set to SR1.
Attributes:
Methods:
compute_approximation
(funcname[, ...]) Builds the BFGS approximation of the Hessian.
compute_expected_value
(expect, cov) Computes 1/2*E(e.T B e), where e is a random vector with expected value expect and covariance matrix cov.
compute_function_approximation
(x_vars) Computes a second-order approximation of the function.
compute_gradient_approximation
(x_vars) Computes a first-order approximation of the gradient based on the Hessian.
compute_variance
(expect, cov) Computes the variance of 1/2*(e.T B e), where e is a random vector with expected value expect and covariance matrix cov.
montecarlo_average_var
(mean, cov[, ...]) Computes the variance and expected value using a Monte Carlo approach.
 AVAILABLE_APPROXIMATIONS = ['BFGS', 'SR1', 'LEAST_SQUARES']¶
 compute_approximation(funcname, first_iter=0, last_iter=0, b0_mat=None, at_most_niter=1, func_index=None)[source]¶
Builds the BFGS approximation of the Hessian.
 Parameters
funcname – the name of the function.
first_iter – the first iteration. By default it is set to 0.
last_iter – the last iteration. By default it is set to 0.
b0_mat – the initial approximation matrix. By default it is set to None.
at_most_niter – the maximum number of iterations to take. By default it is set to 1.
func_index – the index of the function. By default it is set to None.
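The Hessian approximation is built from pairs of variable and gradient differences taken from the optimization history. As an illustration, here is a minimal, self-contained sketch of the SR1 (symmetric rank-one) update, one of the AVAILABLE_APPROXIMATIONS; the function and variable names below are hypothetical, not the GEMSEO internals:

```python
import numpy as np

def sr1_update(b_mat, dx, dgrad, eps=1e-8):
    """One symmetric rank-one (SR1) update of a Hessian approximation."""
    residual = dgrad - b_mat @ dx  # y - B s
    denom = residual @ dx
    # Skip the update when the denominator is numerically unsafe.
    if abs(denom) <= eps * max(1.0, np.linalg.norm(residual) * np.linalg.norm(dx)):
        return b_mat
    return b_mat + np.outer(residual, residual) / denom

# Quadratic test function f(x) = 0.5 * x^T H x, whose exact Hessian is H.
h_true = np.array([[4.0, 1.0], [1.0, 3.0]])
grad = lambda x: h_true @ x

b_mat = np.eye(2)  # plays the role of b0_mat
x_prev = np.zeros(2)
for x_next in [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]:
    b_mat = sr1_update(b_mat, x_next - x_prev, grad(x_next) - grad(x_prev))
    x_prev = x_next
# For a quadratic function, SR1 recovers the exact Hessian after enough
# linearly independent steps.
```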
 compute_expected_value(expect, cov)[source]¶
Computes 1/2*E(e.T B e), where e is a random vector with expected value expect and covariance matrix cov.
 Parameters
expect – the expected value of inputs.
cov – the covariance matrix of inputs.
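For a Gaussian vector e with mean expect and covariance cov, the quantity 1/2*E(e.T B e) has the closed form 0.5*(trace(B cov) + expect.T B expect). A small sketch checking this identity against a Monte Carlo estimate (the variable names are illustrative, not the GEMSEO implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

b_mat = np.array([[2.0, 0.5], [0.5, 1.0]])   # Hessian approximation B
expect = np.array([1.0, -1.0])               # expected value of the inputs
cov = np.array([[0.2, 0.05], [0.05, 0.1]])   # covariance matrix of the inputs

# Closed form: 1/2 * E(e^T B e) = 1/2 * (trace(B @ cov) + expect^T B expect)
closed_form = 0.5 * (np.trace(b_mat @ cov) + expect @ b_mat @ expect)

# Monte Carlo cross-check of the same quantity.
samples = rng.multivariate_normal(expect, cov, size=200_000)
mc_estimate = 0.5 * np.einsum("ni,ij,nj->n", samples, b_mat, samples).mean()
```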
 compute_function_approximation(x_vars)[source]¶
Computes a second order approximation of the function.
 Parameters
x_vars – the point on which the approximation is evaluated.
 compute_gradient_approximation(x_vars)[source]¶
Computes a first-order approximation of the gradient based on the Hessian.
 Parameters
x_vars – the point on which the approximation is evaluated.
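Both approximations are Taylor expansions around a reference point. A minimal sketch with hypothetical reference data (in GEMSEO these values come from the optimization history):

```python
import numpy as np

# Hypothetical reference data around which the expansions are built.
x_ref = np.array([0.5, 0.5])                 # reference point
f_ref = 2.0                                  # function value at x_ref
g_ref = np.array([1.0, -2.0])                # gradient at x_ref
b_mat = np.array([[3.0, 0.0], [0.0, 2.0]])   # Hessian approximation at x_ref

def function_approximation(x_vars):
    """Second-order Taylor approximation of the function."""
    dx = x_vars - x_ref
    return f_ref + g_ref @ dx + 0.5 * dx @ b_mat @ dx

def gradient_approximation(x_vars):
    """First-order Taylor approximation of the gradient, based on the Hessian."""
    dx = x_vars - x_ref
    return g_ref + b_mat @ dx

f_approx = function_approximation(np.array([1.0, 0.0]))
g_approx = gradient_approximation(np.array([1.0, 0.0]))
```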
 compute_variance(expect, cov)[source]¶
Computes the variance of 1/2*(e.T B e), where e is a random vector with expected value expect and covariance matrix cov.
 Parameters
expect – the expected value of inputs.
cov – the covariance matrix of inputs.
 montecarlo_average_var(mean, cov, n_samples=100000, func=None)[source]¶
Computes the variance and expected value using a Monte Carlo approach.
 Parameters
mean – the mean value.
cov – the covariance matrix.
n_samples –
the number of samples for the distribution (Default value = 100000).
By default it is set to 100000.
func – the function to evaluate; if None, compute_function_approximation is used. By default it is set to None.
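The Monte Carlo estimate simply samples the inputs from a multivariate normal distribution and takes the empirical mean and variance of the function values. A self-contained sketch (the function and sample size are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(42)

mean = np.zeros(2)
cov = np.eye(2)

def func(x):
    """A user-provided function; here the squared norm, so that func(x)
    follows a chi-squared distribution with 2 degrees of freedom
    (expected value 2, variance 4)."""
    return x @ x

# Draw the input samples and evaluate the function on each one.
samples = rng.multivariate_normal(mean, cov, size=100_000)
values = np.array([func(x) for x in samples])
average = values.mean()
variance = values.var()
```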
 gemseo.post.core.robustness_quantifier.multivariate_normal(mean, cov, size=None, check_valid='warn', tol=1e-08)¶
Draw random samples from a multivariate normal distribution.
The multivariate normal, multinormal or Gaussian distribution is a generalization of the one-dimensional normal distribution to higher dimensions. Such a distribution is specified by its mean and covariance matrix. These parameters are analogous to the mean (average or “center”) and variance (standard deviation, or “width,” squared) of the one-dimensional normal distribution.
Note
New code should use the multivariate_normal method of a default_rng() instance instead; please see the NumPy random Quick Start.
 Parameters
mean (1-D array_like, of length N) – Mean of the N-dimensional distribution.
cov (2-D array_like, of shape (N, N)) – Covariance matrix of the distribution. It must be symmetric and positive semi-definite for proper sampling.
size (int or tuple of ints, optional) – Given a shape of, for example, (m, n, k), m*n*k samples are generated, and packed in an m-by-n-by-k arrangement. Because each sample is N-dimensional, the output shape is (m, n, k, N). If no shape is specified, a single (N-D) sample is returned.
check_valid ({'warn', 'raise', 'ignore'}, optional) – Behavior when the covariance matrix is not positive semidefinite.
tol (float, optional) – Tolerance when checking the singular values in covariance matrix. cov is cast to double before the check.
 Returns
out – The drawn samples, of shape size, if that was provided. If not, the shape is (N,). In other words, each entry out[i,j,...,:] is an N-dimensional value drawn from the distribution.
 Return type
ndarray
See also
Generator.multivariate_normal
which should be used for new code.
Notes
The mean is a coordinate in N-dimensional space, which represents the location where samples are most likely to be generated. This is analogous to the peak of the bell curve for the one-dimensional or univariate normal distribution.
Covariance indicates the level to which two variables vary together. From the multivariate normal distribution, we draw N-dimensional samples, \(X = [x_1, x_2, ... x_N]\). The covariance matrix element \(C_{ij}\) is the covariance of \(x_i\) and \(x_j\). The element \(C_{ii}\) is the variance of \(x_i\) (i.e. its “spread”).
Instead of specifying the full covariance matrix, popular approximations include:
Spherical covariance (cov is a multiple of the identity matrix)
Diagonal covariance (cov has non-negative elements, and only on the diagonal)
This geometrical property can be seen in two dimensions by plotting generated datapoints:
>>> mean = [0, 0]
>>> cov = [[1, 0], [0, 100]]  # diagonal covariance
Diagonal covariance means that points are oriented along the x- or y-axis:
>>> import matplotlib.pyplot as plt
>>> x, y = np.random.multivariate_normal(mean, cov, 5000).T
>>> plt.plot(x, y, 'x')
>>> plt.axis('equal')
>>> plt.show()
Note that the covariance matrix must be positive semi-definite (a.k.a. nonnegative-definite). Otherwise, the behavior of this method is undefined and backwards compatibility is not guaranteed.
Examples
>>> mean = (1, 2)
>>> cov = [[1, 0], [0, 1]]
>>> x = np.random.multivariate_normal(mean, cov, (3, 3))
>>> x.shape
(3, 3, 2)
The following is probably true, given that 0.6 is roughly twice the standard deviation:
>>> list((x[0,0,:] - mean) < 0.6)
[True, True]  # random