Polynomial chaos expansion model.

The polynomial chaos expansion (PCE) model expresses an output variable as a weighted sum of polynomial functions that are orthonormal with respect to the joint probability distribution of the random input variables:

\[Y = w_0 + w_1\phi_1(X) + w_2\phi_2(X) + \ldots + w_K\phi_K(X)\]

where \(\phi_i(x)=\psi_{\tau_1(i),1}(x_1)\times\ldots\times \psi_{\tau_d(i),d}(x_d)\).

Enumerating strategy

The choice of the function \(\tau=(\tau_1,\ldots,\tau_d)\) is an enumerating strategy and \(\tau_j(i)\) is the polynomial degree of \(\psi_{\tau_j(i),j}\).
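For instance, with \(d=2\) input variables, the multi-index \(\tau(i)=(2,1)\) corresponds to the basis function

\[\phi_i(x)=\psi_{2,1}(x_1)\psi_{1,2}(x_2),\]

i.e. the product of a degree-2 polynomial in \(x_1\) and a degree-1 polynomial in \(x_2\).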

Distributions

PCE models are defined with respect to random input variables and are often used to deal with uncertainty quantification problems.

If \(X_j\) is a Gaussian random variable, \((\psi_{ij})_{i\geq 0}\) is the Hermite basis. If \(X_j\) is a uniform random variable, \((\psi_{ij})_{i\geq 0}\) is the Legendre basis.

When the problem is deterministic, we can still use PCE models under the assumption that the input variables are independent uniform random variables. The orthonormal function basis is then the Legendre one.

Degree

The degree \(P\) of a PCE model is defined in such a way that \(\text{degree}(\phi_i)=\sum_{j=1}^d\tau_j(i)\leq P\).
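With this total-degree truncation, the number of basis functions \(K+1\) (including the constant term) is

\[K+1=\binom{d+P}{P}=\frac{(d+P)!}{d!\,P!},\]

which grows quickly with the input dimension \(d\) and the degree \(P\); e.g. \(d=5\) and \(P=3\) already give 56 terms. This growth is one motivation for the sparse least squares strategy.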

Estimation

The coefficients \((w_1, w_2, \ldots, w_K)\) and the intercept \(w_0\) are estimated either by least squares regression, sparse least squares regression or quadrature.
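As an illustration of the least squares strategy, the coefficients of a one-dimensional PCE with a Gaussian input can be estimated with NumPy alone, using the orthonormal Hermite basis \(\psi_k=\mathrm{He}_k/\sqrt{k!}\). This is a standalone sketch, not GEMSEO's implementation:

```python
import math

import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(0)
x = rng.standard_normal(500)  # samples of a standard Gaussian input
y = 1.0 + 2.0 * x + 0.5 * (x**2 - 1.0)  # = 1 + 2*He_1(x) + 0.5*He_2(x)

degree = 2
# Design matrix whose columns are the orthonormal polynomials
# psi_k = He_k / sqrt(k!), evaluated at the input samples.
basis = np.column_stack(
    [
        hermeval(x, np.eye(degree + 1)[k]) / math.sqrt(math.factorial(k))
        for k in range(degree + 1)
    ]
)
coefficients, *_ = np.linalg.lstsq(basis, y, rcond=None)
# Since psi_2 = He_2 / sqrt(2), the recovered coefficients are
# approximately [1.0, 2.0, 0.5 * sqrt(2)].
```

Because the output lies exactly in the span of the basis, least squares recovers the coefficients up to numerical precision.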

Dependence

The PCE model relies on the FunctionalChaosAlgorithm class of the OpenTURNS library.

class gemseo.mlearning.regression.pce.PCERegressor(data, probability_space, discipline=None, transformer=None, input_names=None, output_names=None, strategy='LS', degree=2, n_quad=None, stieltjes=True, sparse_param=None)[source]

Polynomial chaos expansion model.

Parameters
  • data (Dataset) – The learning dataset.

  • probability_space (ParameterSpace) – The probability space defining the probability distributions of the model inputs.

  • discipline (MDODiscipline | None) –

    The discipline to evaluate with the quadrature strategy if the learning set does not have output data. If None, use the output data from the learning set.

    By default it is set to None.

  • transformer (Mapping[str, TransformerType] | None) –

    The strategies to transform the variables. The values are instances of Transformer while the keys are the names of either the variables or the groups of variables, e.g. “inputs” or “outputs” in the case of the regression algorithms. If a group is specified, the Transformer will be applied to all the variables of this group. If None, do not transform the variables.

    By default it is set to None.

  • input_names (Iterable[str] | None) –

    The names of the input variables. If None, consider all the input variables of the learning dataset.

    By default it is set to None.

  • output_names (Iterable[str] | None) –

    The names of the output variables. If None, consider all the output variables of the learning dataset.

    By default it is set to None.

  • strategy (str) –

The strategy to compute the parameters of the PCE, either ‘LS’ for least squares, ‘Quad’ for quadrature or ‘SparseLS’ for sparse least squares.

    By default it is set to LS.

  • degree (int) –

    The polynomial degree of the PCE.

    By default it is set to 2.

  • n_quad (int | None) –

The total number of quadrature points used by the quadrature strategy to compute the marginal number of points per input dimension. If None, it is set to the polynomial degree of the PCE plus one.

    By default it is set to None.

  • stieltjes (bool) –

    Whether to use the Stieltjes method.

    By default it is set to True.

  • sparse_param (Mapping[str, int | float] | None) –

The parameters for the sparse cleaning truncation strategy and/or the hyperbolic truncation of the initial basis:

    • max_considered_terms (int) – The maximum number of terms to consider (default: 120),

    • most_significant (int) – The number of most significant terms to retain (default: 30),

    • significance_factor (float) – The significance factor (default: 1e-3),

    • hyper_factor (float) – The factor for the hyperbolic truncation strategy (default: 1.0).

    If None, use default values.

    By default it is set to None.

Raises

ValueError – If the variables of the probability space differ from the input variables of the dataset, if transformers are specified for the inputs, or if the strategy to compute the parameters of the PCE is unknown.

class DataFormatters

Machine learning regression model decorators.

classmethod format_dict(predict)

Make an array-based function be called with a dictionary of NumPy arrays.

Parameters

predict (Callable[[numpy.ndarray], numpy.ndarray]) – The function to be called; it takes a NumPy array in input and returns a NumPy array.

Returns

A function making the function ‘predict’ work with either a NumPy data array or a dictionary of NumPy data arrays indexed by variable names. The evaluation will have the same type as the input data.

Return type

Callable[[Union[numpy.ndarray, Mapping[str, numpy.ndarray]]], Union[numpy.ndarray, Mapping[str, numpy.ndarray]]]
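The behaviour of format_dict can be sketched with a simplified decorator. The names input_names and output_name below are illustrative assumptions, not GEMSEO's actual internals:

```python
import numpy as np


def format_dict(input_names, output_name):
    """Sketch of a dict-or-array adapter (not GEMSEO's actual code)."""

    def decorator(predict):
        def wrapper(input_data):
            if isinstance(input_data, dict):
                # Concatenate the named arrays, evaluate, re-wrap as a dict.
                array = np.concatenate([input_data[name] for name in input_names])
                return {output_name: predict(array)}
            # Plain array in, plain array out.
            return predict(input_data)

        return wrapper

    return decorator


@format_dict(input_names=["x"], output_name="y")
def predict(input_data):
    return 2.0 * input_data
```

With this sketch, predict(np.array([1.0, 2.0])) returns a NumPy array, while predict({"x": np.array([1.0, 2.0])}) returns a dictionary keyed by "y", mirroring the "same type as the input data" contract above.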

classmethod format_dict_jacobian(predict_jac)

Wrap an array-based function to make it callable with a dictionary of NumPy arrays.

Parameters

predict_jac (Callable[[numpy.ndarray], numpy.ndarray]) – The function to be called; it takes a NumPy array in input and returns a NumPy array.

Returns

The wrapped ‘predict_jac’ function, callable with either a NumPy data array or a dictionary of NumPy data arrays indexed by variable names. The return value will have the same type as the input data.

Return type

Callable[[Union[numpy.ndarray, Mapping[str, numpy.ndarray]]], Union[numpy.ndarray, Mapping[str, numpy.ndarray]]]

classmethod format_input_output(predict)

Make a function robust to type, array shape and data transformation.

Parameters

predict (Callable[[numpy.ndarray], numpy.ndarray]) – The function of interest to be called.

Returns

A function calling the function of interest ‘predict’, while guaranteeing consistency in terms of data type and array shape, and applying input and/or output data transformation if required.

Return type

Callable[[Union[numpy.ndarray, Mapping[str, numpy.ndarray]]], Union[numpy.ndarray, Mapping[str, numpy.ndarray]]]

classmethod format_samples(predict)

Make a 2D NumPy array-based function work with a 1D NumPy array.

Parameters

predict (Callable[[numpy.ndarray], numpy.ndarray]) – The function to be called; it takes a 2D NumPy array in input and returns a 2D NumPy array. The first dimension represents the samples while the second one represents the components of the variables.

Returns

A function making the function ‘predict’ work with either a 1D NumPy array or a 2D NumPy array. The evaluation will have the same dimension as the input data.

Return type

Callable[[numpy.ndarray], numpy.ndarray]
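The idea behind format_samples can be sketched as follows (a simplified illustration, not the actual GEMSEO code):

```python
import numpy as np


def format_samples(predict):
    """Sketch: let a 2D-array-based function also accept a single 1D sample."""

    def wrapper(input_data):
        single_sample = input_data.ndim == 1
        # Add a sample axis if needed, evaluate, then drop it again.
        output = predict(np.atleast_2d(input_data))
        return output[0] if single_sample else output

    return wrapper


@format_samples
def predict(samples):
    # Toy model: sum the components of each sample.
    return samples.sum(axis=1, keepdims=True)
```

A 1D input (one sample) yields a 1D output, while a 2D input (several samples) yields a 2D output, matching the dimension-preserving behaviour described above.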

classmethod format_transform(transform_inputs=True, transform_outputs=True)

Force a function to transform its input and/or output variables.

Parameters
  • transform_inputs (bool) –

    Whether to transform the input variables.

    By default it is set to True.

  • transform_outputs (bool) –

    Whether to transform the output variables.

    By default it is set to True.

Returns

A function evaluating a function of interest, after transforming its input data and/or before transforming its output data.

Return type

Callable[[numpy.ndarray], numpy.ndarray]

classmethod transform_jacobian(predict_jac)

Apply transformation to inputs and inverse transformation to outputs.

Parameters

predict_jac (Callable[[numpy.ndarray], numpy.ndarray]) – The function of interest to be called.

Returns

A function evaluating the function ‘predict_jac’, after transforming its input data and/or before transforming its output data.

Return type

Callable[[numpy.ndarray], numpy.ndarray]

learn(samples=None, fit_transformers=True)

Train the machine learning algorithm from the learning dataset.

Parameters
  • samples (Sequence[int] | None) –

    The indices of the learning samples. If None, use the whole learning dataset.

    By default it is set to None.

  • fit_transformers (bool) –

    Whether to fit the variable transformers.

    By default it is set to True.

Return type

None

load_algo(directory)

Load a machine learning algorithm from a directory.

Parameters

directory (str | Path) – The path to the directory where the machine learning algorithm is saved.

Return type

None

predict(input_data, *args, **kwargs)

Evaluate ‘predict’ with either array or dictionary-based input data.

Firstly, the pre-processing stage converts the input data to a NumPy data array, if these data are expressed as a dictionary of NumPy data arrays.

Then, the processing evaluates the function ‘predict’ from this NumPy input data array.

Lastly, the post-processing transforms the output data to a dictionary of NumPy data arrays if the input data were passed as a dictionary of NumPy data arrays.

Parameters
  • input_data (Union[numpy.ndarray, Mapping[str, numpy.ndarray]]) – The input data.

  • *args – The positional arguments of the function ‘predict’.

  • **kwargs – The keyword arguments of the function ‘predict’.

Returns

The output data with the same type as the input one.

Return type

Union[numpy.ndarray, Mapping[str, numpy.ndarray]]

predict_jacobian(input_data, *args, **kwargs)

Evaluate ‘predict_jac’ with either array or dictionary-based data.

Firstly, the pre-processing stage converts the input data to a NumPy data array, if these data are expressed as a dictionary of NumPy data arrays.

Then, the processing evaluates the function ‘predict_jac’ from this NumPy input data array.

Lastly, the post-processing transforms the output data to a dictionary of NumPy data arrays if the input data were passed as a dictionary of NumPy data arrays.

Parameters
  • input_data – The input data.

  • *args – The positional arguments of the function ‘predict_jac’.

  • **kwargs – The keyword arguments of the function ‘predict_jac’.

Returns

The output data with the same type as the input one.

predict_raw(input_data)

Predict output data from input data.

Parameters

input_data (numpy.ndarray) – The input data with shape (n_samples, n_inputs).

Returns

The predicted output data with shape (n_samples, n_outputs).

Return type

numpy.ndarray

save(directory=None, path='.', save_learning_set=False)

Save the machine learning algorithm.

Parameters
  • directory (str | None) –

    The name of the directory to save the algorithm.

    By default it is set to None.

  • path (str | Path) –

    The path to parent directory where to create the directory.

By default it is set to '.' (the current working directory).

  • save_learning_set (bool) –

    Whether to save the learning set or get rid of it to lighten the saved files.

    By default it is set to False.

Returns

The path to the directory where the algorithm is saved.

Return type

str

property first_sobol_indices: dict[str, numpy.ndarray]

The first Sobol’ indices.

property input_data: numpy.ndarray

The input data matrix.

property input_dimension: int

The input space dimension.

property is_trained: bool

Return whether the algorithm is trained.

property learning_samples_indices: Sequence[int]

The indices of the learning samples used for the training.

property output_data: numpy.ndarray

The output data matrix.

property output_dimension: int

The output space dimension.

property total_sobol_indices: dict[str, numpy.ndarray]

The total Sobol’ indices.

Example