
pce module

Polynomial chaos expansion model.

The polynomial chaos expansion (PCE) model expresses an output variable as a weighted sum of polynomial functions which are orthonormal in the stochastic input space spanned by the random input variables:

\[Y = w_0 + w_1\phi_1(X) + w_2\phi_2(X) + ... + w_K\phi_K(X)\]

where \(\phi_i(x)=\psi_{\tau_1(i),1}(x_1)\times\ldots\times \psi_{\tau_d(i),d}(x_d)\).

Enumerating strategy

The choice of the function \(\tau=(\tau_1,\ldots,\tau_d)\) is an enumerating strategy and \(\tau_j(i)\) is the polynomial degree of \(\psi_{\tau_j(i),j}\).
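
For illustration, with \(d=2\) and a graded lexicographic enumeration (one common choice), the first multi-indices are \(\tau(1)=(1,0)\), \(\tau(2)=(0,1)\), \(\tau(3)=(2,0)\), \(\tau(4)=(1,1)\) and \(\tau(5)=(0,2)\); for instance, \(\phi_4(x)=\psi_{1,1}(x_1)\psi_{1,2}(x_2)\) is the product of the two first-degree univariate polynomials.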

Distributions

PCE models depend on random input variables and are often used to deal with uncertainty quantification problems.

If \(X_j\) is a Gaussian random variable, \((\psi_{ij})_{i\geq 0}\) is the Hermite basis. If \(X_j\) is a uniform random variable, \((\psi_{ij})_{i\geq 0}\) is the Legendre basis.

When the problem is deterministic, we can still use PCE models under the assumption that the inputs are modeled as independent uniform random variables. Then, the orthonormal function basis is the Legendre one.
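
For instance, here is a minimal sketch of such a probability space, assuming GEMSEO's ParameterSpace class and the OpenTURNS-based distribution name "OTUniformDistribution" (names may differ across versions); a deterministic input \(x\) varying in \([0, 1]\) is modeled as a uniform random variable:

    from gemseo.algos.parameter_space import ParameterSpace

    # Model the deterministic input x as a uniform random variable on [0, 1].
    probability_space = ParameterSpace()
    probability_space.add_random_variable(
        "x", "OTUniformDistribution", minimum=0.0, maximum=1.0
    )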

Degree

The degree \(P\) of a PCE model is defined in such a way that \(\text{degree}(\phi_i)=\sum_{j=1}^d\tau_j(i)\leq P\).
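
For illustration, counting the intercept, a full basis of total degree at most \(P\) in dimension \(d\) contains \(\binom{d+P}{P}\) functions; for example, \(d=2\) and \(P=2\) give \(\binom{4}{2}=6\) functions, i.e. \(K=5\) polynomial terms plus the intercept.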

Estimation

The coefficients \((w_1, w_2, ..., w_K)\) and the intercept \(w_0\) are estimated by least squares regression, sparse least squares regression or quadrature.

Dependence

The PCE model relies on the FunctionalChaosAlgorithm class of the OpenTURNS library.

class gemseo.mlearning.regression.pce.PCERegressor(data, probability_space, discipline=None, transformer=None, input_names=None, output_names=None, strategy='LS', degree=2, n_quad=None, stieltjes=True, sparse_param=None)[source]

Bases: MLRegressionAlgo

Polynomial chaos expansion model.

Parameters:
  • data (Dataset) – The learning dataset.

  • probability_space (ParameterSpace) – The set of random input variables defined by OTDistribution instances.

  • discipline (MDODiscipline | None) – The discipline to evaluate with the quadrature strategy if the learning set does not have output data. If None, use the output data from the learning set.

  • transformer (Mapping[str, TransformerType] | None) – The strategies to transform the variables. The values are instances of Transformer while the keys are the names of either the variables or the groups of variables, e.g. "inputs" or "outputs" in the case of the regression algorithms. If a group is specified, the Transformer will be applied to all the variables of this group. If IDENTITY, do not transform the variables.

  • input_names (Iterable[str] | None) – The names of the input variables. If None, consider all the input variables of the learning dataset.

  • output_names (Iterable[str] | None) – The names of the output variables. If None, consider all the output variables of the learning dataset.

  • strategy (str) –

    The strategy to compute the parameters of the PCE, either ‘LS’ for least-square, ‘Quad’ for quadrature or ‘SparseLS’ for sparse least-square.

    By default it is set to “LS”.

  • degree (int) –

    The polynomial degree of the PCE.

    By default it is set to 2.

  • n_quad (int | None) – The total number of quadrature points used by the quadrature strategy to compute the marginal number of points by input dimension. If None, the marginal number of points per input dimension will be set to the polynomial degree of the PCE plus one.

  • stieltjes (bool) –

    Whether to use the Stieltjes method.

    By default it is set to True.

  • sparse_param (Mapping[str, int | float] | None) –

    The parameters for the Sparse Cleaning Truncation Strategy and/or hyperbolic truncation of the initial basis:

    • max_considered_terms (int) – The maximum number of terms considered (default: 120),

    • most_significant (int) – The number of most significant terms to retain (default: 30),

    • significance_factor (float) – The significance factor (default: 1e-3),

    • hyper_factor (float) – The factor for the hyperbolic truncation strategy (default: 1.0).

    If None, use default values.

Raises:

ValueError – If the variables of the probability space and the input variables of the dataset are different, if transformers are specified for the inputs, if the strategy to compute the parameters of the PCE is unknown, or if a probability distribution is not an OTDistribution.
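
As a usage illustration, here is a minimal sketch assuming a GEMSEO 4.x-style API (Dataset.add_variable, ParameterSpace.add_random_variable and the distribution name "OTUniformDistribution" are assumptions about the installed version); the toy output \(y=x_1^2+2x_2\) is arbitrary.

    from numpy.random import default_rng

    from gemseo.algos.parameter_space import ParameterSpace
    from gemseo.core.dataset import Dataset
    from gemseo.mlearning.regression.pce import PCERegressor

    # Sample the two inputs uniformly and evaluate a toy output y = x1**2 + 2*x2.
    rng = default_rng(1)
    x = rng.uniform(0.0, 1.0, (100, 2))
    y = x[:, [0]] ** 2 + 2 * x[:, [1]]

    # Learning dataset with named input and output variables.
    dataset = Dataset("learning_set")
    dataset.add_variable("x1", x[:, [0]], group=Dataset.INPUT_GROUP)
    dataset.add_variable("x2", x[:, [1]], group=Dataset.INPUT_GROUP)
    dataset.add_variable("y", y, group=Dataset.OUTPUT_GROUP)

    # Probability space matching the input variables of the dataset.
    probability_space = ParameterSpace()
    probability_space.add_random_variable(
        "x1", "OTUniformDistribution", minimum=0.0, maximum=1.0
    )
    probability_space.add_random_variable(
        "x2", "OTUniformDistribution", minimum=0.0, maximum=1.0
    )

    # Degree-3 PCE whose coefficients are estimated by least squares (the default strategy).
    model = PCERegressor(dataset, probability_space, degree=3)
    model.learn()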

class DataFormatters

Bases: DataFormatters

Machine learning regression model decorators.

classmethod format_dict(predict)

Make an array-based function callable with a dictionary of NumPy arrays.

Parameters:

predict (Callable[[ndarray], ndarray]) – The function to be called; it takes a NumPy array in input and returns a NumPy array.

Returns:

A function making the function ‘predict’ work with either a NumPy data array or a dictionary of NumPy data arrays indexed by variable names. The evaluation will have the same type as the input data.

Return type:

Callable[[Union[ndarray, Mapping[str, ndarray]]], Union[ndarray, Mapping[str, ndarray]]]

classmethod format_dict_jacobian(predict_jac)

Wrap an array-based function to make it callable with a dictionary of NumPy arrays.

Parameters:

predict_jac (Callable[[ndarray], ndarray]) – The function to be called; it takes a NumPy array in input and returns a NumPy array.

Returns:

The wrapped ‘predict_jac’ function, callable with either a NumPy data array or a dictionary of NumPy data arrays indexed by variable names. The return value will have the same type as the input data.

Return type:

Callable[[Union[ndarray, Mapping[str, ndarray]]], Union[ndarray, Mapping[str, ndarray]]]

classmethod format_input_output(predict)

Make a function robust to type, array shape and data transformation.

Parameters:

predict (Callable[[ndarray], ndarray]) – The function of interest to be called.

Returns:

A function calling the function of interest ‘predict’, while guaranteeing consistency in terms of data type and array shape, and applying input and/or output data transformation if required.

Return type:

Callable[[Union[ndarray, Mapping[str, ndarray]]], Union[ndarray, Mapping[str, ndarray]]]

classmethod format_samples(predict)

Make a 2D NumPy array-based function work with a 1D NumPy array.

Parameters:

predict (Callable[[ndarray], ndarray]) – The function to be called; it takes a 2D NumPy array in input and returns a 2D NumPy array. The first dimension represents the samples while the second one represents the components of the variables.

Returns:

A function making the function ‘predict’ work with either a 1D NumPy array or a 2D NumPy array. The evaluation will have the same dimension as the input data.

Return type:

Callable[[ndarray], ndarray]

classmethod format_transform(transform_inputs=True, transform_outputs=True)

Force a function to transform its input and/or output variables.

Parameters:
  • transform_inputs (bool) –

    Whether to transform the input variables.

    By default it is set to True.

  • transform_outputs (bool) –

    Whether to transform the output variables.

    By default it is set to True.

Returns:

A function evaluating a function of interest, after transforming its input data and/or before transforming its output data.

Return type:

Callable[[ndarray], ndarray]

classmethod transform_jacobian(predict_jac)

Apply transformation to inputs and inverse transformation to outputs.

Parameters:

predict_jac (Callable[[ndarray], ndarray]) – The function of interest to be called.

Returns:

A function evaluating the function ‘predict_jac’, after transforming its input data and/or before transforming its output data.

Return type:

Callable[[ndarray], ndarray]

learn(samples=None, fit_transformers=True)

Train the machine learning algorithm from the learning dataset.

Parameters:
  • samples (Sequence[int] | None) – The indices of the learning samples. If None, use the whole learning dataset.

  • fit_transformers (bool) –

    Whether to fit the variable transformers.

    By default it is set to True.

Return type:

None
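
For example, continuing the hypothetical sketch above, the model can be (re)trained on a subset of the learning samples:

    # Retrain on the first 50 samples only, refitting the transformers on that subset.
    model.learn(samples=list(range(50)))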

load_algo(directory)

Load a machine learning algorithm from a directory.

Parameters:

directory (str | Path) – The path to the directory where the machine learning algorithm is saved.

Return type:

None

predict(input_data, *args, **kwargs)

Evaluate ‘predict’ with either array or dictionary-based input data.

Firstly, the pre-processing stage converts the input data to a NumPy data array, if these data are expressed as a dictionary of NumPy data arrays.

Then, the processing evaluates the function ‘predict’ from this NumPy input data array.

Lastly, the post-processing transforms the output data to a dictionary of output NumPy data arrays if the input data were passed as a dictionary of NumPy data arrays.

Parameters:
  • input_data (Union[ndarray, Mapping[str, ndarray]]) – The input data.

  • *args – The positional arguments of the function ‘predict’.

  • **kwargs – The keyword arguments of the function ‘predict’.

Returns:

The output data with the same type as the input one.

Return type:

Union[ndarray, Mapping[str, ndarray]]
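
Continuing the same hypothetical sketch, the two calling conventions look as follows; the returned type mirrors the input type:

    from numpy import array

    # Dictionary of NumPy arrays indexed by variable names -> dictionary output.
    output_dict = model.predict({"x1": array([0.5]), "x2": array([0.5])})

    # Single NumPy array concatenating the inputs -> NumPy array output.
    output_array = model.predict(array([0.5, 0.5]))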

predict_jacobian(input_data, *args, **kwargs)

Evaluate ‘predict_jac’ with either array or dictionary-based data.

Firstly, the pre-processing stage converts the input data to a NumPy data array, if these data are expressed as a dictionary of NumPy data arrays.

Then, the processing evaluates the function ‘predict_jac’ from this NumPy input data array.

Lastly, the post-processing transforms the output data to a dictionary of output NumPy data arrays if the input data were passed as a dictionary of NumPy data arrays.

Parameters:
  • input_data – The input data.

  • *args – The positional arguments of the function ‘predict_jac’.

  • **kwargs – The keyword arguments of the function ‘predict_jac’.

Returns:

The output data with the same type as the input one.
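
Continuing the same hypothetical sketch, the Jacobian follows the same convention:

    # Derivatives of y with respect to x1 and x2 at the given input point.
    jacobian = model.predict_jacobian({"x1": array([0.5]), "x2": array([0.5])})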

predict_raw(input_data)

Predict output data from input data.

Parameters:

input_data (ndarray) – The input data with shape (n_samples, n_inputs).

Returns:

The predicted output data with shape (n_samples, n_outputs).

Return type:

ndarray

save(directory=None, path='.', save_learning_set=False)

Save the machine learning algorithm.

Parameters:
  • directory (str | None) – The name of the directory to save the algorithm.

  • path (str | Path) –

The path to the parent directory where the directory will be created.

    By default it is set to “.”.

  • save_learning_set (bool) –

    Whether to save the learning set or get rid of it to lighten the saved files.

    By default it is set to False.

Returns:

The path to the directory where the algorithm is saved.

Return type:

str
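
For example, continuing the hypothetical sketch above, a trained model can be saved and later reloaded; import_regression_model is assumed to be available from gemseo.mlearning.api in the installed GEMSEO version.

    from gemseo.mlearning.api import import_regression_model

    # Serialize the trained model into <path>/<directory> and get that directory back.
    directory = model.save(directory="my_pce", path=".")

    # Rebuild an equivalent model from the saved files.
    loaded_model = import_regression_model(directory)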

AVAILABLE_STRATEGIES: list[str] = ['LS', 'Quad', 'SparseLS']
DEFAULT_TRANSFORMER: DefaultTransformerType = mappingproxy({'inputs': <gemseo.mlearning.transform.scaler.min_max_scaler.MinMaxScaler object>, 'outputs': <gemseo.mlearning.transform.scaler.min_max_scaler.MinMaxScaler object>})

The default transformer for the input and output data, if any.

FILENAME: ClassVar[str] = 'ml_algo.pkl'
IDENTITY: Final[DefaultTransformerType] = mappingproxy({})

A transformer leaving the input and output variables as they are.

LIBRARY: Final[str] = 'OpenTURNS'

The name of the library of the wrapped machine learning algorithm.

LS_STRATEGY: Final[str] = 'LS'
QUAD_STRATEGY: Final[str] = 'Quad'
SHORT_ALGO_NAME: ClassVar[str] = 'PCE'

The short name of the machine learning algorithm, often an acronym.

Typically used for composite names, e.g. f"{algo.SHORT_ALGO_NAME}_{dataset.name}" or f"{algo.SHORT_ALGO_NAME}_{discipline.name}".

SPARSE_STRATEGY: Final[str] = 'SparseLS'
algo: Any

The interfaced machine learning algorithm.

property covariance: ndarray

The covariance matrix of the PCE model output.

property first_sobol_indices: dict[str, numpy.ndarray]

The first-order Sobol’ indices.

property input_data: ndarray

The input data matrix.

property input_dimension: int

The input space dimension.

input_names: list[str]

The names of the input variables.

input_space_center: dict[str, ndarray]

The center of the input space.

property is_trained: bool

Whether the algorithm is trained.

property learning_samples_indices: Sequence[int]

The indices of the learning samples used for the training.

learning_set: Dataset

The learning dataset.

property mean: ndarray

The mean vector of the PCE model output.

property output_data: ndarray

The output data matrix.

property output_dimension: int

The output space dimension.

output_names: list[str]

The names of the output variables.

parameters: dict[str, MLAlgoParameterType]

The parameters of the machine learning algorithm.

property standard_deviation: ndarray

The standard deviation vector of the PCE model output.

property total_sobol_indices: dict[str, numpy.ndarray]

The total Sobol’ indices.

transformer: dict[str, Transformer]

The strategies to transform the variables, if any.

The values are instances of Transformer while the keys are the names of either the variables or the groups of variables, e.g. “inputs” or “outputs” in the case of the regression algorithms. If a group is specified, the Transformer will be applied to all the variables of this group.

property variance: ndarray

The variance vector of the PCE model output.
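
Once the model is trained, these statistics are available as plain attributes; a short sketch continuing the hypothetical example above:

    # Output statistics induced by the random inputs.
    output_mean = model.mean
    output_std = model.standard_deviation

    # Sensitivity analysis: dictionaries of Sobol' indices indexed by input names.
    first_order = model.first_sobol_indices
    total_order = model.total_sobol_indices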

Examples using PCERegressor

PCE regression