
# pce module

Polynomial chaos expansion model.

The polynomial chaos expansion (PCE) model expresses an output variable as a weighted sum of polynomial functions which are orthonormal in the stochastic input space spanned by the random input variables:

$$Y = w_0 + w_1\phi_1(X) + w_2\phi_2(X) + \ldots + w_K\phi_K(X)$$

where $$\phi_i(x)=\psi_{\tau_1(i),1}(x_1)\times\ldots\times \psi_{\tau_d(i),d}(x_d)$$ and $$d$$ is the number of input variables.
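As a concrete check of this tensorized construction, the following sketch builds two-dimensional basis functions $$\phi_i$$ from orthonormal probabilists' Hermite polynomials (assuming standard Gaussian inputs, see the Distributions section) and verifies their orthonormality by Gauss quadrature. This is an illustration with NumPy, not gemseo or OpenTURNS code:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

def psi(n, x):
    # Orthonormal probabilists' Hermite polynomial: He_n(x) / sqrt(n!)
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermeval(x, c) / sqrt(factorial(n))

# Gauss-HermiteE quadrature integrates against exp(-x^2 / 2);
# dividing the weights by sqrt(2*pi) gives the standard normal measure.
x, w = hermegauss(20)
w = w / sqrt(2 * pi)

def inner(t, s):
    # <phi_t, phi_s> for tensorized 2D basis functions
    # phi_t(x1, x2) = psi_{t_1}(x1) * psi_{t_2}(x2).
    X1, X2 = np.meshgrid(x, x, indexing="ij")
    W = np.outer(w, w)
    return np.sum(W * psi(t[0], X1) * psi(t[1], X2) * psi(s[0], X1) * psi(s[1], X2))

print(inner((2, 1), (2, 1)))  # ~ 1.0 (same multi-index)
print(inner((2, 1), (1, 2)))  # ~ 0.0 (distinct multi-indices)
```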

## Enumeration strategy

The choice of the function $$\tau=(\tau_1,\ldots,\tau_d)$$ is called the enumeration strategy, and $$\tau_j(i)$$ is the polynomial degree of $$\psi_{\tau_j(i),j}$$.
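A minimal sketch of one possible graded (linear) enumeration strategy, listing the multi-indices $$\tau(i)$$ by increasing total degree up to $$P$$; this is illustrative Python, not the OpenTURNS implementation:

```python
from itertools import product
from math import comb

def linear_enumeration(d, max_degree):
    # Enumerate the multi-indices tau(i) = (tau_1(i), ..., tau_d(i))
    # by increasing total degree, up to total degree max_degree.
    indices = []
    for total in range(max_degree + 1):
        for tau in product(range(total + 1), repeat=d):
            if sum(tau) == total:
                indices.append(tau)
    return indices

basis = linear_enumeration(2, 2)
print(basis)  # [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
print(len(basis) == comb(2 + 2, 2))  # True: K + 1 = C(d + P, P) terms
```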

## Distributions

PCE models depend on random input variables and are often used to deal with uncertainty quantification problems.

If $$X_j$$ is a Gaussian random variable, $$(\psi_{ij})_{i\geq 0}$$ is the Hermite basis; if $$X_j$$ is a uniform random variable, $$(\psi_{ij})_{i\geq 0}$$ is the Legendre basis.

When the problem is deterministic, we can still use PCE models under the assumption that the input variables are independent uniform random variables. Then, the orthonormal function basis is the Legendre one.
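The uniform case can be checked numerically: assuming $$X \sim U(-1, 1)$$, the rescaled Legendre polynomials $$\sqrt{2n+1}\,P_n$$ form an orthonormal basis. A NumPy-only sketch (not gemseo code):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def psi(n, x):
    # Orthonormal Legendre polynomial for X ~ U(-1, 1): sqrt(2n + 1) * P_n(x)
    c = np.zeros(n + 1)
    c[n] = 1.0
    return np.sqrt(2 * n + 1) * legval(x, c)

x, w = leggauss(10)
w = w / 2.0  # the density of U(-1, 1) is 1/2

# Gram matrix of the first four basis functions: should be the identity.
gram = np.array(
    [[np.sum(w * psi(m, x) * psi(n, x)) for n in range(4)] for m in range(4)]
)
print(np.allclose(gram, np.eye(4)))  # True
```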

## Degree

The degree $$P$$ of a PCE model is defined in such a way that $$\max_i \text{degree}(\phi_i)=\max_i\sum_{j=1}^d\tau_j(i)=P$$.

## Estimation

The coefficients $$(w_1, w_2, \ldots, w_K)$$ and the intercept $$w_0$$ are estimated either by least-squares regression or by a quadrature rule. In the case of least-squares regression, a sparse solution can be obtained with the LARS algorithm, and in both cases, the CleaningStrategy can also remove the non-significant coefficients.
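The least-squares route can be sketched in one dimension: with $$Y = X^2$$ and $$X \sim \mathcal{N}(0,1)$$, regressing $$Y$$ on the orthonormal Hermite basis recovers $$x^2 = 1 + \sqrt{2}\,\psi_2(x)$$ exactly. This is a NumPy illustration of the principle, not the gemseo/OpenTURNS solver:

```python
import numpy as np

# Training data: Y = X^2 with X ~ N(0, 1).
rng = np.random.default_rng(0)
x = rng.standard_normal(200)
y = x ** 2

# Design matrix of the orthonormal Hermite basis: 1, x, (x^2 - 1)/sqrt(2).
Phi = np.column_stack([np.ones_like(x), x, (x ** 2 - 1) / np.sqrt(2)])

# Least-squares estimation of the coefficients (w_0, w_1, w_2).
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(np.round(w, 6))  # [1.0, 0.0, sqrt(2)], since x^2 = 1 + sqrt(2) * psi_2(x)
```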

## Dependence

The PCE model relies on the OpenTURNS class FunctionalChaosAlgorithm.

class gemseo.mlearning.regression.pce.CleaningOptions(max_considered_terms=100, most_significant=20, significance_factor=0.0001)[source]

Bases: object

The options of the CleaningStrategy.

Parameters:
• max_considered_terms (int) – The maximum number of coefficients of the polynomial basis to be considered. By default it is set to 100.

• most_significant (int) – The maximum number of significant coefficients of the polynomial basis to be kept. By default it is set to 20.

• significance_factor (float) – The threshold to select the significant coefficients of the polynomial basis. By default it is set to 0.0001.

max_considered_terms: int = 100

The maximum number of coefficients of the polynomial basis to be considered.

most_significant: int = 20

The maximum number of significant coefficients of the polynomial basis to be kept.

significance_factor: float = 0.0001

The threshold to select the significant coefficients of the polynomial basis.

class gemseo.mlearning.regression.pce.PCERegressor(data, probability_space, transformer=mappingproxy({}), input_names=None, output_names=None, degree=2, discipline=None, use_quadrature=False, use_lars=False, use_cleaning=False, hyperbolic_parameter=1.0, n_quadrature_points=0, cleaning_options=None)[source]

Polynomial chaos expansion model.

API documentation of the OpenTURNS class FunctionalChaosAlgorithm.

Parameters:
• data (IODataset | None) – The learning dataset required in the case of the least-squares regression or when discipline is None in the case of quadrature.

• probability_space (ParameterSpace) – The set of random input variables defined by OTDistribution instances.

• transformer (TransformerType) –

The strategies to transform the variables. The values are instances of Transformer while the keys are the names of either the variables or the groups of variables, e.g. "inputs" or "outputs" in the case of the regression algorithms. If a group is specified, the Transformer will be applied to all the variables of this group. If IDENTITY, do not transform the variables.

By default it is set to {}.

• input_names (Iterable[str] | None) – The names of the input variables. If None, consider all the input variables of the learning dataset.

• output_names (Iterable[str] | None) – The names of the output variables. If None, consider all the output variables of the learning dataset.

• degree (int) –

The polynomial degree of the PCE.

By default it is set to 2.

• discipline (MDODiscipline | None) – The discipline to be sampled if use_quadrature is True and data is None.

• use_quadrature (bool) –

Whether to estimate the coefficients of the PCE by a quadrature rule; if so, use the quadrature points stored in data or sample discipline; otherwise, estimate the coefficients by least-squares regression.

By default it is set to False.

• use_lars (bool) –

Whether to use the LARS algorithm in the case of the least-squares regression.

By default it is set to False.

• use_cleaning (bool) –

Whether to use the CleaningStrategy algorithm. Otherwise, use a fixed truncation strategy (FixedStrategy).

By default it is set to False.

• hyperbolic_parameter (float) –

The $$q$$-quasi norm parameter of the hyperbolic and anisotropic enumerate function, defined over the interval $$]0,1]$$.

By default it is set to 1.0.

• n_quadrature_points (int) –

The total number of quadrature points used by the quadrature strategy to compute the marginal number of points by input dimension when discipline is not None. If 0, use $$(1+P)^d$$ points, where $$d$$ is the dimension of the input space and $$P$$ is the polynomial degree of the PCE.

By default it is set to 0.

• cleaning_options (CleaningOptions | None) – The options of the CleaningStrategy. If None, use DEFAULT_CLEANING_OPTIONS.

Raises:

ValueError – When both data and discipline are missing; when both data and discipline are provided; when discipline is provided in the case of least-squares regression; when data is missing in the case of least-squares regression; when the probability space does not contain the distribution of an input variable; when an input variable has a data transformer; or when a probability distribution is not an OTDistribution.

class DataFormatters

Bases: DataFormatters

Machine learning regression model decorators.

classmethod format_dict(predict)

Make an array-based function callable with a dictionary of NumPy arrays.

Parameters:

predict (Callable[[ndarray], ndarray]) – The function to be called; it takes a NumPy array in input and returns a NumPy array.

Returns:

A function making the function ‘predict’ work with either a NumPy data array or a dictionary of NumPy data arrays indexed by variable names. The evaluation will have the same type as the input data.

Return type:

classmethod format_dict_jacobian(predict_jac)

Wrap an array-based function to make it callable with a dictionary of NumPy arrays.

Parameters:

predict_jac (Callable[[ndarray], ndarray]) – The function to be called; it takes a NumPy array in input and returns a NumPy array.

Returns:

The wrapped ‘predict_jac’ function, callable with either a NumPy data array or a dictionary of NumPy data arrays indexed by variable names. The return value will have the same type as the input data.

Return type:

classmethod format_input_output(predict)

Make a function robust to type, array shape and data transformation.

Parameters:

predict (Callable[[ndarray], ndarray]) – The function of interest to be called.

Returns:

A function calling the function of interest ‘predict’, while guaranteeing consistency in terms of data type and array shape, and applying input and/or output data transformation if required.

Return type:

classmethod format_samples(predict)

Make a 2D NumPy array-based function work with a 1D NumPy array.

Parameters:

predict (Callable[[ndarray], ndarray]) – The function to be called; it takes a 2D NumPy array in input and returns a 2D NumPy array. The first dimension represents the samples while the second one represents the components of the variables.

Returns:

A function making the function ‘predict’ work with either a 1D NumPy array or a 2D NumPy array. The evaluation will have the same dimension as the input data.

Return type:

classmethod format_transform(transform_inputs=True, transform_outputs=True)

Force a function to transform its input and/or output variables.

Parameters:
• transform_inputs (bool) –

Whether to transform the input variables.

By default it is set to True.

• transform_outputs (bool) –

Whether to transform the output variables.

By default it is set to True.

Returns:

A function evaluating a function of interest, after transforming its input data and/or before transforming its output data.

Return type:

classmethod transform_jacobian(predict_jac)

Apply transformation to inputs and inverse transformation to outputs.

Parameters:

predict_jac (Callable[[ndarray], ndarray]) – The function of interest to be called.

Returns:

A function evaluating the function ‘predict_jac’, after transforming its input data and/or before transforming its output data.

Return type:

learn(samples=None, fit_transformers=True)

Train the machine learning algorithm from the learning dataset.

Parameters:
• samples (Sequence[int] | None) – The indices of the learning samples. If None, use the whole learning dataset.

• fit_transformers (bool) –

Whether to fit the variable transformers.

By default it is set to True.

Return type:

None

Load a machine learning algorithm from a directory.

Parameters:

directory (str | Path) – The path to the directory where the machine learning algorithm is saved.

Return type:

None

predict(input_data, *args, **kwargs)

Evaluate ‘predict’ with either array or dictionary-based input data.

Firstly, the pre-processing stage converts the input data to a NumPy data array, if these data are expressed as a dictionary of NumPy data arrays.

Then, the processing evaluates the function ‘predict’ from this NumPy input data array.

Lastly, the post-processing transforms the output data to a dictionary of output NumPy data arrays if the input data were passed as a dictionary of NumPy data arrays.

Parameters:
• input_data (ndarray | Mapping[str, ndarray]) – The input data.

• *args – The positional arguments of the function ‘predict’.

• **kwargs – The keyword arguments of the function ‘predict’.

Returns:

The output data with the same type as the input one.

Return type:

predict_jacobian(input_data, *args, **kwargs)

Evaluate ‘predict_jac’ with either array or dictionary-based data.

Firstly, the pre-processing stage converts the input data to a NumPy data array, if these data are expressed as a dictionary of NumPy data arrays.

Then, the processing evaluates the function ‘predict_jac’ from this NumPy input data array.

Lastly, the post-processing transforms the output data to a dictionary of output NumPy data arrays if the input data were passed as a dictionary of NumPy data arrays.

Parameters:
• input_data – The input data.

• *args – The positional arguments of the function ‘predict_jac’.

• **kwargs – The keyword arguments of the function ‘predict_jac’.

Returns:

The output data with the same type as the input one.

predict_raw(input_data)

Predict output data from input data.

Parameters:

input_data (ndarray) – The input data with shape (n_samples, n_inputs).

Returns:

The predicted output data with shape (n_samples, n_outputs).

Return type:

ndarray

to_pickle(directory=None, path='.', save_learning_set=False)

Save the machine learning algorithm.

Parameters:
• directory (str | None) – The name of the directory to save the algorithm.

• path (str | Path) –

The path to the parent directory in which to create the directory.

By default it is set to “.”.

• save_learning_set (bool) –

Whether to save the learning set or get rid of it to lighten the saved files.

By default it is set to False.

Returns:

The path to the directory where the algorithm is saved.

Return type:

str

DEFAULT_TRANSFORMER: DefaultTransformerType = mappingproxy({'inputs': <gemseo.mlearning.transformers.scaler.min_max_scaler.MinMaxScaler object>, 'outputs': <gemseo.mlearning.transformers.scaler.min_max_scaler.MinMaxScaler object>})

The default transformer for the input and output data, if any.

FILENAME: ClassVar[str] = 'ml_algo.pkl'
IDENTITY: Final[DefaultTransformerType] = mappingproxy({})

A transformer leaving the input and output variables as they are.

LIBRARY: Final[str] = 'OpenTURNS'

The name of the library of the wrapped machine learning algorithm.

SHORT_ALGO_NAME: ClassVar[str] = 'PCE'

The short name of the machine learning algorithm, often an acronym.

Typically used for composite names, e.g. f"{algo.SHORT_ALGO_NAME}_{dataset.name}" or f"{algo.SHORT_ALGO_NAME}_{discipline.name}".

algo: Any

The interfaced machine learning algorithm.

property covariance: ndarray

The covariance matrix of the PCE model output.

property first_sobol_indices: list[dict[str, float]]

The first-order Sobol’ indices for the different output dimensions.

property input_data: ndarray

The input data matrix.

property input_dimension: int

The input space dimension.

input_names: list[str]

The names of the input variables.

input_space_center: dict[str, ndarray]

The center of the input space.

property is_trained: bool

Return whether the algorithm is trained.

property learning_samples_indices: Sequence[int]

The indices of the learning samples used for the training.

learning_set: Dataset

The learning dataset.

property mean: ndarray

The mean vector of the PCE model output.

property output_data: ndarray

The output data matrix.

property output_dimension: int

The output space dimension.

output_names: list[str]

The names of the output variables.

parameters: dict[str, MLAlgoParameterType]

The parameters of the machine learning algorithm.

property second_sobol_indices: list[dict[str, dict[str, float]]]

The second-order Sobol’ indices for the different output dimensions.

property standard_deviation: ndarray

The standard deviation vector of the PCE model output.

property total_sobol_indices: list[dict[str, float]]

The total Sobol’ indices for the different output dimensions.

transformer: dict[str, Transformer]

The strategies to transform the variables, if any.

The values are instances of Transformer while the keys are the names of either the variables or the groups of variables, e.g. “inputs” or “outputs” in the case of the regression algorithms. If a group is specified, the Transformer will be applied to all the variables of this group.

property variance: ndarray

The variance vector of the PCE model output.
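The statistical properties above (mean, variance, Sobol' indices) follow directly from the coefficients of an orthonormal PCE: the mean is $$w_0$$, the variance is $$\sum_{i\geq 1} w_i^2$$, and the Sobol' indices are ratios of partial sums of squared coefficients to the variance. The helper below (pce_statistics, a hypothetical name for illustration, not part of the gemseo API) sketches this post-processing:

```python
import numpy as np

def pce_statistics(coeffs, multi_indices):
    # Post-process an orthonormal PCE: the mean is w_0, the variance is the
    # sum of the squared remaining coefficients, and the first-order/total
    # Sobol' indices are partial sums of squared coefficients over variance.
    coeffs = np.asarray(coeffs, dtype=float)
    mean = coeffs[0]  # assumes multi_indices[0] == (0, ..., 0)
    variance = float(np.sum(coeffs[1:] ** 2))
    d = len(multi_indices[0])
    first, total = [], []
    for j in range(d):
        # First order: terms involving only variable j.
        only_j = sum(
            c ** 2
            for c, tau in zip(coeffs, multi_indices)
            if tau[j] > 0 and all(t == 0 for k, t in enumerate(tau) if k != j)
        )
        # Total: terms involving variable j at all.
        any_j = sum(c ** 2 for c, tau in zip(coeffs, multi_indices) if tau[j] > 0)
        first.append(only_j / variance)
        total.append(any_j / variance)
    return mean, variance, first, total

# Y = 1 + 2 psi_(1,0) + 3 psi_(0,1) + 1 psi_(1,1)
mean, var, s1, st = pce_statistics(
    [1.0, 2.0, 3.0, 1.0], [(0, 0), (1, 0), (0, 1), (1, 1)]
)
print(mean, var)  # 1.0 14.0
print(s1, st)     # first order [4/14, 9/14]; total [5/14, 10/14]
```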

## Examples using PCERegressor

PCE regression