
# gpr module

Gaussian process regression model.

## Overview

The Gaussian process regression (GPR) model expresses the model output as a weighted sum of kernel functions centered on the learning input data:

$y = \mu + w_1\kappa(\|x-x_1\|;\epsilon) + w_2\kappa(\|x-x_2\|;\epsilon) + \dots + w_N\kappa(\|x-x_N\|;\epsilon)$

## Details

The GPR model relies on the assumption that the original model $$f$$ to be replaced is a realization of a Gaussian process (GP) with mean $$\mu$$ and covariance $$\sigma^2\kappa(\|x-x'\|;\epsilon)$$.

Then, the GP conditioned by the learning set $$(x_i,y_i)_{1\leq i \leq N}$$ is entirely defined by its expectation:

$\hat{f}(x) = \hat{\mu} + \hat{w}^T k(x)$

and its covariance:

$\hat{c}(x,x') = \hat{\sigma}^2 - k(x)^T K^{-1} k(x')$

where $$[\hat{\mu};\hat{w}]=([1_N~K]^T[1_N~K])^{-1}[1_N~K]^TY$$ with $$K_{ij}=\kappa(\|x_i-x_j\|;\hat{\epsilon})$$, $$k_i(x)=\kappa(\|x-x_i\|;\hat{\epsilon})$$ and $$Y_i=y_i$$.

The correlation length vector $$\epsilon$$ is estimated by numerical non-linear optimization.
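
The formulas above can be illustrated numerically. The sketch below uses an illustrative squared-exponential kernel $$\kappa(r;\epsilon)=\exp(-(r/\epsilon)^2)$$ and a fixed correlation length; the kernel, data and length scale are our assumptions, not gemseo defaults, and the normal equations are solved with a least-squares routine:

```python
import numpy as np

# Illustrative kernel kappa(r; eps) = exp(-(r / eps)**2); eps is fixed here
# instead of being estimated by nonlinear optimization.
def kappa(r, eps=0.15):
    return np.exp(-(r / eps) ** 2)

x = np.linspace(0.0, 1.0, 8)                # learning inputs x_1, ..., x_N
y = np.sin(2.0 * np.pi * x)                 # learning outputs y_1, ..., y_N

K = kappa(np.abs(x[:, None] - x[None, :]))  # K_ij = kappa(|x_i - x_j|)
A = np.column_stack([np.ones_like(x), K])   # the block matrix [1_N K]
beta = np.linalg.lstsq(A, y, rcond=None)[0]  # [mu_hat; w_hat]
mu_hat, w_hat = beta[0], beta[1:]

def f_hat(x_new):
    """Expectation of the conditioned GP: mu_hat + w_hat^T k(x)."""
    return mu_hat + w_hat @ kappa(np.abs(x_new - x))

# Without a nugget term, the expectation interpolates the learning data:
print(np.allclose([f_hat(xi) for xi in x], y))   # True
```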

## Surrogate model

The expectation $$\hat{f}$$ is the surrogate model of $$f$$.

## Error measure

The standard deviation $$\hat{s}$$ is a local error measure of $$\hat{f}$$:

$\hat{s}(x):=\sqrt{\hat{c}(x,x)}$

## Interpolation or regression

The GPR model can be either regressive or interpolative, depending on the value of the nugget effect $$\alpha\geq 0$$, a regularization term applied to the correlation matrix $$K$$. When $$\alpha = 0$$, the surrogate model interpolates the learning data.
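
A minimal numerical sketch of this effect on a plain kernel system $$Kw = Y$$ (illustrative kernel and data, not gemseo defaults): with $$\alpha = 0$$ the fitted weights reproduce the learning outputs exactly, while with $$\alpha > 0$$ the regularized system only approximates them.

```python
import numpy as np

# Illustrative squared-exponential kernel with a fixed length scale.
def kappa(r, eps=0.15):
    return np.exp(-(r / eps) ** 2)

x = np.linspace(0.0, 1.0, 6)
y = np.cos(2.0 * np.pi * x)
K = kappa(np.abs(x[:, None] - x[None, :]))

interpolates = []
for alpha in (0.0, 1e-2):
    w = np.linalg.solve(K + alpha * np.eye(len(x)), y)  # regularized fit
    interpolates.append(bool(np.allclose(K @ w, y)))    # exact on the data?
print(interpolates)   # [True, False]
```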

## Dependence

The GPR model relies on the GaussianProcessRegressor class of the scikit-learn library.
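
Since gemseo wraps this scikit-learn estimator, the model can be sketched directly with scikit-learn using settings close to the defaults listed below (Matérn 2.5 kernel, small nugget, restarted optimizer). This is an illustration of the underlying class, not gemseo's exact configuration:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor as SKGPR
from sklearn.gaussian_process.kernels import Matern

X = np.linspace(0.0, 1.0, 10)[:, None]   # 10 one-dimensional samples
y = np.sin(2.0 * np.pi * X).ravel()

model = SKGPR(kernel=Matern(nu=2.5), alpha=1e-10,
              n_restarts_optimizer=10, random_state=0)
model.fit(X, y)
mean, std = model.predict(X, return_std=True)  # prediction and local error
print(mean.shape, std.shape)   # (10,) (10,)
```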

class gemseo.mlearning.regression.gpr.GaussianProcessRegressor(data, transformer=mappingproxy({}), input_names=None, output_names=None, kernel=None, bounds=None, alpha=1e-10, optimizer='fmin_l_bfgs_b', n_restarts_optimizer=10, random_state=0)[source]

Gaussian process regression model.

Parameters:
• data (IODataset) – The learning dataset.

• transformer (TransformerType) –

The strategies to transform the variables. The values are instances of BaseTransformer while the keys are the names of either the variables or the groups of variables, e.g. "inputs" or "outputs" in the case of the regression algorithms. If a group is specified, the BaseTransformer will be applied to all the variables of this group. If IDENTITY, do not transform the variables.

By default it is set to {}.

• input_names (Iterable[str] | None) – The names of the input variables. If None, consider all the input variables of the learning dataset.

• output_names (Iterable[str] | None) – The names of the output variables. If None, consider all the output variables of the learning dataset.

• kernel (Kernel | None) – The kernel specifying the covariance model. If None, use a Matérn(2.5).

• bounds (__Bounds | Mapping[str, __Bounds] | None) – The lower and upper bounds of the parameter length scales when kernel is None. Either a unique lower-upper pair common to all the inputs or lower-upper pairs for some of them. When bounds is None or when an input has no pair, the lower bound is 0.01 and the upper bound is 100.

• alpha (float | RealArray) –

The nugget effect to regularize the model.

By default it is set to 1e-10.

• optimizer (str | Callable) –

The optimization algorithm to find the parameter length scales.

By default it is set to “fmin_l_bfgs_b”.

• n_restarts_optimizer (int) –

The number of restarts of the optimizer.

By default it is set to 10.

• random_state (int | None) –

The random state passed to the random number generator. Use an integer for reproducible results.

By default it is set to 0.

Raises:

ValueError – When both the variable and the group it belongs to have a transformer.

compute_samples(input_data, n_samples, seed=0)[source]

Sample a random vector from the conditioned Gaussian process.

Parameters:
• input_data (RealArray) – The $$N$$ input points of dimension $$d$$ at which to observe the conditioned Gaussian process; shaped as (N, d).

• n_samples (int) – The number of samples $$M$$.

• seed (int) –

The seed for reproducible results.

By default it is set to 0.

Returns:

The output samples per output dimension shaped as (N, M).

Return type:

tuple[RealArray]
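
For intuition, sampling a conditioned GP at $$N$$ points amounts to drawing $$M$$ realizations from a multivariate normal distribution with the posterior mean and covariance; in the sketch below both are placeholders rather than quantities computed by gemseo:

```python
import numpy as np

# Hypothetical sketch: draw M realizations of a conditioned GP at N points
# from a multivariate normal, giving an array shaped (N, M).
rng = np.random.default_rng(0)        # seed for reproducible results
N, M = 5, 3
mean = np.zeros(N)                    # posterior mean at the N points
cov = 0.1 * np.eye(N)                 # posterior covariance (illustrative)
samples = rng.multivariate_normal(mean, cov, size=M).T
print(samples.shape)   # (5, 3)
```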

learn(samples=None, fit_transformers=True)

Train the machine learning algorithm from the learning dataset.

Parameters:
• samples (Sequence[int] | None) – The indices of the learning samples. If None, use the whole learning dataset.

• fit_transformers (bool) –

Whether to fit the variable transformers. Otherwise, use them as they are.

By default it is set to True.

Return type:

None

Load a machine learning algorithm from a directory.

Parameters:

directory (str | Path) – The path to the directory where the machine learning algorithm is saved.

Return type:

None

predict(input_data)

Predict output data from input data.

The user can specify these input data either as a NumPy array, e.g. array([1., 2., 3.]) or as a dictionary, e.g. {'a': array([1.]), 'b': array([2., 3.])}.

If the NumPy arrays are of dimension 2, their i-th rows represent the input data of the i-th sample; while if the NumPy arrays are of dimension 1, there is a single sample.

The type of the output data and the dimension of the output arrays will be consistent with the type of the input data and the size of the input arrays.

Parameters:

input_data (ndarray | Mapping[str, ndarray]) – The input data.

Returns:

The predicted output data.

Return type:

DataType

predict_jacobian(input_data)

Predict the Jacobians of the regression model at input_data.

The user can specify these input data either as a NumPy array, e.g. array([1., 2., 3.]) or as a dictionary, e.g. {'a': array([1.]), 'b': array([2., 3.])}.

If the NumPy arrays are of dimension 2, their i-th rows represent the input data of the i-th sample; while if the NumPy arrays are of dimension 1, there is a single sample.

The type of the output data and the dimension of the output arrays will be consistent with the type of the input data and the size of the input arrays.

Parameters:

input_data (DataType) – The input data.

Returns:

The predicted Jacobian data.

Return type:

NoReturn

predict_raw(input_data)

Predict output data from input data.

Parameters:

input_data (RealArray) – The input data with shape (n_samples, n_inputs).

Returns:

The predicted output data with shape (n_samples, n_outputs).

Return type:

RealArray
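
The 1-D vs 2-D input convention described for predict() can be sketched with a small helper (ours, not a gemseo function): a 1-D array is treated as one sample and reshaped to (1, n_inputs) before a raw batched prediction.

```python
import numpy as np

# Hypothetical helper mirroring the shape convention: 1-D input means a
# single sample; 2-D input is already a batch of shape (n_samples, n_inputs).
def as_batch(input_data):
    arr = np.asarray(input_data)
    if arr.ndim == 1:                 # single sample
        return arr[None, :], True
    return arr, False                 # already batched

batch, single = as_batch(np.array([1.0, 2.0, 3.0]))
print(batch.shape, single)            # (1, 3) True
batch2, single2 = as_batch(np.ones((4, 3)))
print(batch2.shape, single2)          # (4, 3) False
```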

predict_std(input_data)[source]

Predict the standard deviation from input data.

The user can specify these input data either as a NumPy array, e.g. array([1., 2., 3.]) or as a dictionary of NumPy arrays, e.g. {'a': array([1.]), 'b': array([2., 3.])}.

If the NumPy arrays are of dimension 2, their i-th rows represent the input data of the i-th sample; while if the NumPy arrays are of dimension 1, there is a single sample.

Parameters:

input_data (DataType) – The input data.

Returns:

The standard deviation at the query points.

Warning

This statistic is expressed in the transformed output space. If the original output space differs from the transformed one, you can estimate the standard deviation in the original space by sampling the model, e.g. with compute_samples().

Return type:

RealArray
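
As an illustration of the warning above: for a purely linear output transformer such as min-max scaling, the standard deviation maps back to the original space through the scale factor (the bounds and value below are hypothetical; nonlinear transformers have no such closed form, hence the sampling advice).

```python
# Hypothetical illustration: for a linear min-max output scaling
# y_t = (y - y_min) / (y_max - y_min), a standard deviation predicted in
# the transformed space maps back via the scale factor (y_max - y_min).
y_min, y_max = 2.0, 10.0        # illustrative output bounds
std_transformed = 0.05          # illustrative value in transformed space
std_original = (y_max - y_min) * std_transformed
print(std_original)             # 0.4
```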

to_pickle(directory=None, path='.', save_learning_set=False)

Save the machine learning algorithm.

Parameters:
• directory (str | None) – The name of the directory to save the algorithm.

• path (str | Path) –

The path to the parent directory in which to create the directory.

By default it is set to “.”.

• save_learning_set (bool) –

Whether to save the learning set or get rid of it to lighten the saved files.

By default it is set to False.

Returns:

The path to the directory where the algorithm is saved.

Return type:

str

DEFAULT_TRANSFORMER: DefaultTransformerType = mappingproxy({'inputs': <gemseo.mlearning.transformers.scaler.min_max_scaler.MinMaxScaler object>, 'outputs': <gemseo.mlearning.transformers.scaler.min_max_scaler.MinMaxScaler object>})

The default transformer for the input and output data, if any.

FILENAME: ClassVar[str] = 'ml_algo.pkl'
IDENTITY: Final[DefaultTransformerType] = mappingproxy({})

A transformer leaving the input and output variables as they are.

LIBRARY: ClassVar[str] = 'scikit-learn'

The name of the library of the wrapped machine learning algorithm.

SHORT_ALGO_NAME: ClassVar[str] = 'GPR'

The short name of the machine learning algorithm, often an acronym.

Typically used for composite names, e.g. f"{algo.SHORT_ALGO_NAME}_{dataset.name}" or f"{algo.SHORT_ALGO_NAME}_{discipline.name}".

algo: Any

The interfaced machine learning algorithm.

property input_data: ndarray

The input data matrix.

property input_dimension: int

The input space dimension.

input_names: list[str]

The names of the input variables.

input_space_center: dict[str, ndarray]

The center of the input space.

property is_trained: bool

Return whether the algorithm is trained.

property kernel

The kernel used for prediction.

property learning_samples_indices: Sequence[int]

The indices of the learning samples used for the training.

learning_set: Dataset

The learning dataset.

property output_data: ndarray

The output data matrix.

property output_dimension: int

The output space dimension.

output_names: list[str]

The names of the output variables.

parameters: dict[str, MLAlgoParameterType]

The parameters of the machine learning algorithm.

resampling_results: dict[str, tuple[BaseResampler, list[BaseMLAlgo], list[ndarray] | ndarray]]

The resampler class names bound to the resampling results.

A resampling result is formatted as (resampler, ml_algos, predictions) where resampler is a BaseResampler, ml_algos is the list of the associated machine learning algorithms built during the resampling stage and predictions are the predictions obtained with the latter.

resampling_results stores only one resampling result per resampler type (e.g., "CrossValidation", "LeaveOneOut" and "Bootstrap").

transformer: dict[str, BaseTransformer]

The strategies to transform the variables, if any.

The values are instances of BaseTransformer while the keys are the names of either the variables or the groups of variables, e.g. “inputs” or “outputs” in the case of the regression algorithms. If a group is specified, the BaseTransformer will be applied to all the variables of this group.
