
# gaussian_mixture module

The Gaussian mixture algorithm for clustering.

The Gaussian mixture algorithm groups the data into clusters. The number of clusters is fixed. Each cluster $$i=1, \cdots, k$$ is defined by a mean $$\mu_i$$ and a covariance matrix $$\Sigma_i$$.

The predicted cluster of a point is simply the cluster for which the probability density of the Gaussian distribution, defined by the corresponding mean and covariance matrix, is the highest:

$$\operatorname{cluster}(x) = \underset{i=1,\cdots,k}{\operatorname{argmax}} \ \mathcal{N}(x; \mu_i, \Sigma_i)$$

where $$\mathcal{N}(x; \mu_i, \Sigma_i)$$ is the value of the probability density function of a Gaussian random variable $$X \sim \mathcal{N}(\mu_i, \Sigma_i)$$ at the point $$x$$ and $$\|x-\mu_i\|_{\Sigma_i^{-1}} = \sqrt{(x-\mu_i)^T \Sigma_i^{-1} (x-\mu_i)}$$ is the Mahalanobis distance between $$x$$ and $$\mu_i$$ weighted by $$\Sigma_i$$. Likewise, the probability of belonging to a cluster $$i=1, \cdots, k$$ may be determined through

$$\mathbb{P}(x \in C_i) = \frac{\mathcal{N}(x; \mu_i, \Sigma_i)}{\sum_{j=1}^k \mathcal{N}(x; \mu_j, \Sigma_j)},$$

where $$C_i = \{x \, | \, \operatorname{cluster}(x) = i\}$$.
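Both formulas can be evaluated directly with SciPy's `multivariate_normal`. In the sketch below, the means and covariance matrices are illustrative values chosen by hand, not the output of a fitted model:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Two illustrative clusters with known means and covariance matrices.
means = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]
covs = [np.eye(2), 2.0 * np.eye(2)]

def cluster(x):
    """argmax_i N(x; mu_i, Sigma_i), as in the prediction formula above."""
    densities = [multivariate_normal(m, c).pdf(x) for m, c in zip(means, covs)]
    return int(np.argmax(densities))

def membership_probabilities(x):
    """P(x in C_i) = N(x; mu_i, Sigma_i) / sum_j N(x; mu_j, Sigma_j)."""
    densities = np.array(
        [multivariate_normal(m, c).pdf(x) for m, c in zip(means, covs)]
    )
    return densities / densities.sum()

x = np.array([0.5, -0.5])
print(cluster(x))                  # closest to the first mean
print(membership_probabilities(x))
```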

When fitting the algorithm, the cluster centers $$\mu_i$$ and the covariance matrices $$\Sigma_i$$ are computed using the expectation-maximization algorithm.
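As an illustration of that fitting step, here is a bare-bones expectation-maximization loop for a one-dimensional, two-component mixture on synthetic data (a didactic sketch; GEMSEO itself delegates the fitting to scikit-learn):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: two well-separated 1-D Gaussian components.
data = np.concatenate([rng.normal(-3.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])

# Initial guesses for the weights, means and variances.
w = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

def normal_pdf(x, m, v):
    return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2.0 * np.pi * v)

for _ in range(50):
    # E-step: responsibilities r[i, j] = P(sample i belongs to component j).
    dens = np.stack([w[j] * normal_pdf(data, mu[j], var[j]) for j in range(2)], axis=1)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate the parameters from the responsibilities.
    n = r.sum(axis=0)
    w = n / len(data)
    mu = (r * data[:, None]).sum(axis=0) / n
    var = (r * (data[:, None] - mu) ** 2).sum(axis=0) / n

print(mu)  # both estimates land near the true means -3 and 3
```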

This concept is implemented through the GaussianMixture class which inherits from the MLClusteringAlgo class.

## Dependence

This clustering algorithm relies on the GaussianMixture class of the scikit-learn library.
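For reference, the wrapped scikit-learn estimator can also be used on its own; GEMSEO adds the dataset, transformer and variable-naming layers on top of it. A minimal sketch on synthetic data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two well-separated blobs of 2-D samples.
data = np.vstack([rng.normal(0.0, 0.5, (50, 2)), rng.normal(4.0, 0.5, (50, 2))])

# n_components plays the same role as GEMSEO's n_components parameter.
model = GaussianMixture(n_components=2, random_state=0).fit(data)
labels = model.predict(data)
print(labels[:5], labels[-5:])  # one label per blob
```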

class gemseo.mlearning.clustering.gaussian_mixture.GaussianMixture(data, transformer=mappingproxy({}), var_names=None, n_components=5, random_state=0, **parameters)[source]

The Gaussian mixture clustering algorithm.

Parameters:
• data (Dataset) – The learning dataset.

• transformer (TransformerType) –

The strategies to transform the variables. The values are instances of Transformer while the keys are the names of either the variables or the groups of variables, e.g. "inputs" or "outputs" in the case of the regression algorithms. If a group is specified, the Transformer will be applied to all the variables of this group. If IDENTITY, do not transform the variables.

By default it is set to {}.

• var_names (Iterable[str] | None) – The names of the variables. If None, consider all variables mentioned in the learning dataset.

• n_components (int) –

The number of components of the Gaussian mixture.

By default it is set to 5.

• random_state (int | None) –

The random state passed to the random number generator. Use an integer for reproducible results.

By default it is set to 0.

• **parameters (int | float | str | bool | None) – The parameters of the machine learning algorithm.

Raises:

ValueError – When both the variable and the group it belongs to have a transformer.

learn(samples=None, fit_transformers=True)

Train the machine learning algorithm from the learning dataset.

Parameters:
• samples (Sequence[int] | None) – The indices of the learning samples. If None, use the whole learning dataset.

• fit_transformers (bool) –

Whether to fit the variable transformers. Otherwise, use them as they are.

By default it is set to True.

Return type:

None

Load a machine learning algorithm from a directory.

Parameters:

directory (str | Path) – The path to the directory where the machine learning algorithm is saved.

Return type:

None

predict(data)

Predict the clusters from the input data.

The user can specify these input data either as a NumPy array, e.g. array([1., 2., 3.]) or as a dictionary, e.g. {'a': array([1.]), 'b': array([2., 3.])}.

If the NumPy arrays are two-dimensional, their i-th rows represent the input data of the i-th samples; if they are one-dimensional, they represent a single sample.

The type of the output data and the dimension of the output arrays will be consistent with the type of the input data and the dimension of the input arrays.

Parameters:

data (DataType) – The input data.

Returns:

The predicted cluster for each input data sample.

Return type:

int | ndarray
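The dimension convention described above can be mimicked around the scikit-learn predictor; `predict_one_or_many` below is a hypothetical helper written for illustration, not a GEMSEO or scikit-learn API:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.0, 0.5, (50, 2)), rng.normal(4.0, 0.5, (50, 2))])
model = GaussianMixture(n_components=2, random_state=0).fit(data)

def predict_one_or_many(model, x):
    """Return an int for a 1-D sample and an array of labels for a 2-D batch."""
    x = np.asarray(x)
    if x.ndim == 1:
        # A single sample: promote to a batch of one and return a scalar.
        return int(model.predict(x[None, :])[0])
    return model.predict(x)

print(predict_one_or_many(model, np.array([0.1, -0.2])))  # int
print(predict_one_or_many(model, data[:3]))               # array of 3 labels
```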

predict_proba(data, hard=True)

Predict the probability of belonging to each cluster from input data.

The user can specify these input data either as a numpy array, e.g. array([1., 2., 3.]) or as a dictionary, e.g. {'a': array([1.]), 'b': array([2., 3.])}.

If the NumPy arrays are two-dimensional, their i-th rows represent the input data of the i-th samples; if they are one-dimensional, they represent a single sample.

The dimension of the output array will be consistent with the dimension of the input arrays.

Parameters:
• data (ndarray | Mapping[str, ndarray]) – The input data.

• hard (bool) –

Whether clustering should be hard (True) or soft (False).

By default it is set to True.

Returns:

The probability of belonging to each cluster, with shape (n_samples, n_clusters) or (n_clusters,).

Return type:

ndarray
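The soft probabilities correspond to scikit-learn's `predict_proba` output, and a hard assignment is the argmax over each row of that output; a sketch of this relationship on synthetic data (illustrative, not GEMSEO's own implementation):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two well-separated blobs of 2-D samples.
data = np.vstack([rng.normal(0.0, 0.5, (50, 2)), rng.normal(4.0, 0.5, (50, 2))])
model = GaussianMixture(n_components=2, random_state=0).fit(data)

soft = model.predict_proba(data[:4])  # shape (n_samples, n_clusters); rows sum to 1
hard = soft.argmax(axis=1)            # one cluster index per sample
print(soft.shape, hard.shape)
```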

to_pickle(directory=None, path='.', save_learning_set=False)

Save the machine learning algorithm.

Parameters:
• directory (str | None) – The name of the directory in which to save the algorithm.

• path (str | Path) –

The path to the parent directory in which to create the directory.

By default it is set to ".".

• save_learning_set (bool) –

Whether to save the learning set, or discard it to lighten the saved files.

By default it is set to False.

Returns:

The path to the directory where the algorithm is saved.

Return type:

str

DEFAULT_TRANSFORMER: DefaultTransformerType = mappingproxy({})

The default transformer for the input and output data, if any.

DataFormatters: ClassVar[type[BaseDataFormatters]]

The data formatters for the learning and prediction methods.

FILENAME: ClassVar[str] = 'ml_algo.pkl'
IDENTITY: Final[DefaultTransformerType] = mappingproxy({})

A transformer leaving the input and output variables as they are.

LIBRARY: Final[str] = 'scikit-learn'

The name of the library of the wrapped machine learning algorithm.

SHORT_ALGO_NAME: ClassVar[str] = 'GMM'

The short name of the machine learning algorithm, often an acronym.

Typically used for composite names, e.g. f"{algo.SHORT_ALGO_NAME}_{dataset.name}" or f"{algo.SHORT_ALGO_NAME}_{discipline.name}".

algo: Any

The interfaced machine learning algorithm.

input_names: list[str]

The names of the variables.

property is_trained: bool

Return whether the algorithm is trained.

labels: list[int]

The indices of the clusters for the different samples.

property learning_samples_indices: Sequence[int]

The indices of the learning samples used for the training.

learning_set: Dataset

The learning dataset.

n_clusters: int

The number of clusters.

parameters: dict[str, MLAlgoParameterType]

The parameters of the machine learning algorithm.

resampling_results: dict[str, tuple[Resampler, list[MLAlgo], list[ndarray] | ndarray]]

The resampler class names bound to the resampling results.

A resampling result is formatted as (resampler, ml_algos, predictions) where resampler is a Resampler, ml_algos is the list of the associated machine learning algorithms built during the resampling stage and predictions are the predictions obtained with those algorithms.

resampling_results stores only one resampling result per resampler type (e.g. "CrossValidation", "LeaveOneOut" and "Bootstrap").

transformer: dict[str, Transformer]

The strategies to transform the variables, if any.

The values are instances of Transformer while the keys are the names of either the variables or the groups of variables, e.g. "inputs" or "outputs" in the case of the regression algorithms. If a group is specified, the Transformer will be applied to all the variables of this group.

## Examples using GaussianMixture

Gaussian Mixtures