The Gaussian mixture algorithm for clustering.

The Gaussian mixture algorithm groups the data into a fixed number \(k\) of clusters. Each cluster \(i=1, \cdots, k\) is defined by a mean \(\mu_i\) and a covariance matrix \(\Sigma_i\).

The predicted cluster of a point is simply the cluster whose Gaussian distribution, defined by the corresponding mean and covariance matrix, has the highest probability density at that point:

\[\operatorname{cluster}(x) = \underset{i=1,\cdots,k}{\operatorname{argmax}} \ \mathcal{N}(x; \mu_i, \Sigma_i)\]

where \(\mathcal{N}(x; \mu_i, \Sigma_i)\) is the value of the probability density function of a Gaussian random variable \(X \sim \mathcal{N}(\mu_i, \Sigma_i)\) at the point \(x\); the exponent of this density is \(-\tfrac{1}{2}\|x-\mu_i\|_{\Sigma_i^{-1}}^2\), where \(\|x-\mu_i\|_{\Sigma_i^{-1}} = \sqrt{(x-\mu_i)^T \Sigma_i^{-1} (x-\mu_i)}\) is the Mahalanobis distance between \(x\) and \(\mu_i\) weighted by \(\Sigma_i\). Likewise, the probability of belonging to a cluster \(i=1, \cdots, k\) may be determined through

\[\mathbb{P}(x \in C_i) = \frac{\mathcal{N}(x; \mu_i, \Sigma_i)} {\sum_{j=1}^k \mathcal{N}(x; \mu_j, \Sigma_j)},\]

where \(C_i = \{x\, | \, \operatorname{cluster}(x) = i \}\).
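As an illustration of the two formulas above, the sketch below evaluates the hard assignment and the membership probabilities with SciPy's multivariate normal density; the two clusters and the test point are arbitrary values chosen for the example.

    from numpy import array
    from scipy.stats import multivariate_normal

    # Two arbitrary clusters (mean, covariance) and an arbitrary test point.
    means = [array([0.0, 0.0]), array([4.0, 4.0])]
    covariances = [array([[1.0, 0.0], [0.0, 1.0]]), array([[2.0, 0.5], [0.5, 1.0]])]
    x = array([3.0, 3.5])

    # N(x; mu_i, Sigma_i): the Gaussian density of each cluster at the point x.
    densities = array(
        [multivariate_normal(mean, cov).pdf(x) for mean, cov in zip(means, covariances)]
    )

    cluster = densities.argmax()                 # cluster(x) = argmax_i N(x; mu_i, Sigma_i)
    probabilities = densities / densities.sum()  # P(x in C_i) for i = 1, ..., k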

When fitting the algorithm, the cluster centers \(\mu_i\) and the covariance matrices \(\Sigma_i\) are computed using the expectation-maximization algorithm.

This concept is implemented through the GaussianMixture class which inherits from the MLClusteringAlgo class.

Dependence

This clustering algorithm relies on the GaussianMixture class of the scikit-learn library.
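For reference, a minimal sketch of the underlying scikit-learn estimator on which this class is built; the data are arbitrary and random_state is only set to make the sketch reproducible.

    from numpy import array
    from sklearn.mixture import GaussianMixture

    data = array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9]])

    model = GaussianMixture(n_components=2, random_state=0).fit(data)
    labels = model.predict(data)               # hard cluster assignment
    probabilities = model.predict_proba(data)  # soft membership, shape (n_samples, 2)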

class gemseo.mlearning.cluster.gaussian_mixture.GaussianMixture(data, transformer=None, var_names=None, n_components=5, **parameters)[source]

The Gaussian mixture clustering algorithm.

Parameters
  • data (Dataset) – The learning dataset.

  • transformer (Mapping[str, TransformerType] | None) –

    The strategies to transform the variables. The values are instances of Transformer while the keys are the names of either the variables or the groups of variables, e.g. “inputs” or “outputs” in the case of the regression algorithms. If a group is specified, the Transformer will be applied to all the variables of this group. If None, do not transform the variables.

    By default it is set to None.

  • var_names (Iterable[str] | None) –

    The names of the variables. If None, consider all variables mentioned in the learning dataset.

    By default it is set to None.

  • n_components (int) –

    The number of components of the Gaussian mixture.

    By default it is set to 5.

  • **parameters (int | float | str | bool | None) – The parameters of the machine learning algorithm.

Raises

ValueError – When both the variable and the group it belongs to have a transformer.
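A minimal instantiation sketch, assuming a learning Dataset built from a NumPy array through Dataset.set_from_array; the variable name "x", its size and the sample values are arbitrary, and the Dataset construction step may differ between GEMSEO versions.

    from numpy import array
    from gemseo.core.dataset import Dataset
    from gemseo.mlearning.cluster.gaussian_mixture import GaussianMixture

    # Arbitrary two-dimensional samples forming two well-separated groups.
    samples = array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9]])

    dataset = Dataset("points")
    dataset.set_from_array(samples, variables=["x"], sizes={"x": 2})

    algo = GaussianMixture(dataset, n_components=2)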

class DataFormatters

Decorators for the internal MLAlgo methods.


learn(samples=None, fit_transformers=True)

Train the machine learning algorithm from the learning dataset.

Parameters
  • samples (Sequence[int] | None) –

    The indices of the learning samples. If None, use the whole learning dataset.

    By default it is set to None.

  • fit_transformers (bool) –

    Whether to fit the variable transformers.

    By default it is set to True.

Return type

None
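A short usage sketch, assuming the hypothetical algo instance from the constructor sketch above; the sample indices are arbitrary.

    algo.learn()                   # train on the whole learning dataset
    algo.learn(samples=[0, 1, 2])  # or train on a subset of the learning samples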

load_algo(directory)

Load a machine learning algorithm from a directory.

Parameters

directory (str | Path) – The path to the directory where the machine learning algorithm is saved.

Return type

None

predict(data)

Predict the clusters from the input data.

The user can specify these input data either as a NumPy array, e.g. array([1., 2., 3.]) or as a dictionary, e.g. {'a': array([1.]), 'b': array([2., 3.])}.

If the NumPy arrays are two-dimensional, their i-th rows represent the input data of the i-th sample; if they are one-dimensional, they represent a single sample.

The type of the output data and the dimension of the output arrays will be consistent with the type of the input data and the dimension of the input arrays.

Parameters

data (DataType) – The input data.

Returns

The predicted cluster for each input data sample.

Return type

int | ndarray
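A short usage sketch, assuming the hypothetical algo instance from the previous sketches has already been trained with learn(); the variable name "x" and the values are arbitrary.

    from numpy import array

    algo.predict(array([0.1, 0.05]))                # one sample, returns a cluster index
    algo.predict(array([[0.1, 0.05], [5.1, 5.0]]))  # two samples, returns an array of indices
    algo.predict({"x": array([0.1, 0.05])})         # dictionary input keyed by variable name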

predict_proba(data, hard=True)

Predict the probability of belonging to each cluster from input data.

The user can specify these input data either as a NumPy array, e.g. array([1., 2., 3.]) or as a dictionary, e.g. {'a': array([1.]), 'b': array([2., 3.])}.

If the NumPy arrays are two-dimensional, their i-th rows represent the input data of the i-th sample; if they are one-dimensional, they represent a single sample.

The dimension of the output array will be consistent with the dimension of the input arrays.

Parameters
  • data (Union[numpy.ndarray, Mapping[str, numpy.ndarray]]) – The input data.

  • hard (bool) –

    Whether clustering should be hard (True) or soft (False).

    By default it is set to True.

Returns

The probability of belonging to each cluster, with shape (n_samples, n_clusters) or (n_clusters,).

Return type

numpy.ndarray
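A short usage sketch, assuming the same trained algo instance as above; the values are arbitrary.

    from numpy import array

    algo.predict_proba(array([0.1, 0.05]), hard=False)    # soft membership, shape (n_clusters,)
    algo.predict_proba(array([[0.1, 0.05], [5.1, 5.0]]))  # hard (0/1) membership, shape (n_samples, n_clusters)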

save(directory=None, path='.', save_learning_set=False)

Save the machine learning algorithm.

Parameters
  • directory (str | None) –

    The name of the directory to save the algorithm.

    By default it is set to None.

  • path (str | Path) –

The path to the parent directory in which to create the directory.

By default it is set to ".".

  • save_learning_set (bool) –

    Whether to save the learning set or get rid of it to lighten the saved files.

    By default it is set to False.

Returns

The path to the directory where the algorithm is saved.

Return type

str
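A short usage sketch combining save() and load_algo(), assuming the same trained algo instance as above; the directory name is arbitrary.

    # Save the trained algorithm and keep the returned directory path.
    directory = algo.save(directory="gmm_model", save_learning_set=True)

    # Later, restore the trained algorithm from that directory.
    algo.load_algo(directory)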

property is_trained: bool

Return whether the algorithm is trained.

property learning_samples_indices: Sequence[int]

The indices of the learning samples used for the training.

Example