
# gaussian_mixture module

The Gaussian mixture algorithm for clustering.

The Gaussian mixture algorithm groups the data into a fixed number $$k$$ of clusters. Each cluster $$i=1, \cdots, k$$ is defined by a mean $$\mu_i$$ and a covariance matrix $$\Sigma_i$$.

A point is assigned to the cluster whose Gaussian distribution, defined by the corresponding mean and covariance matrix, yields the highest probability density at that point:

$\operatorname{cluster}(x) = \underset{i=1,\cdots,k}{\operatorname{argmax}} \ \mathcal{N}(x; \mu_i, \Sigma_i)$

where $$\mathcal{N}(x; \mu_i, \Sigma_i)$$ is the value of the probability density function of a Gaussian random variable $$X \sim \mathcal{N}(\mu_i, \Sigma_i)$$ at the point $$x$$; for a fixed $$\Sigma_i$$, this density decreases with the Mahalanobis distance $$\|x-\mu_i\|_{\Sigma_i^{-1}} = \sqrt{(x-\mu_i)^T \Sigma_i^{-1} (x-\mu_i)}$$ between $$x$$ and $$\mu_i$$. Likewise, the probability that $$x$$ belongs to a cluster $$i=1, \cdots, k$$ is given by

$\mathbb{P}(x \in C_i) = \frac{\mathcal{N}(x; \mu_i, \Sigma_i)} {\sum_{j=1}^k \mathcal{N}(x; \mu_j, \Sigma_j)},$

where $$C_i = \{x\, | \, \operatorname{cluster}(x) = i \}$$.
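As an illustration, the two formulas above can be evaluated directly with SciPy's `multivariate_normal`; the means and covariance matrices below are made-up values for two clusters, not the result of any fitting:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Two illustrative clusters; these means and covariances are made up,
# not fitted from data.
means = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
covs = [np.eye(2), np.eye(2)]

def cluster_probabilities(x):
    """P(x in C_i) for each cluster i: Gaussian densities normalized by their sum."""
    densities = np.array(
        [multivariate_normal.pdf(x, mean=m, cov=c) for m, c in zip(means, covs)]
    )
    return densities / densities.sum()

def cluster(x):
    """Index of the cluster whose density is highest at x (the argmax above)."""
    return int(np.argmax(cluster_probabilities(x)))

x = np.array([2.8, 3.1])
print(cluster(x))  # the point is close to (3, 3), so cluster 1
```

The probabilities returned by `cluster_probabilities` sum to one by construction, and `cluster` picks their argmax, which is also the argmax of the unnormalized densities.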

When fitting the algorithm, the cluster centers $$\mu_i$$ and the covariance matrices $$\Sigma_i$$ are computed using the expectation-maximization algorithm.

This concept is implemented through the GaussianMixture class, which inherits from the MLClusteringAlgo class.

## Dependence

This clustering algorithm relies on the GaussianMixture class of the scikit-learn library.
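For reference, a minimal example of that underlying scikit-learn class on its own; the two synthetic blobs below are purely illustrative:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic data: two well-separated blobs around (0, 0) and (5, 5).
X = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(50, 2)),
    rng.normal(loc=5.0, scale=0.5, size=(50, 2)),
])

model = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = model.predict(X)       # hard cluster assignments, shape (100,)
probs = model.predict_proba(X)  # soft memberships, shape (100, 2)
```

Each row of `probs` sums to one, matching the membership probabilities defined above.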

Classes:

• GaussianMixture(data[, transformer, …]): The Gaussian mixture clustering algorithm.
class gemseo.mlearning.cluster.gaussian_mixture.GaussianMixture(data, transformer=None, var_names=None, n_components=5, **parameters)[source]

The Gaussian mixture clustering algorithm.

Parameters
• data (Dataset) – The learning dataset.

• transformer (Optional[TransformerType]) – The strategies to transform the variables.

• var_names (Optional[Iterable[str]]) – The names of the variables.

• n_components (int) – The number of components of the Gaussian mixture.

• parameters (Optional[Union[int,float,str,bool]]) – The parameters of the machine learning algorithm.

Return type

None

Attributes:

• ABBR

• FILENAME

• LIBRARY

• is_trained: Return whether the algorithm is trained.

Classes:

• DataFormatters: Decorators for the internal MLAlgo methods.

Methods:

• learn([samples]): Train the machine learning algorithm from the learning dataset.

• load_algo(directory): Load a machine learning algorithm from a directory.

• predict(data): Predict the clusters from the input data.

• predict_proba(data[, hard]): Predict the probability of belonging to each cluster from input data.

• save([directory, path, save_learning_set]): Save the machine learning algorithm.
ABBR = 'GaussMix'
class DataFormatters

Bases: object

Decorators for the internal MLAlgo methods.

FILENAME = 'ml_algo.pkl'
LIBRARY = None
property is_trained

Return whether the algorithm is trained.

learn(samples=None)

Train the machine learning algorithm from the learning dataset.

Parameters

samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

Return type

None

load_algo(directory)

Load a machine learning algorithm from a directory.

Parameters

directory (str) – The path to the directory where the machine learning algorithm is saved.

Return type

None

predict(data)

Predict the clusters from the input data.

The user can specify these input data either as a NumPy array, e.g. array([1., 2., 3.]) or as a dictionary, e.g. {'a': array([1.]), 'b': array([2., 3.])}.

If the NumPy arrays are two-dimensional, their i-th rows represent the input data of the i-th sample; if they are one-dimensional, they represent a single sample.

The type of the output data and the dimension of the output arrays will be consistent with the type of the input data and the dimension of the input arrays.

Parameters

data (Union[numpy.ndarray, Dict[str, numpy.ndarray]]) – The input data.

Returns

The predicted cluster for each input data sample.

Return type

Union[int, numpy.ndarray]

predict_proba(data, hard=True)

Predict the probability of belonging to each cluster from input data.

The user can specify these input data either as a NumPy array, e.g. array([1., 2., 3.]) or as a dictionary, e.g. {'a': array([1.]), 'b': array([2., 3.])}.

If the NumPy arrays are two-dimensional, their i-th rows represent the input data of the i-th sample; if they are one-dimensional, they represent a single sample.

The dimension of the output array will be consistent with the dimension of the input arrays.

Parameters
• data (Union[numpy.ndarray, Dict[str, numpy.ndarray]]) – The input data.

• hard (bool) – Whether clustering should be hard (True) or soft (False).

Returns

The probability of belonging to each cluster, with shape (n_samples, n_clusters) or (n_clusters,).

Return type

numpy.ndarray
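One plausible reading of the `hard` flag, sketched below: with `hard=True`, the soft membership probabilities collapse to a one-hot vector for the most probable cluster of each sample. `to_hard` is a hypothetical helper for illustration, not part of the API:

```python
import numpy as np

def to_hard(probabilities):
    """Collapse soft membership probabilities to one-hot (hard) ones.

    Hypothetical helper illustrating the hard/soft distinction.
    """
    probabilities = np.atleast_2d(probabilities)
    hard = np.zeros_like(probabilities)
    # Put probability 1 on the most probable cluster of each sample.
    hard[np.arange(len(probabilities)), probabilities.argmax(axis=1)] = 1.0
    return hard

soft = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
hard = to_hard(soft)  # rows become one-hot: cluster 0, then cluster 2
```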

save(directory=None, path='.', save_learning_set=False)

Save the machine learning algorithm.

Parameters
• directory (Optional[str]) – The name of the directory in which to save the algorithm.

• path (str) – The path to the parent directory where the directory is created.

• save_learning_set (bool) – Whether to save the learning set; if False, omit it to lighten the saved files.

Returns

The path to the directory where the algorithm is saved.

Return type

str