The k-means algorithm for clustering.

The k-means algorithm groups the data into clusters, where the number of clusters $$k$$ is fixed. This is done by initializing $$k$$ centroids in the design space. The points are grouped into clusters according to their nearest centroid.

When fitting the algorithm, each centroid is iteratively moved to the mean of its cluster, and each point is then reassigned to the cluster of its closest centroid. This process is repeated until convergence.
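The fitting loop described above (Lloyd's algorithm) can be sketched with NumPy; this is a minimal illustration of the idea, not the GEMSEO implementation, and the helper name `fit_kmeans` is chosen here for the example:

```python
import numpy as np

def fit_kmeans(points, k, n_iter=100, seed=0):
    """Alternate nearest-centroid assignment and centroid update until convergence."""
    rng = np.random.default_rng(seed)
    # Initialize the k centroids by drawing k distinct points from the data.
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest centroid.
        distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # Move each centroid to the mean of its cluster (keep it if the cluster is empty).
        new_centroids = np.array(
            [points[labels == i].mean(axis=0) if np.any(labels == i) else centroids[i]
             for i in range(k)]
        )
        if np.allclose(new_centroids, centroids):
            break  # Centroids stopped moving: the algorithm has converged.
        centroids = new_centroids
    return centroids, labels
```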

Cluster values of new points may be predicted by returning the value of the closest centroid. Denoting the centroids by $$(c_1, \cdots, c_k) \in \mathbb{R}^{n \times k}$$, and assuming they are pairwise distinct, we may compute the prediction

$\operatorname{cluster}(x) = \underset{i=1,\cdots,k}{\operatorname{argmin}} \|x-c_i\|.$
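The argmin rule above translates directly into a few lines of NumPy (a sketch of the formula, not the library's code):

```python
import numpy as np

def cluster(x, centroids):
    """Predict the cluster of x as the index of the nearest centroid."""
    distances = np.linalg.norm(centroids - x, axis=1)
    return int(distances.argmin())
```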

A probability measure may also be provided, using the distances from the point to each of the centroids:

$\mathbb{P}(x \in C_i) = \begin{cases} 1 & \text{if } x = c_i,\\ 0 & \text{if } x = c_j,\ j \neq i,\\ \frac{1/\|x-c_i\|}{\sum_{j=1}^k 1/\|x-c_j\|} & \text{if } x \neq c_j\ \forall j=1,\cdots,k, \end{cases}$

where $$C_i = \{x\, | \, \operatorname{cluster}(x) = i \}$$. Here, $$\mathbb{P}(x \in C_i)$$ represents the probability of cluster $$i$$ given the point $$x$$.
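The piecewise probability measure can be implemented as follows (a minimal sketch following the formula above; the helper name is illustrative):

```python
import numpy as np

def cluster_probabilities(x, centroids):
    """Probability of x belonging to each cluster, from inverse distances."""
    distances = np.linalg.norm(centroids - x, axis=1)
    on_centroid = distances == 0.0
    if on_centroid.any():
        # x coincides with a centroid: all the probability mass goes to that cluster.
        return on_centroid.astype(float)
    # Otherwise, weight each cluster by the inverse distance and normalize.
    weights = 1.0 / distances
    return weights / weights.sum()
```

A point equidistant from two centroids thus receives probability 1/2 for each of the two clusters.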

This concept is implemented through the KMeans class, which inherits from the MLClusteringAlgo class.

# Dependence

This clustering algorithm relies on the KMeans class of the scikit-learn library.
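Because the class wraps scikit-learn's KMeans, the underlying behavior can be previewed with scikit-learn directly (a minimal sketch; the data and parameter values are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

# Two well-separated groups of two points each.
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.1, 4.9]])

# Fit two clusters with a fixed random_state for deterministic initialization.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# New points are assigned to the cluster of their nearest centroid.
labels = model.predict(np.array([[0.05, 0.1], [5.05, 5.0]]))
```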

Classes:

 KMeans(data[, transformer, var_names, …]) The k-means clustering algorithm.
class gemseo.mlearning.cluster.kmeans.KMeans(data, transformer=None, var_names=None, n_clusters=5, random_state=0, **parameters)[source]

The k-means clustering algorithm.

Parameters
• n_clusters (int) – The number of clusters of the k-means algorithm.

• random_state (Optional[int]) – If None, use a random generation of the initial centroids. If not None, the integer is used to make the initialization deterministic.

• data (Dataset) –

• transformer (Optional[TransformerType]) –

• var_names (Optional[Iterable[str]]) –

• parameters (Optional[Union[int,float,bool,str]]) –

Return type

None

Classes:

 DataFormatters Decorators for the internal MLAlgo methods.

Attributes:

 is_trained Return whether the algorithm is trained.

Methods:

 learn([samples]) Train the machine learning algorithm from the learning dataset.
 load_algo(directory) Load a machine learning algorithm from a directory.
 predict(data) Predict the clusters from the input data.
 predict_proba(data[, hard]) Predict the probability of belonging to each cluster from input data.
 save([directory, path, save_learning_set]) Save the machine learning algorithm.
class DataFormatters

Decorators for the internal MLAlgo methods.

property is_trained

Return whether the algorithm is trained.

learn(samples=None)

Train the machine learning algorithm from the learning dataset.

Parameters

samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

Return type

None

load_algo(directory)

Load a machine learning algorithm from a directory.

Parameters

directory (str) – The path to the directory where the machine learning algorithm is saved.

Return type

None

predict(data)

Predict the clusters from the input data.

The user can specify these input data either as a NumPy array, e.g. array([1., 2., 3.]) or as a dictionary, e.g. {'a': array([1.]), 'b': array([2., 3.])}.

If the NumPy arrays are 2-dimensional, the i-th row represents the input data of the i-th sample; if they are 1-dimensional, they represent a single sample.

The type of the output data and the dimension of the output arrays will be consistent with the type of the input data and the dimension of the input arrays.

Parameters

data (Union[numpy.ndarray, Dict[str, numpy.ndarray]]) – The input data.

Returns

The predicted cluster for each input data sample.

Return type

Union[int, numpy.ndarray]

predict_proba(data, hard=True)

Predict the probability of belonging to each cluster from input data.

The user can specify these input data either as a NumPy array, e.g. array([1., 2., 3.]) or as a dictionary, e.g. {'a': array([1.]), 'b': array([2., 3.])}.

If the NumPy arrays are 2-dimensional, the i-th row represents the input data of the i-th sample; if they are 1-dimensional, they represent a single sample.

The dimension of the output array will be consistent with the dimension of the input arrays.

Parameters
• data (Union[numpy.ndarray, Dict[str, numpy.ndarray]]) – The input data.

• hard (bool) – Whether clustering should be hard (True) or soft (False).

Returns

The probability of belonging to each cluster, with shape (n_samples, n_clusters) or (n_clusters,).

Return type

numpy.ndarray

save(directory=None, path='.', save_learning_set=False)

Save the machine learning algorithm.

Parameters
• directory (Optional[str]) – The name of the directory to save the algorithm.

• path (str) – The path to parent directory where to create the directory.

• save_learning_set (bool) – Whether to save the learning set; if False, it is omitted to lighten the saved files.

Returns

The path to the directory where the algorithm is saved.

Return type

str