knn module¶
The k-nearest neighbors algorithm for classification.
The k-nearest neighbor classification algorithm predicts the output class of a new input point by selecting, through voting, the majority class among the k nearest neighbors in a training set. The algorithm may also predict the probability of belonging to each class by counting the number of occurrences of that class within the k nearest neighbors.
Let \((x_i)_{i=1,\cdots,n_{\text{samples}}}\in \mathbb{R}^{n_{\text{samples}}\times n_{\text{inputs}}}\) and \((y_i)_{i=1,\cdots,n_{\text{samples}}}\in \{1,\cdots,n_{\text{classes}}\}^{n_{\text{samples}}}\) denote the input and output training data respectively.
The procedure for predicting the class of a new input point \(x\in \mathbb{R}^{n_{\text{inputs}}}\) is the following:
Let \(i_1(x), \cdots, i_{n_{\text{samples}}}(x)\) be the indices of the input training points sorted by distance to the prediction point \(x\), i.e.

\[\|x - x_{i_1(x)}\| \leq \|x - x_{i_2(x)}\| \leq \cdots \leq \|x - x_{i_{n_{\text{samples}}}(x)}\|.\]

The ordered indices may be formally determined through the inductive formula

\[i_{p+1}(x) = \operatorname{arg\,min}_{i \in I_p(x)} \|x - x_i\|, \quad p = 1, \cdots, n_{\text{samples}} - 1,\]

where

\[I_p(x) = \{1, \cdots, n_{\text{samples}}\} \setminus \{i_1(x), \cdots, i_p(x)\},\]

that is, the set of all sample indices except the first \(p\) ordered ones, with \(i_1(x) = \operatorname{arg\,min}_{i = 1, \cdots, n_{\text{samples}}} \|x - x_i\|\).
Then, by denoting \(\operatorname{mode}(\cdot)\) the mode operator, i.e. the operator that extracts the element with the highest number of occurrences, we may define the prediction operator as the mode of the set of output classes associated with the \(k\) first indices (the classes of the \(k\) nearest neighbors of \(x\)):

\[F(x) = \operatorname{mode}\left(y_{i_1(x)}, \cdots, y_{i_k(x)}\right).\]
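The sorting, voting and probability-counting steps above can be sketched with the Python standard library alone. The helper `knn_predict` below is purely illustrative and is not part of the GEMSEO API:

```python
from collections import Counter
from math import dist


def knn_predict(x, samples, labels, k=5):
    """Predict the majority class among the k nearest training points.

    ``samples`` is a list of input points and ``labels`` the matching classes.
    Illustrative helper only, not part of GEMSEO.
    """
    # Sort the training indices by distance to the prediction point x,
    # mirroring the ordering i_1(x), ..., i_n(x) defined above.
    ordered = sorted(range(len(samples)), key=lambda i: dist(x, samples[i]))
    # Count the classes of the k nearest neighbors...
    votes = Counter(labels[i] for i in ordered[:k])
    # ...and return their mode along with the per-class probabilities.
    probabilities = {c: n / k for c, n in votes.items()}
    return votes.most_common(1)[0][0], probabilities


samples = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0), (6.0, 5.0)]
labels = [0, 0, 0, 1, 1]
predicted, proba = knn_predict((0.5, 0.5), samples, labels, k=3)
# The 3 nearest neighbors of (0.5, 0.5) all belong to class 0,
# so predicted == 0 and proba == {0: 1.0}.
```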
This concept is implemented through the KNNClassifier
class which
inherits from the MLClassificationAlgo
class.
Dependence¶
The classifier relies on the KNeighborsClassifier class of the scikit-learn library.
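Since the classifier wraps scikit-learn's estimator, the underlying class can also be used directly. A minimal sketch with synthetic one-dimensional data (the training values below are invented for illustration):

```python
from sklearn.neighbors import KNeighborsClassifier

# Toy training data: two well-separated classes.
X_train = [[0.0], [1.0], [2.0], [10.0], [11.0], [12.0]]
y_train = [0, 0, 0, 1, 1, 1]

# Fit the estimator with k = 3 neighbors (GEMSEO's default is 5).
model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)

# Majority vote among the 3 nearest neighbors: all belong to class 0.
prediction = model.predict([[1.5]])
# Class probabilities from the neighbor counts: all 3 neighbors are class 1.
probabilities = model.predict_proba([[11.0]])
```

`KNNClassifier` delegates both `predict` and `predict_proba` behaviors to this estimator while handling GEMSEO datasets and transformers around it.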
- class gemseo.mlearning.classification.knn.KNNClassifier(data, transformer=mappingproxy({}), input_names=None, output_names=None, n_neighbors=5, **parameters)[source]
Bases:
MLClassificationAlgo
The k-nearest neighbors classification algorithm.
- Parameters:
data (IODataset) – The learning dataset.
transformer (TransformerType) – The strategies to transform the variables. The values are instances of Transformer while the keys are the names of either the variables or the groups of variables, e.g. "inputs" or "outputs" in the case of the regression algorithms. If a group is specified, the Transformer will be applied to all the variables of this group. If IDENTITY, do not transform the variables. By default it is set to {}.
input_names (Iterable[str] | None) – The names of the input variables. If None, consider all the input variables of the learning dataset.
output_names (Iterable[str] | None) – The names of the output variables. If None, consider all the output variables of the learning dataset.
n_neighbors (int) – The number of neighbors. By default it is set to 5.
**parameters (int | str) – The parameters of the machine learning algorithm.
- Raises:
ValueError – When both the variable and the group it belongs to have a transformer.
- LIBRARY: Final[str] = 'scikit-learn'
The name of the library of the wrapped machine learning algorithm.
- SHORT_ALGO_NAME: ClassVar[str] = 'KNN'
The short name of the machine learning algorithm, often an acronym. Typically used for composite names, e.g. f"{algo.SHORT_ALGO_NAME}_{dataset.name}" or f"{algo.SHORT_ALGO_NAME}_{discipline.name}".
- algo: Any
The interfaced machine learning algorithm.
- learning_set: Dataset
The learning dataset.
- n_classes: int
The number of classes.
- resampling_results: dict[str, tuple[Resampler, list[MLAlgo], list[ndarray] | ndarray]]
The resampler class names bound to the resampling results.
A resampling result is formatted as (resampler, ml_algos, predictions) where resampler is a Resampler, ml_algos is the list of the associated machine learning algorithms built during the resampling stage and predictions are the predictions obtained with the latter. resampling_results stores only one resampling result per resampler type (e.g., "CrossValidation", "LeaveOneOut" and "Bootstrap").
- transformer: dict[str, Transformer]
The strategies to transform the variables, if any.
The values are instances of Transformer while the keys are the names of either the variables or the groups of variables, e.g. "inputs" or "outputs" in the case of the regression algorithms. If a group is specified, the Transformer will be applied to all the variables of this group.
Examples using KNNClassifier¶
K nearest neighbors classification