gemseo.mlearning.classification.svm module

The Support Vector Machine algorithm for classification.

This module implements the SVMClassifier class. A support vector machine (SVM) maps the data to a higher-dimensional space through a kernel so that the classes become linearly separable.

Dependence

The classifier relies on the SVC class of the scikit-learn library.

class gemseo.mlearning.classification.svm.SVMClassifier(data, transformer=mappingproxy({}), input_names=None, output_names=None, C=1.0, kernel='rbf', probability=False, random_state=0, **parameters)[source]

Bases: MLClassificationAlgo

The Support Vector Machine algorithm for classification.

Parameters:
  • data (IODataset) – The learning dataset.

  • transformer (TransformerType) –

    The strategies to transform the variables. The values are instances of Transformer while the keys are the names of either the variables or the groups of variables, e.g. "inputs" or "outputs" in the case of the regression algorithms. If a group is specified, the Transformer will be applied to all the variables of this group. If IDENTITY, do not transform the variables.

    By default it is set to {}.

  • input_names (Iterable[str] | None) – The names of the input variables. If None, consider all the input variables of the learning dataset.

  • output_names (Iterable[str] | None) – The names of the output variables. If None, consider all the output variables of the learning dataset.

  • C (float) –

    The inverse L2 regularization parameter. Higher values give less regularization.

    By default it is set to 1.0.

  • kernel (str | Callable | None) –

    The name of the kernel for the SVM, e.g. “linear”, “poly”, “rbf”, “sigmoid” or “precomputed”, or a callable.

    By default it is set to “rbf”.

  • probability (bool) –

    Whether to enable the probability estimates. The algorithm is faster if set to False.

    By default it is set to False.

  • random_state (int | None) –

    The random state passed to the random number generator. Use an integer for reproducible results.

    By default it is set to 0.

  • **parameters (int | float | bool | str | None) – The parameters of the machine learning algorithm.

Raises:

ValueError – When both the variable and the group it belongs to have a transformer.
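
A minimal usage sketch (not taken from the gemseo documentation): the construction of the learning dataset through add_input_group and add_output_group is an assumption about the IODataset API, while the SVMClassifier arguments follow the signature above.

from numpy import array

from gemseo.datasets.io_dataset import IODataset
from gemseo.mlearning.classification.svm import SVMClassifier

# Assumed IODataset API: fill the input and output groups of the learning dataset.
dataset = IODataset()
dataset.add_input_group(
    array([[0.0, 0.0], [0.2, 0.1], [0.9, 1.0], [1.0, 0.8]]), variable_names=["x"]
)
dataset.add_output_group(array([[0], [0], [1], [1]]), variable_names=["y"])

# probability=True enables the probability estimates used by predict_proba.
model = SVMClassifier(dataset, C=1.0, kernel="rbf", probability=True)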

DataFormatters

alias of SupervisedDataFormatters

learn(samples=None, fit_transformers=True)

Train the machine learning algorithm from the learning dataset.

Parameters:
  • samples (Sequence[int] | None) – The indices of the learning samples. If None, use the whole learning dataset.

  • fit_transformers (bool) –

    Whether to fit the variable transformers. Otherwise, use them as they are.

    By default it is set to True.

Return type:

None
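
A minimal sketch, assuming model is the SVMClassifier instance built above:

model.learn()                   # train on the whole learning dataset
model.learn(samples=[0, 1, 2])  # or train on a subset of the learning samples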

load_algo(directory)

Load a machine learning algorithm from a directory.

Parameters:

directory (str | Path) – The path to the directory where the machine learning algorithm is saved.

Return type:

None
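
A minimal sketch, assuming "svm_classifier" is a directory previously produced by to_pickle (documented below) and model is an SVMClassifier instance:

model.load_algo("svm_classifier")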

predict(input_data)

Predict output data from input data.

The user can specify these input data either as a NumPy array, e.g. array([1., 2., 3.]) or as a dictionary, e.g. {'a': array([1.]), 'b': array([2., 3.])}.

If the NumPy arrays are two-dimensional, their i-th rows represent the input data of the i-th sample; if they are one-dimensional, they represent a single sample.

The type of the output data and the dimension of the output arrays will be consistent with the type of the input data and the size of the input arrays.

Parameters:

input_data (ndarray | Mapping[str, ndarray]) – The input data.

Returns:

The predicted output data.

Return type:

ndarray | Mapping[str, ndarray]
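
A minimal sketch of the two accepted input formats, assuming model has been trained as above; the input variable name "x" is an illustrative assumption:

from numpy import array

prediction = model.predict(array([0.1, 0.2]))                 # 1D array: a single sample
predictions = model.predict(array([[0.1, 0.2], [0.9, 0.9]]))  # 2D array: one row per sample
prediction_by_name = model.predict({"x": array([0.1, 0.2])})  # dictionary input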

predict_proba(input_data, hard=True)

Predict the probability of belonging to each class from input data.

The user can specify these input data either as a NumPy array, e.g. array([1., 2., 3.]) or as a dictionary, e.g. {'a': array([1.]), 'b': array([2., 3.])}.

If the NumPy arrays are two-dimensional, their i-th rows represent the input data of the i-th sample; if they are one-dimensional, they represent a single sample.

The type of the output data and the dimension of the output arrays will be consistent with the type of the input data and the size of the input arrays.

Parameters:
  • input_data (DataType) – The input data.

  • hard (bool) –

    Whether the classification should be hard (True) or soft (False).

    By default it is set to True.

Returns:

The probability of belonging to each class.

Return type:

ndarray
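
A minimal sketch, assuming model was built with probability=True (required by the underlying scikit-learn SVC to estimate probabilities) and trained as above:

from numpy import array

probabilities = model.predict_proba(array([0.1, 0.2]), hard=False)  # soft: class probabilities
memberships = model.predict_proba(array([0.1, 0.2]), hard=True)     # hard: indicator of the most likely class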

to_pickle(directory=None, path='.', save_learning_set=False)

Save the machine learning algorithm.

Parameters:
  • directory (str | None) – The name of the directory in which to save the algorithm.

  • path (str | Path) –

    The path to the parent directory where the directory will be created.

    By default it is set to “.”.

  • save_learning_set (bool) –

    Whether to save the learning set or get rid of it to lighten the saved files.

    By default it is set to False.

Returns:

The path to the directory where the algorithm is saved.

Return type:

str
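
A minimal sketch, assuming model has been trained; the directory name is illustrative:

saved_directory = model.to_pickle(directory="svm_classifier")
# The returned path points to the directory containing the 'ml_algo.pkl' file (see FILENAME below).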

DEFAULT_TRANSFORMER: DefaultTransformerType = mappingproxy({'inputs': <gemseo.mlearning.transformers.scaler.min_max_scaler.MinMaxScaler object>})

The default transformer for the input and output data, if any.

FILENAME: ClassVar[str] = 'ml_algo.pkl'
IDENTITY: Final[DefaultTransformerType] = mappingproxy({})

A transformer leaving the input and output variables as they are.

LIBRARY: Final[str] = 'scikit-learn'

The name of the library of the wrapped machine learning algorithm.

SHORT_ALGO_NAME: ClassVar[str] = 'SVM'

The short name of the machine learning algorithm, often an acronym.

Typically used for composite names, e.g. f"{algo.SHORT_ALGO_NAME}_{dataset.name}" or f"{algo.SHORT_ALGO_NAME}_{discipline.name}".

algo: Any

The interfaced machine learning algorithm.

property input_data: ndarray

The input data matrix.

property input_dimension: int

The input space dimension.

input_names: list[str]

The names of the input variables.

input_space_center: dict[str, ndarray]

The center of the input space.

property is_trained: bool

Return whether the algorithm is trained.

property learning_samples_indices: Sequence[int]

The indices of the learning samples used for the training.

learning_set: Dataset

The learning dataset.

n_classes: int

The number of classes.

property output_data: ndarray

The output data matrix.

property output_dimension: int

The output space dimension.

output_names: list[str]

The names of the output variables.

parameters: dict[str, MLAlgoParameterType]

The parameters of the machine learning algorithm.

resampling_results: dict[str, tuple[Resampler, list[MLAlgo], list[ndarray] | ndarray]]

The resampler class names bound to the resampling results.

A resampling result is formatted as (resampler, ml_algos, predictions) where resampler is a Resampler, ml_algos is the list of machine learning algorithms built during the resampling stage and predictions are the predictions obtained with these algorithms.

resampling_results stores only one resampling result per resampler type (e.g., "CrossValidation", "LeaveOneOut" and "Bootstrap").

transformer: dict[str, Transformer]

The strategies to transform the variables, if any.

The values are instances of Transformer while the keys are the names of either the variables or the groups of variables, e.g. “inputs” or “outputs” in the case of the regression algorithms. If a group is specified, the Transformer will be applied to all the variables of this group.
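
A minimal sketch of passing a transformer, using the MinMaxScaler whose import path appears in DEFAULT_TRANSFORMER above; dataset is assumed to be the IODataset built in the first example:

from gemseo.mlearning.transformers.scaler.min_max_scaler import MinMaxScaler
from gemseo.mlearning.classification.svm import SVMClassifier

# Scale every input variable to [0, 1] before training.
model = SVMClassifier(dataset, transformer={"inputs": MinMaxScaler()})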