Introduction to machine learning#

Introduction#

When a Discipline is costly to evaluate, it can be replaced by a SurrogateDiscipline that is cheap to evaluate, e.g. a linear model, Kriging or an RBF regressor. This SurrogateDiscipline is built from a few evaluations of the original Discipline. The learning phase commonly relies on a regression model calibrated by machine learning techniques. This is why GEMSEO provides a machine learning package, which includes the BaseRegressor class implementing the concept of regression model. This package also offers a much broader set of features than regression: clustering, classification, dimension reduction, data scaling, etc.

See also

Surrogate models
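The idea can be sketched independently of GEMSEO: an expensive discipline is evaluated a few times, and a cheap approximation built from those samples answers subsequent queries. The piecewise-linear interpolator below is a hypothetical stand-in for a real regression model such as Kriging.

```python
import bisect
import math

def expensive_discipline(x: float) -> float:
    """A discipline that is costly to evaluate (imagine a long simulation)."""
    return x * math.sin(x)

# Learning phase: a few evaluations of the costly discipline.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [expensive_discipline(x) for x in xs]

def surrogate(x: float) -> float:
    """Cheap piecewise-linear surrogate built from the samples above."""
    i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])

# The surrogate reproduces the training points exactly...
assert abs(surrogate(2.0) - expensive_discipline(2.0)) < 1e-12
# ...and approximates the discipline between them.
print(surrogate(2.5), expensive_discipline(2.5))
```

Once built, the surrogate can be called many times at negligible cost, which is the whole point of replacing the original discipline.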

This module contains the base class for machine learning algorithms.

Machine learning is the art of building models from data, the latter being samples of properties of interest that can sometimes be sorted by group, such as inputs, outputs or categories.

In the absence of such groups, the data can be analyzed through a study of commonalities, leading to plausible clusters. This is referred to as clustering, a branch of unsupervised learning dedicated to the detection of patterns in unlabeled data.

See also

unsupervised, clustering
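A minimal, library-independent sketch of clustering: a tiny k-means on unlabeled 1-D data, alternating between assigning each point to its nearest centroid and recomputing the centroids.

```python
import statistics

# Unlabeled 1-D data with two visible groups.
data = [1.0, 1.2, 0.8, 9.7, 10.1, 10.4]

# A tiny k-means with two clusters: alternate assignment and centroid update.
centroids = [min(data), max(data)]
for _ in range(10):
    clusters = [[], []]
    for x in data:
        nearest = 0 if abs(x - centroids[0]) <= abs(x - centroids[1]) else 1
        clusters[nearest].append(x)
    centroids = [statistics.mean(c) for c in clusters]

print(centroids)  # two plausible clusters, one around 1 and one around 10
```

Real clustering algorithms handle higher dimensions, empty clusters and an unknown number of groups; this sketch only shows the pattern-detection idea.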

When data can be separated into at least two categories by a human, supervised learning can start with classification, whose purpose is to model the relations between these categories and the properties of interest. Once trained, a classification model can predict the category corresponding to new property values.

See also

supervised, classification
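As an illustrative sketch (not the GEMSEO API), a nearest-centroid classifier learns one centroid per human-provided category and predicts the category of a new property value:

```python
import statistics

# Labeled training data: property values sorted into two categories.
training = {"small": [1.0, 1.5, 0.5], "large": [8.0, 9.0, 10.0]}

# "Training": compute one centroid per category.
centroids = {label: statistics.mean(values) for label, values in training.items()}

def classify(x: float) -> str:
    """Predict the category of a new property value."""
    return min(centroids, key=lambda label: abs(x - centroids[label]))

print(classify(2.0), classify(7.0))
```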

When the distinction between inputs and outputs can be made among the data properties, another branch of supervised learning can be considered: regression modeling. Once trained, a regression model can predict the outputs corresponding to new input values.

See also

supervised, regression
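The simplest regression model is a line fitted by ordinary least squares; the closed-form sketch below shows the train-then-predict workflow on toy data.

```python
# Training data with an input/output distinction.
x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.1, 4.9, 7.0]  # roughly y = 2x + 1

# Ordinary least squares for a line y = a*x + b (closed form).
n = len(x)
mx, my = sum(x) / n, sum(y) / n
a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
b = my - a * mx

def predict(xi: float) -> float:
    """Predict the output for a new input value."""
    return a * xi + b

print(predict(4.0))  # close to 2*4 + 1 = 9
```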

The quality of a machine learning algorithm can be measured with a BaseMLAlgoQuality, either with respect to the training dataset or to a test dataset, or using resampling methods such as k-fold or leave-one-out cross-validation. The challenge is to avoid overfitting the training data, which leads to a loss of generality: we often want models that are not too dataset-dependent, and therefore seek to maximize both the learning quality and the generalization quality. In unsupervised learning, a quality measure can represent the robustness of the cluster definitions, while in supervised learning, a quality measure can be interpreted as an error: a misclassification error in the case of classification algorithms, or a prediction error in the case of regression algorithms. This quality can often be improved by building machine learning models from standardized data, so that the data properties have the same order of magnitude.

See also

quality_measure, transformer
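Leave-one-out cross-validation, one of the resampling methods mentioned above, can be sketched in a few lines: each sample is held out in turn, the model is trained on the rest, and the errors on the held-out samples are averaged. A trivial "predict the mean" model keeps the example self-contained.

```python
# Leave-one-out cross-validation of a "predict the mean" model.
data = [2.0, 4.0, 6.0, 8.0]

errors = []
for i, left_out in enumerate(data):
    train = data[:i] + data[i + 1:]
    prediction = sum(train) / len(train)  # model trained without sample i
    errors.append((prediction - left_out) ** 2)

mse = sum(errors) / len(errors)  # an estimate of the generalization error
print(mse)
```

Because each prediction is made on a sample the model never saw, this mean squared error estimates the generalization quality rather than the learning quality.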

Lastly, a machine learning algorithm often depends on hyperparameters that must be carefully tuned to maximize the generalization power of the model.
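The simplest tuning strategy is a grid search: evaluate a validation error for each candidate hyperparameter value and keep the best one. The error curve below is purely hypothetical, standing in for a real cross-validation measure.

```python
# Grid search: pick the hyperparameter value minimizing a validation error.
def validation_error(bandwidth: float) -> float:
    """Hypothetical validation error as a function of a model hyperparameter."""
    return (bandwidth - 0.3) ** 2 + 0.05  # minimum at bandwidth = 0.3

grid = [0.1, 0.2, 0.3, 0.4, 0.5]
best = min(grid, key=validation_error)
print(best)
```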

class BaseMLAlgo(data, settings_model=None, **settings)[source]

An abstract machine learning algorithm.

Parameters:
  • data (Dataset) -- The training dataset.

  • settings_model (BaseMLAlgoSettings | None) -- The machine learning algorithm settings as a Pydantic model. If None, use **settings.

  • **settings (Any) -- The machine learning algorithm settings. These arguments are ignored when settings_model is not None.

Raises:

ValueError -- When a variable and the group it belongs to both have a transformer.

learn(samples=(), fit_transformers=True)[source]

Train the machine learning algorithm from the training dataset.

Parameters:
  • samples (Sequence[int]) --

    The indices of the learning samples. If empty, use the whole training dataset.

    By default it is set to ().

  • fit_transformers (bool) --

    Whether to fit the variable transformers. Otherwise, use them as they are.

    By default it is set to True.

Return type:

None

DEFAULT_TRANSFORMER: DefaultTransformerType = mappingproxy({})

The default transformer for the input and output data, if any.

DataFormatters: ClassVar[type[BaseDataFormatters]]

The data formatters for the learning and prediction methods.

LIBRARY: ClassVar[str] = ''

The name of the library of the wrapped machine learning algorithm.

SHORT_ALGO_NAME: ClassVar[str] = 'BaseMLAlgo'

The short name of the machine learning algorithm, often an acronym.

Typically used for composite names, e.g. f"{algo.SHORT_ALGO_NAME}_{dataset.name}" or f"{algo.SHORT_ALGO_NAME}_{discipline.name}".

Settings: ClassVar[type[BaseMLAlgoSettings]]

The Pydantic model class for the settings of the machine learning algorithm.

algo: Any

The interfaced machine learning algorithm.

property is_trained: bool

Return whether the algorithm is trained.

property learning_samples_indices: Sequence[int]

The indices of the learning samples used for the training.

learning_set: Dataset

The training dataset.

resampling_results: dict[str, tuple[BaseResampler, list[BaseMLAlgo], list[ndarray] | ndarray]]

The resampler class names bound to the resampling results.

A resampling result is formatted as (resampler, ml_algos, predictions) where resampler is a BaseResampler, ml_algos is the list of the associated machine learning algorithms built during the resampling stage and predictions are the predictions obtained with these algorithms.

resampling_results stores only one resampling result per resampler type (e.g., "CrossValidation", "LeaveOneOut" and "Bootstrap").

transformer: dict[str, BaseTransformer]

The strategies to transform the variables, if any.

The values are instances of BaseTransformer while the keys are the names of either the variables or the groups of variables, e.g. "inputs" or "outputs" in the case of the regression algorithms. If a group is specified, the BaseTransformer will be applied to all the variables of this group.
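The transformer mechanism can be sketched without GEMSEO: a minimal standardizer exposes fit and transform methods, and a dict keyed by variable or group names (mirroring the transformer attribute above) maps each name to its transformer instance.

```python
import statistics

class Standardizer:
    """A minimal transformer: center the data and scale it to unit variance."""

    def fit(self, values):
        self.mean = statistics.mean(values)
        self.std = statistics.pstdev(values)
        return self

    def transform(self, values):
        return [(v - self.mean) / self.std for v in values]

# Keys name a variable or a whole group of variables, as in `transformer`.
transformer = {"inputs": Standardizer(), "y": Standardizer()}

x = [10.0, 20.0, 30.0]
scaled = transformer["inputs"].fit(x).transform(x)
print(scaled)  # centered around 0 with unit variance
```

After standardization, all properties share the same order of magnitude, which is the quality improvement mentioned earlier.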

Development#

This diagram shows the hierarchy of all machine learning algorithms, and where they interact with Dataset, BaseMLAlgoQuality, BaseTransformer and MLAlgoCalibration.

class Dataset {
}

abstract class BaseMLAlgo {
 +SHORT_ALGO_NAME
 +LIBRARY
 +algo
 +is_trained
 +learning_set
 +parameters
 +transformer
 +DataFormatters
 +learn()
 +save()
 #save_algo()
 +load_algo()
 #get_objects_to_save()
}

abstract class BaseMLUnsupervisedAlgo {
 +var_names
 +learn()
 #fit()
}

abstract class BaseClusterer {
 +n_clusters
 +labels
 +learn()
 +predict()
 +predict_proba()
 #predict_proba()
 #predict_proba_hard()
 #predict_proba_soft()
}

abstract class BaseMLSupervisedAlgo {
 +input_names
 +input_dimension
 +output_names
 +output_dimension
 +DataFormatters
 +learn()
 +predict()
 #fit()
 #predict()
 #get_objects_to_save()
}

abstract class BaseClassifier {
 +n_classes
 +learn()
 +predict_proba()
 #predict_proba()
 #predict_proba_hard()
 #predict_proba_soft()
 #get_objects_to_save()
}

abstract class BaseRegressor {
 +DataFormatters
 +predict_raw()
 +predict_jacobian()
 #predict_jacobian()
}

abstract class BaseMLAlgoQuality

abstract class SurrogateDiscipline {
 +input_grammar
 +output_grammar
 +execute()
 +linearize()
}

abstract class BaseTransformer {
 +duplicate()
 +fit()
 +transform()
 +inverse_transform()
 +fit_transform()
 +compute_jacobian()
 +compute_jacobian_inverse()
}


BaseMLAlgo *-- Dataset
BaseMLAlgo *-- BaseTransformer
BaseMLAlgo <|-down- BaseMLUnsupervisedAlgo
BaseMLAlgo <|-down- BaseMLSupervisedAlgo
BaseMLUnsupervisedAlgo <|-down- BaseClusterer
BaseMLSupervisedAlgo <|-- BaseRegressor
BaseMLSupervisedAlgo <|-- BaseClassifier
BaseMLAlgoQuality *-left- BaseMLAlgo
SurrogateDiscipline *-down- BaseRegressor