
quality_measure module

Measuring the quality of a machine learning algorithm.

class gemseo.mlearning.quality_measures.quality_measure.MLQualityMeasure(algo, fit_transformers=True)[source]

Bases: object

An abstract quality measure to assess a machine learning algorithm.

This measure can be minimized (e.g. MSEMeasure) or maximized (e.g. R2Measure).

It can be evaluated from the learning dataset, from a test dataset, or using resampling techniques such as bootstrap, cross-validation or leave-one-out.

The machine learning algorithm is usually already trained. If it is not and the evaluation technique requires it, the quality measure will train it.

Lastly, the transformers of the algorithm fitted from the learning dataset can either be used as they are by the resampling methods or be re-fitted for each algorithm trained on a subset of the learning dataset.

Parameters:
  • algo (MLAlgo) – A machine learning algorithm.

  • fit_transformers (bool) –

    Whether to re-fit the transformers when using resampling techniques. If False, use the transformers of the algorithm fitted from the whole learning dataset.

    By default it is set to True.
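
For illustration, here is a minimal sketch of how a concrete measure such as MSEMeasure might be set up for a regression model; the dataset construction and the LinearRegressor import path are assumptions based on typical GEMSEO usage and may differ between versions:

    from numpy import linspace, newaxis, sin

    from gemseo.datasets.io_dataset import IODataset  # assumed import path
    from gemseo.mlearning.quality_measures.mse_measure import MSEMeasure
    from gemseo.mlearning.regression.linreg import LinearRegressor  # assumed import path

    # Build a small input-output learning dataset (illustrative toy data).
    x = linspace(0.0, 1.0, 20)[:, newaxis]
    dataset = IODataset()
    dataset.add_input_group(x, ["x"])
    dataset.add_output_group(sin(x), ["y"])

    # The algorithm does not need to be trained beforehand: the measure
    # trains it when the chosen evaluation technique requires it.
    algo = LinearRegressor(dataset)
    measure = MSEMeasure(algo, fit_transformers=True)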

class EvaluationMethod(value)[source]

Bases: StrEnum

The evaluation method.

BOOTSTRAP = 'BOOTSTRAP'

The name of the method to evaluate the measure by bootstrap.

KFOLDS = 'KFOLDS'

The name of the method to evaluate the measure by cross-validation.

LEARN = 'LEARN'

The name of the method to evaluate the measure on the learning dataset.

LOO = 'LOO'

The name of the method to evaluate the measure by leave-one-out.

TEST = 'TEST'

The name of the method to evaluate the measure on a test dataset.
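
Since EvaluationMethod is a StrEnum, its members behave like plain strings, so an evaluation technique can be selected from its name; a small illustration:

    from gemseo.mlearning.quality_measures.quality_measure import MLQualityMeasure

    # Look a member up from its name and compare it with a plain string.
    method = MLQualityMeasure.EvaluationMethod("KFOLDS")
    assert method is MLQualityMeasure.EvaluationMethod.KFOLDS
    assert method == "KFOLDS"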

abstract evaluate_bootstrap(n_replicates=100, samples=None, multioutput=True, seed=None)[source]

Evaluate the quality measure using the bootstrap technique.

Parameters:
  • n_replicates (int) –

    The number of bootstrap replicates.

    By default it is set to 100.

  • samples (Sequence[int] | None) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) –

    If True, return the quality measure for each output component. Otherwise, average these measures.

    By default it is set to True.

  • seed (int | None) – The seed of the pseudo-random number generator. If None, then an unpredictable generator will be used.

Returns:

The value of the quality measure.

Return type:

MeasureType
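
As an illustration, continuing the class-level sketch above (where measure is an MSEMeasure wrapping a regression model), a bootstrap evaluation with a reproducible seed might look like this:

    # One MSE value per output component, estimated from 50 bootstrap replicates.
    mse_per_output = measure.evaluate_bootstrap(n_replicates=50, seed=1)

    # A single value averaged over the output components.
    mse_scalar = measure.evaluate_bootstrap(n_replicates=50, multioutput=False, seed=1)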

abstract evaluate_kfolds(n_folds=5, samples=None, multioutput=True, randomize=True, seed=None)[source]

Evaluate the quality measure using the k-folds technique.

Parameters:
  • n_folds (int) –

    The number of folds.

    By default it is set to 5.

  • samples (Sequence[int] | None) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) –

    If True, return the quality measure for each output component. Otherwise, average these measures.

    By default it is set to True.

  • randomize (bool) –

    Whether to shuffle the samples before dividing them in folds.

    By default it is set to True.

  • seed (int | None) – The seed of the pseudo-random number generator. If None, then an unpredictable generator will be used.

Returns:

The value of the quality measure.

Return type:

MeasureType
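
For instance, continuing the class-level sketch above:

    # 5-fold cross-validation with shuffled samples and a reproducible seed.
    mse_cv = measure.evaluate_kfolds(n_folds=5, randomize=True, seed=1)

    # A single value averaged over the output components.
    mse_cv_scalar = measure.evaluate_kfolds(multioutput=False, seed=1)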

abstract evaluate_learn(samples=None, multioutput=True)[source]

Evaluate the quality measure from the learning dataset.

Parameters:
  • samples (Sequence[int] | None) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) –

    If True, return the quality measure for each output component. Otherwise, average these measures.

    By default it is set to True.

Returns:

The value of the quality measure.

Return type:

MeasureType
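
Continuing the class-level sketch above, the learning (resubstitution) error can be computed directly:

    # Error measured on the very data the model is trained on
    # (optimistic by construction compared to resampling estimates).
    mse_learn = measure.evaluate_learn()

    # Restrict the evaluation to the first ten learning samples.
    mse_subset = measure.evaluate_learn(samples=list(range(10)), multioutput=False)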

evaluate_loo(samples=None, multioutput=True)[source]

Evaluate the quality measure using the leave-one-out technique.

Parameters:
  • samples (Sequence[int] | None) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) –

    If True, return the quality measure for each output component. Otherwise, average these measures.

    By default it is set to True.

Returns:

The value of the quality measure.

Return type:

MeasureType
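
Continuing the class-level sketch above; leave-one-out amounts to k-folds cross-validation with one fold per learning sample:

    # One fold per learning sample; more expensive but less dependent on the split.
    mse_loo = measure.evaluate_loo(multioutput=False)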

abstract evaluate_test(test_data, samples=None, multioutput=True)[source]

Evaluate the quality measure using a test dataset.

Parameters:
  • test_data (Dataset) – The test dataset.

  • samples (Sequence[int] | None) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) –

    If True, return the quality measure for each output component. Otherwise, average these measures.

    By default it is set to True.

Returns:

The value of the quality measure.

Return type:

MeasureType
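
Continuing the class-level sketch above, a held-out test dataset can be built in the same way as the learning one (the IODataset construction is again an assumption about the dataset API):

    from numpy import linspace, newaxis, sin

    from gemseo.datasets.io_dataset import IODataset  # assumed import path

    # Test points distinct from the learning samples (illustrative).
    x_test = linspace(0.05, 0.95, 10)[:, newaxis]
    test_dataset = IODataset()
    test_dataset.add_input_group(x_test, ["x"])
    test_dataset.add_output_group(sin(x_test), ["y"])

    mse_test = measure.evaluate_test(test_dataset, multioutput=False)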

classmethod is_better(val1, val2)[source]

Compare the quality between two values.

It returns True if the first value is better than the second one.

For most measures, a smaller value is better than a larger one (e.g. the MSE); for others, such as the R2 measure, a larger value is better. This method handles both cases, whatever the type of measure.

Parameters:
  • val1 (float) – The value of the first quality measure.

  • val2 (float) – The value of the second quality measure.

Returns:

Whether val1 is of better quality than val2.

Return type:

bool
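
Since is_better is a classmethod, it can be called on the measure classes themselves; for example (the R2Measure import path is an assumption about the sibling module):

    from gemseo.mlearning.quality_measures.mse_measure import MSEMeasure
    from gemseo.mlearning.quality_measures.r2_measure import R2Measure  # assumed import path

    # For the MSE, smaller is better.
    assert MSEMeasure.is_better(0.1, 0.5)

    # For the R2 measure, larger is better; the comparison accounts for this.
    assert R2Measure.is_better(0.9, 0.5)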

SMALLER_IS_BETTER: ClassVar[bool] = True

Whether to minimize or maximize the measure.

algo: MLAlgo

The machine learning algorithm, usually already trained.

class gemseo.mlearning.quality_measures.quality_measure.MLQualityMeasureFactory[source]

Bases: BaseFactory

A factory of MLQualityMeasure.

failed_imports: dict[str, str]

The class names bound to the import errors.
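
A small sketch of how the factory might be queried, assuming the usual BaseFactory interface:

    from gemseo.mlearning.quality_measures.quality_measure import MLQualityMeasureFactory

    factory = MLQualityMeasureFactory()

    # Names of the MLQualityMeasure subclasses discovered at import time.
    print(factory.class_names)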

Examples using MLQualityMeasure

Calibration of a polynomial regression

Machine learning algorithm selection example

MSE example - test-train split

Quality measure for surrogate model comparison

Advanced mixture of experts