

quality_measure module

Measuring the quality of a machine learning algorithm.

class gemseo.mlearning.quality_measures.quality_measure.BaseMLQualityMeasure(algo, fit_transformers=True)[source]

Bases: object

An abstract quality measure to assess a machine learning algorithm.

This measure can be minimized (e.g. MSEMeasure) or maximized (e.g. R2Measure).

It can be evaluated from the learning dataset, from a test dataset or using resampling techniques such as bootstrap, cross-validation or leave-one-out.

The machine learning algorithm is usually already trained. If it is not and the evaluation technique requires a trained algorithm, the quality measure will train it.

Lastly, the transformers of the algorithm fitted from the learning dataset can either be reused as is by the resampling methods or re-fitted for each algorithm trained on a subset of the learning dataset. A minimal usage sketch is given after the parameter list below.

Parameters:
  • algo (BaseMLAlgo) – A machine learning algorithm.

  • fit_transformers (bool) –

    Whether to re-fit the transformers when using resampling techniques. If False, use the transformers of the algorithm fitted from the whole learning dataset.

    By default it is set to True.
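For illustration, here is a minimal sketch of building and evaluating a concrete measure such as MSEMeasure. The dataset construction (IODataset), the regressor name ("PolynomialRegressor"), the create_regression_model helper and the import paths are assumptions based on recent GEMSEO versions and may need to be adapted:

    from numpy import linspace, newaxis, sin

    from gemseo.datasets.io_dataset import IODataset  # assumed dataset class
    from gemseo.mlearning import create_regression_model  # assumed helper
    from gemseo.mlearning.quality_measures.mse_measure import MSEMeasure  # assumed import path

    # Build a small learning dataset for y = sin(6x) on [0, 1].
    x = linspace(0.0, 1.0, 20)[:, newaxis]
    y = sin(6.0 * x)
    dataset = IODataset()
    dataset.add_input_group(x, variable_names=["x"])
    dataset.add_output_group(y, variable_names=["y"])

    # Train a polynomial regression model and measure its quality
    # on the learning dataset.
    model = create_regression_model("PolynomialRegressor", dataset, degree=3)
    measure = MSEMeasure(model, fit_transformers=True)
    print(measure.compute_learning_measure(multioutput=False))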

class EvaluationFunctionName(value)[source]

Bases: StrEnum

The name of the function associated with an evaluation method.

BOOTSTRAP = 'evaluate_bootstrap'
KFOLDS = 'evaluate_kfolds'
LEARN = 'evaluate_learn'
LOO = 'evaluate_loo'
TEST = 'evaluate_test'

class EvaluationMethod(value)[source]

Bases: StrEnum

The evaluation method.

BOOTSTRAP = 'BOOTSTRAP'

The name of the method to evaluate the measure by bootstrap.

KFOLDS = 'KFOLDS'

The name of the method to evaluate the measure by cross-validation.

LEARN = 'LEARN'

The name of the method to evaluate the measure on the learning dataset.

LOO = 'LOO'

The name of the method to evaluate the measure by leave-one-out.

TEST = 'TEST'

The name of the method to evaluate the measure on a test dataset.
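The members of EvaluationFunctionName and EvaluationMethod share the same names; here is a small sketch of mapping an evaluation method to its function name, based only on the members listed above:

    from gemseo.mlearning.quality_measures.quality_measure import BaseMLQualityMeasure

    method = BaseMLQualityMeasure.EvaluationMethod.KFOLDS
    # The two enumerations share their member names, so a name lookup
    # returns the function name associated with an evaluation method.
    function_name = BaseMLQualityMeasure.EvaluationFunctionName[method.name]
    assert function_name == "evaluate_kfolds"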

abstract compute_bootstrap_measure(n_replicates=100, samples=None, multioutput=True, seed=None, store_resampling_result=False)[source]

Evaluate the quality measure using the bootstrap technique.

Parameters:
  • n_replicates (int) –

    The number of bootstrap replicates.

    By default it is set to 100.

  • samples (Sequence[int] | None) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) –

    Whether the quality measure is returned for each component of the outputs. Otherwise, the average quality measure.

    By default it is set to True.

  • seed (int | None) – The seed of the pseudo-random number generator. If None, an unpredictable generator will be used.

  • store_resampling_result (bool) –

    Whether to store the \(n\) machine learning algorithms and associated predictions generated by the resampling stage where \(n\) is the number of bootstrap replicates.

    By default it is set to False.

Returns:

The value of the quality measure.

Return type:

MeasureType
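Continuing the sketch given after the class parameters, where measure wraps a trained regression model, a bootstrap evaluation might look as follows (the settings are arbitrary):

    # `measure` is the MSEMeasure built in the earlier sketch.
    bootstrap_mse = measure.compute_bootstrap_measure(
        n_replicates=50,  # number of bootstrap replicates
        multioutput=False,  # average the measure over the output components
        seed=1,  # make the resampling reproducible
        store_resampling_result=False,
    )
    print(bootstrap_mse)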

abstract compute_cross_validation_measure(n_folds=5, samples=None, multioutput=True, randomize=True, seed=None, store_resampling_result=False)[source]

Evaluate the quality measure using the k-folds technique.

Parameters:
  • n_folds (int) –

    The number of folds.

    By default it is set to 5.

  • samples (Sequence[int] | None) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) –

    Whether the quality measure is returned for each component of the outputs. Otherwise, the average quality measure.

    By default it is set to True.

  • randomize (bool) –

    Whether to shuffle the samples before dividing them in folds.

    By default it is set to True.

  • seed (int | None) – The seed of the pseudo-random number generator. If None, an unpredictable generator is used.

  • store_resampling_result (bool) –

    Whether to store the \(n\) machine learning algorithms and associated predictions generated by the resampling stage where \(n\) is the number of folds.

    By default it is set to False.

Returns:

The value of the quality measure.

Return type:

MeasureType
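Still with the measure from the earlier sketch, a five-fold cross-validation evaluation might read:

    # `measure` is the MSEMeasure built in the earlier sketch.
    cv_mse = measure.compute_cross_validation_measure(
        n_folds=5,  # split the learning samples into 5 folds
        randomize=True,  # shuffle the samples before building the folds
        seed=1,  # make the shuffling reproducible
        multioutput=False,
    )
    print(cv_mse)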

abstract compute_learning_measure(samples=None, multioutput=True)[source]

Evaluate the quality measure from the learning dataset.

Parameters:
  • samples (Sequence[int] | None) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) –

    Whether the quality measure is returned for each component of the outputs. Otherwise, the average quality measure.

    By default it is set to True.

Returns:

The value of the quality measure.

Return type:

MeasureType
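The samples argument restricts the evaluation to a subset of the learning samples; for instance, with the measure from the earlier sketch:

    # Evaluate the measure on the first ten learning samples only.
    learning_mse = measure.compute_learning_measure(
        samples=list(range(10)),
        multioutput=False,
    )
    print(learning_mse)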

compute_leave_one_out_measure(samples=None, multioutput=True, store_resampling_result=True)[source]

Evaluate the quality measure using the leave-one-out technique.

Parameters:
  • samples (Sequence[int] | None) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) –

    Whether the quality measure is returned for each component of the outputs. Otherwise, the average quality measure.

    By default it is set to True.

  • store_resampling_result (bool) –

    Whether to store the \(n\) machine learning algorithms and associated predictions generated by the resampling stage where \(n\) is the number of learning samples.

    By default it is set to True.

Returns:

The value of the quality measure.

Return type:

MeasureType
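With the measure from the earlier sketch, a leave-one-out evaluation might read:

    # `measure` is the MSEMeasure built in the earlier sketch.
    loo_mse = measure.compute_leave_one_out_measure(
        multioutput=False,
        store_resampling_result=False,  # do not keep the trained sub-models
    )
    print(loo_mse)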

abstract compute_test_measure(test_data, samples=None, multioutput=True)[source]

Evaluate the quality measure using a test dataset.

Parameters:
  • test_data (Dataset) – The test dataset.

  • samples (Sequence[int] | None) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) –

    Whether the quality measure is returned for each component of the outputs. Otherwise, the average quality measure.

    By default it is set to True.

Returns:

The value of the quality measure.

Return type:

MeasureType
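Here is a sketch of an evaluation on held-out data, reusing the measure from the earlier sketch and building the test dataset in the same (assumed) way as the learning one:

    from numpy import linspace, newaxis, sin

    from gemseo.datasets.io_dataset import IODataset  # assumed dataset class

    # Build a small test dataset on points not used for learning.
    x_test = linspace(0.025, 0.975, 10)[:, newaxis]
    y_test = sin(6.0 * x_test)
    test_dataset = IODataset()
    test_dataset.add_input_group(x_test, variable_names=["x"])
    test_dataset.add_output_group(y_test, variable_names=["y"])

    # `measure` is the MSEMeasure built in the earlier sketch.
    test_mse = measure.compute_test_measure(test_dataset, multioutput=False)
    print(test_mse)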

evaluate_loo(samples=None, multioutput=True, store_resampling_result=True)

Evaluate the quality measure using the leave-one-out technique.

Parameters:
  • samples (Sequence[int] | None) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) –

    Whether the quality measure is returned for each component of the outputs. Otherwise, the average quality measure.

    By default it is set to True.

  • store_resampling_result (bool) –

    Whether to store the \(n\) machine learning algorithms and associated predictions generated by the resampling stage where \(n\) is the number of learning samples.

    By default it is set to True.

Returns:

The value of the quality measure.

Return type:

MeasureType

classmethod is_better(val1, val2)[source]

Compare the quality between two values.

This method returns True if the first one is better than the second one.

For most measures, such as MSEMeasure, a smaller value is better than a larger one. For others, such as R2Measure, a larger value is better. This comparison method handles both cases, whatever the type of measure.

Parameters:
  • val1 (float) – The value of the first quality measure.

  • val2 (float) – The value of the second quality measure.

Returns:

Whether val1 is of better quality than val2.

Return type:

bool
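For instance, with the concrete measures MSEMeasure and R2Measure mentioned above (the import paths are assumptions):

    from gemseo.mlearning.quality_measures.mse_measure import MSEMeasure  # assumed import path
    from gemseo.mlearning.quality_measures.r2_measure import R2Measure  # assumed import path

    # A smaller MSE is better, whereas a larger R2 is better;
    # is_better() accounts for this via SMALLER_IS_BETTER.
    assert MSEMeasure.is_better(0.01, 0.1)
    assert R2Measure.is_better(0.95, 0.8)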

SMALLER_IS_BETTER: ClassVar[bool] = True

Whether a smaller value of the measure means a better quality, i.e. whether the measure is to be minimized rather than maximized.

algo: BaseMLAlgo

The machine learning algorithm whose quality we want to measure.

class gemseo.mlearning.quality_measures.quality_measure.MLQualityMeasureFactory[source]

Bases: BaseFactory

A factory of BaseMLQualityMeasure.

Return type:

Any

failed_imports: dict[str, str]

The class names bound to the import errors.
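Here is a sketch of instantiating a measure by class name through the factory; the class_names attribute and the create call are assumptions about the generic GEMSEO BaseFactory API:

    from gemseo.mlearning.quality_measures.quality_measure import MLQualityMeasureFactory

    factory = MLQualityMeasureFactory()
    print(factory.class_names)  # assumed: the available measure class names

    # `model` is the regression model built in the earlier sketch;
    # `create` is assumed to forward its arguments to the measure constructor.
    measure = factory.create("MSEMeasure", model)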

Examples using BaseMLQualityMeasure

Calibration of a polynomial regression

Machine learning algorithm selection example

Cross-validation

Leave-one-out

MSE for regression models

R2 for regression models

RMSE for regression models

Advanced mixture of experts

Scaling