error_measure module¶
This module provides the base class used to measure the error of machine learning algorithms.
The concept of error measure is implemented by the MLErrorMeasure
class, which
proposes different evaluation methods.
- class gemseo.mlearning.qual_measure.error_measure.MLErrorMeasure(algo, fit_transformers=True)[source]¶
Bases:
MLQualityMeasure
An abstract error measure for machine learning.
- Parameters:
algo (MLAlgo) – A machine learning algorithm.
fit_transformers (bool) – Whether to fit the variable transformers.
By default it is set to True.
- evaluate(method='learn', samples=None, multioutput=True, **options)¶
Evaluate the quality measure.
- Parameters:
method (str) – The name of the method to evaluate the quality measure. By default it is set to "learn".
samples (Sequence[int] | None) – The indices of the learning samples. If None, use the whole learning dataset.
multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures. By default it is set to True.
**options (OptionType | None) – The options of the estimation method (e.g. test_data for the test method, n_replicates for the bootstrap one, …).
- Returns:
The value of the quality measure.
- Raises:
ValueError – When the name of the method is unknown.
- Return type:
MeasureType
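The name-based dispatch performed by evaluate can be sketched in plain Python; SketchErrorMeasure and its placeholder methods are hypothetical illustrations, not the gemseo implementation:

```python
class SketchErrorMeasure:
    """Illustrative dispatcher: evaluate(method=...) routes to evaluate_<method>."""

    LEARN = "learn"
    TEST = "test"

    def evaluate(self, method="learn", **options):
        # Route the call to the matching evaluate_* method, or fail loudly
        # with a ValueError, as documented above for unknown method names.
        dispatch = {self.LEARN: self.evaluate_learn, self.TEST: self.evaluate_test}
        if method not in dispatch:
            raise ValueError(f"Unknown evaluation method: {method!r}.")
        return dispatch[method](**options)

    def evaluate_learn(self):
        return 0.0  # placeholder measure value

    def evaluate_test(self, test_data=None):
        return 0.0  # placeholder measure value
```

The extra keyword options are simply forwarded to the selected method, which is why test_data or n_replicates can be passed through evaluate itself.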
- evaluate_bootstrap(n_replicates=100, samples=None, multioutput=True, seed=None, as_dict=False)[source]¶
Evaluate the quality measure using the bootstrap technique.
- Parameters:
n_replicates (int) – The number of bootstrap replicates. By default it is set to 100.
samples (Sequence[int] | None) – The indices of the learning samples. If None, use the whole learning dataset.
multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures. By default it is set to True.
seed (int | None) – The seed of the pseudo-random number generator. If None, an unpredictable generator is used.
as_dict (bool) – Whether to express the measure as a dictionary whose keys are the output names. By default it is set to False.
- Returns:
The value of the quality measure.
- Return type:
MeasureType
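The bootstrap idea can be sketched in plain Python: resample the learning indices with replacement, refit on each replicate, and measure the error on the samples left out of the draw. The helpers bootstrap_mse, fit_mean and predict_mean are illustrative only and not part of gemseo; the library's own resampling may differ in detail:

```python
import random
import statistics

def bootstrap_mse(x, y, fit, predict, n_replicates=100, seed=0):
    """Average out-of-bag MSE over bootstrap replicates (illustrative sketch)."""
    rng = random.Random(seed)  # a fixed seed makes the estimate reproducible
    n = len(x)
    errors = []
    for _ in range(n_replicates):
        drawn = [rng.randrange(n) for _ in range(n)]  # resample with replacement
        held_out = [i for i in range(n) if i not in set(drawn)]  # out-of-bag points
        if not held_out:
            continue  # degenerate replicate: every sample was drawn
        model = fit([x[i] for i in drawn], [y[i] for i in drawn])
        errors.append(
            statistics.fmean((predict(model, x[i]) - y[i]) ** 2 for i in held_out)
        )
    return statistics.fmean(errors)

# A deliberately trivial "model": always predict the mean training output.
fit_mean = lambda xs, ys: statistics.fmean(ys)
predict_mean = lambda model, xi: model
```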
- evaluate_kfolds(n_folds=5, samples=None, multioutput=True, randomize=True, seed=None, as_dict=False)[source]¶
Evaluate the quality measure using the k-folds technique.
- Parameters:
n_folds (int) – The number of folds. By default it is set to 5.
samples (Sequence[int] | None) – The indices of the learning samples. If None, use the whole learning dataset.
multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures. By default it is set to True.
randomize (bool) – Whether to shuffle the samples before dividing them into folds. By default it is set to True.
seed (int | None) – The seed of the pseudo-random number generator. If None, an unpredictable generator is used.
as_dict (bool) – Whether to express the measure as a dictionary whose keys are the output names. By default it is set to False.
- Returns:
The value of the quality measure.
- Return type:
MeasureType
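The k-folds scheme can be sketched as follows: partition the (optionally shuffled) sample indices into n_folds groups, hold each group out once, and average the per-fold errors. kfold_mse is a hypothetical helper, not the gemseo implementation:

```python
import random
import statistics

def kfold_mse(x, y, fit, predict, n_folds=5, randomize=True, seed=0):
    """Cross-validated MSE: each fold is held out once while the others train."""
    indices = list(range(len(x)))
    if randomize:
        random.Random(seed).shuffle(indices)  # shuffle before splitting into folds
    folds = [indices[k::n_folds] for k in range(n_folds)]  # near-equal partition
    fold_errors = []
    for fold in folds:
        held_out = set(fold)
        train = [i for i in indices if i not in held_out]
        model = fit([x[i] for i in train], [y[i] for i in train])
        fold_errors.append(
            statistics.fmean((predict(model, x[i]) - y[i]) ** 2 for i in fold)
        )
    return statistics.fmean(fold_errors)
```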
- evaluate_learn(samples=None, multioutput=True, as_dict=False)[source]¶
Evaluate the quality measure from the learning dataset.
- Parameters:
samples (Sequence[int] | None) – The indices of the learning samples. If None, use the whole learning dataset.
multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures. By default it is set to True.
as_dict (bool) – Whether to express the measure as a dictionary whose keys are the output names. By default it is set to False.
- Returns:
The value of the quality measure.
- Return type:
MeasureType
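Evaluation on the learning dataset is resubstitution error: the model is measured on the very samples it was trained on, which tends to underestimate the generalization error. A minimal sketch, with the hypothetical helper learn_mse standing in for the library's own logic:

```python
import statistics

def learn_mse(x, y, fit, predict, samples=None):
    """Resubstitution MSE: train and measure on the same learning samples."""
    idx = list(samples) if samples is not None else list(range(len(x)))  # default: whole dataset
    model = fit([x[i] for i in idx], [y[i] for i in idx])
    return statistics.fmean((predict(model, x[i]) - y[i]) ** 2 for i in idx)
```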
- evaluate_loo(samples=None, multioutput=True, as_dict=False)[source]¶
Evaluate the quality measure using the leave-one-out technique.
- Parameters:
samples (Sequence[int] | None) – The indices of the learning samples. If None, use the whole learning dataset.
multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures. By default it is set to True.
as_dict (bool) – Whether to express the measure as a dictionary whose keys are the output names. By default it is set to False.
- Returns:
The value of the quality measure.
- Return type:
MeasureType
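Leave-one-out is the k-folds scheme taken to its limit: every sample is held out exactly once while the model is refit on all the others. A plain-Python sketch (loo_mse is an illustrative helper, not the gemseo code):

```python
import statistics

def loo_mse(x, y, fit, predict):
    """Leave-one-out MSE: refit on all samples but one, test on the one left out."""
    squared_errors = []
    for i in range(len(x)):
        train = [j for j in range(len(x)) if j != i]  # every sample except i
        model = fit([x[j] for j in train], [y[j] for j in train])
        squared_errors.append((predict(model, x[i]) - y[i]) ** 2)
    return statistics.fmean(squared_errors)
```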
- evaluate_test(test_data, samples=None, multioutput=True, as_dict=False)[source]¶
Evaluate the quality measure using a test dataset.
- Parameters:
test_data (Dataset) – The test dataset.
samples (Sequence[int] | None) – The indices of the learning samples. If None, use the whole learning dataset.
multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures. By default it is set to True.
as_dict (bool) – Whether to express the measure as a dictionary whose keys are the output names. By default it is set to False.
- Returns:
The value of the quality measure.
- Return type:
MeasureType
- classmethod is_better(val1, val2)¶
Compare the quality between two values.
This method returns
True
if the first value is better than the second one. For most measures, a smaller value is better than a larger one (e.g. MSE). For others, such as R2, larger values are better. This comparison method handles both cases correctly, regardless of the type of measure.
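The comparison logic can be sketched with an explicit flag standing in for the measure-specific ordering that each concrete measure class defines (the helper is_better and its smaller_is_better parameter are illustrative, not the gemseo signature):

```python
def is_better(val1, val2, smaller_is_better=True):
    """Return True if val1 is better than val2 under the measure's ordering."""
    # MSE-like measures flip the comparison relative to R2-like measures.
    return val1 < val2 if smaller_is_better else val1 > val2
```

With this flag, callers can rank candidate models uniformly without knowing whether the underlying measure rewards small or large values.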
- BOOTSTRAP: ClassVar[str] = 'bootstrap'¶
The name of the method to evaluate the measure by bootstrap.
- KFOLDS: ClassVar[str] = 'kfolds'¶
The name of the method to evaluate the measure by cross-validation.
Examples using MLErrorMeasure¶
Calibration of a polynomial regression
Machine learning algorithm selection example
MSE example - test-train split
Quality measure for surrogate model comparison