Measure the quality of a machine learning algorithm

This module provides the base class for measuring the quality of machine learning algorithms.

The concept of quality measure is implemented with the MLQualityMeasure class.

Classes:

MLQualityMeasure(algo)

An abstract quality measure for machine learning algorithms.

class gemseo.mlearning.qual_measure.quality_measure.MLQualityMeasure(algo)[source]

An abstract quality measure for machine learning algorithms.

Attributes

algo (MLAlgo) – The machine learning algorithm.

Parameters

algo (MLAlgo) – A machine learning algorithm.

Return type

None

Methods:

evaluate([method, samples])

Evaluate the quality measure.

evaluate_bootstrap([n_replicates, samples, …])

Evaluate the quality measure using the bootstrap technique.

evaluate_kfolds([n_folds, samples, multioutput])

Evaluate the quality measure using the k-folds technique.

evaluate_learn([samples, multioutput])

Evaluate the quality measure using the learning dataset.

evaluate_loo([samples, multioutput])

Evaluate the quality measure using the leave-one-out technique.

evaluate_test(test_data[, samples, multioutput])

Evaluate the quality measure using a test dataset.

is_better(val1, val2)

Compare the quality between two values.

evaluate(method='learn', samples=None, **options)[source]

Evaluate the quality measure.

Parameters
  • method (str) – The name of the method to evaluate the quality measure.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • **options (Optional[Union[List[int], bool, int, gemseo.core.dataset.Dataset]]) – The options of the estimation method (e.g. 'test_data' for the 'test' method, 'n_replicates' for the bootstrap one, ...).

Returns

The value of the quality measure.

Raises

ValueError – If the name of the method is unknown.

Return type

Union[float, numpy.ndarray]
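The dispatch performed by evaluate() can be sketched in plain Python. Note that DummyMeasure, its two evaluation methods, and the method-name keys are illustrative stand-ins, not the gemseo API:

```python
class DummyMeasure:
    """Illustrative stand-in with the evaluation methods evaluate() dispatches to."""

    def evaluate_learn(self, **options):
        return 0.0

    def evaluate_test(self, **options):
        return 1.0

    def evaluate(self, method="learn", **options):
        # Map a method name to the corresponding evaluation routine,
        # mirroring MLQualityMeasure.evaluate's dispatch-by-name behavior;
        # an unknown name raises ValueError, as documented above.
        dispatch = {
            "learn": self.evaluate_learn,
            "test": self.evaluate_test,
        }
        if method not in dispatch:
            raise ValueError(f"Unknown evaluation method: {method}")
        return dispatch[method](**options)
```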

evaluate_bootstrap(n_replicates=100, samples=None, multioutput=True)[source]

Evaluate the quality measure using the bootstrap technique.

Parameters
  • n_replicates (int) – The number of bootstrap replicates.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

NoReturn

evaluate_kfolds(n_folds=5, samples=None, multioutput=True)[source]

Evaluate the quality measure using the k-folds technique.

Parameters
  • n_folds (int) – The number of folds.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

NoReturn

evaluate_learn(samples=None, multioutput=True)[source]

Evaluate the quality measure using the learning dataset.

Parameters
  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

NoReturn

evaluate_loo(samples=None, multioutput=True)[source]

Evaluate the quality measure using the leave-one-out technique.

Parameters
  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]

evaluate_test(test_data, samples=None, multioutput=True)[source]

Evaluate the quality measure using a test dataset.

Parameters
  • test_data (gemseo.core.dataset.Dataset) – The test dataset.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

NoReturn

classmethod is_better(val1, val2)[source]

Compare the quality between two values.

This method returns True if the first value is better than the second one.

For most measures, a smaller value is “better” than a larger one (MSE etc.). But for some, like an R2-measure, higher values are better than smaller ones. This comparison method correctly handles this, regardless of the type of measure.

Parameters
  • val1 (float) – The value of the first quality measure.

  • val2 (float) – The value of the second quality measure.

Returns

Whether val1 is of better quality than val2.

Return type

bool
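The orientation-aware comparison behind is_better() can be sketched as follows; the SMALLER_IS_BETTER flag and both class names are illustrative assumptions, not gemseo identifiers:

```python
class QualityMeasureSketch:
    """Illustrative comparison logic for a quality measure."""

    SMALLER_IS_BETTER = True  # True for error-like measures such as MSE

    @classmethod
    def is_better(cls, val1, val2):
        # val1 is better than val2 when it is smaller (error measures)
        # or larger (score measures such as R2), depending on the flag.
        if cls.SMALLER_IS_BETTER:
            return val1 < val2
        return val1 > val2


class ScoreMeasureSketch(QualityMeasureSketch):
    SMALLER_IS_BETTER = False  # e.g. an R2-like score, where higher is better
```

Subclasses only set the flag; callers use is_better() without knowing the measure's orientation.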

This module provides the base class for measuring the error of machine learning algorithms.

The concept of error measure is implemented with the MLErrorMeasure class and proposes different evaluation methods.

Classes:

MLErrorMeasure(algo)

An abstract error measure for machine learning.

Functions:

choice(a[, size, replace, p])

Generates a random sample from a given 1-D array

class gemseo.mlearning.qual_measure.error_measure.MLErrorMeasure(algo)[source]

An abstract error measure for machine learning.

Attributes

algo (MLAlgo) – The machine learning algorithm.

Parameters

algo (MLSupervisedAlgo) – A machine learning algorithm for supervised learning.

Return type

None

Methods:

evaluate([method, samples])

Evaluate the quality measure.

evaluate_bootstrap([n_replicates, samples, …])

Evaluate the quality measure using the bootstrap technique.

evaluate_kfolds([n_folds, samples, multioutput])

Evaluate the quality measure using the k-folds technique.

evaluate_learn([samples, multioutput])

Evaluate the quality measure using the learning dataset.

evaluate_loo([samples, multioutput])

Evaluate the quality measure using the leave-one-out technique.

evaluate_test(test_data[, samples, multioutput])

Evaluate the quality measure using a test dataset.

is_better(val1, val2)

Compare the quality between two values.

evaluate(method='learn', samples=None, **options)

Evaluate the quality measure.

Parameters
  • method (str) – The name of the method to evaluate the quality measure.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • **options (Optional[Union[List[int], bool, int, gemseo.core.dataset.Dataset]]) – The options of the estimation method (e.g. 'test_data' for the 'test' method, 'n_replicates' for the bootstrap one, ...).

Returns

The value of the quality measure.

Raises

ValueError – If the name of the method is unknown.

Return type

Union[float, numpy.ndarray]

evaluate_bootstrap(n_replicates=100, samples=None, multioutput=True)[source]

Evaluate the quality measure using the bootstrap technique.

Parameters
  • n_replicates (int) – The number of bootstrap replicates.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]
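The bootstrap technique resamples the learning set with replacement and evaluates the error on the out-of-bag samples of each replicate. A minimal sketch in plain numpy, assuming user-supplied fit and error callables (the function name and signature are illustrative, not gemseo API):

```python
import numpy as np


def bootstrap_error(x, y, fit, error, n_replicates=100, seed=0):
    """Average out-of-bag error over bootstrap replicates (illustrative).

    fit(x, y) must return a prediction callable; error(pred, truth)
    must return a scalar.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    errors = []
    for _ in range(n_replicates):
        idx = rng.choice(n, size=n, replace=True)  # resample with replacement
        oob = np.setdiff1d(np.arange(n), idx)      # samples left out of the replicate
        if oob.size == 0:
            continue
        model = fit(x[idx], y[idx])
        errors.append(error(model(x[oob]), y[oob]))
    return float(np.mean(errors))
```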

evaluate_kfolds(n_folds=5, samples=None, multioutput=True)[source]

Evaluate the quality measure using the k-folds technique.

Parameters
  • n_folds (int) – The number of folds.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]
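The k-folds technique partitions the samples into folds and evaluates each fold as a test set for a model learned on the remaining samples. The index bookkeeping can be sketched as follows (illustrative helper, not gemseo API):

```python
import numpy as np


def kfold_splits(n_samples, n_folds=5, seed=0):
    """Yield (learn_indices, test_indices) pairs for k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(n_samples)           # shuffle before folding
    for fold in np.array_split(indices, n_folds):  # near-equal test folds
        learn = np.setdiff1d(indices, fold)        # everything not in the fold
        yield learn, fold
```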

evaluate_learn(samples=None, multioutput=True)[source]

Evaluate the quality measure using the learning dataset.

Parameters
  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]

evaluate_loo(samples=None, multioutput=True)

Evaluate the quality measure using the leave-one-out technique.

Parameters
  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]

evaluate_test(test_data, samples=None, multioutput=True)[source]

Evaluate the quality measure using a test dataset.

Parameters
  • test_data (gemseo.core.dataset.Dataset) – The test dataset.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]

classmethod is_better(val1, val2)

Compare the quality between two values.

This method returns True if the first value is better than the second one.

For most measures, a smaller value is “better” than a larger one (MSE etc.). But for some, like an R2-measure, higher values are better than smaller ones. This comparison method correctly handles this, regardless of the type of measure.

Parameters
  • val1 (float) – The value of the first quality measure.

  • val2 (float) – The value of the second quality measure.

Returns

Whether val1 is of better quality than val2.

Return type

bool

gemseo.mlearning.qual_measure.error_measure.choice(a, size=None, replace=True, p=None)

Generates a random sample from a given 1-D array

New in version 1.7.0.

Note

New code should use the choice method of a default_rng() instance instead; see the NumPy random quick start guide.

Parameters
  • a (1-D array-like or int) – If an ndarray, a random sample is generated from its elements. If an int, the random sample is generated as if a were np.arange(a)

  • size (int or tuple of ints, optional) – Output shape. If the given shape is, e.g., (m, n, k), then m * n * k samples are drawn. Default is None, in which case a single value is returned.

  • replace (boolean, optional) – Whether the sample is with or without replacement

  • p (1-D array-like, optional) – The probabilities associated with each entry in a. If not given the sample assumes a uniform distribution over all entries in a.

Returns

samples – The generated random samples

Return type

single item or ndarray

Raises

ValueError – If a is an int and less than zero, if a or p are not 1-dimensional, if a is an array-like of size 0, if p is not a vector of probabilities, if a and p have different lengths, or if replace=False and the sample size is greater than the population size

See also

randint, shuffle, permutation

Generator.choice – which should be used in new code

Notes

Sampling random rows from a 2-D array is not possible with this function, but is possible with Generator.choice through its axis keyword.

Examples

Generate a uniform random sample from np.arange(5) of size 3:

>>> np.random.choice(5, 3)
array([0, 3, 4]) # random
>>> #This is equivalent to np.random.randint(0,5,3)

Generate a non-uniform random sample from np.arange(5) of size 3:

>>> np.random.choice(5, 3, p=[0.1, 0, 0.3, 0.6, 0])
array([3, 3, 0]) # random

Generate a uniform random sample from np.arange(5) of size 3 without replacement:

>>> np.random.choice(5, 3, replace=False)
array([3,1,0]) # random
>>> #This is equivalent to np.random.permutation(np.arange(5))[:3]

Generate a non-uniform random sample from np.arange(5) of size 3 without replacement:

>>> np.random.choice(5, 3, replace=False, p=[0.1, 0, 0.3, 0.6, 0])
array([2, 3, 0]) # random

Any of the above can be repeated with an arbitrary array-like instead of just integers. For instance:

>>> aa_milne_arr = ['pooh', 'rabbit', 'piglet', 'Christopher']
>>> np.random.choice(aa_milne_arr, 5, p=[0.5, 0.1, 0.1, 0.3])
array(['pooh', 'pooh', 'pooh', 'Christopher', 'piglet'], # random
      dtype='<U11')
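As the note above recommends, new code can obtain the same sampling behavior from the choice method of a Generator, which additionally supports row sampling from 2-D arrays:

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # seeded Generator for reproducibility

# Uniform sample of size 3 from np.arange(5), without replacement.
sample = rng.choice(5, size=3, replace=False)

# Generator.choice also samples rows from a 2-D array via `axis`,
# which np.random.choice cannot do.
rows = rng.choice(np.arange(12).reshape(4, 3), size=2, axis=0)
```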

This module provides the base class for measuring the quality of clustering algorithms.

The concept of clustering quality measure is implemented with the MLClusteringMeasure class and proposes different evaluation methods.

Classes:

MLClusteringMeasure(algo)

An abstract clustering measure for clustering algorithms.

MLPredictiveClusteringMeasure(algo)

An abstract clustering measure for predictive clustering algorithms.

Functions:

choice(a[, size, replace, p])

Generates a random sample from a given 1-D array

class gemseo.mlearning.qual_measure.cluster_measure.MLClusteringMeasure(algo)[source]

An abstract clustering measure for clustering algorithms.

Attributes

algo (MLAlgo) – The machine learning algorithm.

Parameters

algo (MLClusteringAlgo) – A machine learning algorithm for clustering.

Return type

None

Methods:

evaluate([method, samples])

Evaluate the quality measure.

evaluate_bootstrap([n_replicates, samples, …])

Evaluate the quality measure using the bootstrap technique.

evaluate_kfolds([n_folds, samples, multioutput])

Evaluate the quality measure using the k-folds technique.

evaluate_learn([samples, multioutput])

Evaluate the quality measure using the learning dataset.

evaluate_loo([samples, multioutput])

Evaluate the quality measure using the leave-one-out technique.

evaluate_test(test_data[, samples, multioutput])

Evaluate the quality measure using a test dataset.

is_better(val1, val2)

Compare the quality between two values.

evaluate(method='learn', samples=None, **options)

Evaluate the quality measure.

Parameters
  • method (str) – The name of the method to evaluate the quality measure.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • **options (Optional[Union[List[int], bool, int, gemseo.core.dataset.Dataset]]) – The options of the estimation method (e.g. 'test_data' for the 'test' method, 'n_replicates' for the bootstrap one, ...).

Returns

The value of the quality measure.

Raises

ValueError – If the name of the method is unknown.

Return type

Union[float, numpy.ndarray]

evaluate_bootstrap(n_replicates=100, samples=None, multioutput=True)

Evaluate the quality measure using the bootstrap technique.

Parameters
  • n_replicates (int) – The number of bootstrap replicates.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

NoReturn

evaluate_kfolds(n_folds=5, samples=None, multioutput=True)

Evaluate the quality measure using the k-folds technique.

Parameters
  • n_folds (int) – The number of folds.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

NoReturn

evaluate_learn(samples=None, multioutput=True)[source]

Evaluate the quality measure using the learning dataset.

Parameters
  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]

evaluate_loo(samples=None, multioutput=True)

Evaluate the quality measure using the leave-one-out technique.

Parameters
  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]

evaluate_test(test_data, samples=None, multioutput=True)

Evaluate the quality measure using a test dataset.

Parameters
  • test_data (gemseo.core.dataset.Dataset) – The test dataset.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

NoReturn

classmethod is_better(val1, val2)

Compare the quality between two values.

This method returns True if the first value is better than the second one.

For most measures, a smaller value is “better” than a larger one (MSE etc.). But for some, like an R2-measure, higher values are better than smaller ones. This comparison method correctly handles this, regardless of the type of measure.

Parameters
  • val1 (float) – The value of the first quality measure.

  • val2 (float) – The value of the second quality measure.

Returns

Whether val1 is of better quality than val2.

Return type

bool

class gemseo.mlearning.qual_measure.cluster_measure.MLPredictiveClusteringMeasure(algo)[source]

An abstract clustering measure for predictive clustering algorithms.

Attributes
algo (MLAlgo) – The machine learning algorithm.

Parameters

algo (MLPredictiveClusteringAlgo) – A machine learning algorithm for predictive clustering.

Return type

None

Methods:

evaluate([method, samples])

Evaluate the quality measure.

evaluate_bootstrap([n_replicates, samples, …])

Evaluate the quality measure using the bootstrap technique.

evaluate_kfolds([n_folds, samples, multioutput])

Evaluate the quality measure using the k-folds technique.

evaluate_learn([samples, multioutput])

Evaluate the quality measure using the learning dataset.

evaluate_loo([samples, multioutput])

Evaluate the quality measure using the leave-one-out technique.

evaluate_test(test_data[, samples, multioutput])

Evaluate the quality measure using a test dataset.

is_better(val1, val2)

Compare the quality between two values.

evaluate(method='learn', samples=None, **options)

Evaluate the quality measure.

Parameters
  • method (str) – The name of the method to evaluate the quality measure.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • **options (Optional[Union[List[int], bool, int, gemseo.core.dataset.Dataset]]) – The options of the estimation method (e.g. 'test_data' for the 'test' method, 'n_replicates' for the bootstrap one, ...).

Returns

The value of the quality measure.

Raises

ValueError – If the name of the method is unknown.

Return type

Union[float, numpy.ndarray]

evaluate_bootstrap(n_replicates=100, samples=None, multioutput=True)[source]

Evaluate the quality measure using the bootstrap technique.

Parameters
  • n_replicates (int) – The number of bootstrap replicates.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]

evaluate_kfolds(n_folds=5, samples=None, multioutput=True)[source]

Evaluate the quality measure using the k-folds technique.

Parameters
  • n_folds (int) – The number of folds.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]

evaluate_learn(samples=None, multioutput=True)

Evaluate the quality measure using the learning dataset.

Parameters
  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]

evaluate_loo(samples=None, multioutput=True)

Evaluate the quality measure using the leave-one-out technique.

Parameters
  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]

evaluate_test(test_data, samples=None, multioutput=True)[source]

Evaluate the quality measure using a test dataset.

Parameters
  • test_data (gemseo.core.dataset.Dataset) – The test dataset.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]

classmethod is_better(val1, val2)

Compare the quality between two values.

This method returns True if the first value is better than the second one.

For most measures, a smaller value is “better” than a larger one (MSE etc.). But for some, like an R2-measure, higher values are better than smaller ones. This comparison method correctly handles this, regardless of the type of measure.

Parameters
  • val1 (float) – The value of the first quality measure.

  • val2 (float) – The value of the second quality measure.

Returns

Whether val1 is of better quality than val2.

Return type

bool

gemseo.mlearning.qual_measure.cluster_measure.choice(a, size=None, replace=True, p=None)

Generates a random sample from a given 1-D array. This is the same numpy.random.choice function re-exported in this module; see its full documentation under gemseo.mlearning.qual_measure.error_measure.choice above.

The mean squared error for measuring the quality of a regression algorithm.

The mse_measure module implements the concept of mean squared error measures for machine learning algorithms.

This concept is implemented through the MSEMeasure class and overloads the MLErrorMeasure._compute_measure() method.

The mean squared error (MSE) is defined by

\[\operatorname{MSE}(\hat{y})=\frac{1}{n}\sum_{i=1}^n(\hat{y}_i-y_i)^2,\]

where \(\hat{y}\) are the predictions and \(y\) are the data points.
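The formula above translates directly into numpy; this standalone sketch computes the same quantity that MSEMeasure estimates (the function name is illustrative, not gemseo API):

```python
import numpy as np


def mse(predictions, data):
    """Mean squared error: (1/n) * sum_i (yhat_i - y_i)**2."""
    predictions = np.asarray(predictions, dtype=float)
    data = np.asarray(data, dtype=float)
    return float(np.mean((predictions - data) ** 2))
```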

Classes:

MSEMeasure(algo)

The Mean Squared Error measure for machine learning.

class gemseo.mlearning.qual_measure.mse_measure.MSEMeasure(algo)[source]

The Mean Squared Error measure for machine learning.

Attributes
algo (MLAlgo) – The machine learning algorithm.

Parameters

algo (MLRegressionAlgo) – A machine learning algorithm for regression.

Return type

None

Methods:

evaluate([method, samples])

Evaluate the quality measure.

evaluate_bootstrap([n_replicates, samples, …])

Evaluate the quality measure using the bootstrap technique.

evaluate_kfolds([n_folds, samples, multioutput])

Evaluate the quality measure using the k-folds technique.

evaluate_learn([samples, multioutput])

Evaluate the quality measure using the learning dataset.

evaluate_loo([samples, multioutput])

Evaluate the quality measure using the leave-one-out technique.

evaluate_test(test_data[, samples, multioutput])

Evaluate the quality measure using a test dataset.

is_better(val1, val2)

Compare the quality between two values.

evaluate(method='learn', samples=None, **options)

Evaluate the quality measure.

Parameters
  • method (str) – The name of the method to evaluate the quality measure.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • **options (Optional[Union[List[int], bool, int, gemseo.core.dataset.Dataset]]) – The options of the estimation method (e.g. 'test_data' for the 'test' method, 'n_replicates' for the bootstrap one, ...).

Returns

The value of the quality measure.

Raises

ValueError – If the name of the method is unknown.

Return type

Union[float, numpy.ndarray]

evaluate_bootstrap(n_replicates=100, samples=None, multioutput=True)

Evaluate the quality measure using the bootstrap technique.

Parameters
  • n_replicates (int) – The number of bootstrap replicates.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]

evaluate_kfolds(n_folds=5, samples=None, multioutput=True)

Evaluate the quality measure using the k-folds technique.

Parameters
  • n_folds (int) – The number of folds.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]

evaluate_learn(samples=None, multioutput=True)

Evaluate the quality measure using the learning dataset.

Parameters
  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]

evaluate_loo(samples=None, multioutput=True)

Evaluate the quality measure using the leave-one-out technique.

Parameters
  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]

evaluate_test(test_data, samples=None, multioutput=True)

Evaluate the quality measure using a test dataset.

Parameters
  • test_data (gemseo.core.dataset.Dataset) – The test dataset.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]

classmethod is_better(val1, val2)

Compare the quality between two values.

This method returns True if the first value is better than the second one.

For most measures, a smaller value is “better” than a larger one (MSE etc.). But for some, like an R2-measure, higher values are better than smaller ones. This comparison method correctly handles this, regardless of the type of measure.

Parameters
  • val1 (float) – The value of the first quality measure.

  • val2 (float) – The value of the second quality measure.

Returns

Whether val1 is of better quality than val2.

Return type

bool

The R2 for measuring the quality of a regression algorithm.

The r2_measure module implements the concept of R2 measures for machine learning algorithms.

This concept is implemented through the R2Measure class and overloads the MLErrorMeasure._compute_measure() method.

The R2 is defined by

\[R^2(\hat{y}) = 1 - \frac{\sum_{i=1}^n (\hat{y}_i - y_i)^2}{\sum_{i=1}^n (y_i-\bar{y})^2},\]

where \(\hat{y}\) are the predictions, \(y\) are the data points and \(\bar{y}\) is the mean of \(y\).
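The formula above translates directly into numpy; this standalone sketch computes the same quantity that R2Measure estimates (the function name is illustrative, not gemseo API):

```python
import numpy as np


def r2(predictions, data):
    """Coefficient of determination of predictions yhat against data y."""
    predictions = np.asarray(predictions, dtype=float)
    data = np.asarray(data, dtype=float)
    residual = np.sum((predictions - data) ** 2)  # sum_i (yhat_i - y_i)**2
    total = np.sum((data - data.mean()) ** 2)     # sum_i (y_i - ybar)**2
    return float(1.0 - residual / total)
```

A perfect model scores 1, and a model that always predicts the mean of the data scores 0, which is why higher values are better for this measure.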

Classes:

R2Measure(algo)

The R2 measure for machine learning.

class gemseo.mlearning.qual_measure.r2_measure.R2Measure(algo)[source]

The R2 measure for machine learning.

Attributes
algo (MLAlgo) – The machine learning algorithm.

Parameters

algo (MLRegressionAlgo) – A machine learning algorithm for regression.

Return type

None

Methods:

evaluate([method, samples])

Evaluate the quality measure.

evaluate_bootstrap([n_replicates, samples, …])

Evaluate the quality measure using the bootstrap technique.

evaluate_kfolds([n_folds, samples, multioutput])

Evaluate the quality measure using the k-folds technique.

evaluate_learn([samples, multioutput])

Evaluate the quality measure using the learning dataset.

evaluate_loo([samples, multioutput])

Evaluate the quality measure using the leave-one-out technique.

evaluate_test(test_data[, samples, multioutput])

Evaluate the quality measure using a test dataset.

is_better(val1, val2)

Compare the quality between two values.

evaluate(method='learn', samples=None, **options)

Evaluate the quality measure.

Parameters
  • method (str) – The name of the method to evaluate the quality measure.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • **options (Optional[Union[List[int], bool, int, gemseo.core.dataset.Dataset]]) – The options of the estimation method, e.g. 'test_data' for the 'test' method or 'n_replicates' for the bootstrap one.

Returns

The value of the quality measure.

Raises

ValueError – If the name of the method is unknown.

Return type

Union[float, numpy.ndarray]

evaluate_bootstrap(n_replicates=100, samples=None, multioutput=True)[source]

Evaluate the quality measure using the bootstrap technique.

Parameters
  • n_replicates (int) – The number of bootstrap replicates.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

NoReturn
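The bootstrap technique resamples the learning set with replacement; the samples never drawn in a replicate form its test set. A minimal sketch of this resampling principle (a hypothetical illustration, not the gemseo code):

```python
import numpy as np

def bootstrap_indices(n_samples: int, n_replicates: int = 100, seed: int = 0):
    """Yield (learning, test) index arrays for each bootstrap replicate.

    Each replicate draws n_samples indices with replacement; the indices
    never drawn form the test set of that replicate.
    """
    rng = np.random.default_rng(seed)
    for _ in range(n_replicates):
        learning = rng.integers(0, n_samples, size=n_samples)
        test = np.setdiff1d(np.arange(n_samples), learning)
        yield learning, test
```

The quality measure is then evaluated on each replicate's test set and averaged over the replicates.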

evaluate_kfolds(n_folds=5, samples=None, multioutput=True)[source]

Evaluate the quality measure using the k-folds technique.

Parameters
  • n_folds (int) – The number of folds.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]
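The k-folds technique partitions the learning samples into n_folds groups; each group is used once as a test set while the remaining groups form the learning set. A minimal index-splitting sketch (assumed behavior, not the gemseo code):

```python
import numpy as np

def kfold_indices(n_samples: int, n_folds: int = 5):
    """Yield (learning, test) index arrays for each of the n_folds folds."""
    folds = np.array_split(np.arange(n_samples), n_folds)
    for k, test in enumerate(folds):
        learning = np.concatenate([f for i, f in enumerate(folds) if i != k])
        yield learning, test
```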

evaluate_learn(samples=None, multioutput=True)

Evaluate the quality measure using the learning dataset.

Parameters
  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]

evaluate_loo(samples=None, multioutput=True)

Evaluate the quality measure using the leave-one-out technique.

Parameters
  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]

evaluate_test(test_data, samples=None, multioutput=True)

Evaluate the quality measure using a test dataset.

Parameters
  • test_data (gemseo.core.dataset.Dataset) – The test dataset.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]

classmethod is_better(val1, val2)

Compare the quality between two values.

This method returns True if the first value is better than the second one.

For most measures, a smaller value is better than a larger one (e.g. MSE). For some, such as the R2 measure, larger values are better. This comparison method handles both cases correctly, regardless of the type of measure.

Parameters
  • val1 (float) – The value of the first quality measure.

  • val2 (float) – The value of the second quality measure.

Returns

Whether val1 is of better quality than val2.

Return type

bool
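The comparison logic can be sketched as follows, assuming a hypothetical SMALLER_IS_BETTER class attribute that each concrete measure would set (the attribute name is illustrative, not the gemseo API):

```python
class QualityMeasureSketch:
    """Sketch of the is_better logic; SMALLER_IS_BETTER is a hypothetical flag."""

    SMALLER_IS_BETTER = True  # e.g. an error measure such as MSE

    @classmethod
    def is_better(cls, val1: float, val2: float) -> bool:
        """Return True if val1 is of better quality than val2."""
        if cls.SMALLER_IS_BETTER:
            return val1 < val2
        return val1 > val2

class R2Sketch(QualityMeasureSketch):
    SMALLER_IS_BETTER = False  # larger R2 values are better
```

Callers can thus compare measures uniformly without knowing their orientation.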

The F1 to measure the quality of a classification algorithm.

The F1 is defined by

\[F_1 = 2\,\frac{\mathit{precision}\cdot\mathit{recall}} {\mathit{precision}+\mathit{recall}},\]

where \(\mathit{precision}\) is the number of correctly predicted positives divided by the total number of predicted positives and \(\mathit{recall}\) is the number of correctly predicted positives divided by the total number of actual positives.
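As a standalone illustration of this definition for binary labels (a NumPy sketch, not the gemseo implementation):

```python
import numpy as np

def f1(predicted: np.ndarray, actual: np.ndarray) -> float:
    """Compute the F1 measure for binary labels (1 = positive, 0 = negative)."""
    true_positives = np.sum((predicted == 1) & (actual == 1))
    precision = true_positives / np.sum(predicted == 1)
    recall = true_positives / np.sum(actual == 1)
    return 2 * precision * recall / (precision + recall)

# Half the positives found, half the positive predictions correct: F1 = 0.5.
print(f1(np.array([1, 1, 0, 0]), np.array([1, 0, 1, 0])))  # 0.5
```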

Classes:

F1Measure(algo)

The F1 measure for machine learning.

class gemseo.mlearning.qual_measure.f1_measure.F1Measure(algo)[source]

The F1 measure for machine learning.

Attributes
  • algo (MLAlgo) – The machine learning algorithm.

Parameters

algo (MLClassificationAlgo) – A machine learning algorithm for classification.

Return type

None

Methods:

evaluate([method, samples])

Evaluate the quality measure.

evaluate_bootstrap([n_replicates, samples, …])

Evaluate the quality measure using the bootstrap technique.

evaluate_kfolds([n_folds, samples, multioutput])

Evaluate the quality measure using the k-folds technique.

evaluate_learn([samples, multioutput])

Evaluate the quality measure using the learning dataset.

evaluate_loo([samples, multioutput])

Evaluate the quality measure using the leave-one-out technique.

evaluate_test(test_data[, samples, multioutput])

Evaluate the quality measure using a test dataset.

is_better(val1, val2)

Compare the quality between two values.

evaluate(method='learn', samples=None, **options)

Evaluate the quality measure.

Parameters
  • method (str) – The name of the method to evaluate the quality measure.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • **options (Optional[Union[List[int], bool, int, gemseo.core.dataset.Dataset]]) – The options of the estimation method, e.g. 'test_data' for the 'test' method or 'n_replicates' for the bootstrap one.

Returns

The value of the quality measure.

Raises

ValueError – If the name of the method is unknown.

Return type

Union[float, numpy.ndarray]

evaluate_bootstrap(n_replicates=100, samples=None, multioutput=True)

Evaluate the quality measure using the bootstrap technique.

Parameters
  • n_replicates (int) – The number of bootstrap replicates.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]

evaluate_kfolds(n_folds=5, samples=None, multioutput=True)

Evaluate the quality measure using the k-folds technique.

Parameters
  • n_folds (int) – The number of folds.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]

evaluate_learn(samples=None, multioutput=True)

Evaluate the quality measure using the learning dataset.

Parameters
  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]

evaluate_loo(samples=None, multioutput=True)

Evaluate the quality measure using the leave-one-out technique.

Parameters
  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]

evaluate_test(test_data, samples=None, multioutput=True)

Evaluate the quality measure using a test dataset.

Parameters
  • test_data (gemseo.core.dataset.Dataset) – The test dataset.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]

classmethod is_better(val1, val2)

Compare the quality between two values.

This method returns True if the first value is better than the second one.

For most measures, a smaller value is better than a larger one (e.g. MSE). For some, such as the R2 measure, larger values are better. This comparison method handles both cases correctly, regardless of the type of measure.

Parameters
  • val1 (float) – The value of the first quality measure.

  • val2 (float) – The value of the second quality measure.

Returns

Whether val1 is of better quality than val2.

Return type

bool

The silhouette coefficient to measure the quality of a clustering algorithm.

The silhouette module implements the concept of silhouette coefficient measure for machine learning algorithms.

This concept is implemented through the SilhouetteMeasure class, which overloads the MLClusteringMeasure._compute_measure() method.

The silhouette coefficient is defined for each point as the difference between the average distance from the point to the points of the nearest cluster other than its own and the average distance from the point to the other points of its own cluster, normalized by the larger of these two distances.

More formally, the silhouette coefficient \(s_i\) of a point \(x_i\) is given by

\[\begin{split}a_i = \frac{1}{|C_{k_i}| - 1} \sum_{j\in C_{k_i}\setminus\{i\}} \|x_i-x_j\|,\\ b_i = \underset{\ell=1,\cdots,K\atop \ell\neq k_i}{\min}\ \frac{1}{|C_\ell|} \sum_{j\in C_\ell} \|x_i-x_j\|,\\ s_i = \frac{b_i-a_i}{\max(b_i,a_i)},\end{split}\]

where \(k_i\) is the index of the cluster to which \(x_i\) belongs, \(K\) is the number of clusters, \(C_k\) is the set of indices of the points belonging to cluster \(k\) (\(k=1,\cdots,K\)) and \(|C_k| = \sum_{j\in C_k} 1\) is the number of points in cluster \(k\).
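These formulas can be illustrated with a small NumPy sketch (a toy implementation of the definition above, not the gemseo one):

```python
import numpy as np

def silhouette(points: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Compute the silhouette coefficient s_i of each point, per the formulas above."""
    n = len(points)
    # Pairwise Euclidean distance matrix ||x_i - x_j||.
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    scores = np.empty(n)
    for i in range(n):
        same = labels == labels[i]
        # a_i: mean distance to the other points of the same cluster.
        a_i = dist[i, same & (np.arange(n) != i)].mean()
        # b_i: smallest mean distance to the points of another cluster.
        b_i = min(dist[i, labels == k].mean() for k in set(labels) - {labels[i]})
        scores[i] = (b_i - a_i) / max(a_i, b_i)
    return scores
```

For two tight, well-separated clusters, all coefficients are close to 1; values near 0 indicate points on a cluster boundary, and negative values indicate likely misassignments.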

Classes:

SilhouetteMeasure(algo)

The silhouette coefficient measure for machine learning.

class gemseo.mlearning.qual_measure.silhouette.SilhouetteMeasure(algo)[source]

The silhouette coefficient measure for machine learning.

Attributes
  • algo (MLAlgo) – The machine learning algorithm.

Parameters

algo (MLPredictiveClusteringAlgo) – A machine learning algorithm for clustering.

Return type

None

Methods:

evaluate([method, samples])

Evaluate the quality measure.

evaluate_bootstrap([n_replicates, samples, …])

Evaluate the quality measure using the bootstrap technique.

evaluate_kfolds([n_folds, samples, multioutput])

Evaluate the quality measure using the k-folds technique.

evaluate_learn([samples, multioutput])

Evaluate the quality measure using the learning dataset.

evaluate_loo([samples, multioutput])

Evaluate the quality measure using the leave-one-out technique.

evaluate_test(test_data[, samples, multioutput])

Evaluate the quality measure using a test dataset.

is_better(val1, val2)

Compare the quality between two values.

evaluate(method='learn', samples=None, **options)

Evaluate the quality measure.

Parameters
  • method (str) – The name of the method to evaluate the quality measure.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • **options (Optional[Union[List[int], bool, int, gemseo.core.dataset.Dataset]]) – The options of the estimation method, e.g. 'test_data' for the 'test' method or 'n_replicates' for the bootstrap one.

Returns

The value of the quality measure.

Raises

ValueError – If the name of the method is unknown.

Return type

Union[float, numpy.ndarray]

evaluate_bootstrap(n_replicates=100, samples=None, multioutput=True)[source]

Evaluate the quality measure using the bootstrap technique.

Parameters
  • n_replicates (int) – The number of bootstrap replicates.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]

evaluate_kfolds(n_folds=5, samples=None, multioutput=True)[source]

Evaluate the quality measure using the k-folds technique.

Parameters
  • n_folds (int) – The number of folds.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]

evaluate_learn(samples=None, multioutput=True)

Evaluate the quality measure using the learning dataset.

Parameters
  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]

evaluate_loo(samples=None, multioutput=True)

Evaluate the quality measure using the leave-one-out technique.

Parameters
  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]

evaluate_test(test_data, samples=None, multioutput=True)[source]

Evaluate the quality measure using a test dataset.

Parameters
  • test_data (gemseo.core.dataset.Dataset) – The test dataset.

  • samples (Optional[List[int]]) – The indices of the learning samples. If None, use the whole learning dataset.

  • multioutput (bool) – If True, return the quality measure for each output component. Otherwise, average these measures.

Returns

The value of the quality measure.

Return type

Union[float, numpy.ndarray]

classmethod is_better(val1, val2)

Compare the quality between two values.

This method returns True if the first value is better than the second one.

For most measures, a smaller value is better than a larger one (e.g. MSE). For some, such as the R2 measure, larger values are better. This comparison method handles both cases correctly, regardless of the type of measure.

Parameters
  • val1 (float) – The value of the first quality measure.

  • val2 (float) – The value of the second quality measure.

Returns

Whether val1 is of better quality than val2.

Return type

bool
