
gemseo_mlearning.adaptive.acquisition module

Acquisition of learning data from a machine learning algorithm and a criterion.

class gemseo_mlearning.adaptive.acquisition.MLDataAcquisition(criterion, input_space, distribution, **options)[source]

Bases: object

Data acquisition for adaptive learning.

Parameters:
  • criterion (str) – The name of a data acquisition criterion used to select the new point(s) so as to reach a particular goal (the name of a class inheriting from MLDataAcquisitionCriterion).

  • input_space (DesignSpace) – The input space on which to look for the new learning point.

  • distribution (MLRegressorDistribution) – The distribution of the machine learning algorithm.

  • **options (MLDataAcquisitionCriterionOptionType) – The options of the acquisition criterion.

Raises:

NotImplementedError – When the output dimension is greater than 1.
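
Example (illustrative sketch). The criterion name "ExpectedImprovement", the KrigingDistribution class, its import path and its learn() call, and the regressor variable are assumptions made for illustration; they are not guaranteed by this page, and gemseo argument names may differ across versions.

    from gemseo.algos.design_space import DesignSpace
    from gemseo_mlearning.adaptive.acquisition import MLDataAcquisition
    # Assumed import path for the distribution wrapping the regressor.
    from gemseo_mlearning.adaptive.distributions.kriging_distribution import (
        KrigingDistribution,
    )

    # Input space on which to look for the new learning point.
    input_space = DesignSpace()
    input_space.add_variable("x", l_b=0.0, u_b=1.0)

    # "regressor" is assumed to be a Gaussian process regressor already trained
    # on an initial learning dataset (its construction is omitted here).
    distribution = KrigingDistribution(regressor)
    distribution.learn()

    # Acquisition driven by a criterion name; "ExpectedImprovement" is assumed here.
    acquisition = MLDataAcquisition("ExpectedImprovement", input_space, distribution)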

compute_next_input_data(as_dict=False)[source]

Find the next learning point.

Parameters:

as_dict (bool) –

Whether to return the input data split by input names; otherwise, return a single array.

By default it is set to False.

Returns:

The next learning point.

Return type:

DataType
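
Example (illustrative sketch), reusing the acquisition object built above and assuming an acquisition algorithm has already been set:

    # A single array concatenating the values of all the inputs...
    x_new = acquisition.compute_next_input_data()
    # ...or a dictionary of arrays indexed by the input names.
    x_new_by_name = acquisition.compute_next_input_data(as_dict=True)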

set_acquisition_algorithm(algo_name, **options)[source]

Set the sampling or optimization algorithm.

Parameters:
  • algo_name (str) – The name of the algorithm used to find the learning point(s), typically a DoE algorithm or an optimizer.

  • **options (Any) – The values of some of the algorithm options; the default values are used for the others.

Return type:

None
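
Example (illustrative sketch). "NLOPT_COBYLA" is the documented default algorithm, and the option names mirror the default options listed at the end of this page; the DoE algorithm name "OT_OPT_LHS" is an assumption.

    # Optimizer-based search of the criterion.
    acquisition.set_acquisition_algorithm("NLOPT_COBYLA", max_iter=200)

    # Or a DoE-based search of the criterion (algorithm name assumed).
    acquisition.set_acquisition_algorithm("OT_OPT_LHS", n_samples=200)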

update_algo(discipline, n_samples=1)[source]

Update the machine learning algorithm by learning new samples.

This method acquires new learning input-output samples and trains the machine learning algorithm with the resulting enriched learning set.

Parameters:
  • discipline (MDODiscipline) – The discipline computing the reference output data from the input data provided by the acquisition process.

  • n_samples (int) –

    The number of new samples with which to update the machine learning algorithm.

    By default it is set to 1.

Returns:

The concatenation of the optimization histories associated with the different points, and the last optimization problem.

Return type:

tuple[Database, OptimizationProblem]
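
Example (illustrative sketch), reusing the acquisition object from the sketches above; AnalyticDiscipline is a standard gemseo class, but its import path may differ across gemseo versions.

    from gemseo.disciplines.analytic import AnalyticDiscipline

    # Reference discipline computing the true output for the acquired inputs.
    discipline = AnalyticDiscipline({"y": "(6*x - 2)**2*sin(12*x - 4)"})

    # Acquire 3 new input points, evaluate them with the discipline,
    # and retrain the underlying regressor on the enriched learning set.
    database, last_problem = acquisition.update_algo(discipline, n_samples=3)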

update_problem()[source]

Update the optimization problem.

Return type:

None

default_algo_name: ClassVar[str] = 'NLOPT_COBYLA'

The name of the default algorithm used to find the learning point(s).

Typically a DoE algorithm or an optimizer.

default_doe_options: ClassVar[dict[str, Any]] = {'n_samples': 100}

The names and values of the default DoE options.

default_opt_options: ClassVar[dict[str, Any]] = {'max_iter': 100}

The names and values of the default optimization options.
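
These class-level defaults can be inspected before building an acquisition object; the values shown below are the documented defaults.

    from gemseo_mlearning.adaptive.acquisition import MLDataAcquisition

    print(MLDataAcquisition.default_algo_name)    # 'NLOPT_COBYLA'
    print(MLDataAcquisition.default_doe_options)  # {'n_samples': 100}
    print(MLDataAcquisition.default_opt_options)  # {'max_iter': 100}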