problem module¶
Reference problem for benchmarking.
A benchmarking problem is a problem class to be solved by iterative algorithms for comparison purposes. A benchmarking problem is characterized by its functions (e.g. the objective and constraints of an optimization problem), its starting points (each defining an instance of the problem) and its target values (refer to data_profiles.target_values).
- class gemseo_benchmark.problems.problem.Problem(name, optimization_problem_creator, start_points=None, target_values=None, doe_algo_name=None, doe_size=None, doe_options=None, description=None, target_values_algorithms_configurations=None, target_values_number=None, optimum=None)[source]
Bases:
object
An optimization benchmarking problem.
An optimization benchmarking problem is characterized by:
- its functions (objective and constraints, including bounds),
- its starting points,
- its target values.
- name
The name of the benchmarking problem.
- Type:
str
- optimization_problem_creator
A callable that returns an instance of the optimization problem.
- Type:
Callable[[], OptimizationProblem]
- start_points
The starting points of the benchmarking problem.
- Type:
Iterable[ndarray]
- optimum
The best feasible objective value of the problem. Set to None if unknown.
- Type:
float | None
- Parameters:
name (str) – The name of the benchmarking problem.
optimization_problem_creator (Callable[[], OptimizationProblem]) – A callable object that returns an instance of the problem.
start_points (InputStartPoints | None) – The starting points of the benchmarking problem. If None: if doe_algo_name, doe_size, and doe_options are not None, then the starting points will be generated as a DOE; otherwise, the current value of the optimization problem will be set as the single starting point.
target_values (TargetValues | None) – The target values of the benchmarking problem. If None, the target values will have to be generated later with the generate_targets method.
doe_algo_name (str | None) – The name of the DOE algorithm. If None, the current point of the problem design space is set as the only starting point.
doe_size (int | None) – The number of starting points. If None, this number is set to the problem dimension, or to 10 if the dimension is bigger.
doe_options (Mapping[str, DOELibraryOptionType] | None) – The options of the DOE algorithm. If None, no option other than the DOE size is passed to the algorithm.
description (str | None) – The description of the problem (to appear in a report). If None, the problem will have no description.
target_values_algorithms_configurations (AlgorithmsConfigurations | None) – The configurations of the optimization algorithms for the computation of target values. If None, the target values will not be computed.
target_values_number (int | None) – The number of target values to compute. If None, the target values will not be computed. N.B. the number of target values shall be the same for all the benchmarking problems of a given group.
optimum (float | None) – The best feasible objective value of the problem. If None, it will not be set. If not None, it will be set as the hardest target value.
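The defaulting rules for the starting points can be summarized as a small decision tree. The sketch below is a pure-Python illustration of the documented branching; the names `resolve_start_points` and `sample_doe` are hypothetical helpers, not part of the gemseo_benchmark API:

```python
def resolve_start_points(start_points, doe_algo_name, doe_size, doe_options,
                         current_design_value, dimension, sample_doe):
    """Illustrative sketch of the documented defaulting rules.

    ``sample_doe`` stands in for the DOE sampling actually performed by gemseo.
    """
    if start_points is not None:
        # Explicit starting points take precedence.
        return list(start_points)
    if doe_algo_name is not None:
        # Generate the starting points as a DOE.
        size = doe_size if doe_size is not None else min(dimension, 10)
        options = doe_options if doe_options is not None else {}
        return sample_doe(doe_algo_name, size, options)
    # Fall back on the current design value as the single starting point.
    return [current_design_value]
```

The ordering matters: explicit starting points win over any DOE configuration, and the DOE configuration wins over the fallback on the current design value.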
- Raises:
TypeError – If the return type of the creator is not gemseo.algos.opt_problem.OptimizationProblem, or if a starting point is not of type ndarray.
ValueError – If neither starting points nor DOE configurations are passed, or if a starting point is of inappropriate shape.
- compute_data_profile(algos_configurations, results, show=False, file_path=None, infeasibility_tolerance=0.0, max_eval_number=None)[source]
Generate the data profiles of given algorithms.
- Parameters:
algos_configurations (AlgorithmsConfigurations) – The algorithms configurations.
results (Results) – The paths to the reference histories for each algorithm.
show (bool) –
Whether to display the plot.
By default it is set to False.
file_path (str | Path | None) – The path where to save the plot. If None, the plot is not saved.
infeasibility_tolerance (float) –
The tolerance on the infeasibility measure.
By default it is set to 0.0.
max_eval_number (int | None) – The maximum number of evaluations to be displayed. If None, this value is inferred from the longest history.
- Return type:
None
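A data profile measures, for each evaluation budget, the fraction of (problem instance, target value) pairs that an algorithm has reached within that budget. The following is a toy sketch of that idea, assuming minimization and histories given as running best objective values; it is not the actual gemseo_benchmark implementation:

```python
def data_profile(histories, targets):
    """Fraction of (history, target) pairs reached within each budget.

    histories: one list of running best objective values per problem instance.
    targets: target objective values to be reached from above (minimization).
    """
    budgets = max(len(history) for history in histories)
    pairs = len(histories) * len(targets)
    profile = []
    for k in range(budgets):
        # A shorter history keeps its final value for larger budgets.
        hits = sum(
            1
            for history in histories
            for target in targets
            if history[min(k, len(history) - 1)] <= target
        )
        profile.append(hits / pairs)
    return profile
```

For example, a single history [3.0, 1.0, 0.0] against targets [2.0, 0.5] yields the profile [0.0, 0.5, 1.0]: no target is hit after one evaluation, one of the two after two, and both after three.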
- static compute_performance(problem)[source]
Extract the performance history from a solved optimization problem.
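In essence, the performance history of a solved minimization problem is the running best feasible objective value over the evaluations. A minimal sketch of that notion (the actual method also records infeasibility measures and feasibility statuses):

```python
def running_best_feasible(objective_values, feasibilities):
    """Running best feasible objective value; None before the first feasible point."""
    best = None
    history = []
    for value, feasible in zip(objective_values, feasibilities):
        if feasible and (best is None or value < best):
            best = value
        history.append(best)
    return history
```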
- compute_targets(targets_number, ref_algo_configurations, only_feasible=True, budget_min=1, show=False, file_path=None, best_target_tolerance=0.0, disable_stopping=True)[source]
Generate targets based on reference algorithms.
- Parameters:
targets_number (int) – The number of targets to generate.
ref_algo_configurations (AlgorithmsConfigurations) – The configurations of the reference algorithms.
only_feasible (bool) –
Whether to generate only feasible targets.
By default it is set to True.
budget_min (int) –
The evaluation budget to be used to define the easiest target.
By default it is set to 1.
show (bool) –
Whether to display the plot.
By default it is set to False.
file_path (str | None) – The path where to save the plot. If None, the plot is not saved.
best_target_tolerance (float) –
The relative tolerance for comparisons with the best target value.
By default it is set to 0.0.
disable_stopping (bool) –
Whether to disable the stopping criteria.
By default it is set to True.
- Returns:
The generated targets.
- Return type:
TargetValues
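Conceptually, the target values are performance levels between an easy level (what the reference algorithms reach after budget_min evaluations) and the hardest level (the best value they reach at all). The toy illustration below spaces the targets uniformly in objective value; the actual gemseo_benchmark computation works on the reference histories themselves and may space targets differently:

```python
import statistics


def uniform_targets(reference_histories, targets_number, budget_min=1):
    """Toy target values: evenly spaced between an easy and a hard level.

    reference_histories: running best objective values of the reference runs.
    """
    # Easy level: median value reached by the references after budget_min evaluations.
    easiest = statistics.median(h[budget_min - 1] for h in reference_histories)
    # Hard level: best value any reference reaches with its full budget.
    hardest = min(h[-1] for h in reference_histories)
    step = (easiest - hardest) / (targets_number - 1)
    return [easiest - i * step for i in range(targets_number)]
```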
- is_algorithm_suited(name)[source]
Check whether an algorithm is suited to the problem.
- load_start_point(path)[source]
Load the start points from a NumPy binary.
- Parameters:
path (Path) – The path to the NumPy binary.
- Return type:
None
- plot_histories(algos_configurations, results, show=False, file_path=None, plot_all_histories=False, alpha=0.3, markevery=None, infeasibility_tolerance=0.0, max_eval_number=None, use_log_scale=False)[source]
Plot the histories of a problem.
- Parameters:
algos_configurations (AlgorithmsConfigurations) – The algorithms configurations.
results (Results) – The paths to the reference histories for each algorithm.
show (bool) –
Whether to display the plot.
By default it is set to False.
file_path (Path | None) – The path where to save the plot. If None, the plot is not saved.
plot_all_histories (bool) –
Whether to plot all the performance histories.
By default it is set to False.
alpha (float) –
The opacity level for overlapping areas. Refer to the Matplotlib documentation.
By default it is set to 0.3.
markevery (MarkeveryType | None) – The sampling parameter for the markers of the plot. Refer to the Matplotlib documentation.
infeasibility_tolerance (float) –
The tolerance on the infeasibility measure.
By default it is set to 0.0.
max_eval_number (int | None) – The maximum number of evaluations displayed. If None, this value is inferred from the longest history.
use_log_scale (bool) –
Whether to use a logarithmic scale on the value axis.
By default it is set to False.
- Return type:
None
- save_start_points(path)[source]
Save the start points as a NumPy binary.
- Parameters:
path (Path) – The path to the NumPy binary.
- Return type:
None
- property description: str
The description of the problem.
- property dimension: int
The dimension of the problem.
- property objective_name: str
The name of the objective function.
- property start_points: list[ndarray]
The starting points of the problem.
- Raises:
ValueError – If the problem has no starting point, or if the starting points are passed as a NumPy array with an invalid shape.
- property target_values: TargetValues
The target values of the benchmarking problem.
- Raises:
ValueError – If the benchmarking problem has no target value.
- property targets_generator: TargetsGenerator
The generator for target values.