lib_nlopt module¶
NLopt library wrapper.
- exception gemseo.algos.opt.lib_nlopt.NloptRoundOffException[source]¶
Bases:
Exception
NLopt roundoff error.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
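For illustration only, a minimal sketch of guarding an NLopt run against this error. Whether the exception propagates out of execute() or is handled internally by the wrapper is an assumption here, as are the direct instantiation of Nlopt and the algorithm name.

    from gemseo.algos.opt.lib_nlopt import Nlopt, NloptRoundOffException
    from gemseo.problems.analytical.rosenbrock import Rosenbrock

    try:
        # Run an NLopt algorithm on a benchmark problem shipped with GEMSEO.
        result = Nlopt().execute(Rosenbrock(), algo_name="NLOPT_SLSQP", max_iter=100)
    except NloptRoundOffException:
        # Roundoff errors limited progress; NLopt typically still returns a usable point.
        pass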
- class gemseo.algos.opt.lib_nlopt.NLoptAlgorithmDescription(algorithm_name, internal_algorithm_name, library_name='NLopt', description='', website='', handle_integer_variables=False, require_gradient=False, handle_equality_constraints=False, handle_inequality_constraints=False, handle_multiobjective=False, positive_constraints=False, problem_type='non-linear')[source]¶
Bases:
OptimizationAlgorithmDescription
The description of an optimization algorithm from the NLopt library.
- Parameters:
algorithm_name (str) –
internal_algorithm_name (str) –
library_name (str) –
By default it is set to “NLopt”.
description (str) –
By default it is set to “”.
website (str) –
By default it is set to “”.
handle_integer_variables (bool) –
By default it is set to False.
require_gradient (bool) –
By default it is set to False.
handle_equality_constraints (bool) –
By default it is set to False.
handle_inequality_constraints (bool) –
By default it is set to False.
handle_multiobjective (bool) –
By default it is set to False.
positive_constraints (bool) –
By default it is set to False.
problem_type (str) –
By default it is set to “non-linear”.
- handle_equality_constraints: bool = False¶
Whether the optimization algorithm handles equality constraints.
- handle_inequality_constraints: bool = False¶
Whether the optimization algorithm handles inequality constraints.
- handle_integer_variables: bool = False¶
Whether the optimization algorithm handles integer variables.
- handle_multiobjective: bool = False¶
Whether the optimization algorithm handles multiple objectives.
- positive_constraints: bool = False¶
Whether the optimization algorithm requires positive constraints.
- problem_type: str = 'non-linear'¶
The type of problem (see OptimizationProblem.AVAILABLE_PB_TYPES).
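As an illustration, a description of a gradient-based NLopt algorithm could be declared as follows; the algorithm names used here are illustrative assumptions, not necessarily the ones registered by the library.

    from gemseo.algos.opt.lib_nlopt import NLoptAlgorithmDescription

    description = NLoptAlgorithmDescription(
        algorithm_name="NLOPT_SLSQP",        # name exposed to GEMSEO users (illustrative)
        internal_algorithm_name="LD_SLSQP",  # name used inside the nlopt package (illustrative)
        require_gradient=True,
        handle_equality_constraints=True,
        handle_inequality_constraints=True,
    )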
- class gemseo.algos.opt.lib_nlopt.Nlopt[source]¶
Bases:
OptimizationLibrary
NLopt optimization library interface.
See OptimizationLibrary.
Note
The missing current values of the DesignSpace attached to the OptimizationProblem are automatically initialized with the method DesignSpace.initialize_missing_current_values().
- algorithm_handles_eqcstr(algo_name)¶
Check if an algorithm handles equality constraints.
- algorithm_handles_ineqcstr(algo_name)¶
Check if an algorithm handles inequality constraints.
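For instance, assuming the standard GEMSEO names of the NLopt algorithms (e.g. "NLOPT_SLSQP", "NLOPT_COBYLA") and direct instantiation of the library, the constraint-handling capabilities can be queried before selecting an algorithm:

    from gemseo.algos.opt.lib_nlopt import Nlopt

    lib = Nlopt()
    lib.algorithm_handles_eqcstr("NLOPT_SLSQP")     # True if SLSQP is described as handling equality constraints
    lib.algorithm_handles_ineqcstr("NLOPT_COBYLA")  # True if COBYLA is described as handling inequality constraints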
- deactivate_progress_bar()¶
Deactivate the progress bar.
- Return type:
None
- driver_has_option(option_name)¶
Check the existence of an option.
- ensure_bounds(orig_func, normalize=True)¶
Project the design vector onto the design space before execution.
- Parameters:
orig_func – The original function.
normalize –
Whether to use the normalized design space.
By default it is set to True.
- Returns:
A function calling the original function with the input data projected onto the design space.
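The projection behaves like a clipping of the design vector onto the variable bounds before the evaluation. The snippet below is a standalone sketch of that behavior, not the wrapper's actual implementation.

    import numpy as np

    def project_then_call(orig_func, lower, upper):
        """Mimic ensure_bounds: evaluate orig_func at the point projected onto [lower, upper]."""
        def wrapped(x_vect):
            return orig_func(np.clip(x_vect, lower, upper))
        return wrapped

    func = project_then_call(lambda x: float((x ** 2).sum()), np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
    func(np.array([2.0, -3.0]))  # evaluated at the projected point [1.0, -1.0]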
- execute(problem, algo_name=None, eval_obs_jac=False, skip_int_check=False, **options)¶
Execute the driver.
- Parameters:
problem (OptimizationProblem) – The problem to be solved.
algo_name (str | None) – The name of the algorithm. If None, use the algo_name attribute which may have been set by the factory.
eval_obs_jac (bool) –
Whether to evaluate the Jacobian of the observables.
By default it is set to False.
skip_int_check (bool) –
Whether to skip the integer variable handling check of the selected algorithm.
By default it is set to False.
**options (DriverLibOptionType) – The options for the algorithm.
- Returns:
The optimization result.
- Raises:
ValueError – If algo_name was not either set by the factory or given as an argument.
- Return type:
OptimizationResult
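A typical call, as a sketch: it assumes the Rosenbrock benchmark problem shipped with GEMSEO, direct instantiation of the library (rather than going through the optimizers factory), and "NLOPT_SLSQP" as one of the wrapped algorithm names.

    from gemseo.algos.opt.lib_nlopt import Nlopt
    from gemseo.problems.analytical.rosenbrock import Rosenbrock

    problem = Rosenbrock()
    result = Nlopt().execute(problem, algo_name="NLOPT_SLSQP", max_iter=100)
    print(result.x_opt, result.f_opt)  # optimal design value and objective value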
- filter_adapted_algorithms(problem)¶
Filter the algorithms capable of solving the problem.
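For example, with the benchmark problem used above:

    from gemseo.algos.opt.lib_nlopt import Nlopt
    from gemseo.problems.analytical.rosenbrock import Rosenbrock

    adapted = Nlopt().filter_adapted_algorithms(Rosenbrock())  # the algorithms capable of solving this problem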
- finalize_iter_observer()¶
Finalize the iteration observer.
- Return type:
None
- get_optimum_from_database(message=None, status=None)¶
Retrieve the optimum from the database and build an optimization result.
- get_right_sign_constraints()¶
Transform the problem constraints into their opposite sign counterpart.
This is done if the algorithm requires positive constraints.
- get_x0_and_bounds_vects(normalize_ds)¶
Return x0 and bounds.
- Parameters:
normalize_ds – Whether to normalize the input variables that are not integers, according to the normalization policy of the design space.
- Returns:
The current value, the lower bounds and the upper bounds.
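A sketch of retrieving these vectors. It assumes the library reads the design space from its problem attribute, which is normally set by execute(); assigning it manually here is only for illustration.

    from gemseo.algos.opt.lib_nlopt import Nlopt
    from gemseo.problems.analytical.rosenbrock import Rosenbrock

    lib = Nlopt()
    lib.problem = Rosenbrock()  # assumption: normally set by execute()
    x_0, lower_bounds, upper_bounds = lib.get_x0_and_bounds_vects(normalize_ds=False)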
- init_iter_observer(max_iter, message='...')¶
Initialize the iteration observer.
It will handle the stopping criterion and the logging of the progress bar.
- Parameters:
max_iter (int) – The maximum number of iterations.
message (str) – The message to display along with the progress bar.
- Raises:
ValueError – If max_iter is lower than one.
- Return type:
None
- init_options_grammar(algo_name)¶
Initialize the options’ grammar.
- Parameters:
algo_name (str) – The name of the algorithm.
- Return type:
JSONGrammar
- is_algo_requires_grad(algo_name)¶
Returns True if the algorithm requires a gradient evaluation.
- Parameters:
algo_name – The name of the algorithm.
- is_algo_requires_positive_cstr(algo_name)¶
Check if an algorithm requires positive constraints.
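These queries can be combined when selecting an algorithm; the algorithm names below are assumptions based on the standard GEMSEO naming of the NLopt algorithms.

    from gemseo.algos.opt.lib_nlopt import Nlopt

    lib = Nlopt()
    if lib.is_algo_requires_grad("NLOPT_SLSQP"):
        # gradient-based algorithm: make sure the problem provides derivatives
        ...
    if lib.is_algo_requires_positive_cstr("NLOPT_MMA"):
        # the wrapper will transform the constraints to the positive-sign convention
        ...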
- classmethod is_algorithm_suited(algorithm_description, problem)¶
Check if an algorithm is suited to a problem according to its description.
- Parameters:
algorithm_description (AlgorithmDescription) – The description of the algorithm.
problem (Any) – The problem to be solved.
- Returns:
Whether the algorithm is suited to the problem.
- Return type:
bool
- new_iteration_callback(x_vect=None)¶
Verify the design variable and objective value stopping criteria.
- Parameters:
x_vect (ndarray | None) – The design variables values. If None, use the values of the last iteration.
- Raises:
FtolReached – If the defined relative or absolute function tolerance is reached.
XtolReached – If the defined relative or absolute x tolerance is reached.
- Return type:
None
- COMPLEX_STEP_METHOD = 'complex_step'¶
- CTOL_ABS = 'ctol_abs'¶
- DIFFERENTIATION_METHODS = ['user', 'complex_step', 'finite_differences']¶
- EQ_TOLERANCE = 'eq_tolerance'¶
- EVAL_OBS_JAC_OPTION = 'eval_obs_jac'¶
- FAILURE = 'NLOPT_FAILURE: Generic failure code'¶
- FINITE_DIFF_METHOD = 'finite_differences'¶
- FORCED_STOP = 'NLOPT_FORCED_STOP: Halted because of a forced termination: the user called nlopt_force_stop(opt) on the optimization’s nlopt_opt object opt from the user’s objective function or constraints.'¶
- FTOL_REACHED = 'NLOPT_FTOL_REACHED: Optimization stopped because ftol_rel or ftol_abs (above) was reached'¶
- F_TOL_ABS = 'ftol_abs'¶
- F_TOL_REL = 'ftol_rel'¶
- INEQ_TOLERANCE = 'ineq_tolerance'¶
- INIT_STEP = 'init_step'¶
- INNER_MAXEVAL = 'inner_maxeval'¶
- INVALID_ARGS = 'NLOPT_INVALID_ARGS: Invalid arguments (e.g. lower bounds are bigger than upper bounds, an unknown algorithm was specified, etcetera).'¶
- LIB_COMPUTE_GRAD = False¶
- LS_STEP_NB_MAX = 'max_ls_step_nb'¶
- LS_STEP_SIZE_MAX = 'max_ls_step_size'¶
- MAXEVAL_REACHED = 'NLOPT_MAXEVAL_REACHED: Optimization stopped because maxeval (above) was reached'¶
- MAXTIME_REACHED = 'NLOPT_MAXTIME_REACHED: Optimization stopped because maxtime (above) was reached'¶
- MAX_DS_SIZE_PRINT = 40¶
- MAX_FUN_EVAL = 'max_fun_eval'¶
- MAX_ITER = 'max_iter'¶
- MAX_TIME = 'max_time'¶
- NLOPT_MESSAGES = {-5: 'NLOPT_FORCED_STOP: Halted because of a forced termination: the user called nlopt_force_stop(opt) on the optimization’s nlopt_opt object opt from the user’s objective function or constraints.', -4: 'NLOPT_ROUNDOFF_LIMITED: Halted because roundoff errors limited progress. (In this case, the optimization still typically returns a useful result.)', -3: 'OUT_OF_MEMORY: Ran out of memory', -2: 'NLOPT_INVALID_ARGS: Invalid arguments (e.g. lower bounds are bigger than upper bounds, an unknown algorithm was specified, etcetera).', -1: 'NLOPT_FAILURE: Generic failure code', 1: 'NLOPT_SUCCESS: Generic success return value', 2: 'NLOPT_STOPVAL_REACHED: Optimization stopped because stopval (above) was reached', 3: 'NLOPT_FTOL_REACHED: Optimization stopped because ftol_rel or ftol_abs (above) was reached', 4: 'NLOPT_XTOL_REACHED Optimization stopped because xtol_rel or xtol_abs (above) was reached', 5: 'NLOPT_MAXEVAL_REACHED: Optimization stopped because maxeval (above) was reached', 6: 'NLOPT_MAXTIME_REACHED: Optimization stopped because maxtime (above) was reached'}¶
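This mapping translates the NLopt return codes into human-readable messages, for example:

    from gemseo.algos.opt.lib_nlopt import Nlopt

    status = 3  # hypothetical return code from the NLopt backend
    print(Nlopt.NLOPT_MESSAGES[status])
    # NLOPT_FTOL_REACHED: Optimization stopped because ftol_rel or ftol_abs (above) was reached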
- NORMALIZE_DESIGN_SPACE_OPTION = 'normalize_design_space'¶
- OPTIONS_DIR: Final[str] = 'options'¶
The name of the directory containing the files of the grammars of the options.
- OPTIONS_MAP: dict[str, str] = {}¶
The names of the options in GEMSEO mapping to those in the wrapped library.
- OUT_OF_MEMORY = 'OUT_OF_MEMORY: Ran out of memory'¶
- PG_TOL = 'pg_tol'¶
- ROUNDOFF_LIMITED = 'NLOPT_ROUNDOFF_LIMITED: Halted because roundoff errors limited progress. (In this case, the optimization still typically returns a useful result.)'¶
- ROUND_INTS_OPTION = 'round_ints'¶
- STOPVAL = 'stopval'¶
- STOPVAL_REACHED = 'NLOPT_STOPVAL_REACHED: Optimization stopped because stopval (above) was reached'¶
- STOP_CRIT_NX = 'stop_crit_n_x'¶
- SUCCESS = 'NLOPT_SUCCESS: Generic success return value'¶
- USER_DEFINED_GRADIENT = 'user'¶
- USE_DATABASE_OPTION = 'use_database'¶
- VERBOSE = 'verbose'¶
- XTOL_REACHED = 'NLOPT_XTOL_REACHED Optimization stopped because xtol_rel or xtol_abs (above) was reached'¶
- X_TOL_ABS = 'xtol_abs'¶
- X_TOL_REL = 'xtol_rel'¶
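The constants above hold the names of the options accepted by the algorithms, so they can also be used as keys when building an options dictionary. A sketch, assuming direct instantiation of the library and the "NLOPT_COBYLA" algorithm name:

    from gemseo.algos.opt.lib_nlopt import Nlopt
    from gemseo.problems.analytical.rosenbrock import Rosenbrock

    options = {
        Nlopt.MAX_ITER: 200,    # "max_iter"
        Nlopt.F_TOL_REL: 1e-8,  # "ftol_rel"
        Nlopt.X_TOL_REL: 1e-8,  # "xtol_rel"
    }
    result = Nlopt().execute(Rosenbrock(), algo_name="NLOPT_COBYLA", **options)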
- activate_progress_bar: ClassVar[bool] = True¶
Whether to activate the progress bar in the optimization log.
- descriptions: dict[str, AlgorithmDescription]¶
The description of the algorithms contained in the library.
- internal_algo_name: str | None¶
The internal name of the algorithm currently used.
It typically corresponds to the name of the algorithm in the wrapped library, if any.
- opt_grammar: JSONGrammar | None¶
The grammar defining the options of the current algorithm.