gemseo.scenarios.base_scenario module#
The base class for the scenarios.
- class BaseScenario(disciplines, objective_name, design_space, name='', maximize_objective=False, formulation_settings_model=None, **formulation_settings)[source]#
Bases: BaseMonitoredProcess
Base class for the scenarios.
The instantiation of a Scenario creates an OptimizationProblem by linking Discipline objects with a BaseMDOFormulation and defining both the objective to minimize or maximize and the DesignSpace on which to solve the problem. Constraints can also be added to the OptimizationProblem with the Scenario.add_constraint() method, as well as observables with the Scenario.add_observable() method.
Then, the Scenario.execute() method takes a driver (see BaseDriverLibrary) with options as input data and uses it to solve the optimization problem. This driver is in charge of executing the multidisciplinary process.
To view the results, use the Scenario.post_process() method after execution with one of the available post-processors, which can be listed via Scenario.posts.
Initialize self. See help(type(self)) for accurate signature.
- Parameters:
disciplines (Sequence[Discipline]) -- The disciplines used to compute the objective, constraints and observables from the design variables.
objective_name (str | Sequence[str]) -- The name(s) of the discipline output(s) used as objective. If multiple names are passed, the objective will be a vector.
design_space (DesignSpace) -- The search space including at least the design variables (some formulations require additional variables, e.g. IDF with the coupling variables).
name (str) --
The name to be given to this scenario. If empty, use the name of the class.
By default it is set to "".
maximize_objective (bool) --
Whether to maximize the objective.
By default it is set to False.
formulation_settings_model (BaseFormulationSettings | None) -- The formulation settings as a Pydantic model. If None, use **formulation_settings.
**formulation_settings (Any) -- The formulation settings, including the formulation name (use the keyword "formulation_name"). These arguments are ignored when formulation_settings_model is not None.
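For illustration, a minimal sketch of setting up a scenario through the high-level API (create_discipline, create_design_space, create_scenario); the analytic expression, the variable bounds and the "DisciplinaryOpt" formulation are arbitrary choices for this example, and keyword names such as expressions, lower_bound and formulation_name may vary slightly between GEMSEO versions. The snippets shown with the methods below assume a scenario object built this way.

```python
from gemseo import create_design_space, create_discipline, create_scenario

# A single analytic discipline computing the objective y = (x - 1)**2.
discipline = create_discipline("AnalyticDiscipline", expressions={"y": "(x - 1)**2"})

# The design space: one bounded design variable with an initial value.
design_space = create_design_space()
design_space.add_variable("x", lower_bound=-2.0, upper_bound=2.0, value=0.0)

# The scenario links the discipline, the objective name and the design space
# through an MDO formulation (here the single-discipline "DisciplinaryOpt").
scenario = create_scenario(
    [discipline],
    "y",
    design_space,
    formulation_name="DisciplinaryOpt",
)
```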
- class DifferentiationMethod(value)#
Bases: StrEnum
The differentiation methods.
- CENTERED_DIFFERENCES = 'centered_differences'#
- COMPLEX_STEP = 'complex_step'#
- FINITE_DIFFERENCES = 'finite_differences'#
- NO_DERIVATIVE = 'no_derivative'#
- USER_GRAD = 'user'#
- add_constraint(output_name, constraint_type=ConstraintType.EQ, constraint_name='', value=0, positive=False, **kwargs)[source]#
Add an equality or inequality constraint to the optimization problem.
An equality constraint is written as \(c(x)=a\), a positive inequality constraint is written as \(c(x)\geq a\) and a negative inequality constraint is written as \(c(x)\leq a\).
This constraint is in addition to those created by the formulation, e.g. consistency constraints in IDF.
The strategy of repartition of the constraints is defined by the formulation.
- Parameters:
output_name (str | Sequence[str]) -- The name(s) of the outputs computed by \(c(x)\). If several names are given, a single discipline must provide all outputs.
constraint_type (MDOFunction.ConstraintType) --
The type of constraint.
By default it is set to "eq".
constraint_name (str) --
The name of the constraint to be stored. If empty, the name of the constraint is generated from output_name, constraint_type, value and positive.
By default it is set to "".
value (float) --
The value \(a\).
By default it is set to 0.
positive (bool) --
Whether the inequality constraint is positive.
By default it is set to False.
- Raises:
ValueError -- If the constraint type is neither 'eq' nor 'ineq'.
- Return type:
None
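As a sketch, assuming the scenario's disciplines also compute hypothetical outputs named "c", "g" and "h", constraints of the three kinds described here could be declared as follows:

```python
# Equality constraint c(x) = 0 (default type and value).
scenario.add_constraint("c")

# Negative inequality constraint g(x) <= 0.
scenario.add_constraint("g", constraint_type="ineq")

# Positive inequality constraint h(x) >= 1.0, stored under a custom name.
scenario.add_constraint(
    "h", constraint_type="ineq", positive=True, value=1.0, constraint_name="h_min"
)
```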
- add_observable(output_names, observable_name='', discipline=None)[source]#
Add an observable to the optimization problem.
The repartition strategy of the observable is defined in the formulation class. When more than one output name is provided, the observable function returns a concatenated array of the output values.
- Parameters:
output_names (Sequence[str]) -- The names of the outputs to observe.
observable_name (str) --
The name to be given to the observable. If empty, the output name is used by default.
By default it is set to "".
discipline (Discipline | None) -- The discipline used to build the observable function. If None, detect the discipline from the inner disciplines.
- Return type:
None
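For example, with hypothetical output names "y_1" and "y_2", a concatenated observable could be registered as:

```python
# Record y_1 and y_2 at each iteration as a single observable named "couplings".
scenario.add_observable(["y_1", "y_2"], observable_name="couplings")
```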
- execute(algo_settings_model=None, **algo_settings)[source]#
Execute a scenario.
- Parameters:
algo_settings_model (BaseDriverSettings | None) -- The algorithm settings as a Pydantic model. If None, use **algo_settings if any. If None and no settings, the method will use the settings defined by set_algorithm().
**algo_settings (Any) -- The algorithm settings, including the algorithm name (use the keyword "algo_name"). These arguments are ignored when algo_settings_model is not None.
- Return type:
None
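A minimal sketch, assuming the SLSQP gradient-based driver and its max_iter setting are available; the driver name and its settings are passed as keyword arguments:

```python
# Solve the optimization problem with SLSQP, limited to 50 iterations.
scenario.execute(algo_name="SLSQP", max_iter=50)
```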
- get_result(name='', **options)[source]#
Return the result of the scenario execution.
- Parameters:
name (str) --
The class name of the ScenarioResult. If empty, use a default one (see create_scenario_result()).
By default it is set to "".
**options (Any) -- The options of the ScenarioResult.
- Returns:
The result of the scenario execution.
- Return type:
ScenarioResult | None
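For instance, after execution the default result object can be retrieved as follows (a sketch; the printed representation depends on the ScenarioResult class):

```python
result = scenario.get_result()
if result is not None:
    print(result)  # summary of the scenario execution
```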
- post_process(settings_model=None, **settings)[source]#
Post-process the optimization history.
- Parameters:
settings_model (BasePostSettings | None) -- The post-processor settings as a Pydantic model. If None, use **settings.
**settings (Any) -- The post-processor settings, including the post-processor name (use the keyword "post_name"). These arguments are ignored when settings_model is not None.
- Returns:
The post-processor.
- Return type:
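As a sketch, using the OptHistoryView post-processor (one of the post-processors listed by Scenario.posts); the save and show keywords are assumed to be among the common post-processing settings:

```python
# Plot the optimization history and save the figures to disk.
scenario.post_process(post_name="OptHistoryView", save=True, show=False)
```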
- print_execution_metrics()[source]#
Print the total number of executions and cumulated runtime by discipline.
- Return type:
None
- save_optimization_history(file_path, file_format=HistoryFileFormat.HDF5, append=False)[source]#
Save the optimization history of the scenario to a file.
- Parameters:
file_path (str | Path) -- The path of the file to save the history.
file_format (OptimizationProblem.HistoryFileFormat) --
The format of the file.
By default it is set to "hdf5".
append (bool) --
If True, the history is appended to the file if not empty.
By default it is set to False.
- Return type:
None
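For example, to dump the optimization database to an HDF5 file (the file name is arbitrary):

```python
# Save the whole optimization history in the default HDF5 format.
scenario.save_optimization_history("history.h5")
```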
- set_algorithm(algo_settings_model=None, **algo_settings)[source]#
Define the algorithm to execute the scenario.
- Parameters:
algo_settings_model (BaseDriverSettings | None) -- The algorithm settings as a Pydantic model. If None, use **algo_settings.
**algo_settings (Any) -- The algorithm settings, including the algorithm name (use the keyword "algo_name"). These arguments are ignored when algo_settings_model is not None.
- Return type:
None
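A sketch showing how a driver pre-configured here is reused by a later settings-free call to execute() (SLSQP and max_iter are assumptions for the example):

```python
# Pre-configure the driver once...
scenario.set_algorithm(algo_name="SLSQP", max_iter=50)
# ...then execute without settings: the settings above are used.
scenario.execute()
```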
- set_differentiation_method(method=DifferentiationMethod.USER_GRAD, step=1e-06, cast_default_inputs_to_complex=False)[source]#
Set the differentiation method for the process.
When the selected method to differentiate the process is complex_step, the current value of the DesignSpace will be cast to complex128; additionally, if the option cast_default_inputs_to_complex is True, the default inputs of the scenario's disciplines will be cast as well, provided that they are ndarray with dtype float64.
- Parameters:
method (DifferentiationMethod) --
The method to use to differentiate the process.
By default it is set to "user".
step (float) --
The finite difference step.
By default it is set to 1e-06.
cast_default_inputs_to_complex (bool) --
Whether to cast all float default inputs of the scenario's disciplines if the selected method is "complex_step".
By default it is set to False.
- Return type:
None
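For example, to switch from user-provided Jacobians to finite differences (a sketch; the step value is arbitrary):

```python
# Approximate the gradients of the process by finite differences.
scenario.set_differentiation_method("finite_differences", step=1e-8)
```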
- set_optimization_history_backup(file_path, at_each_iteration=False, at_each_function_call=True, erase=False, load=False, plot=False)[source]#
Set the backup file to store the evaluations of the functions during the run.
- Parameters:
file_path (str | Path) -- The backup file path.
at_each_iteration (bool) --
Whether the backup file is updated at every iteration of the optimization.
By default it is set to False.
at_each_function_call (bool) --
Whether the backup file is updated at every function call.
By default it is set to True.
erase (bool) --
Whether the backup file is erased before the run.
By default it is set to False.
load (bool) --
Whether the backup file is loaded before the run, which is useful after a crash.
By default it is set to False.
plot (bool) --
Whether to plot the optimization history view at each iteration. The plots will be generated only after the first two iterations.
By default it is set to False.
- Raises:
ValueError -- If both erase and load are True.
- Return type:
None
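A sketch, with an arbitrary backup file name, reloading the evaluations of a previous run before starting:

```python
# Back up every function evaluation and reload the file before the run.
scenario.set_optimization_history_backup("backup.h5", load=True)
```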
- to_dataset(name='', categorize=True, opt_naming=True, export_gradients=False)[source]#
Export the database of the optimization problem to a Dataset.
The variables can be classified into groups: Dataset.DESIGN_GROUP or Dataset.INPUT_GROUP for the design variables and Dataset.FUNCTION_GROUP or Dataset.OUTPUT_GROUP for the functions (objective, constraints and observables).
- Parameters:
name (str) --
The name to be given to the dataset. If empty, use the name of the OptimizationProblem.database.
By default it is set to "".
categorize (bool) --
Whether to distinguish between the different groups of variables. Otherwise, group all the variables in Dataset.PARAMETER_GROUP.
By default it is set to True.
opt_naming (bool) --
Whether to use Dataset.DESIGN_GROUP and Dataset.FUNCTION_GROUP as groups. Otherwise, use Dataset.INPUT_GROUP and Dataset.OUTPUT_GROUP.
By default it is set to True.
export_gradients (bool) --
Whether to export the gradients of the functions (objective function, constraints and observables) if the latter are available in the database of the optimization problem.
By default it is set to False.
- Returns:
A dataset built from the database of the optimization problem.
- Return type:
Dataset
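For example (the dataset name is arbitrary):

```python
# Build a Dataset from the optimization database, including gradients if stored.
dataset = scenario.to_dataset(name="history", export_gradients=True)
```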
- xdsmize(monitor=False, directory_path='.', log_workflow_status=False, file_name='xdsm', show_html=False, save_html=True, save_json=False, save_pdf=False, pdf_build=True, pdf_cleanup=True, pdf_batchmode=True)[source]#
Create an XDSM diagram of the scenario.
- Parameters:
monitor (bool) --
Whether to update the generated file at each discipline status change.
By default it is set to False.
log_workflow_status (bool) --
Whether to log the evolution of the workflow's status.
By default it is set to False.
directory_path (str | Path) --
The path of the directory to save the files.
By default it is set to ".".
file_name (str) --
The file name without the file extension.
By default it is set to "xdsm".
show_html (bool) --
Whether to open the web browser and display the XDSM.
By default it is set to False.
save_html (bool) --
Whether to save the XDSM as an HTML file.
By default it is set to True.
save_json (bool) --
Whether to save the XDSM as a JSON file.
By default it is set to False.
save_pdf (bool) --
Whether to save the XDSM as a PDF file.
By default it is set to False.
pdf_build (bool) --
Whether to build the standalone PDF of the XDSM.
By default it is set to True.
pdf_cleanup (bool) --
Whether to clean up the pdflatex build files once the build is complete.
By default it is set to True.
pdf_batchmode (bool) --
Whether to run pdflatex in batch mode.
By default it is set to True.
- Returns:
A view of the XDSM if monitor is False.
- Return type:
XDSM | None
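A sketch generating a standalone HTML view (the file name is arbitrary):

```python
# Write my_xdsm.html in the current directory without opening a browser.
scenario.xdsmize(file_name="my_xdsm", save_html=True, show_html=False)
```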
- property design_space: DesignSpace#
The design space on which the scenario is performed.
- property disciplines: tuple[BaseDiscipline, ...]#
The disciplines.
- formulation: BaseMDOFormulation#
The MDO formulation.
- optimization_result: OptimizationResult | None#
The optimization result if the scenario has been executed; otherwise None.
- property post_factory: PostFactory#
The factory for post-processors if any.
- property use_standardized_objective: bool#
Whether to use the standardized objective for logging and post-processing.
The objective is OptimizationProblem.objective.