mdo_scenario module¶
A Scenario whose driver is an optimization algorithm¶
Classes:
MDOObjScenarioAdapter: A scenario adapter that overwrites the local data with the optimal objective function value.
MDOScenario: Multidisciplinary Design Optimization Scenario, the main user interface; creates an optimization problem and solves it with an optimizer.
MDOScenarioAdapter: An adapter class for MDO Scenario.
- class gemseo.core.mdo_scenario.MDOObjScenarioAdapter(scenario, inputs_list, outputs_list, reset_x0_before_opt=False, set_x0_before_opt=False, set_bounds_before_opt=False, cache_type='SimpleCache', output_multipliers=False)[source]¶
Bases:
gemseo.core.mdo_scenario.MDOScenarioAdapter
A scenario adapter that overwrites the local data with the optimal objective function value.
Initialize the scenario adapter.
- Parameters
scenario (MDOScenario) – the scenario to adapt
inputs_list (list(str)) – list of inputs to overload at sub-scenario execution
outputs_list (list(str)) – list of outputs to get from the sub-scenario execution
reset_x0_before_opt (bool) – if True, reset the initial guess before running the sub-optimization
set_x0_before_opt (bool) – if True, set the initial point of the sub-scenario; useful for multi-start
set_bounds_before_opt (bool) – if True, set the bounds of the design space; useful for trust regions
cache_type (str) – type of cache policy, SIMPLE_CACHE or HDF5_CACHE
output_multipliers (bool) – if True then the Lagrange multipliers of the scenario optimal solution are computed and added to the outputs
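To make the adapter's role concrete, here is a plain-Python sketch (not GEMSEO code; all names are illustrative) of the multi-start pattern enabled by set_x0_before_opt: the adapter runs an inner optimization from a given initial point and exposes only the optimal objective value.

```python
def inner_optimize(f, x0, step=0.1, n_iter=200):
    """Minimize a 1-D function by naive gradient descent (illustrative only)."""
    x = x0
    for _ in range(n_iter):
        grad = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6  # central finite difference
        x -= step * grad
    return x, f(x)

def adapter(f, x0):
    """Mimic set_x0_before_opt: set the initial point, run the inner
    optimization, and expose only the optimal objective value."""
    _, f_opt = inner_optimize(f, x0)
    return f_opt

# Multi-start: the adapter is executed once per starting point and the
# best optimal objective value is kept.
f = lambda x: (x - 2.0) ** 2 + 1.0
best = min(adapter(f, x0) for x0 in (-5.0, 0.0, 10.0))
```

A real MDOObjScenarioAdapter additionally manages grammars, caching, and Lagrange multipliers, which this sketch omits.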
Attributes:
cache_tol: Accessor to the cache input tolerance.
default_inputs: Accessor to the default inputs.
exec_time: Return the cumulated execution time.
linearization_mode: Accessor to the linearization mode.
n_calls: Return the number of calls to execute() which triggered the _run() method.
n_calls_linearize: Return the number of calls to linearize() which triggered the _compute_jacobian() method.
status: Status accessor.
Methods:
activate_time_stamps(): Activate the time stamps.
add_differentiated_inputs([inputs]): Add inputs to the differentiation list.
add_differentiated_outputs([outputs]): Add outputs to the differentiation list.
add_outputs(outputs_names): Add outputs to the scenario adapter.
add_status_observer(obs): Add an observer for the status.
auto_get_grammar_file([is_input, name, comp_dir]): Use a naming convention to associate a grammar file to a discipline.
check_input_data(input_data[, raise_exception]): Check the input data validity.
check_jacobian([input_data, derr_approx, …]): Check if the Jacobian provided by the linearize() method is correct.
check_output_data([raise_exception]): Check the output data validity.
deactivate_time_stamps(): Deactivate the time stamps for storing start and end times of execution and linearizations.
deserialize(in_file): Deserialize the discipline from a file.
execute([input_data]): Execute the discipline.
get_all_inputs(): Accessor for the input data as a list of values.
get_all_outputs(): Accessor for the output data as a list of values.
get_attributes_to_serialize(): Define the attributes to be serialized.
get_bnd_mult_name(variable_name, is_upper): Return the name of the bound-constraint multiplier of a variable.
get_cstr_mult_name(constraint_name): Return the name of the multiplier of a constraint.
get_data_list_from_dict(keys, data_dict): Filter the dict from a list of keys or a single key.
get_expected_dataflow(): Return the expected data exchange sequence.
get_expected_workflow(): Return the expected execution sequence.
get_input_data(): Accessor for the input data as a dict of values.
get_input_data_names(): Accessor for the input names as a list.
get_input_output_data_names(): Accessor for the input and output names as a list.
get_inputs_asarray(): Accessor for the inputs as a large numpy array.
get_inputs_by_name(data_names): Accessor for the inputs as a list.
get_local_data_by_name(data_names): Accessor for the local data of the discipline as a dict of values.
get_output_data(): Accessor for the output data as a dict of values.
get_output_data_names(): Accessor for the output names as a list.
get_outputs_asarray(): Accessor for the outputs as a large numpy array.
get_outputs_by_name(data_names): Accessor for the outputs as a list.
get_sub_disciplines(): Get the sub-disciplines of self; empty by default.
is_all_inputs_existing(data_names): Test if all the names in data_names are inputs of the discipline.
is_all_outputs_existing(data_names): Test if all the names in data_names are outputs of the discipline.
is_input_existing(data_name): Test if input named data_name is an input of the discipline.
is_output_existing(data_name): Test if output named data_name is an output of the discipline.
is_scenario(): Return True if self is a scenario.
linearize([input_data, force_all, force_no_exec]): Execute the linearized version of the code.
notify_status_observers(): Notify all status observers that the status has changed.
remove_status_observer(obs): Remove an observer for the status.
reset_statuses_for_run(): Set all the statuses to PENDING.
serialize(out_file): Serialize the discipline.
set_cache_policy([cache_type, …]): Set the type of cache to use and the tolerance level.
set_disciplines_statuses(status): Set the sub-disciplines statuses.
set_jacobian_approximation([jac_approx_type, …]): Set the Jacobian approximation method.
set_optimal_fd_step([outputs, inputs, …]): Compute the optimal finite-difference step.
store_local_data(**kwargs): Store discipline data in local data.
- APPROX_MODES = ['finite_differences', 'complex_step']¶
- AVAILABLE_MODES = ('auto', 'direct', 'adjoint', 'reverse', 'finite_differences', 'complex_step')¶
- COMPLEX_STEP = 'complex_step'¶
- FINITE_DIFFERENCES = 'finite_differences'¶
- HDF5_CACHE = 'HDF5Cache'¶
- JSON_GRAMMAR_TYPE = 'JSON'¶
- LOWER_BND_SUFFIX = '_lower_bnd'¶
- MEMORY_FULL_CACHE = 'MemoryFullCache'¶
- MULTIPLIER_SUFFIX = '_multiplier'¶
- N_CPUS = 2¶
- RE_EXECUTE_DONE_POLICY = 'RE_EXEC_DONE'¶
- RE_EXECUTE_NEVER_POLICY = 'RE_EXEC_NEVER'¶
- SIMPLE_CACHE = 'SimpleCache'¶
- SIMPLE_GRAMMAR_TYPE = 'Simple'¶
- STATUS_DONE = 'DONE'¶
- STATUS_FAILED = 'FAILED'¶
- STATUS_PENDING = 'PENDING'¶
- STATUS_RUNNING = 'RUNNING'¶
- STATUS_VIRTUAL = 'VIRTUAL'¶
- UPPER_BND_SUFFIX = '_upper_bnd'¶
- classmethod activate_time_stamps()¶
Activate the time stamps.
For storing start and end times of execution and linearizations.
- add_differentiated_inputs(inputs=None)¶
Add inputs to the differentiation list.
This method updates self._differentiated_inputs with inputs.
- Parameters
inputs – list of input variables to differentiate; if None, all inputs of the discipline are used (Default value = None)
- add_differentiated_outputs(outputs=None)¶
Add outputs to the differentiation list.
This method updates self._differentiated_outputs with outputs.
- Parameters
outputs – list of output variables to differentiate; if None, all outputs of the discipline are used
- add_outputs(outputs_names)¶
Add outputs to the scenario adapter.
- Parameters
outputs_names (list(str)) – names of the outputs to be added
- add_status_observer(obs)¶
Add an observer for the status.
The observer is notified when the status of self changes.
- Parameters
obs – the observer to add
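The observer mechanism described above can be sketched in plain Python; the update_status callback name is an assumption for illustration, not necessarily the interface GEMSEO uses.

```python
class Discipline:
    """Minimal sketch of the status-observer pattern (illustrative only)."""

    def __init__(self):
        self._observers = []
        self._status = "PENDING"

    def add_status_observer(self, obs):
        # Register an observer to be notified when the status changes.
        if obs not in self._observers:
            self._observers.append(obs)

    def remove_status_observer(self, obs):
        if obs in self._observers:
            self._observers.remove(obs)

    def notify_status_observers(self):
        for obs in self._observers:
            obs.update_status(self)  # hypothetical callback name

    @property
    def status(self):
        return self._status

    @status.setter
    def status(self, value):
        self._status = value
        self.notify_status_observers()

class Logger:
    def __init__(self):
        self.seen = []

    def update_status(self, disc):
        self.seen.append(disc.status)

disc, log = Discipline(), Logger()
disc.add_status_observer(log)
disc.status = "RUNNING"
disc.status = "DONE"
```

After the two assignments, log.seen records the status transitions in order.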
- auto_get_grammar_file(is_input=True, name=None, comp_dir=None)¶
Use a naming convention to associate a grammar file to a discipline.
This method searches the “comp_dir” directory containing the discipline source file for files whose basenames are self.name + “_input.json” and self.name + “_output.json”.
- Parameters
is_input – if True, search for _input.json, otherwise _output.json (Default value = True)
name – the name of the discipline (Default value = None)
comp_dir – the containing directory; if None, use self.comp_dir (Default value = None)
- Returns
path to the grammar file
- Return type
string
- property cache_tol¶
Accessor to the cache input tolerance.
- check_input_data(input_data, raise_exception=True)¶
Check the input data validity.
- Parameters
input_data – the input data dict
raise_exception – if True, an exception is raised when the data is invalid (Default value = True)
- check_jacobian(input_data=None, derr_approx='finite_differences', step=1e-07, threshold=1e-08, linearization_mode='auto', inputs=None, outputs=None, parallel=False, n_processes=2, use_threading=False, wait_time_between_fork=0, auto_set_step=False, plot_result=False, file_path='jacobian_errors.pdf', show=False, figsize_x=10, figsize_y=10)¶
Check if the jacobian provided by the linearize() method is correct.
- Parameters
input_data – input data dict (Default value = None)
derr_approx – derivative approximation method: COMPLEX_STEP or FINITE_DIFFERENCES (Default value = FINITE_DIFFERENCES)
threshold – acceptance threshold for the jacobian error (Default value = 1e-8)
linearization_mode – the mode of linearization: direct, adjoint or automated switch depending on dimensions of inputs and outputs (Default value = ‘auto’)
inputs – list of inputs wrt which to differentiate (Default value = None)
outputs – list of outputs to differentiate (Default value = None)
step – the step for finite differences or complex step
parallel – if True, executes in parallel
n_processes – maximum number of processors on which to run
use_threading – if True, use threads instead of processes to parallelize the execution; multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. If you want to execute the same discipline multiple times, you shall use multiprocessing
wait_time_between_fork – time waited between two forks of the process/thread
auto_set_step – if True, compute the optimal step for a forward first-order finite-difference gradient approximation
plot_result – plot the result of the validation (computed and approximate jacobians)
file_path – path to the output file if plot_result is True
show – if True, open the figure
figsize_x – x size of the figure in inches
figsize_y – y size of the figure in inches
- Returns
True if the check is accepted, False otherwise
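The idea behind such a Jacobian check can be shown with a self-contained sketch (illustrative, not GEMSEO code): compare an analytic derivative against finite-difference and complex-step approximations, and accept the check when the error is below a threshold.

```python
def f(x):
    # Test function; works for both real and complex arguments.
    return x ** 3 + 2.0 * x

def analytic(x):
    # Known exact derivative, playing the role of the provided Jacobian.
    return 3.0 * x ** 2 + 2.0

def finite_difference(func, x, step=1e-7):
    # Forward first-order finite difference.
    return (func(x + step) - func(x)) / step

def complex_step(func, x, step=1e-30):
    # Complex-step derivative: no subtractive cancellation, so the
    # step can be tiny and the result is accurate to machine precision.
    return func(complex(x, step)).imag / step

x = 1.5
err_fd = abs(finite_difference(f, x) - analytic(x))
err_cs = abs(complex_step(f, x) - analytic(x))
ok = err_fd < 1e-5 and err_cs < 1e-12  # threshold-style acceptance
```

This mirrors the derr_approx and threshold parameters above: the check passes when the approximate and provided derivatives agree within the tolerance.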
- check_output_data(raise_exception=True)¶
Check the output data validity.
- Parameters
raise_exception – if true, an exception is raised when data is invalid (Default value = True)
- classmethod deactivate_time_stamps()¶
Deactivate the time stamps for storing start and end times of execution and linearizations.
- property default_inputs¶
Accessor to the default inputs.
- static deserialize(in_file)¶
Deserialize the discipline from a file.
- Parameters
in_file – input file for serialization
- Returns
a discipline instance
- property exec_time¶
Return the cumulated execution time.
Multiprocessing safe.
- execute(input_data=None)¶
Execute the discipline.
This method executes the discipline:
- Adds default inputs to input_data if some inputs are not defined in input_data but exist in self._default_data
- Checks if the last execution of the discipline was not called with identical inputs, cached in self.cache; if yes, directly returns self.cache.get_output_cache(inputs)
- Caches the inputs
- Checks the input data against self.input_grammar
- If self.data_processor is not None, runs the preprocessor
- Updates the status to RUNNING
- Calls the _run() method, that shall be defined
- If self.data_processor is not None, runs the postprocessor
- Checks the output data
- Caches the outputs
- Updates the status to DONE or FAILED
- Updates the summed execution time
- Parameters
input_data (dict) – the input data dict needed to execute the disciplines according to the discipline input grammar (Default value = None)
- Returns
the discipline local data after execution
- Return type
dict
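The sequence above can be sketched in plain Python (illustrative only; the real GEMSEO implementation also involves grammars and data processors, which are omitted here).

```python
class MiniDiscipline:
    """Toy discipline showing default-input merge, cache lookup, and
    status updates in the execute() flow."""

    def __init__(self, default_inputs):
        self.default_inputs = dict(default_inputs)
        self.cache = {}          # frozen inputs -> outputs
        self.status = "PENDING"
        self.n_calls = 0         # counts calls that triggered _run()

    def _run(self, data):
        return {"y": data["x"] ** 2}

    def execute(self, input_data=None):
        # Complete missing inputs with the defaults.
        data = {**self.default_inputs, **(input_data or {})}
        key = tuple(sorted(data.items()))
        if key in self.cache:    # identical inputs as a past run
            return self.cache[key]
        self.status = "RUNNING"
        try:
            outputs = self._run(data)
        except Exception:
            self.status = "FAILED"
            raise
        self.status = "DONE"
        self.n_calls += 1
        self.cache[key] = outputs
        return outputs

disc = MiniDiscipline({"x": 3.0})
out1 = disc.execute()            # triggers _run()
out2 = disc.execute({"x": 3.0})  # cache hit: _run() is not called again
```

The cache hit on the second call is why n_calls counts only the executions that actually triggered _run(), as the n_calls property states.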
- get_all_inputs()¶
Accessor for the input data as a list of values.
The order is given by self.get_input_data_names().
- Returns
the data
- get_all_outputs()¶
Accessor for the output data as a list of values.
The order is given by self.get_output_data_names().
- Returns
the data
- get_attributes_to_serialize()¶
Define the attributes to be serialized.
Shall be overloaded by disciplines
- Returns
the list of attributes names
- Return type
list
- static get_bnd_mult_name(variable_name, is_upper)¶
Return the name of the bound-constraint multiplier of a variable.
- Parameters
variable_name (str) – name of the variable
is_upper (bool) – if True then return the upper bound-constraint multiplier name, otherwise return the lower bound-constraint multiplier
- Returns
name of the bound-constraint multiplier
- Return type
str
- static get_cstr_mult_name(constraint_name)¶
Return the name of the multiplier of a constraint.
- Parameters
constraint_name (str) – name of the constraint
- Returns
name of the multiplier
- Return type
str
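The naming helpers above can be sketched from the LOWER_BND_SUFFIX, UPPER_BND_SUFFIX, and MULTIPLIER_SUFFIX constants listed earlier; the exact concatenation order in GEMSEO may differ, so treat this as an illustrative assumption.

```python
# Constants as listed in the class attributes above.
LOWER_BND_SUFFIX = "_lower_bnd"
UPPER_BND_SUFFIX = "_upper_bnd"
MULTIPLIER_SUFFIX = "_multiplier"

def get_bnd_mult_name(variable_name, is_upper):
    """Build the bound-constraint multiplier name from the suffixes."""
    suffix = UPPER_BND_SUFFIX if is_upper else LOWER_BND_SUFFIX
    return variable_name + suffix + MULTIPLIER_SUFFIX

def get_cstr_mult_name(constraint_name):
    """Build the constraint multiplier name from the suffix."""
    return constraint_name + MULTIPLIER_SUFFIX

name_ub = get_bnd_mult_name("x", True)   # "x_upper_bnd_multiplier"
name_g = get_cstr_mult_name("g_1")       # "g_1_multiplier"
```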
- static get_data_list_from_dict(keys, data_dict)¶
Filter the dict from a list of keys or a single key.
If keys is a string, the method returns the value associated with the key. If keys is a list of strings, the method returns a generator over the values corresponding to the keys, which can be iterated.
- Parameters
keys – a string key or a list of keys
data_dict – the dict to get the data from
- Returns
a data or a generator of data
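A minimal sketch of this dispatch (illustrative; the real method lives on MDODiscipline):

```python
def get_data_list_from_dict(keys, data_dict):
    """Return the value for a single string key, or a generator over the
    values for a list of keys."""
    if isinstance(keys, str):
        return data_dict[keys]
    return (data_dict[k] for k in keys)

data = {"x": 1.0, "y": 2.0, "z": 3.0}
single = get_data_list_from_dict("x", data)                # 1.0
several = list(get_data_list_from_dict(["y", "z"], data))  # [2.0, 3.0]
```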
- get_expected_dataflow()¶
Return the expected data exchange sequence.
This method is used for the XDSM representation.
Defaults to an empty list. See MDOFormulation.get_expected_dataflow.
- Returns
a list representing the data exchange arcs
- get_expected_workflow()¶
Return the expected execution sequence.
This method is used for the XDSM representation. Defaults to the execution of the discipline itself. See MDOFormulation.get_expected_workflow.
- get_input_data()¶
Accessor for the input data as a dict of values.
- Returns
the data dict
- get_input_data_names()¶
Accessor for the input names as a list.
- Returns
the data names list
- get_input_output_data_names()¶
Accessor for the input and output names as a list.
- Returns
the data names list
- get_inputs_asarray()¶
Accessor for the inputs as a large numpy array.
The order is the one of self.get_all_inputs().
- Returns
the inputs array
- Return type
ndarray
- get_inputs_by_name(data_names)¶
Accessor for the inputs as a list.
- Parameters
data_names – the data names list
- Returns
the data list
- get_local_data_by_name(data_names)¶
Accessor for the local data of the discipline as a dict of values.
- Parameters
data_names – the names of the data which will be the keys of the dictionary
- Returns
the data dict
- get_output_data()¶
Accessor for the output data as a dict of values.
- Returns
the data dict
- get_output_data_names()¶
Accessor for the output names as a list.
- Returns
the data names list
- get_outputs_asarray()¶
Accessor for the outputs as a large numpy array.
The order is the one of self.get_all_outputs()
- Returns
the outputs array
- Return type
ndarray
- get_outputs_by_name(data_names)¶
Accessor for the outputs as a list.
- Parameters
data_names – the data names list
- Returns
the data list
- get_sub_disciplines()¶
Get the sub-disciplines of self. By default, empty.
- Returns
the list of disciplines
- is_all_inputs_existing(data_names)¶
Test if all the names in data_names are inputs of the discipline.
- Parameters
data_names – the names of the inputs
- Returns
True if data_names are all in input grammar
- Return type
bool
- is_all_outputs_existing(data_names)¶
Test if all the names in data_names are outputs of the discipline.
- Parameters
data_names – the names of the outputs
- Returns
True if data_names are all in output grammar
- Return type
bool
- is_input_existing(data_name)¶
Test if input named data_name is an input of the discipline.
- Parameters
data_name – the name of the input
- Returns
True if data_name is in input grammar
- Return type
bool
- is_output_existing(data_name)¶
Test if output named data_name is an output of the discipline.
- Parameters
data_name – the name of the output
- Returns
True if data_name is in output grammar
- Return type
bool
- static is_scenario()¶
Return True if self is a scenario.
- Returns
True if self is a scenario
- property linearization_mode¶
Accessor to the linearization mode.
- linearize(input_data=None, force_all=False, force_no_exec=False)¶
Execute the linearized version of the code.
- Parameters
input_data – the input data dict needed to execute the disciplines according to the discipline input grammar
force_all – if False, self._differentiated_inputs and self._differentiated_outputs are used to filter the differentiated variables; otherwise, all outputs are differentiated wrt all inputs (Default value = False)
force_no_exec – if True, the discipline is not re-executed; the cache is loaded anyway
- property n_calls¶
Return the number of calls to execute() which triggered the _run().
Multiprocessing safe.
- property n_calls_linearize¶
Return the number of calls to linearize() which triggered the _compute_jacobian() method.
Multiprocessing safe.
- notify_status_observers()¶
Notify all status observers that the status has changed.
- remove_status_observer(obs)¶
Remove an observer for the status.
- Parameters
obs – the observer to remove
- reset_statuses_for_run()¶
Sets all the statuses to PENDING.
- serialize(out_file)¶
Serialize the discipline.
- Parameters
out_file – destination file for serialization
- set_cache_policy(cache_type='SimpleCache', cache_tolerance=0.0, cache_hdf_file=None, cache_hdf_node_name=None, is_memory_shared=True)¶
Set the type of cache to use and the tolerance level.
This method sets the cache policy to cache data whose inputs are close to inputs whose outputs are already cached. The cache can be either a simple cache recording the last execution or a full cache storing all executions. Caching data can be either in-memory, e.g. SimpleCache and MemoryFullCache, or on the disk, e.g. HDF5Cache. CacheFactory.caches provides the list of available types of caches.
- Parameters
cache_type (str) – type of cache to use.
cache_tolerance (float) – tolerance for the approximate cache maximal relative norm difference to consider that two input arrays are equal
cache_hdf_file (str) – the file to store the data, mandatory when HDF caching is used
cache_hdf_node_name (str) – name of the HDF dataset to store the discipline data. If None, self.name is used
is_memory_shared (bool) – If True, a shared memory dict is used to store the data, which makes the cache compatible with multiprocessing. WARNING: if set to False, and multiple disciplines point to the same cache or the process is multiprocessed, there may be duplicate computations because the cache will not be shared among the processes.
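The cache_tolerance test can be sketched as a relative-norm comparison between two input vectors (an illustrative reading of the description above, not the GEMSEO source):

```python
import math

def inputs_match(a, b, cache_tolerance):
    """Consider two input arrays equal when the relative norm of their
    difference is below the cache tolerance."""
    diff = math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    ref = math.sqrt(sum(ai ** 2 for ai in a)) or 1.0  # avoid dividing by 0
    return diff / ref <= cache_tolerance

hit = inputs_match([1.0, 2.0], [1.0, 2.0 + 1e-9], 1e-6)  # close enough: cache hit
miss = inputs_match([1.0, 2.0], [1.0, 2.1], 1e-6)        # too far: recompute
```

With a nonzero tolerance, re-runs with inputs that differ only by numerical noise reuse the cached outputs instead of re-executing the discipline.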
- set_disciplines_statuses(status)¶
Set the sub disciplines statuses.
To be implemented in subclasses.
- Parameters
status – the status
- set_jacobian_approximation(jac_approx_type='finite_differences', jax_approx_step=1e-07, jac_approx_n_processes=1, jac_approx_use_threading=False, jac_approx_wait_time=0)¶
Set the jacobian approximation method.
Sets the linearization mode to the given approximation method and sets the parameters of the approximation for further use when calling self.linearize().
- Parameters
jac_approx_type – “complex_step” or “finite_differences”
jax_approx_step – the step for finite differences or complex step
jac_approx_n_processes – maximum number of processors on which to run
jac_approx_use_threading – if True, use threads instead of processes to parallelize the execution; multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. If you want to execute the same discipline multiple times, you shall use multiprocessing
jac_approx_wait_time – time waited between two forks of the process/thread
- set_optimal_fd_step(outputs=None, inputs=None, force_all=False, print_errors=False, numerical_error=2.220446049250313e-16)¶
Compute the optimal finite-difference step.
Compute the optimal step for a forward first-order finite-difference gradient approximation. Requires a first evaluation of the perturbed functions values. The optimal step is reached when the truncation error (from cutting the Taylor expansion) and the numerical cancellation error (roundoff when computing f(x+step)-f(x)) are approximately equal.
Warning: this calls the discipline execution two times per input variable.
See https://en.wikipedia.org/wiki/Numerical_differentiation and “Numerical Algorithms and Digital Representation”, Knut Morken, Chapter 11, “Numerical Differentiation”.
- Parameters
inputs – inputs wrt which the linearization is made; if None, use the differentiated inputs
outputs – outputs for which the linearization is made; if None, use the differentiated outputs
force_all – if True, all inputs and outputs are used
print_errors – if True, displays the estimated errors
numerical_error – numerical error associated to the calculation of f. By default Machine epsilon (appx 1e-16), but can be higher when the calculation of f requires a numerical resolution
- Returns
the estimated truncation and cancellation errors
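The error balance described above can be made concrete: for a forward first-order difference, the truncation error grows like step * |f''| / 2 while the cancellation error decays like 2 * eps * |f| / step; equating the two gives step = 2 * sqrt(eps * |f| / |f''|). A self-contained sketch (illustrative, not the GEMSEO implementation):

```python
import math

EPS = 2.220446049250313e-16  # machine epsilon, as in the signature above

def optimal_fd_step(f_value, f_second, numerical_error=EPS):
    """Step where truncation and cancellation errors balance."""
    return 2.0 * math.sqrt(numerical_error * abs(f_value) / abs(f_second))

def forward_fd(f, x, h):
    """Forward first-order finite difference."""
    return (f(x + h) - f(x)) / h

# For f(x) = exp(x) at x = 0: f = f'' = 1, so the optimal step is
# about 2 * sqrt(eps) ~ 3e-8.
step = optimal_fd_step(1.0, 1.0)
err_opt = abs(forward_fd(math.exp, 0.0, step) - 1.0)
err_naive = abs(forward_fd(math.exp, 0.0, 1e-12) - 1.0)  # step far too small
```

With the far-too-small step, cancellation dominates and the derivative loses several digits, which is exactly what the optimal-step choice avoids.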
- property status¶
Status accessor.
- store_local_data(**kwargs)¶
Store discipline data in local data.
- Parameters
kwargs – the data as key value pairs
- time_stamps = None¶
- class gemseo.core.mdo_scenario.MDOScenario(disciplines, formulation, objective_name, design_space, name=None, **formulation_options)[source]¶
Bases:
gemseo.core.scenario.Scenario
Multidisciplinary Design Optimization Scenario, the main user interface; creates an optimization problem and solves it with an optimizer.
The main difference between Scenario and MDOScenario is the set of allowed inputs in MDOScenario.json, which differs from DOEScenario.json, at least in the driver names.
MDO Problem description: links the disciplines and the formulation to create an optimization problem. Use the class by instantiation.
Create your disciplines beforehand.
Specify the formulation by giving the class name such as the string “MDF”
The reference_input_data is the typical input data dict that is provided to the run method of the disciplines
Specify the objective function name, which must be an output of a discipline of the scenario, with the “objective_name” attribute
If you want to add additional design constraints, use the add_user_defined_constraint method
To view the results, use the “post_process” method after execution. You can view:
the design variables history, the objective value, the constraints, by using: scenario.post_process(“OptHistoryView”, show=False, save=True)
Quadratic approximations of the functions close to the optimum, when using gradient based algorithms, by using: scenario.post_process(“QuadApprox”, method=”SR1”, show=False, save=True, function=”my_objective_name”, file_path=”appl_dir”)
Self Organizing Maps of the design space, by using: scenario.post_process(“SOM”, save=True, file_path=”appl_dir”)
To list the post-processings available for your setup, use scenario.posts. For more details on their options, see the “gemseo.post” package.
Constructor; initializes the MDO scenario. Object instantiation and checks are intentionally made before the run.
- Parameters
disciplines – the disciplines of the scenario
formulation – the formulation name, the class name of the formulation in gemseo.formulations
objective_name – the objective function name
design_space – the design space
name – scenario name
formulation_options – options for creation of the formulation
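Conceptually, the scenario chains the disciplines through the formulation into a single objective over the design space and hands it to an optimizer. A plain-Python sketch of that assembly (not GEMSEO code; a crude grid search stands in for the gradient-based drivers a real scenario would use):

```python
def discipline_1(x):
    # First discipline: produces an intermediate coupling variable.
    return {"y1": x ** 2}

def discipline_2(x, y1):
    # Second discipline: produces the output used as the objective.
    return {"obj": y1 - 4.0 * x + 5.0}

def objective(x):
    # The "formulation" chains the disciplines into one function.
    y1 = discipline_1(x)["y1"]
    return discipline_2(x, y1)["obj"]

# Design space [-10, 10] discretized; the "driver" picks the minimizer.
design_space = [i / 100.0 for i in range(-1000, 1001)]
x_opt = min(design_space, key=objective)
f_opt = objective(x_opt)  # x^2 - 4x + 5 has its minimum 1.0 at x = 2.0
```

In GEMSEO the same roles are played by MDODiscipline objects, a formulation such as “MDF”, a DesignSpace, and the optimization driver passed at execution.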
Attributes:
cache_tol: Accessor to the cache input tolerance.
default_inputs: Accessor to the default inputs.
design_space: Proxy for formulation.design_space.
exec_time: Return the cumulated execution time.
linearization_mode: Accessor to the linearization mode.
n_calls: Return the number of calls to execute() which triggered the _run() method.
n_calls_linearize: Return the number of calls to linearize() which triggered the _compute_jacobian() method.
posts: List the available post-processings.
status: Status accessor.
Methods:
activate_time_stamps(): Activate the time stamps.
add_constraint(output_name[, …]): Add a user constraint, i.e. a design constraint in addition to formulation-specific constraints such as targets in IDF.
add_differentiated_inputs([inputs]): Add inputs to the differentiation list.
add_differentiated_outputs([outputs]): Add outputs to the differentiation list.
add_observable(output_names[, …]): Add an observable to the optimization problem.
add_status_observer(obs): Add an observer for the status.
auto_get_grammar_file([is_input, name, comp_dir]): Use a naming convention to associate a grammar file to a discipline.
check_input_data(input_data[, raise_exception]): Check the input data validity.
check_jacobian([input_data, derr_approx, …]): Check if the Jacobian provided by the linearize() method is correct.
check_output_data([raise_exception]): Check the output data validity.
deactivate_time_stamps(): Deactivate the time stamps for storing start and end times of execution and linearizations.
deserialize(in_file): Deserialize the discipline from a file.
execute([input_data]): Execute the discipline.
get_all_inputs(): Accessor for the input data as a list of values.
get_all_outputs(): Accessor for the output data as a list of values.
get_attributes_to_serialize(): Define the attributes to be serialized.
get_available_driver_names(): Return the list of available drivers.
get_data_list_from_dict(keys, data_dict): Filter the dict from a list of keys or a single key.
get_disciplines_statuses(): Retrieve the disciplines statuses.
get_expected_dataflow(): Overridden method from the MDODiscipline base class, delegated to the formulation object.
get_expected_workflow(): Overridden method from the MDODiscipline base class, delegated to the formulation object.
get_input_data(): Accessor for the input data as a dict of values.
get_input_data_names(): Accessor for the input names as a list.
get_input_output_data_names(): Accessor for the input and output names as a list.
get_inputs_asarray(): Accessor for the inputs as a large numpy array.
get_inputs_by_name(data_names): Accessor for the inputs as a list.
get_local_data_by_name(data_names): Accessor for the local data of the discipline as a dict of values.
get_optim_variables_names(): A convenience function to access the formulation design variables names.
get_optimum(): Return the optimization results.
get_output_data(): Accessor for the output data as a dict of values.
get_output_data_names(): Accessor for the output names as a list.
get_outputs_asarray(): Accessor for the outputs as a large numpy array.
get_outputs_by_name(data_names): Accessor for the outputs as a list.
get_sub_disciplines(): Get the sub-disciplines of self; empty by default.
is_all_inputs_existing(data_names): Test if all the names in data_names are inputs of the discipline.
is_all_outputs_existing(data_names): Test if all the names in data_names are outputs of the discipline.
is_input_existing(data_name): Test if input named data_name is an input of the discipline.
is_output_existing(data_name): Test if output named data_name is an output of the discipline.
is_scenario(): Return True if self is a scenario.
linearize([input_data, force_all, force_no_exec]): Execute the linearized version of the code.
notify_status_observers(): Notify all status observers that the status has changed.
post_process(post_name, **options): Find the appropriate library and execute the post-processing on the problem.
print_execution_metrics(): Print the total number of executions and the cumulated runtime by discipline.
remove_status_observer(obs): Remove an observer for the status.
reset_statuses_for_run(): Set all the statuses to PENDING.
save_optimization_history(file_path[, …]): Save the optimization history of the scenario to a file.
serialize(out_file): Serialize the discipline.
set_cache_policy([cache_type, …]): Set the type of cache to use and the tolerance level.
set_differentiation_method([method, step]): Set the differentiation method for the process.
set_disciplines_statuses(status): Set the sub-disciplines statuses.
set_jacobian_approximation([jac_approx_type, …]): Set the Jacobian approximation method.
set_optimal_fd_step([outputs, inputs, …]): Compute the optimal finite-difference step.
set_optimization_history_backup(file_path[, …]): Set the backup file for the optimization history during the run.
store_local_data(**kwargs): Store discipline data in local data.
xdsmize([monitor, outdir, print_statuses, …]): Create an xdsm.json file from the current scenario.
- ALGO = 'algo'¶
- ALGO_OPTIONS = 'algo_options'¶
- APPROX_MODES = ['finite_differences', 'complex_step']¶
- AVAILABLE_MODES = ('auto', 'direct', 'adjoint', 'reverse', 'finite_differences', 'complex_step')¶
- COMPLEX_STEP = 'complex_step'¶
- FINITE_DIFFERENCES = 'finite_differences'¶
- HDF5_CACHE = 'HDF5Cache'¶
- JSON_GRAMMAR_TYPE = 'JSON'¶
- L_BOUNDS = 'l_bounds'¶
- MAX_ITER = 'max_iter'¶
- MEMORY_FULL_CACHE = 'MemoryFullCache'¶
- N_CPUS = 2¶
- RE_EXECUTE_DONE_POLICY = 'RE_EXEC_DONE'¶
- RE_EXECUTE_NEVER_POLICY = 'RE_EXEC_NEVER'¶
- SIMPLE_CACHE = 'SimpleCache'¶
- SIMPLE_GRAMMAR_TYPE = 'Simple'¶
- STATUS_DONE = 'DONE'¶
- STATUS_FAILED = 'FAILED'¶
- STATUS_PENDING = 'PENDING'¶
- STATUS_RUNNING = 'RUNNING'¶
- STATUS_VIRTUAL = 'VIRTUAL'¶
- U_BOUNDS = 'u_bounds'¶
- X_0 = 'x_0'¶
- X_OPT = 'x_opt'¶
- classmethod activate_time_stamps()¶
Activate the time stamps.
For storing start and end times of execution and linearizations.
- add_constraint(output_name, constraint_type='eq', constraint_name=None, value=None, positive=False, **kwargs)¶
Add a user constraint, i.e. a design constraint in addition to formulation specific constraints such as targets in IDF. The strategy of repartition of constraints is defined in the formulation class.
- Parameters
output_name – the output name to be used as constraint; for instance, if g_1 is given and constraint_type="eq", g_1=0 will be added as a constraint to the optimizer. If a list is given, a single discipline must provide all outputs
constraint_type – the type of constraint, "eq" for equality, "ineq" for inequality constraint (Default value = MDOFunction.TYPE_EQ)
constraint_name – name of the constraint to be stored; if None, generated from the output name (Default value = None)
value – the value of the constraint right-hand side; if None, 0 is used (Default value = None)
positive – if True, the inequality constraint is expressed as "greater than or equal to" (Default value = False)
- Returns
the constraint function as an MDOFunction
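One plausible reading of the eq/ineq/positive/value semantics, as a plain-Python sketch (illustrative only; the exact sign and offset conventions in GEMSEO may differ):

```python
def make_constraint(g, constraint_type="eq", value=None, positive=False):
    """Turn an output function g into a standard-form constraint value:
    feasible when 0 for "eq", when <= 0 for "ineq". `value` shifts the
    output; `positive` flips the sign so that g >= value is feasible."""
    def cstr(x):
        out = g(x)
        if value is not None:
            out = out - value
        if positive:
            out = -out
        return out
    cstr.constraint_type = constraint_type
    return cstr

g_1 = lambda x: x - 3.0
eq = make_constraint(g_1, "eq")                      # enforces g_1(x) = 0
ineq = make_constraint(g_1, "ineq", positive=True)   # enforces g_1(x) >= 0
```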
- add_differentiated_inputs(inputs=None)¶
Add inputs to the differentiation list.
This method updates self._differentiated_inputs with inputs.
- Parameters
inputs – list of input variables to differentiate; if None, all inputs of the discipline are used (Default value = None)
- add_differentiated_outputs(outputs=None)¶
Add outputs to the differentiation list.
This method updates self._differentiated_outputs with outputs.
- Parameters
outputs – list of output variables to differentiate; if None, all outputs of the discipline are used
- add_observable(output_names, observable_name=None, discipline=None)¶
Add an observable to the optimization problem. The repartition strategy of the observable is defined in the formulation class. When more than one output name is provided, the observable function returns a concatenated array of the output values.
- Parameters
output_names – names of the outputs to observe
observable_name (str) – name of the observable, optional. If None, the output name is used by default.
discipline (MDODiscipline) – if None, detected from inner disciplines, otherwise the discipline used to build the function (Default value = None)
- add_status_observer(obs)¶
Add an observer for the status.
The observer is notified when the status of self changes.
- Parameters
obs – the observer to add
- auto_get_grammar_file(is_input=True, name=None, comp_dir=None)¶
Use a naming convention to associate a grammar file to a discipline.
This method searches the “comp_dir” directory containing the discipline source file for files whose basenames are self.name + “_input.json” and self.name + “_output.json”.
- Parameters
is_input – if True, search for _input.json, otherwise _output.json (Default value = True)
name – the name of the discipline (Default value = None)
comp_dir – the containing directory; if None, use self.comp_dir (Default value = None)
- Returns
path to the grammar file
- Return type
string
- property cache_tol¶
Accessor to the cache input tolerance.
- check_input_data(input_data, raise_exception=True)¶
Check the input data validity.
- Parameters
input_data – the input data dict
raise_exception – if True, an exception is raised when the data is invalid (Default value = True)
- check_jacobian(input_data=None, derr_approx='finite_differences', step=1e-07, threshold=1e-08, linearization_mode='auto', inputs=None, outputs=None, parallel=False, n_processes=2, use_threading=False, wait_time_between_fork=0, auto_set_step=False, plot_result=False, file_path='jacobian_errors.pdf', show=False, figsize_x=10, figsize_y=10)¶
Check if the jacobian provided by the linearize() method is correct.
- Parameters
input_data – input data dict (Default value = None)
derr_approx – derivative approximation method: COMPLEX_STEP or FINITE_DIFFERENCES (Default value = FINITE_DIFFERENCES)
threshold – acceptance threshold for the jacobian error (Default value = 1e-8)
linearization_mode – the mode of linearization: direct, adjoint or automated switch depending on dimensions of inputs and outputs (Default value = ‘auto’)
inputs – list of inputs wrt which to differentiate (Default value = None)
outputs – list of outputs to differentiate (Default value = None)
step – the step for finite differences or complex step
parallel – if True, executes in parallel
n_processes – maximum number of processors on which to run
use_threading – if True, use threads instead of processes to parallelize the execution; multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. If you want to execute the same discipline multiple times, you shall use multiprocessing
wait_time_between_fork – time waited between two forks of the process/thread
auto_set_step – if True, compute the optimal step for a forward first-order finite-difference gradient approximation
plot_result – plot the result of the validation (computed and approximate jacobians)
file_path – path to the output file if plot_result is True
show – if True, open the figure
figsize_x – x size of the figure in inches
figsize_y – y size of the figure in inches
- Returns
True if the check is accepted, False otherwise
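The validation performed by check_jacobian() can be illustrated with a standalone sketch (plain NumPy, not gemseo code): approximate the Jacobian by forward finite differences and compare it with the analytic one against a threshold. The test function, threshold and step below are illustrative assumptions.

```python
import numpy as np

def fd_jacobian(func, x, step=1e-7):
    # Forward finite-difference approximation of the Jacobian of func at x.
    f0 = func(x)
    jac = np.empty((f0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += step
        jac[:, j] = (func(xp) - f0) / step
    return jac

def check_jacobian(func, analytic_jac, x, threshold=1e-5, step=1e-7):
    # Accept if the relative error between the analytic and the
    # approximate Jacobians is below the threshold.
    approx = fd_jacobian(func, x, step)
    error = np.linalg.norm(analytic_jac(x) - approx) / np.linalg.norm(approx)
    return error < threshold

f = lambda x: np.array([x[0] ** 2 + x[1], 3.0 * x[1]])
jac = lambda x: np.array([[2.0 * x[0], 1.0], [0.0, 3.0]])
ok = check_jacobian(f, jac, np.array([1.0, 2.0]))
```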
- check_output_data(raise_exception=True)¶
Check the output data validity.
- Parameters
raise_exception – if True, an exception is raised when data is invalid (Default value = True)
- classmethod deactivate_time_stamps()¶
Deactivate the time stamps for storing start and end times of execution and linearizations.
- property default_inputs¶
Accessor to the default inputs.
- static deserialize(in_file)¶
Deserialize the discipline from a file.
- Parameters
in_file – input file for serialization
- Returns
a discipline instance
- property design_space¶
Proxy for formulation.design_space.
- Returns
the design space
- property exec_time¶
Return the cumulated execution time.
Multiprocessing safe.
- execute(input_data=None)¶
Execute the discipline.
This method executes the discipline:
- Adds default inputs to input_data when some inputs are not defined in input_data but exist in self._default_data
- Checks whether the last execution of the discipline was called with identical inputs, cached in self.cache; if so, directly returns self.cache.get_output_cache(inputs)
- Caches the inputs
- Checks the input data against self.input_grammar
- If self.data_processor is not None, runs the preprocessor
- Updates the status to RUNNING
- Calls the _run() method, which shall be defined
- If self.data_processor is not None, runs the postprocessor
- Checks the output data
- Caches the outputs
- Updates the status to DONE or FAILED
- Updates the cumulated execution time
- Parameters
input_data (dict) – the input data dict needed to execute the disciplines according to the discipline input grammar (Default value = None)
- Returns
the discipline local data after execution
- Return type
dict
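The execution flow described above (default-input merge, cache lookup, run, output caching) can be sketched outside gemseo with a minimal stand-in class; TinyDiscipline and its _run() body are hypothetical, not gemseo API.

```python
class TinyDiscipline:
    def __init__(self, default_inputs):
        self.default_inputs = dict(default_inputs)
        self.cache = {}   # simple cache: frozen inputs -> outputs
        self.n_calls = 0  # counts only the calls that reached _run()

    def _run(self, data):
        # Placeholder computation standing in for the real _run().
        return {"y": data["x"] + data["z"]}

    def execute(self, input_data=None):
        # Merge default inputs with the user-provided input_data.
        data = dict(self.default_inputs)
        data.update(input_data or {})
        key = tuple(sorted(data.items()))
        if key in self.cache:        # identical inputs: return cached outputs
            return self.cache[key]
        self.n_calls += 1
        outputs = self._run(data)
        self.cache[key] = outputs    # cache the outputs
        return outputs

disc = TinyDiscipline({"x": 1.0, "z": 2.0})
out1 = disc.execute()              # uses the defaults
out2 = disc.execute({"x": 1.0})    # same effective inputs: cache hit, no _run()
```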
- get_all_inputs()¶
Accessor for the input data as a list of values.
The order is given by self.get_input_data_names().
- Returns
the data
- get_all_outputs()¶
Accessor for the output data as a list of values.
The order is given by self.get_output_data_names().
- Returns
the data
- get_attributes_to_serialize()¶
Define the attributes to be serialized.
Shall be overloaded by disciplines
- Returns
the list of attributes names
- Return type
list
- get_available_driver_names()¶
Returns the list of available drivers.
- static get_data_list_from_dict(keys, data_dict)¶
Filter the dict from a list of keys or a single key.
If keys is a string, the method returns the value associated with the key. If keys is a list of strings, the method returns a generator over the corresponding values, which can be iterated.
- Parameters
keys – a string key or a list of keys
data_dict – the dict to get the data from
- Returns
a data or a generator of data
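A minimal re-implementation of the behavior described above, for illustration only (the names mirror the documented method, not its internals):

```python
def get_data_list_from_dict(keys, data_dict):
    # A string key returns the value; a list of keys returns a generator.
    if isinstance(keys, str):
        return data_dict[keys]
    return (data_dict[k] for k in keys)

d = {"a": 1, "b": 2, "c": 3}
single = get_data_list_from_dict("a", d)          # the value itself
many = list(get_data_list_from_dict(["b", "c"], d))  # iterate the generator
```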
- get_disciplines_statuses()¶
Retrieves the disciplines statuses.
- Returns
the statuses dict, key: discipline name, value: status
- get_expected_dataflow()¶
Overridden method from the MDODiscipline base class, delegated to the formulation object.
- get_expected_workflow()¶
Overridden method from the MDODiscipline base class, delegated to the formulation object.
- get_input_data()¶
Accessor for the input data as a dict of values.
- Returns
the data dict
- get_input_data_names()¶
Accessor for the input names as a list.
- Returns
the data names list
- get_input_output_data_names()¶
Accessor for the input and output names as a list.
- Returns
the data names list
- get_inputs_asarray()¶
Accessor for the inputs as a large numpy array.
The order is the one of self.get_all_inputs().
- Returns
the inputs array
- Return type
ndarray
- get_inputs_by_name(data_names)¶
Accessor for the inputs as a list.
- Parameters
data_names – the data names list
- Returns
the data list
- get_local_data_by_name(data_names)¶
Accessor for the local data of the discipline as a dict of values.
- Parameters
data_names – the names of the data which will be the keys of the dictionary
- Returns
the data dict
- get_optim_variables_names()¶
A convenience function to access formulation design variables names.
- Returns
the decision variables of the scenario
- Return type
list(str)
- get_optimum()¶
Return the optimization results.
- Returns
Optimal solution found by the scenario if executed, None otherwise
- Return type
- get_output_data()¶
Accessor for the output data as a dict of values.
- Returns
the data dict
- get_output_data_names()¶
Accessor for the output names as a list.
- Returns
the data names list
- get_outputs_asarray()¶
Accessor for the outputs as a large numpy array.
The order is the one of self.get_all_outputs()
- Returns
the outputs array
- Return type
ndarray
- get_outputs_by_name(data_names)¶
Accessor for the outputs as a list.
- Parameters
data_names – the data names list
- Returns
the data list
- get_sub_disciplines()¶
Get the sub-disciplines of self. By default, empty.
- Returns
the list of disciplines
- is_all_inputs_existing(data_names)¶
Test if all the names in data_names are inputs of the discipline.
- Parameters
data_names – the names of the inputs
- Returns
True if data_names are all in input grammar
- Return type
bool
- is_all_outputs_existing(data_names)¶
Test if all the names in data_names are outputs of the discipline.
- Parameters
data_names – the names of the outputs
- Returns
True if data_names are all in output grammar
- Return type
bool
- is_input_existing(data_name)¶
Test if input named data_name is an input of the discipline.
- Parameters
data_name – the name of the input
- Returns
True if data_name is in input grammar
- Return type
bool
- is_output_existing(data_name)¶
Test if output named data_name is an output of the discipline.
- Parameters
data_name – the name of the output
- Returns
True if data_name is in output grammar
- Return type
bool
- static is_scenario()¶
Returns True if self is a scenario.
- Returns
True if self is a scenario
- property linearization_mode¶
Accessor to the linearization mode.
- linearize(input_data=None, force_all=False, force_no_exec=False)¶
Execute the linearized version of the code.
- Parameters
input_data – the input data dict needed to execute the disciplines according to the discipline input grammar
force_all – if False, self._differentiated_inputs and self._differentiated_outputs are used to filter the differentiated variables; otherwise, all outputs are differentiated wrt all inputs (Default value = False)
force_no_exec – if True, the discipline is not re-executed; the cache is loaded anyway
- property n_calls¶
Return the number of calls to execute() which triggered the _run().
Multiprocessing safe.
- property n_calls_linearize¶
Return the number of calls to linearize() which triggered the _compute_jacobian() method.
Multiprocessing safe.
- notify_status_observers()¶
Notify all status observers that the status has changed.
- post_process(post_name, **options)¶
Finds the appropriate library and executes the post processing on the problem.
- Parameters
post_name – the post processing name
options – options for the post method, see its package
- property posts¶
Lists the available post processings.
- Returns
the list of methods
- print_execution_metrics()¶
Prints total number of executions and cumulated runtime by discipline.
- remove_status_observer(obs)¶
Remove an observer for the status.
- Parameters
obs – the observer to remove
- reset_statuses_for_run()¶
Sets all the statuses to PENDING.
- save_optimization_history(file_path, file_format='hdf5', append=False)¶
Saves the optimization history of the scenario to a file.
- Parameters
file_path – The path to the file to save the history
file_format – The format of the file, either “hdf5” or “ggobi” (Default value = “hdf5”)
append – if True, data is appended to the file if not empty (Default value = False)
- serialize(out_file)¶
Serialize the discipline.
- Parameters
out_file – destination file for serialization
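A hedged sketch of the serialize()/deserialize() round trip using pickle. Note that gemseo restricts serialization to the attributes returned by get_attributes_to_serialize(); this stand-in class simply pickles the whole object.

```python
import os
import pickle
import tempfile

class TinyDiscipline:
    # Hypothetical stand-in object, not gemseo API.
    def __init__(self, name, default_inputs):
        self.name = name
        self.default_inputs = default_inputs

    def serialize(self, out_file):
        # Write the object state to the destination file.
        with open(out_file, "wb") as stream:
            pickle.dump(self, stream)

    @staticmethod
    def deserialize(in_file):
        # Rebuild a discipline instance from the file.
        with open(in_file, "rb") as stream:
            return pickle.load(stream)

path = os.path.join(tempfile.mkdtemp(), "disc.pkl")
TinyDiscipline("sellar", {"x": 1.0}).serialize(path)
restored = TinyDiscipline.deserialize(path)
```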
- set_cache_policy(cache_type='SimpleCache', cache_tolerance=0.0, cache_hdf_file=None, cache_hdf_node_name=None, is_memory_shared=True)¶
Set the type of cache to use and the tolerance level.
This method sets the cache policy to cache data whose inputs are close to inputs whose outputs are already cached. The cache can be either a simple cache recording the last execution or a full cache storing all executions. Caching can be either in-memory, e.g. SimpleCache and MemoryFullCache, or on disk, e.g. HDF5Cache. CacheFactory.caches provides the list of available cache types.
- Parameters
cache_type (str) – type of cache to use.
cache_tolerance (float) – tolerance for the approximate cache maximal relative norm difference to consider that two input arrays are equal
cache_hdf_file (str) – the file to store the data, mandatory when HDF caching is used
cache_hdf_node_name (str) – name of the HDF dataset to store the discipline data. If None, self.name is used
is_memory_shared (bool) – If True, a shared memory dict is used to store the data, which makes the cache compatible with multiprocessing. WARNING: if set to False, and multiple disciplines point to the same cache or the process is multiprocessed, there may be duplicate computations because the cache will not be shared among the processes.
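The tolerance-based matching described for cache_tolerance can be sketched independently of gemseo: a cached entry is reused when the relative norm difference between the new inputs and a cached input array is within the tolerance. The ToleranceCache class below is illustrative, not the gemseo implementation.

```python
import numpy as np

class ToleranceCache:
    """Approximate cache sketch: inputs within a relative norm
    difference of `tol` from a cached entry reuse its outputs."""

    def __init__(self, tol=0.0):
        self.tol = tol
        self._entries = []  # list of (inputs, outputs) pairs

    def get(self, inputs):
        for cached_in, cached_out in self._entries:
            denom = np.linalg.norm(cached_in) or 1.0
            if np.linalg.norm(inputs - cached_in) / denom <= self.tol:
                return cached_out  # close enough: cache hit
        return None

    def put(self, inputs, outputs):
        self._entries.append((inputs.copy(), outputs))

cache = ToleranceCache(tol=1e-6)
cache.put(np.array([1.0, 2.0]), {"y": 3.0})
hit = cache.get(np.array([1.0, 2.0 + 1e-8]))  # within tolerance: hit
miss = cache.get(np.array([1.0, 2.5]))        # too far: miss
```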
- set_differentiation_method(method='user', step=1e-06)¶
Sets the differentiation method for the process.
- Parameters
method – the method to use: either “user”, “finite_differences”, “complex_step” or “no_derivatives”, which is equivalent to None (Default value = “user”)
step – the step for finite-difference or complex-step differentiation (Default value = 1e-6)
- set_disciplines_statuses(status)¶
Set the sub disciplines statuses.
To be implemented in subclasses.
- Parameters
status – the status
- set_jacobian_approximation(jac_approx_type='finite_differences', jax_approx_step=1e-07, jac_approx_n_processes=1, jac_approx_use_threading=False, jac_approx_wait_time=0)¶
Set the jacobian approximation method.
Sets the linearization mode to approx_method, sets the parameters of the approximation for further use when calling self.linearize
- Parameters
jac_approx_type – “complex_step” or “finite_differences”
jax_approx_step – the step for finite differences or complex step
jac_approx_n_processes – maximum number of processors on which to run
jac_approx_use_threading – if True, use threads instead of processes to parallelize the execution. Multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. If you want to execute the same discipline multiple times, use multiprocessing.
jac_approx_wait_time – time waited between two forks of the process /Thread
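The two approximation modes, “finite_differences” and “complex_step”, differ mainly in their sensitivity to the step size: the complex-step formula has no subtractive cancellation, so the step can be made extremely small. A standalone sketch of both first-order formulas (not gemseo code):

```python
def complex_step_derivative(func, x, step=1e-30):
    # Complex step: f'(x) ~= Im(f(x + i*h)) / h. No subtraction of nearly
    # equal values occurs, so h can be tiny without loss of precision.
    return (func(x + 1j * step)).imag / step

def forward_fd_derivative(func, x, step=1e-7):
    # Forward finite difference, for comparison; suffers from cancellation.
    return (func(x + step) - func(x)) / step

f = lambda x: x ** 3               # exact derivative at 2.0 is 12.0
d_cs = complex_step_derivative(f, 2.0)
d_fd = forward_fd_derivative(f, 2.0)
```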
- set_optimal_fd_step(outputs=None, inputs=None, force_all=False, print_errors=False, numerical_error=2.220446049250313e-16)¶
Compute the optimal finite-difference step.
Compute the optimal step for a forward first order finite differences gradient approximation. Requires a first evaluation of perturbed functions values. The optimal step is reached when the truncation error (cut in the Taylor development), and the numerical cancellation errors (roundoff when doing f(x+step)-f(x)) are approximately equal.
Warning: this calls the discipline execution twice per input variable.
See https://en.wikipedia.org/wiki/Numerical_differentiation and “Numerical Algorithms and Digital Representation”, Knut Morken, Chapter 11, “Numerical Differentiation”
- Parameters
inputs – inputs with respect to which the linearization is made; if None, use the differentiated inputs
outputs – outputs for which the linearization is made; if None, use the differentiated outputs
force_all – if True, all inputs and outputs are used
print_errors – if True, displays the estimated errors
numerical_error – numerical error associated to the calculation of f. By default Machine epsilon (appx 1e-16), but can be higher when the calculation of f requires a numerical resolution
- Returns
the estimated truncation and cancellation errors.
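The balance described above can be made concrete: for a forward difference, the truncation error grows like (h/2)·|f''| while the cancellation error decays like 2·eps·|f|/h, and equating the two gives h* = 2·sqrt(eps·|f|/|f''|). A standalone sketch (the second derivative is itself estimated by a central difference; the trial step is an assumption):

```python
import math

def optimal_fd_step(func, x, numerical_error=2.220446049250313e-16, trial=1e-4):
    # Estimate f'' with a central second difference, then balance the
    # truncation error (h/2)*|f''| against the cancellation error
    # 2*eps*|f|/h, which gives h* = 2*sqrt(eps*|f| / |f''|).
    f0 = func(x)
    second = (func(x + trial) - 2.0 * f0 + func(x - trial)) / trial ** 2
    return 2.0 * math.sqrt(numerical_error * abs(f0) / abs(second))

# For exp, |f| == |f''|, so the optimal step reduces to 2*sqrt(eps) ~ 3e-8.
step = optimal_fd_step(math.exp, 1.0)
```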
- set_optimization_history_backup(file_path, each_new_iter=False, each_store=True, erase=False, pre_load=False, generate_opt_plot=False)¶
Sets the backup file for the optimization history during the run.
- Parameters
file_path – The path to the file to save the history
each_new_iter – if True, callback at every iteration
each_store – if True, callback at every call to store() in the database
erase – if True, the backup file is erased before the run
pre_load – if True, the backup file is loaded before run, useful after a crash
generate_opt_plot – generates the optimization history view at backup
- property status¶
Status accessor.
- store_local_data(**kwargs)¶
Store discipline data in local data.
- Parameters
kwargs – the data as key value pairs
- time_stamps = None¶
- xdsmize(monitor=False, outdir='.', print_statuses=False, outfilename='xdsm.html', latex_output=False, open_browser=False, html_output=True, json_output=False)¶
Creates an xdsm.json file from the current scenario. If monitor is set to True, the xdsm.json file is updated to reflect discipline status updates (hence the name monitor).
- Parameters
monitor (bool) – if True, updates the generated file at each discipline status change
outdir (str) – the directory where XDSM json file is generated
print_statuses (bool) – print the statuses in the console at each update
outfilename – file name of the output. The basename is used and the extension adapted for the HTML / JSON / PDF outputs
latex_output (bool) – build .tex, .tikz and .pdf file
open_browser – if True, opens the web browser with the XDSM
html_output – if True, outputs a self contained HTML file
json_output – if True, outputs a JSON file for XDSMjs
- class gemseo.core.mdo_scenario.MDOScenarioAdapter(scenario, inputs_list, outputs_list, reset_x0_before_opt=False, set_x0_before_opt=False, set_bounds_before_opt=False, cache_type='SimpleCache', output_multipliers=False)[source]¶
Bases:
gemseo.core.discipline.MDODiscipline
An adapter class for MDO Scenario:
The specified input variables update the default inputs of the top-level discipline, and the specified outputs are retrieved from the top-level discipline outputs.
Initialize the scenario adapter.
- Parameters
scenario (MDOScenario) – the scenario to adapt
inputs_list (list(str)) – list of inputs to overload at sub scenario execution
outputs_list (list(str)) – list of outputs to get from scenario execution
reset_x0_before_opt (bool) – before running the sub optimization, reset the initial guess
set_x0_before_opt (bool) – if True, sets the initial point of the sub scenario, useful for multi-start
set_bounds_before_opt (bool) – if True, sets the bounds of the design space, useful for trust regions
cache_type (str) – type of cache policy, SIMPLE_CACHE or HDF5_CACHE
output_multipliers (bool) – if True then the Lagrange multipliers of the scenario optimal solution are computed and added to the outputs
Attributes:
cache_tol Accessor to the cache input tolerance.
default_inputs Accessor to the default inputs.
exec_time Return the cumulated execution time.
linearization_mode Accessor to the linearization mode.
n_calls Return the number of calls to execute() which triggered the _run().
n_calls_linearize Return the number of calls to linearize() which triggered the _compute_jacobian() method.
status Status accessor.
Methods:
activate_time_stamps() Activate the time stamps.
add_differentiated_inputs([inputs]) Add inputs to the differentiation list.
add_differentiated_outputs([outputs]) Add outputs to the differentiation list.
add_outputs(outputs_names) Add outputs to the scenario adapter.
add_status_observer(obs) Add an observer for the status.
auto_get_grammar_file([is_input, name, comp_dir]) Use a naming convention to associate a grammar file to a discipline.
check_input_data(input_data[, raise_exception]) Check the input data validity.
check_jacobian([input_data, derr_approx, …]) Check if the jacobian provided by the linearize() method is correct.
check_output_data([raise_exception]) Check the output data validity.
deactivate_time_stamps() Deactivate the time stamps for storing start and end times of execution and linearizations.
deserialize(in_file) Deserialize the discipline from a file.
execute([input_data]) Execute the discipline.
get_all_inputs() Accessor for the input data as a list of values.
get_all_outputs() Accessor for the output data as a list of values.
get_attributes_to_serialize() Define the attributes to be serialized.
get_bnd_mult_name(variable_name, is_upper) Return the name of the bound-constraint multiplier of a variable.
get_cstr_mult_name(constraint_name) Return the name of the multiplier of a constraint.
get_data_list_from_dict(keys, data_dict) Filter the dict from a list of keys or a single key.
get_expected_dataflow() Return the expected data exchange sequence.
get_expected_workflow() Return the expected execution sequence.
get_input_data() Accessor for the input data as a dict of values.
get_input_data_names() Accessor for the input names as a list.
get_input_output_data_names() Accessor for the input and output names as a list.
get_inputs_asarray() Accessor for the inputs as a large numpy array.
get_inputs_by_name(data_names) Accessor for the inputs as a list.
get_local_data_by_name(data_names) Accessor for the local data of the discipline as a dict of values.
get_output_data() Accessor for the output data as a dict of values.
get_output_data_names() Accessor for the output names as a list.
get_outputs_asarray() Accessor for the outputs as a large numpy array.
get_outputs_by_name(data_names) Accessor for the outputs as a list.
get_sub_disciplines() Get the sub-disciplines of self. By default, empty.
is_all_inputs_existing(data_names) Test if all the names in data_names are inputs of the discipline.
is_all_outputs_existing(data_names) Test if all the names in data_names are outputs of the discipline.
is_input_existing(data_name) Test if input named data_name is an input of the discipline.
is_output_existing(data_name) Test if output named data_name is an output of the discipline.
is_scenario() Return True if self is a scenario.
linearize([input_data, force_all, force_no_exec]) Execute the linearized version of the code.
notify_status_observers() Notify all status observers that the status has changed.
remove_status_observer(obs) Remove an observer for the status.
reset_statuses_for_run() Set all the statuses to PENDING.
serialize(out_file) Serialize the discipline.
set_cache_policy([cache_type, …]) Set the type of cache to use and the tolerance level.
set_disciplines_statuses(status) Set the sub-disciplines statuses.
set_jacobian_approximation([jac_approx_type, …]) Set the jacobian approximation method.
set_optimal_fd_step([outputs, inputs, …]) Compute the optimal finite-difference step.
store_local_data(**kwargs) Store discipline data in local data.
- APPROX_MODES = ['finite_differences', 'complex_step']¶
- AVAILABLE_MODES = ('auto', 'direct', 'adjoint', 'reverse', 'finite_differences', 'complex_step')¶
- COMPLEX_STEP = 'complex_step'¶
- FINITE_DIFFERENCES = 'finite_differences'¶
- HDF5_CACHE = 'HDF5Cache'¶
- JSON_GRAMMAR_TYPE = 'JSON'¶
- LOWER_BND_SUFFIX = '_lower_bnd'¶
- MEMORY_FULL_CACHE = 'MemoryFullCache'¶
- MULTIPLIER_SUFFIX = '_multiplier'¶
- N_CPUS = 2¶
- RE_EXECUTE_DONE_POLICY = 'RE_EXEC_DONE'¶
- RE_EXECUTE_NEVER_POLICY = 'RE_EXEC_NEVER'¶
- SIMPLE_CACHE = 'SimpleCache'¶
- SIMPLE_GRAMMAR_TYPE = 'Simple'¶
- STATUS_DONE = 'DONE'¶
- STATUS_FAILED = 'FAILED'¶
- STATUS_PENDING = 'PENDING'¶
- STATUS_RUNNING = 'RUNNING'¶
- STATUS_VIRTUAL = 'VIRTUAL'¶
- UPPER_BND_SUFFIX = '_upper_bnd'¶
- classmethod activate_time_stamps()¶
Activate the time stamps.
For storing start and end times of execution and linearizations.
- add_differentiated_inputs(inputs=None)¶
Add inputs to the differentiation list.
This method updates self._differentiated_inputs with inputs
- Parameters
inputs – list of input variables to differentiate; if None, all inputs of the discipline are used (Default value = None)
- add_differentiated_outputs(outputs=None)¶
Add outputs to the differentiation list.
This method updates self._differentiated_outputs with outputs.
- Parameters
outputs – list of output variables to differentiate; if None, all outputs of the discipline are used (Default value = None)
- add_outputs(outputs_names)[source]¶
Add outputs to the scenario adapter.
- Parameters
outputs_names (list(str)) – names of the outputs to be added
- add_status_observer(obs)¶
Add an observer for the status.
Add an observer for the status, to be notified when the status of self changes.
- Parameters
obs – the observer to add
- auto_get_grammar_file(is_input=True, name=None, comp_dir=None)¶
Use a naming convention to associate a grammar file to a discipline.
This method searches the “comp_dir” directory (the directory containing the discipline source file) for files named self.name + “_input.json” and self.name + “_output.json”.
- Parameters
is_input – if True, searches for _input.json, otherwise _output.json (Default value = True)
name – the name of the discipline (Default value = None)
comp_dir – the containing directory; if None, use self.comp_dir (Default value = None)
- Returns
path to the grammar file
- Return type
string
- property cache_tol¶
Accessor to the cache input tolerance.
- check_input_data(input_data, raise_exception=True)¶
Check the input data validity.
- Parameters
input_data – the input data dict
raise_exception – if True, an exception is raised when data is invalid (Default value = True)
- check_jacobian(input_data=None, derr_approx='finite_differences', step=1e-07, threshold=1e-08, linearization_mode='auto', inputs=None, outputs=None, parallel=False, n_processes=2, use_threading=False, wait_time_between_fork=0, auto_set_step=False, plot_result=False, file_path='jacobian_errors.pdf', show=False, figsize_x=10, figsize_y=10)¶
Check if the jacobian provided by the linearize() method is correct.
- Parameters
input_data – input data dict (Default value = None)
derr_approx – derivative approximation method: COMPLEX_STEP or FINITE_DIFFERENCES (Default value = FINITE_DIFFERENCES)
threshold – acceptance threshold for the jacobian error (Default value = 1e-8)
linearization_mode – the mode of linearization: direct, adjoint or automated switch depending on dimensions of inputs and outputs (Default value = ‘auto’)
inputs – list of inputs wrt which to differentiate (Default value = None)
outputs – list of outputs to differentiate (Default value = None)
step – the step for finite differences or complex step
parallel – if True, executes in parallel
n_processes – maximum number of processors on which to run
use_threading – if True, use threads instead of processes to parallelize the execution. Multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. If you want to execute the same discipline multiple times, use multiprocessing.
wait_time_between_fork – time waited between two forks of the process /Thread
auto_set_step – Compute optimal step for a forward first order finite differences gradient approximation
plot_result – plot the result of the validation (computed and approximate jacobians)
file_path – path to the output file if plot_result is True
show – if True, open the figure
figsize_x – x size of the figure in inches
figsize_y – y size of the figure in inches
- Returns
True if the check is accepted, False otherwise
- check_output_data(raise_exception=True)¶
Check the output data validity.
- Parameters
raise_exception – if True, an exception is raised when data is invalid (Default value = True)
- classmethod deactivate_time_stamps()¶
Deactivate the time stamps for storing start and end times of execution and linearizations.
- property default_inputs¶
Accessor to the default inputs.
- static deserialize(in_file)¶
Deserialize the discipline from a file.
- Parameters
in_file – input file for serialization
- Returns
a discipline instance
- property exec_time¶
Return the cumulated execution time.
Multiprocessing safe.
- execute(input_data=None)¶
Execute the discipline.
This method executes the discipline:
- Adds default inputs to input_data when some inputs are not defined in input_data but exist in self._default_data
- Checks whether the last execution of the discipline was called with identical inputs, cached in self.cache; if so, directly returns self.cache.get_output_cache(inputs)
- Caches the inputs
- Checks the input data against self.input_grammar
- If self.data_processor is not None, runs the preprocessor
- Updates the status to RUNNING
- Calls the _run() method, which shall be defined
- If self.data_processor is not None, runs the postprocessor
- Checks the output data
- Caches the outputs
- Updates the status to DONE or FAILED
- Updates the cumulated execution time
- Parameters
input_data (dict) – the input data dict needed to execute the disciplines according to the discipline input grammar (Default value = None)
- Returns
the discipline local data after execution
- Return type
dict
- get_all_inputs()¶
Accessor for the input data as a list of values.
The order is given by self.get_input_data_names().
- Returns
the data
- get_all_outputs()¶
Accessor for the output data as a list of values.
The order is given by self.get_output_data_names().
- Returns
the data
- get_attributes_to_serialize()¶
Define the attributes to be serialized.
Shall be overloaded by disciplines
- Returns
the list of attributes names
- Return type
list
- static get_bnd_mult_name(variable_name, is_upper)[source]¶
Return the name of the bound-constraint multiplier of a variable.
- Parameters
variable_name (str) – name of the variable
is_upper (bool) – if True then return the upper bound-constraint multiplier name, otherwise return the lower bound-constraint multiplier
- Returns
name of the bound-constraint multiplier
- Return type
str
- static get_cstr_mult_name(constraint_name)[source]¶
Return the name of the multiplier of a constraint.
- Parameters
constraint_name (str) – name of the constraint
- Returns
name of the multiplier
- Return type
str
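Given the class constants LOWER_BND_SUFFIX, UPPER_BND_SUFFIX and MULTIPLIER_SUFFIX listed above, the multiplier names can plausibly be built by simple concatenation. The sketch below assumes that composition, which may differ from the actual gemseo implementation:

```python
# Suffixes taken from the MDOScenarioAdapter class constants.
LOWER_BND_SUFFIX = "_lower_bnd"
UPPER_BND_SUFFIX = "_upper_bnd"
MULTIPLIER_SUFFIX = "_multiplier"

def get_bnd_mult_name(variable_name, is_upper):
    # Bound-constraint multiplier name: variable + bound suffix + multiplier suffix
    # (assumed composition, not verified against the gemseo source).
    suffix = UPPER_BND_SUFFIX if is_upper else LOWER_BND_SUFFIX
    return variable_name + suffix + MULTIPLIER_SUFFIX

def get_cstr_mult_name(constraint_name):
    # Constraint multiplier name: constraint + multiplier suffix (assumed).
    return constraint_name + MULTIPLIER_SUFFIX

lower = get_bnd_mult_name("x", is_upper=False)
cstr = get_cstr_mult_name("g1")
```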
- static get_data_list_from_dict(keys, data_dict)¶
Filter the dict from a list of keys or a single key.
If keys is a string, the method returns the value associated with the key. If keys is a list of strings, the method returns a generator over the corresponding values, which can be iterated.
- Parameters
keys – a string key or a list of keys
data_dict – the dict to get the data from
- Returns
a data or a generator of data
- get_expected_dataflow()[source]¶
Return the expected data exchange sequence.
This method is used for the XDSM representation.
Defaults to an empty list; see MDOFormulation.get_expected_dataflow.
- Returns
a list representing the data exchange arcs
- get_expected_workflow()[source]¶
Return the expected execution sequence.
This method is used for the XDSM representation. Defaults to the execution of the discipline itself; see MDOFormulation.get_expected_workflow.
- get_input_data()¶
Accessor for the input data as a dict of values.
- Returns
the data dict
- get_input_data_names()¶
Accessor for the input names as a list.
- Returns
the data names list
- get_input_output_data_names()¶
Accessor for the input and output names as a list.
- Returns
the data names list
- get_inputs_asarray()¶
Accessor for the inputs as a large numpy array.
The order is the one of self.get_all_inputs().
- Returns
the inputs array
- Return type
ndarray
- get_inputs_by_name(data_names)¶
Accessor for the inputs as a list.
- Parameters
data_names – the data names list
- Returns
the data list
- get_local_data_by_name(data_names)¶
Accessor for the local data of the discipline as a dict of values.
- Parameters
data_names – the names of the data which will be the keys of the dictionary
- Returns
the data dict
- get_output_data()¶
Accessor for the output data as a dict of values.
- Returns
the data dict
- get_output_data_names()¶
Accessor for the output names as a list.
- Returns
the data names list
- get_outputs_asarray()¶
Accessor for the outputs as a large numpy array.
The order is the one of self.get_all_outputs()
- Returns
the outputs array
- Return type
ndarray
- get_outputs_by_name(data_names)¶
Accessor for the outputs as a list.
- Parameters
data_names – the data names list
- Returns
the data list
- get_sub_disciplines()¶
Get the sub-disciplines of self. By default, empty.
- Returns
the list of disciplines
- is_all_inputs_existing(data_names)¶
Test if all the names in data_names are inputs of the discipline.
- Parameters
data_names – the names of the inputs
- Returns
True if data_names are all in input grammar
- Return type
bool
- is_all_outputs_existing(data_names)¶
Test if all the names in data_names are outputs of the discipline.
- Parameters
data_names – the names of the outputs
- Returns
True if data_names are all in output grammar
- Return type
bool
- is_input_existing(data_name)¶
Test if input named data_name is an input of the discipline.
- Parameters
data_name – the name of the input
- Returns
True if data_name is in input grammar
- Return type
bool
- is_output_existing(data_name)¶
Test if output named data_name is an output of the discipline.
- Parameters
data_name – the name of the output
- Returns
True if data_name is in output grammar
- Return type
bool
- static is_scenario()¶
Return True if self is a scenario.
- Returns
True if self is a scenario
- property linearization_mode¶
Accessor to the linearization mode.
- linearize(input_data=None, force_all=False, force_no_exec=False)¶
Execute the linearized version of the code.
- Parameters
input_data – the input data dict needed to execute the disciplines according to the discipline input grammar
force_all – if False, self._differentiated_inputs and self._differentiated_outputs are used to filter the differentiated variables; otherwise, all outputs are differentiated wrt all inputs (Default value = False)
force_no_exec – if True, the discipline is not re-executed; the cache is loaded anyway
- property n_calls¶
Return the number of calls to execute() which triggered the _run().
Multiprocessing safe.
- property n_calls_linearize¶
Return the number of calls to linearize() which triggered the _compute_jacobian() method.
Multiprocessing safe.
- notify_status_observers()¶
Notify all status observers that the status has changed.
- remove_status_observer(obs)¶
Remove an observer for the status.
- Parameters
obs – the observer to remove
- reset_statuses_for_run()¶
Sets all the statuses to PENDING.
- serialize(out_file)¶
Serialize the discipline.
- Parameters
out_file – destination file for serialization
- set_cache_policy(cache_type='SimpleCache', cache_tolerance=0.0, cache_hdf_file=None, cache_hdf_node_name=None, is_memory_shared=True)¶
Set the type of cache to use and the tolerance level.
This method sets the cache policy to cache data whose inputs are close to inputs whose outputs are already cached. The cache can be either a simple cache recording the last execution or a full cache storing all executions. Caching can be either in-memory, e.g. SimpleCache and MemoryFullCache, or on disk, e.g. HDF5Cache. CacheFactory.caches provides the list of available cache types.
- Parameters
cache_type (str) – type of cache to use.
cache_tolerance (float) – tolerance for the approximate cache maximal relative norm difference to consider that two input arrays are equal
cache_hdf_file (str) – the file to store the data, mandatory when HDF caching is used
cache_hdf_node_name (str) – name of the HDF dataset to store the discipline data. If None, self.name is used
is_memory_shared (bool) – If True, a shared memory dict is used to store the data, which makes the cache compatible with multiprocessing. WARNING: if set to False, and multiple disciplines point to the same cache or the process is multiprocessed, there may be duplicate computations because the cache will not be shared among the processes.
- set_disciplines_statuses(status)¶
Set the sub disciplines statuses.
To be implemented in subclasses.
- Parameters
status – the status
- set_jacobian_approximation(jac_approx_type='finite_differences', jax_approx_step=1e-07, jac_approx_n_processes=1, jac_approx_use_threading=False, jac_approx_wait_time=0)¶
Set the jacobian approximation method.
Sets the linearization mode to approx_method, sets the parameters of the approximation for further use when calling self.linearize
- Parameters
jac_approx_type – “complex_step” or “finite_differences”
jax_approx_step – the step for finite differences or complex step
jac_approx_n_processes – maximum number of processors on which to run
jac_approx_use_threading – if True, use threads instead of processes to parallelize the execution. Multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. If you want to execute the same discipline multiple times, use multiprocessing.
jac_approx_wait_time – time waited between two forks of the process /Thread
- set_optimal_fd_step(outputs=None, inputs=None, force_all=False, print_errors=False, numerical_error=2.220446049250313e-16)¶
Compute the optimal finite-difference step.
Compute the optimal step for a forward first order finite differences gradient approximation. Requires a first evaluation of perturbed functions values. The optimal step is reached when the truncation error (cut in the Taylor development), and the numerical cancellation errors (roundoff when doing f(x+step)-f(x)) are approximately equal.
Warning: this calls the discipline execution twice per input variable.
See https://en.wikipedia.org/wiki/Numerical_differentiation and “Numerical Algorithms and Digital Representation”, Knut Morken, Chapter 11, “Numerical Differentiation”
- Parameters
inputs – inputs with respect to which the linearization is made; if None, use the differentiated inputs
outputs – outputs for which the linearization is made; if None, use the differentiated outputs
force_all – if True, all inputs and outputs are used
print_errors – if True, displays the estimated errors
numerical_error – numerical error associated to the calculation of f. By default Machine epsilon (appx 1e-16), but can be higher when the calculation of f requires a numerical resolution
- Returns
the estimated truncation and cancellation errors.
- property status¶
Status accessor.
- store_local_data(**kwargs)¶
Store discipline data in local data.
- Parameters
kwargs – the data as key value pairs
- time_stamps = None¶