
# propane module

## The propane combustion MDO problem

The Propane MDO problem can be found in [PAG96] and [TM06]. It represents the chemical equilibrium reached during the combustion of propane in air. Variables are assigned to represent each of the ten combustion products as well as the sum of the products.

The optimization problem is as follows:

$$\begin{aligned} \text{minimize the objective function } & f_2 + f_6 + f_7 + f_9 \\ \text{with respect to the design variables } & x_{1},\,x_{3},\,x_{6},\,x_{7} \\ \text{subject to the general constraints } & f_2(x) \geq 0\\ & f_6(x) \geq 0\\ & f_7(x) \geq 0\\ & f_9(x) \geq 0\\ \text{subject to the bound constraints } & x_{1} \geq 0\\ & x_{3} \geq 0\\ & x_{6} \geq 0\\ & x_{7} \geq 0 \end{aligned}$$

where the System Discipline consists of computing the following expressions:

$$\begin{aligned} f_2(x) & = 2x_1 + x_2 + x_4 + x_7 + x_8 + x_9 + 2x_{10} - R, \\ f_6(x) & = K_6x_2^{1/2}x_4^{1/2} - x_1^{1/2}x_6(p/x_{11})^{1/2}, \\ f_7(x) & = K_7x_1^{1/2}x_2^{1/2} - x_4^{1/2}x_7(p/x_{11})^{1/2}, \\ f_9(x) & = K_9x_1x_3^{1/2} - x_4x_9(p/x_{11})^{1/2}. \end{aligned}$$
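As a concrete reading of these expressions, the four residuals can be written as plain functions. This is a minimal sketch: the equilibrium constants $$K_6, K_7, K_9$$, the pressure $$p$$ and the constant $$R$$ are passed in as ordinary arguments, not the values GEMSEO uses internally.

```python
import math

def system_residuals(x, k6, k7, k9, p, r):
    """Residuals f_2, f_6, f_7, f_9 of the system discipline.

    x is a sequence of the eleven values x_1, ..., x_11 (x[0] is x_1);
    k6, k7, k9, p and r are problem constants.
    """
    x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11 = x
    s = math.sqrt(p / x11)  # common factor (p / x_11)^(1/2)
    f2 = 2 * x1 + x2 + x4 + x7 + x8 + x9 + 2 * x10 - r
    f6 = k6 * math.sqrt(x2 * x4) - math.sqrt(x1) * x6 * s
    f7 = k7 * math.sqrt(x1 * x2) - math.sqrt(x4) * x7 * s
    f9 = k9 * x1 * math.sqrt(x3) - x4 * x9 * s
    return f2, f6, f7, f9
```

With all eleven variables and all constants set to one and R = 10, the three equilibrium residuals vanish and f_2 reduces to 10 - R.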

Discipline 1 computes $$(x_{2}, x_{4})$$ by satisfying the following equations:

$$\begin{aligned} x_1 + x_4 - 3 & = 0,\\ K_5x_2x_4 - x_1x_5 & = 0. \end{aligned}$$
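Because the first equation fixes $$x_4$$ and the second then fixes $$x_2$$, this discipline has a simple closed form. A sketch, treating $$K_5$$ and the coupling variable $$x_5$$ as plain inputs:

```python
def discipline1(x1, x5, k5):
    """Closed-form solution of Discipline 1's two governing equations."""
    x4 = 3.0 - x1              # from x_1 + x_4 - 3 = 0
    x2 = x1 * x5 / (k5 * x4)   # from K_5 x_2 x_4 - x_1 x_5 = 0
    return x2, x4
```

Substituting the returned $$(x_2, x_4)$$ back into the two equations gives zero residuals.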

Discipline 2 computes $$(x_8, x_{10})$$ such that:

$$\begin{aligned} K_8x_1 + x_4x_8(p/x_{11}) & = 0,\\ K_{10}x_{1}^{2} - x_4^2x_{10}(p/x_{11}) & = 0. \end{aligned}$$

Discipline 3 computes $$(x_5, x_9, x_{11})$$ by solving:

$$\begin{aligned} 2x_2 + 2x_5 + x_6 + x_7 - 8 & = 0,\\ 2x_3 + x_9 - 4R & = 0, \\ x_{11} - \sum_{j=1}^{10} x_j & = 0. \end{aligned}$$
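Discipline 3's three equations are linear in its unknowns, so they can be solved directly. A sketch, where `sum_others` is a hypothetical argument standing for the sum of the remaining $$x_j$$ (all j in 1..10 except 5 and 9):

```python
def discipline3(x2, x3, x6, x7, sum_others, r):
    """Closed-form solution of Discipline 3's three linear equations."""
    x5 = (8.0 - 2.0 * x2 - x6 - x7) / 2.0  # from 2x_2 + 2x_5 + x_6 + x_7 - 8 = 0
    x9 = 4.0 * r - 2.0 * x3                # from 2x_3 + x_9 - 4R = 0
    x11 = sum_others + x5 + x9             # from x_11 - sum_{j=1}^{10} x_j = 0
    return x5, x9, x11
```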

Classes:

• PropaneComb1 – Propane combustion, 1st set of equations. This discipline is characterized by two governing equations.
• PropaneComb2 – Propane combustion, 2nd set of equations. This discipline is characterized by two governing equations.
• PropaneComb3 – Propane combustion, 3rd set of equations. This discipline is characterized by three governing equations.
• PropaneReaction – Propane's objective and constraints discipline. This discipline's outputs are the objective function and partial terms used in the inequality constraints.

Functions:

 get_design_space([to_complex]) Reads the design space file.
class gemseo.problems.propane.propane.PropaneComb1[source]

Propane combustion, 1st set of equations. This discipline is characterized by two governing equations.

Constructor.

Parameters
• name – the name of the discipline

• input_grammar_file – the file for input grammar description, if None, name + “_input.json” is used

• output_grammar_file – the file for output grammar description, if None, name + “_output.json” is used

• auto_detect_grammar_files – if no input and output grammar files are provided, a naming convention is used to associate a grammar file to the discipline: the "comp_dir" directory containing the discipline source file is searched for files named self.name + "_input.json" and self.name + "_output.json"

• grammar_type – the type of grammar to use for IO declaration either JSON_GRAMMAR_TYPE or SIMPLE_GRAMMAR_TYPE

• cache_type – type of cache policy, SIMPLE_CACHE or HDF5_CACHE

• cache_file_path – the file to store the data, mandatory when HDF caching is used

Attributes:

Class constants: APPROX_MODES, AVAILABLE_MODES, COMPLEX_STEP, FINITE_DIFFERENCES, HDF5_CACHE, JSON_GRAMMAR_TYPE, MEMORY_FULL_CACHE, N_CPUS, RE_EXECUTE_DONE_POLICY, RE_EXECUTE_NEVER_POLICY, SIMPLE_CACHE, SIMPLE_GRAMMAR_TYPE, STATUS_DONE, STATUS_FAILED, STATUS_PENDING, STATUS_RUNNING, STATUS_VIRTUAL

• cache_tol – Accessor to the cache input tolerance.
• default_inputs – Accessor to the default inputs.
• exec_time – Return the cumulated execution time.
• linearization_mode – Accessor to the linearization mode.
• n_calls – Return the number of calls to execute() which triggered the _run().
• n_calls_linearize – Return the number of calls to linearize() which triggered the _compute_jacobian() method.
• status – Status accessor.
• time_stamps

Methods:

• activate_time_stamps() – Activate the time stamps.
• add_differentiated_inputs([inputs]) – Add inputs to the differentiation list.
• add_differentiated_outputs([outputs]) – Add outputs to the differentiation list.
• add_status_observer(obs) – Add an observer for the status.
• auto_get_grammar_file([is_input, name, comp_dir]) – Use a naming convention to associate a grammar file to a discipline.
• check_input_data(input_data[, raise_exception]) – Check the input data validity.
• check_jacobian([input_data, derr_approx, …]) – Check if the jacobian provided by the linearize() method is correct.
• check_output_data([raise_exception]) – Check the output data validity.
• compute_y0(x_shared) – Solve the first coupling equation in functional form.
• compute_y1(x_shared) – Solve the second coupling equation in functional form.
• deactivate_time_stamps() – Deactivate the time stamps.
• deserialize(in_file) – Deserialize the discipline from a file.
• execute([input_data]) – Execute the discipline.
• get_all_inputs() – Accessor for the input data as a list of values.
• get_all_outputs() – Accessor for the output data as a list of values.
• get_attributes_to_serialize() – Define the attributes to be serialized.
• get_data_list_from_dict(keys, data_dict) – Filter the dict from a list of keys or a single key.
• get_expected_dataflow() – Return the expected data exchange sequence.
• get_expected_workflow() – Return the expected execution sequence.
• get_input_data() – Accessor for the input data as a dict of values.
• get_input_data_names() – Accessor for the input names as a list.
• get_input_output_data_names() – Accessor for the input and output names as a list.
• get_inputs_asarray() – Accessor for the inputs as a large numpy array.
• get_inputs_by_name(data_names) – Accessor for the inputs as a list.
• get_local_data_by_name(data_names) – Accessor for the local data of the discipline as a dict of values.
• get_output_data() – Accessor for the output data as a dict of values.
• get_output_data_names() – Accessor for the output names as a list.
• get_outputs_asarray() – Accessor for the outputs as a large numpy array.
• get_outputs_by_name(data_names) – Accessor for the outputs as a list.
• get_sub_disciplines() – Get the sub-disciplines of self; empty by default.
• is_all_inputs_existing(data_names) – Test if all the names in data_names are inputs of the discipline.
• is_all_outputs_existing(data_names) – Test if all the names in data_names are outputs of the discipline.
• is_input_existing(data_name) – Test if the input named data_name is an input of the discipline.
• is_output_existing(data_name) – Test if the output named data_name is an output of the discipline.
• is_scenario() – Return True if self is a scenario.
• linearize([input_data, force_all, force_no_exec]) – Execute the linearized version of the code.
• notify_status_observers() – Notify all status observers that the status has changed.
• remove_status_observer(obs) – Remove an observer for the status.
• reset_statuses_for_run() – Set all the statuses to PENDING.
• serialize(out_file) – Serialize the discipline.
• set_cache_policy([cache_type, …]) – Set the type of cache to use and the tolerance level.
• set_disciplines_statuses(status) – Set the sub-disciplines' statuses.
• set_jacobian_approximation(…) – Set the jacobian approximation method.
• set_optimal_fd_step([outputs, inputs, …]) – Compute the optimal finite-difference step.
• store_local_data(**kwargs) – Store discipline data in local data.
APPROX_MODES = ['finite_differences', 'complex_step']
AVAILABLE_MODES = ('auto', 'direct', 'adjoint', 'reverse', 'finite_differences', 'complex_step')
COMPLEX_STEP = 'complex_step'
FINITE_DIFFERENCES = 'finite_differences'
HDF5_CACHE = 'HDF5Cache'
JSON_GRAMMAR_TYPE = 'JSON'
MEMORY_FULL_CACHE = 'MemoryFullCache'
N_CPUS = 2
RE_EXECUTE_DONE_POLICY = 'RE_EXEC_DONE'
RE_EXECUTE_NEVER_POLICY = 'RE_EXEC_NEVER'
SIMPLE_CACHE = 'SimpleCache'
SIMPLE_GRAMMAR_TYPE = 'Simple'
STATUS_DONE = 'DONE'
STATUS_FAILED = 'FAILED'
STATUS_PENDING = 'PENDING'
STATUS_RUNNING = 'RUNNING'
STATUS_VIRTUAL = 'VIRTUAL'
classmethod activate_time_stamps()

Activate the time stamps.

For storing start and end times of execution and linearizations.

add_differentiated_inputs(inputs=None)

Add inputs to the differentiation list.

This method updates self._differentiated_inputs with inputs.

Parameters

inputs – list of input variables to differentiate; if None, all inputs of the discipline are used (Default value = None)

add_differentiated_outputs(outputs=None)

Add outputs to the differentiation list.

This method updates self._differentiated_outputs with outputs.

Parameters

outputs – list of output variables to differentiate; if None, all outputs of the discipline are used (Default value = None)

add_status_observer(obs)

Add an observer for the status.

The observer is notified when the status of self changes.

Parameters

obs – the observer to add

auto_get_grammar_file(is_input=True, name=None, comp_dir=None)

Use a naming convention to associate a grammar file to a discipline.

This method searches the "comp_dir" directory containing the discipline source file for files named self.name + "_input.json" and self.name + "_output.json".

Parameters
• is_input – if True, searches for _input.json, otherwise _output.json (Default value = True)

• name – the name of the discipline (Default value = None)

• comp_dir – the containing directory if None, use self.comp_dir (Default value = None)

Returns

path to the grammar file

Return type

string

property cache_tol

Accessor to the cache input tolerance.

check_input_data(input_data, raise_exception=True)

Check the input data validity.

Parameters
• input_data – the input data dict

• raise_exception – if True, an exception is raised when data is invalid (Default value = True)

check_jacobian(input_data=None, derr_approx='finite_differences', step=1e-07, threshold=1e-08, linearization_mode='auto', inputs=None, outputs=None, parallel=False, n_processes=2, use_threading=False, wait_time_between_fork=0, auto_set_step=False, plot_result=False, file_path='jacobian_errors.pdf', show=False, figsize_x=10, figsize_y=10)

Check if the jacobian provided by the linearize() method is correct.

Parameters
• input_data – input data dict (Default value = None)

• derr_approx – derivative approximation method, FINITE_DIFFERENCES or COMPLEX_STEP (Default value = FINITE_DIFFERENCES)

• threshold – acceptance threshold for the jacobian error (Default value = 1e-8)

• linearization_mode – the mode of linearization: direct, adjoint or automated switch depending on dimensions of inputs and outputs (Default value = ‘auto’)

• inputs – list of inputs wrt which to differentiate (Default value = None)

• outputs – list of outputs to differentiate (Default value = None)

• step – the step for finite differences or complex step

• parallel – if True, executes in parallel

• n_processes – maximum number of processors on which to run

• use_threading – if True, use threads instead of processes to parallelize the execution. Multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. This is important to note: if you want to execute the same discipline multiple times, you should use multiprocessing.

• wait_time_between_fork – time waited between two forks of the process /Thread

• auto_set_step – Compute optimal step for a forward first order finite differences gradient approximation

• plot_result – plot the result of the validation (computed and approximate jacobians)

• file_path – path to the output file if plot_result is True

• show – if True, open the figure

• figsize_x – x size of the figure in inches

• figsize_y – y size of the figure in inches

Returns

True if the check is accepted, False otherwise
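The check itself boils down to comparing the analytic Jacobian against a finite-difference approximation. A self-contained sketch of the idea (not GEMSEO's implementation; `func` and `jac` are hypothetical callables, and the threshold here is looser than the method's default):

```python
import numpy as np

def check_jacobian_fd(func, jac, x, step=1e-7, threshold=1e-6):
    """Compare an analytic Jacobian with a forward finite-difference one.

    func maps R^n to R^m; jac(x) returns the m-by-n analytic Jacobian.
    Returns True when the relative error is below the threshold.
    """
    x = np.asarray(x, dtype=float)
    f0 = np.asarray(func(x))
    approx = np.empty((f0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += step                      # perturb one input at a time
        approx[:, j] = (np.asarray(func(xp)) - f0) / step
    error = np.linalg.norm(approx - jac(x)) / max(np.linalg.norm(approx), 1.0)
    return error < threshold
```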

check_output_data(raise_exception=True)

Check the output data validity.

Parameters

raise_exception – if True, an exception is raised when data is invalid (Default value = True)

classmethod compute_y0(x_shared)[source]

Solve the first coupling equation in functional form.

Parameters

x_shared (ndarray) – vector of shared design variables

Returns

coupling variable y0

classmethod compute_y1(x_shared)[source]

Solve the second coupling equation in functional form.

Parameters

x_shared (ndarray) – vector of shared design variables

Returns

coupling variable y1

classmethod deactivate_time_stamps()

Deactivate the time stamps for storing start and end times of execution and linearizations.

property default_inputs

Accessor to the default inputs.

static deserialize(in_file)

Deserialize the discipline from a file.

Parameters

in_file – input file for serialization

Returns

a discipline instance

property exec_time

Return the cumulated execution time.

Multiprocessing safe.

execute(input_data=None)

Execute the discipline.

This method executes the discipline:

• Adds default inputs to input_data if some inputs are not defined in input_data but exist in self._default_data

• Checks if the last execution of the discipline was not called with identical inputs, cached in self.cache; if yes, directly returns self.cache.get_output_cache(inputs)

• Caches the inputs

• Checks the input data against self.input_grammar

• If self.data_processor is not None, runs the preprocessor

• Updates the status to RUNNING

• Calls the _run() method, which shall be defined

• If self.data_processor is not None, runs the postprocessor

• Checks the output data

• Caches the outputs

• Updates the status to DONE or FAILED

Parameters

input_data (dict) – the input data dict needed to execute the disciplines according to the discipline input grammar (Default value = None)

Returns

the discipline local data after execution

Return type

dict

get_all_inputs()

Accessor for the input data as a list of values.

The order is given by self.get_input_data_names().

Returns

the data

get_all_outputs()

Accessor for the output data as a list of values.

The order is given by self.get_output_data_names().

Returns

the data

get_attributes_to_serialize()

Define the attributes to be serialized.

Returns

the list of attributes names

Return type

list

static get_data_list_from_dict(keys, data_dict)

Filter the dict from a list of keys or a single key.

If keys is a string, the method returns the value associated with the key. If keys is a list of strings, the method returns a generator over the values corresponding to the keys.

Parameters
• keys – a string key or a list of keys

• data_dict – the dict to get the data from

Returns

a data or a generator of data

get_expected_dataflow()

Return the expected data exchange sequence.

This method is used for the XDSM representation.

Defaults to an empty list. See MDOFormulation.get_expected_dataflow.

Returns

a list representing the data exchange arcs

get_expected_workflow()

Return the expected execution sequence.

This method is used for the XDSM representation. Defaults to the execution of the discipline itself. See MDOFormulation.get_expected_workflow.

get_input_data()

Accessor for the input data as a dict of values.

Returns

the data dict

get_input_data_names()

Accessor for the input names as a list.

Returns

the data names list

get_input_output_data_names()

Accessor for the input and output names as a list.

Returns

the data names list

get_inputs_asarray()

Accessor for the inputs as a large numpy array.

The order is the one of self.get_all_inputs().

Returns

the inputs array

Return type

ndarray

get_inputs_by_name(data_names)

Accessor for the inputs as a list.

Parameters

data_names – the data names list

Returns

the data list

get_local_data_by_name(data_names)

Accessor for the local data of the discipline as a dict of values.

Parameters

data_names – the names of the data which will be the keys of the dictionary

Returns

the data dict

get_output_data()

Accessor for the output data as a dict of values.

Returns

the data dict

get_output_data_names()

Accessor for the output names as a list.

Returns

the data names list

get_outputs_asarray()

Accessor for the outputs as a large numpy array.

The order is the one of self.get_all_outputs()

Returns

the outputs array

Return type

ndarray

get_outputs_by_name(data_names)

Accessor for the outputs as a list.

Parameters

data_names – the data names list

Returns

the data list

get_sub_disciplines()

Get the sub-disciplines of self. By default, returns an empty list.

Returns

the list of disciplines

is_all_inputs_existing(data_names)

Test if all the names in data_names are inputs of the discipline.

Parameters

data_names – the names of the inputs

Returns

True if data_names are all in input grammar

Return type

bool

is_all_outputs_existing(data_names)

Test if all the names in data_names are outputs of the discipline.

Parameters

data_names – the names of the outputs

Returns

True if data_names are all in output grammar

Return type

bool

is_input_existing(data_name)

Test if input named data_name is an input of the discipline.

Parameters

data_name – the name of the input

Returns

True if data_name is in input grammar

Return type

bool

is_output_existing(data_name)

Test if output named data_name is an output of the discipline.

Parameters

data_name – the name of the output

Returns

True if data_name is in output grammar

Return type

bool

static is_scenario()

Return True if self is a scenario.

Returns

True if self is a scenario

property linearization_mode

Accessor to the linearization mode.

linearize(input_data=None, force_all=False, force_no_exec=False)

Execute the linearized version of the code.

Parameters
• input_data – the input data dict needed to execute the discipline according to the discipline input grammar (Default value = None)

• force_all – if False, self._differentiated_inputs and self._differentiated_outputs are used to filter the differentiated variables; otherwise, all outputs are differentiated wrt all inputs (Default value = False)

• force_no_exec – if True, the discipline is not re-executed; the cache is loaded anyway (Default value = False)

property n_calls

Return the number of calls to execute() which triggered the _run().

Multiprocessing safe.

property n_calls_linearize

Return the number of calls to linearize() which triggered the _compute_jacobian() method.

Multiprocessing safe.

notify_status_observers()

Notify all status observers that the status has changed.

remove_status_observer(obs)

Remove an observer for the status.

Parameters

obs – the observer to remove

reset_statuses_for_run()

Sets all the statuses to PENDING.

serialize(out_file)

Serialize the discipline.

Parameters

out_file – destination file for serialization

set_cache_policy(cache_type='SimpleCache', cache_tolerance=0.0, cache_hdf_file=None, cache_hdf_node_name=None, is_memory_shared=True)

Set the type of cache to use and the tolerance level.

This method sets the cache policy: data whose inputs are close to inputs whose outputs are already cached are retrieved from the cache. The cache can be either a simple cache recording the last execution or a full cache storing all executions. Cached data can live either in memory, e.g. SimpleCache and MemoryFullCache, or on disk, e.g. HDF5Cache. CacheFactory.caches provides the list of available cache types.

Parameters
• cache_type (str) – type of cache to use.

• cache_tolerance (float) – tolerance of the approximate cache: maximal relative norm difference between two input arrays for them to be considered equal

• cache_hdf_file (str) – the file to store the data, mandatory when HDF caching is used

• cache_hdf_node_name (str) – name of the HDF dataset to store the discipline data. If None, self.name is used

• is_memory_shared (bool) – If True, a shared memory dict is used to store the data, which makes the cache compatible with multiprocessing. WARNING: if set to False, and multiple disciplines point to the same cache or the process is multiprocessed, there may be duplicate computations because the cache will not be shared among the processes.
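The tolerance-based lookup can be pictured with a tiny last-execution cache. This is a sketch of the idea behind SimpleCache plus cache_tolerance, not the GEMSEO code:

```python
import numpy as np

class LastExecutionCache:
    """Cache the last (inputs, outputs) pair; hit when the new inputs
    are within a relative-norm tolerance of the cached ones."""

    def __init__(self, tolerance=0.0):
        self.tolerance = tolerance
        self._inputs = None
        self._outputs = None

    def lookup(self, inputs):
        """Return the cached outputs on a tolerance hit, else None."""
        if self._inputs is None:
            return None
        inputs = np.asarray(inputs, dtype=float)
        gap = np.linalg.norm(inputs - self._inputs)
        ref = max(np.linalg.norm(self._inputs), 1e-30)  # avoid zero division
        return self._outputs if gap <= self.tolerance * ref else None

    def store(self, inputs, outputs):
        """Overwrite the cache with the latest execution."""
        self._inputs = np.array(inputs, dtype=float)
        self._outputs = outputs
```

With tolerance=0.0, only bit-identical inputs hit the cache; a positive tolerance also accepts nearby inputs.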

set_disciplines_statuses(status)

Set the sub disciplines statuses.

To be implemented in subclasses.

Parameters

status – the status

set_jacobian_approximation(jac_approx_type, jax_approx_step, jac_approx_n_processes, jac_approx_use_threading, jac_approx_wait_time)

Set the jacobian approximation method.

Sets the linearization mode to approx_method and sets the parameters of the approximation for further use when calling self.linearize().

Parameters
• jac_approx_type – “complex_step” or “finite_differences”

• jax_approx_step – the step for finite differences or complex step

• jac_approx_n_processes – maximum number of processors on which to run

• jac_approx_use_threading – if True, use threads instead of processes to parallelize the execution. Multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. This is important to note: if you want to execute the same discipline multiple times, you should use multiprocessing.

• jac_approx_wait_time – time waited between two forks of the process /Thread

set_optimal_fd_step(outputs=None, inputs=None, force_all=False, print_errors=False, numerical_error=2.220446049250313e-16)

Compute the optimal finite-difference step.

Compute the optimal step for a forward first-order finite-difference gradient approximation. Requires a first evaluation of the perturbed function values. The optimal step is reached when the truncation error (the term cut from the Taylor expansion) and the numerical cancellation error (roundoff when computing f(x+step)-f(x)) are approximately equal.

Warning: this calls the discipline execution twice per input variable.

See https://en.wikipedia.org/wiki/Numerical_differentiation and "Numerical Algorithms and Digital Representation", Knut Morken, Chapter 11, "Numerical Differentiation".

Parameters
• inputs – inputs with respect to which the linearization is made. If None, use the differentiated inputs

• outputs – outputs for which the linearization is made. If None, use the differentiated outputs

• force_all – if True, all inputs and outputs are used

• print_errors – if True, displays the estimated errors

• numerical_error – numerical error associated with the calculation of f. By default machine epsilon (approximately 2.2e-16), but can be higher when the calculation of f requires a numerical resolution

Returns

the estimated truncation and cancellation errors.
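The balance described above can be made explicit. For a forward difference, the error is roughly E(h) = h·|f''(x)|/2 + 2·eps_f/h, where eps_f is the absolute error on f; E is minimized at h* = 2·sqrt(eps_f/|f''(x)|). A sketch with a crude second-derivative estimate (not GEMSEO's implementation):

```python
import math

def optimal_forward_fd_step(f, x, numerical_error=2.220446049250313e-16):
    """Balance truncation error (~ h|f''|/2) against cancellation (~ 2*eps_f/h)."""
    # crude |f''(x)| estimate via a central second difference
    h2 = 1e-4
    second = abs((f(x + h2) - 2.0 * f(x) + f(x - h2)) / h2**2)
    eps_f = numerical_error * max(abs(f(x)), 1.0)  # absolute error on f
    return 2.0 * math.sqrt(eps_f / max(second, 1e-30))
```

For f = exp at x = 0, where |f''| is about 1, this lands near 2·sqrt(2.2e-16), i.e. a few times 1e-8, which matches the usual rule of thumb for forward differences.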

property status

Status accessor.

store_local_data(**kwargs)

Store discipline data in local data.

Parameters

kwargs – the data as key value pairs

time_stamps = None
class gemseo.problems.propane.propane.PropaneComb2[source]

Propane combustion, 2nd set of equations. This discipline is characterized by two governing equations.

Constructor.

Parameters
• name – the name of the discipline

• input_grammar_file – the file for input grammar description, if None, name + “_input.json” is used

• output_grammar_file – the file for output grammar description, if None, name + “_output.json” is used

• auto_detect_grammar_files – if no input and output grammar files are provided, a naming convention is used to associate a grammar file to the discipline: the "comp_dir" directory containing the discipline source file is searched for files named self.name + "_input.json" and self.name + "_output.json"

• grammar_type – the type of grammar to use for IO declaration either JSON_GRAMMAR_TYPE or SIMPLE_GRAMMAR_TYPE

• cache_type – type of cache policy, SIMPLE_CACHE or HDF5_CACHE

• cache_file_path – the file to store the data, mandatory when HDF caching is used

Attributes:

Class constants: APPROX_MODES, AVAILABLE_MODES, COMPLEX_STEP, FINITE_DIFFERENCES, HDF5_CACHE, JSON_GRAMMAR_TYPE, MEMORY_FULL_CACHE, N_CPUS, RE_EXECUTE_DONE_POLICY, RE_EXECUTE_NEVER_POLICY, SIMPLE_CACHE, SIMPLE_GRAMMAR_TYPE, STATUS_DONE, STATUS_FAILED, STATUS_PENDING, STATUS_RUNNING, STATUS_VIRTUAL

• cache_tol – Accessor to the cache input tolerance.
• default_inputs – Accessor to the default inputs.
• exec_time – Return the cumulated execution time.
• linearization_mode – Accessor to the linearization mode.
• n_calls – Return the number of calls to execute() which triggered the _run().
• n_calls_linearize – Return the number of calls to linearize() which triggered the _compute_jacobian() method.
• status – Status accessor.
• time_stamps

Methods:

• activate_time_stamps() – Activate the time stamps.
• add_differentiated_inputs([inputs]) – Add inputs to the differentiation list.
• add_differentiated_outputs([outputs]) – Add outputs to the differentiation list.
• add_status_observer(obs) – Add an observer for the status.
• auto_get_grammar_file([is_input, name, comp_dir]) – Use a naming convention to associate a grammar file to a discipline.
• check_input_data(input_data[, raise_exception]) – Check the input data validity.
• check_jacobian([input_data, derr_approx, …]) – Check if the jacobian provided by the linearize() method is correct.
• check_output_data([raise_exception]) – Check the output data validity.
• compute_y2(x_shared) – Solve the third coupling equation in functional form.
• compute_y3(x_shared) – Solve the fourth coupling equation in functional form.
• deactivate_time_stamps() – Deactivate the time stamps.
• deserialize(in_file) – Deserialize the discipline from a file.
• execute([input_data]) – Execute the discipline.
• get_all_inputs() – Accessor for the input data as a list of values.
• get_all_outputs() – Accessor for the output data as a list of values.
• get_attributes_to_serialize() – Define the attributes to be serialized.
• get_data_list_from_dict(keys, data_dict) – Filter the dict from a list of keys or a single key.
• get_expected_dataflow() – Return the expected data exchange sequence.
• get_expected_workflow() – Return the expected execution sequence.
• get_input_data() – Accessor for the input data as a dict of values.
• get_input_data_names() – Accessor for the input names as a list.
• get_input_output_data_names() – Accessor for the input and output names as a list.
• get_inputs_asarray() – Accessor for the inputs as a large numpy array.
• get_inputs_by_name(data_names) – Accessor for the inputs as a list.
• get_local_data_by_name(data_names) – Accessor for the local data of the discipline as a dict of values.
• get_output_data() – Accessor for the output data as a dict of values.
• get_output_data_names() – Accessor for the output names as a list.
• get_outputs_asarray() – Accessor for the outputs as a large numpy array.
• get_outputs_by_name(data_names) – Accessor for the outputs as a list.
• get_sub_disciplines() – Get the sub-disciplines of self; empty by default.
• is_all_inputs_existing(data_names) – Test if all the names in data_names are inputs of the discipline.
• is_all_outputs_existing(data_names) – Test if all the names in data_names are outputs of the discipline.
• is_input_existing(data_name) – Test if the input named data_name is an input of the discipline.
• is_output_existing(data_name) – Test if the output named data_name is an output of the discipline.
• is_scenario() – Return True if self is a scenario.
• linearize([input_data, force_all, force_no_exec]) – Execute the linearized version of the code.
• notify_status_observers() – Notify all status observers that the status has changed.
• remove_status_observer(obs) – Remove an observer for the status.
• reset_statuses_for_run() – Set all the statuses to PENDING.
• serialize(out_file) – Serialize the discipline.
• set_cache_policy([cache_type, …]) – Set the type of cache to use and the tolerance level.
• set_disciplines_statuses(status) – Set the sub-disciplines' statuses.
• set_jacobian_approximation(…) – Set the jacobian approximation method.
• set_optimal_fd_step([outputs, inputs, …]) – Compute the optimal finite-difference step.
• store_local_data(**kwargs) – Store discipline data in local data.
APPROX_MODES = ['finite_differences', 'complex_step']
AVAILABLE_MODES = ('auto', 'direct', 'adjoint', 'reverse', 'finite_differences', 'complex_step')
COMPLEX_STEP = 'complex_step'
FINITE_DIFFERENCES = 'finite_differences'
HDF5_CACHE = 'HDF5Cache'
JSON_GRAMMAR_TYPE = 'JSON'
MEMORY_FULL_CACHE = 'MemoryFullCache'
N_CPUS = 2
RE_EXECUTE_DONE_POLICY = 'RE_EXEC_DONE'
RE_EXECUTE_NEVER_POLICY = 'RE_EXEC_NEVER'
SIMPLE_CACHE = 'SimpleCache'
SIMPLE_GRAMMAR_TYPE = 'Simple'
STATUS_DONE = 'DONE'
STATUS_FAILED = 'FAILED'
STATUS_PENDING = 'PENDING'
STATUS_RUNNING = 'RUNNING'
STATUS_VIRTUAL = 'VIRTUAL'
classmethod activate_time_stamps()

Activate the time stamps.

For storing start and end times of execution and linearizations.

add_differentiated_inputs(inputs=None)

Add inputs to the differentiation list.

This method updates self._differentiated_inputs with inputs.

Parameters

inputs – list of input variables to differentiate; if None, all inputs of the discipline are used (Default value = None)

add_differentiated_outputs(outputs=None)

Add outputs to the differentiation list.

This method updates self._differentiated_outputs with outputs.

Parameters

outputs – list of output variables to differentiate; if None, all outputs of the discipline are used (Default value = None)

add_status_observer(obs)

Add an observer for the status.

The observer is notified when the status of self changes.

Parameters

obs – the observer to add

auto_get_grammar_file(is_input=True, name=None, comp_dir=None)

Use a naming convention to associate a grammar file to a discipline.

This method searches the "comp_dir" directory containing the discipline source file for files named self.name + "_input.json" and self.name + "_output.json".

Parameters
• is_input – if True, searches for _input.json, otherwise _output.json (Default value = True)

• name – the name of the discipline (Default value = None)

• comp_dir – the containing directory if None, use self.comp_dir (Default value = None)

Returns

path to the grammar file

Return type

string

property cache_tol

Accessor to the cache input tolerance.

check_input_data(input_data, raise_exception=True)

Check the input data validity.

Parameters
• input_data – the input data dict

• raise_exception – if True, an exception is raised when data is invalid (Default value = True)

check_jacobian(input_data=None, derr_approx='finite_differences', step=1e-07, threshold=1e-08, linearization_mode='auto', inputs=None, outputs=None, parallel=False, n_processes=2, use_threading=False, wait_time_between_fork=0, auto_set_step=False, plot_result=False, file_path='jacobian_errors.pdf', show=False, figsize_x=10, figsize_y=10)

Check if the jacobian provided by the linearize() method is correct.

Parameters
• input_data – input data dict (Default value = None)

• derr_approx – derivative approximation method, FINITE_DIFFERENCES or COMPLEX_STEP (Default value = FINITE_DIFFERENCES)

• threshold – acceptance threshold for the jacobian error (Default value = 1e-8)

• linearization_mode – the mode of linearization: direct, adjoint or automated switch depending on dimensions of inputs and outputs (Default value = ‘auto’)

• inputs – list of inputs wrt which to differentiate (Default value = None)

• outputs – list of outputs to differentiate (Default value = None)

• step – the step for finite differences or complex step

• parallel – if True, executes in parallel

• n_processes – maximum number of processors on which to run

• use_threading – if True, use threads instead of processes to parallelize the execution. Multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. This is important to note: if you want to execute the same discipline multiple times, you should use multiprocessing.

• wait_time_between_fork – time waited between two forks of the process /Thread

• auto_set_step – Compute optimal step for a forward first order finite differences gradient approximation

• plot_result – plot the result of the validation (computed and approximate jacobians)

• file_path – path to the output file if plot_result is True

• show – if True, open the figure

• figsize_x – x size of the figure in inches

• figsize_y – y size of the figure in inches

Returns

True if the check is accepted, False otherwise

check_output_data(raise_exception=True)

Check the output data validity.

Parameters

raise_exception – if True, an exception is raised when data is invalid (Default value = True)

classmethod compute_y2(x_shared)[source]

Solve the third coupling equation in functional form.

Parameters

x_shared (ndarray) – vector of shared design variables

Returns

coupling variable y_2

classmethod compute_y3(x_shared)[source]

Solve the fourth coupling equation in functional form.

Parameters

x_shared (ndarray) – vector of shared design variables

Returns

coupling variable y_3

classmethod deactivate_time_stamps()

Deactivate the time stamps for storing start and end times of execution and linearizations.

property default_inputs

Accessor to the default inputs.

static deserialize(in_file)

Deserialize the discipline from a file.

Parameters

in_file – input file for serialization

Returns

a discipline instance

property exec_time

Return the cumulated execution time.

Multiprocessing safe.

execute(input_data=None)

Execute the discipline.

This method executes the discipline:

• Adds default inputs to input_data if some inputs are not defined in input_data but exist in self._default_data

• Checks if the last execution of the discipline was not called with identical inputs, cached in self.cache; if yes, directly returns self.cache.get_output_cache(inputs)

• Caches the inputs

• Checks the input data against self.input_grammar

• If self.data_processor is not None, runs the preprocessor

• Updates the status to RUNNING

• Calls the _run() method, which shall be defined

• If self.data_processor is not None, runs the postprocessor

• Checks the output data

• Caches the outputs

• Updates the status to DONE or FAILED

Parameters

input_data (dict) – the input data dict needed to execute the disciplines according to the discipline input grammar (Default value = None)

Returns

the discipline local data after execution

Return type

dict
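The steps above can be condensed into a minimal, self-contained sketch (illustrative only; gemseo's real implementation also involves grammars, data processors, and richer caches):

```python
class TinyDiscipline:
    """Toy model of the execution flow: defaults, cache lookup, run, status."""

    def __init__(self, run_func, default_inputs):
        self._run = run_func                # the _run() that shall be defined
        self.default_inputs = dict(default_inputs)
        self.cache = {}                     # frozen inputs -> outputs
        self.status = "PENDING"

    def execute(self, input_data=None):
        data = dict(self.default_inputs)    # add defaults for missing inputs
        data.update(input_data or {})
        key = tuple(sorted(data.items()))
        if key in self.cache:               # identical inputs: return cached outputs
            return self.cache[key]
        self.status = "RUNNING"
        try:
            outputs = self._run(data)
        except Exception:
            self.status = "FAILED"
            raise
        self.cache[key] = outputs           # cache the outputs
        self.status = "DONE"
        return outputs
```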

get_all_inputs()

Accessor for the input data as a list of values.

The order is given by self.get_input_data_names().

Returns

the data

get_all_outputs()

Accessor for the output data as a list of values.

The order is given by self.get_output_data_names().

Returns

the data

get_attributes_to_serialize()

Define the attributes to be serialized.

Returns

the list of attributes names

Return type

list

static get_data_list_from_dict(keys, data_dict)

Filter the dict from a list of keys or a single key.

If keys is a string, the method returns the value associated with the key. If keys is a list of strings, the method returns a generator of the values corresponding to the keys, which can be iterated.

Parameters
• keys – a string key or a list of keys

• data_dict – the dict to get the data from

Returns

a data or a generator of data
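The behavior can be sketched in a few lines (stand-alone illustration, not the gemseo source):

```python
def get_data_list_from_dict(keys, data_dict):
    """A single key returns its value; a list of keys returns a generator."""
    if isinstance(keys, str):
        return data_dict[keys]
    return (data_dict[key] for key in keys)
```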

get_expected_dataflow()

Return the expected data exchange sequence.

This method is used for the XDSM representation.

Defaults to an empty list. See MDOFormulation.get_expected_dataflow.

Returns

a list representing the data exchange arcs

get_expected_workflow()

Return the expected execution sequence.

This method is used for the XDSM representation. Defaults to the execution of the discipline itself. See MDOFormulation.get_expected_workflow.

get_input_data()

Accessor for the input data as a dict of values.

Returns

the data dict

get_input_data_names()

Accessor for the input names as a list.

Returns

the data names list

get_input_output_data_names()

Accessor for the input and output names as a list.

Returns

the data names list

get_inputs_asarray()

Accessor for the inputs as a large numpy array.

The order is the one of self.get_all_inputs().

Returns

the inputs array

Return type

ndarray

get_inputs_by_name(data_names)

Accessor for the inputs as a list.

Parameters

data_names – the data names list

Returns

the data list

get_local_data_by_name(data_names)

Accessor for the local data of the discipline as a dict of values.

Parameters

data_names – the names of the data which will be the keys of the dictionary

Returns

the data dict

get_output_data()

Accessor for the output data as a dict of values.

Returns

the data dict

get_output_data_names()

Accessor for the output names as a list.

Returns

the data names list

get_outputs_asarray()

Accessor for the outputs as a large numpy array.

The order is the one of self.get_all_outputs()

Returns

the outputs array

Return type

ndarray

get_outputs_by_name(data_names)

Accessor for the outputs as a list.

Parameters

data_names – the data names list

Returns

the data list

get_sub_disciplines()

Get the sub-disciplines of self. By default, empty.

Returns

the list of disciplines

is_all_inputs_existing(data_names)

Test if all the names in data_names are inputs of the discipline.

Parameters

data_names – the names of the inputs

Returns

True if data_names are all in input grammar

Return type

bool

is_all_outputs_existing(data_names)

Test if all the names in data_names are outputs of the discipline.

Parameters

data_names – the names of the outputs

Returns

True if data_names are all in output grammar

Return type

bool

is_input_existing(data_name)

Test if input named data_name is an input of the discipline.

Parameters

data_name – the name of the input

Returns

True if data_name is in input grammar

Return type

bool

is_output_existing(data_name)

Test if output named data_name is an output of the discipline.

Parameters

data_name – the name of the output

Returns

True if data_name is in output grammar

Return type

bool

static is_scenario()

Return True if self is a scenario.

Returns

True if self is a scenario

property linearization_mode

Accessor to the linearization mode.

linearize(input_data=None, force_all=False, force_no_exec=False)

Execute the linearized version of the code.

Parameters
• input_data – the input data dict needed to execute the disciplines according to the discipline input grammar

• force_all – if False, self._differentiated_inputs and self._differentiated_outputs are used to filter the differentiated variables; otherwise, all outputs are differentiated with respect to all inputs (Default value = False)

• force_no_exec – if True, the discipline is not re-executed; the cache is loaded anyway

property n_calls

Return the number of calls to execute() which triggered the _run().

Multiprocessing safe.

property n_calls_linearize

Return the number of calls to linearize() which triggered the _compute_jacobian() method.

Multiprocessing safe.

notify_status_observers()

Notify all status observers that the status has changed.

remove_status_observer(obs)

Remove an observer for the status.

Parameters

obs – the observer to remove

reset_statuses_for_run()

Sets all the statuses to PENDING.

serialize(out_file)

Serialize the discipline.

Parameters

out_file – destination file for serialization

set_cache_policy(cache_type='SimpleCache', cache_tolerance=0.0, cache_hdf_file=None, cache_hdf_node_name=None, is_memory_shared=True)

Set the type of cache to use and the tolerance level.

This method sets the cache policy to cache data whose inputs are close to inputs whose outputs are already cached. The cache can be either a simple cache recording the last execution or a full cache storing all executions. Cached data can be either in-memory, e.g. SimpleCache and MemoryFullCache, or on the disk, e.g. HDF5Cache. CacheFactory.caches provides the list of available types of caches.

Parameters
• cache_type (str) – type of cache to use.

• cache_tolerance (float) – tolerance for the approximate cache maximal relative norm difference to consider that two input arrays are equal

• cache_hdf_file (str) – the file to store the data, mandatory when HDF caching is used

• cache_hdf_node_name (str) – name of the HDF dataset to store the discipline data. If None, self.name is used

• is_memory_shared (bool) – If True, a shared memory dict is used to store the data, which makes the cache compatible with multiprocessing. WARNING: if set to False, and multiple disciplines point to the same cache or the process is multiprocessed, there may be duplicate computations because the cache will not be shared among the processes.
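The tolerance-based matching described above can be sketched as follows (assumed behavior for illustration; gemseo's caches are more elaborate): an input vector hits a cache entry when the relative norm of its difference to the cached inputs is below the tolerance.

```python
import math

class ToleranceCache:
    """Toy cache matching inputs up to a relative-norm tolerance."""

    def __init__(self, cache_tolerance=0.0):
        self.tol = cache_tolerance
        self.entries = []  # list of (inputs, outputs)

    def _close(self, a, b):
        # relative norm of the difference between input vectors a and b
        diff = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        ref = math.sqrt(sum(y * y for y in b)) or 1.0
        return diff / ref <= self.tol

    def lookup(self, inputs):
        for cached_in, cached_out in self.entries:
            if list(inputs) == cached_in or self._close(inputs, cached_in):
                return cached_out
        return None  # cache miss: the discipline must be executed

    def store(self, inputs, outputs):
        self.entries.append((list(inputs), outputs))
```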

set_disciplines_statuses(status)

Set the sub disciplines statuses.

To be implemented in subclasses.

Parameters

status – the status

Set the jacobian approximation method.

Sets the linearization mode to approx_method, sets the parameters of the approximation for further use when calling self.linearize

Parameters
• jac_approx_type – “complex_step” or “finite_differences”

• jax_approx_step – the step for finite differences or complex step

• jac_approx_n_processes – maximum number of processors on which to run

• jac_approx_use_threading – if True, use threads instead of processes to parallelize the execution. Multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. Note that to execute the same discipline multiple times, you shall use multiprocessing.

• jac_approx_wait_time – time waited between two forks of the process/thread
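For a scalar function, the two approximation modes named above can be sketched as follows (illustrative only): finite differences perturb by a small real step, while the complex step reads the derivative off the imaginary part and avoids subtractive cancellation, so it tolerates a far smaller step.

```python
def finite_difference(f, x, step=1e-7):
    """Forward finite-difference approximation of f'(x)."""
    return (f(x + step) - f(x)) / step

def complex_step(f, x, step=1e-30):
    """Complex-step approximation of f'(x): no subtraction, no cancellation."""
    return f(complex(x, step)).imag / step
```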

set_optimal_fd_step(outputs=None, inputs=None, force_all=False, print_errors=False, numerical_error=2.220446049250313e-16)

Compute the optimal finite-difference step.

Compute the optimal step for a forward first-order finite-difference gradient approximation. Requires a first evaluation of the perturbed functions' values. The optimal step is reached when the truncation error (truncation of the Taylor expansion) and the numerical cancellation error (roundoff when computing f(x+step)-f(x)) are approximately equal.

Warning: this calls the discipline execution twice per input variable.

See: https://en.wikipedia.org/wiki/Numerical_differentiation and “Numerical Algorithms and Digital Representation”, Knut Morken, Chapter 11, “Numerical Differentiation”

Parameters
• inputs – inputs with respect to which the linearization is made. If None, use differentiated inputs

• outputs – outputs for which the linearization is made. If None, use differentiated outputs

• force_all – if True, all inputs and outputs are used

• print_errors – if True, displays the estimated errors

• numerical_error – numerical error associated with the calculation of f. By default machine epsilon (approximately 1e-16), but can be higher when the calculation of f requires a numerical resolution

Returns

the estimated truncation and cancellation errors.
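The balance described above can be made concrete with the standard textbook error model (an assumed model for illustration, not gemseo's exact formula): the forward-difference error behaves like M*h/2 + 2*eps_f/h with M = |f''(x)|, and is minimized where the two terms are equal, at h* = 2*sqrt(eps_f/M).

```python
import math

EPS = 2.220446049250313e-16  # machine epsilon, the default numerical_error

def fd_error_model(h, second_derivative, numerical_error=EPS):
    """Truncation term M*h/2 plus cancellation term 2*eps_f/h."""
    return abs(second_derivative) * h / 2.0 + 2.0 * numerical_error / h

def optimal_fd_step(second_derivative, numerical_error=EPS):
    """Step where truncation and cancellation errors balance."""
    return 2.0 * math.sqrt(numerical_error / abs(second_derivative))
```

For f'' of order 1 this gives a step near 3e-8, the familiar square-root-of-machine-epsilon scale.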

property status

Status accessor.

store_local_data(**kwargs)

Store discipline data in local data.

Parameters

kwargs – the data as key value pairs

time_stamps = None
class gemseo.problems.propane.propane.PropaneComb3[source]

This discipline is characterized by three governing equations.

Constructor.

Parameters
• name – the name of the discipline

• input_grammar_file – the file for input grammar description, if None, name + “_input.json” is used

• output_grammar_file – the file for output grammar description, if None, name + “_output.json” is used

• auto_detect_grammar_files – if no input and output grammar files are provided, auto_detect_grammar_files uses a naming convention to associate a grammar file to a discipline: searches in the “comp_dir” directory containing the discipline source file for files with basenames self.name + “_input.json” and self.name + “_output.json”

• grammar_type – the type of grammar to use for IO declaration either JSON_GRAMMAR_TYPE or SIMPLE_GRAMMAR_TYPE

• cache_type – type of cache policy, SIMPLE_CACHE or HDF5_CACHE

• cache_file_path – the file to store the data, mandatory when HDF caching is used

Attributes:

 APPROX_MODES AVAILABLE_MODES COMPLEX_STEP FINITE_DIFFERENCES HDF5_CACHE JSON_GRAMMAR_TYPE MEMORY_FULL_CACHE N_CPUS RE_EXECUTE_DONE_POLICY RE_EXECUTE_NEVER_POLICY SIMPLE_CACHE SIMPLE_GRAMMAR_TYPE STATUS_DONE STATUS_FAILED STATUS_PENDING STATUS_RUNNING STATUS_VIRTUAL cache_tol Accessor to the cache input tolerance. default_inputs Accessor to the default inputs. exec_time Return the cumulated execution time. linearization_mode Accessor to the linearization mode. n_calls Return the number of calls to execute() which triggered the _run(). n_calls_linearize Return the number of calls to linearize() which triggered the _compute_jacobian() method. status Status accessor. time_stamps

Methods:

 Activate the time stamps. add_differentiated_inputs([inputs]) Add inputs to the differentiation list. add_differentiated_outputs([outputs]) Add outputs to the differentiation list. Add an observer for the status. auto_get_grammar_file([is_input, name, comp_dir]) Use a naming convention to associate a grammar file to a discipline. check_input_data(input_data[, raise_exception]) Check the input data validity. check_jacobian([input_data, derr_approx, …]) Check if the jacobian provided by the linearize() method is correct. check_output_data([raise_exception]) Check the output data validity. compute_y4(x_shared) Solve the fifth coupling equation in functional form. compute_y5(x_shared) Solve the sixth coupling equation in functional form. compute_y6(x_shared) Solve the seventh coupling equation in functional form. Deactivate the time stamps for storing start and end times of execution and linearizations. deserialize(in_file) Derialize the discipline from a file. execute([input_data]) Execute the discipline. Accessor for the input data as a list of values. Accessor for the output data as a list of values. Define the attributes to be serialized. get_data_list_from_dict(keys, data_dict) Filter the dict from a list of keys or a single key. Return the expected data exchange sequence. Return the expected execution sequence. Accessor for the input data as a dict of values. Accessor for the input names as a list. Accessor for the input and output names as a list. Accessor for the outputs as a large numpy array. get_inputs_by_name(data_names) Accessor for the inputs as a list. get_local_data_by_name(data_names) Accessor for the local data of the discipline as a dict of values. Accessor for the output data as a dict of values. Accessor for the output names as a list. Accessor for the outputs as a large numpy array. get_outputs_by_name(data_names) Accessor for the outputs as a list. Gets the sub disciplines of self By default, empty. 
is_all_inputs_existing(data_names) Test if all the names in data_names are inputs of the discipline. is_all_outputs_existing(data_names) Test if all the names in data_names are outputs of the discipline. is_input_existing(data_name) Test if input named data_name is an input of the discipline. is_output_existing(data_name) Test if output named data_name is an output of the discipline. Return True if self is a scenario. linearize([input_data, force_all, force_no_exec]) Execute the linearized version of the code. Notify all status observers that the status has changed. Remove an observer for the status. Sets all the statuses to PENDING. serialize(out_file) Serialize the discipline. set_cache_policy([cache_type, …]) Set the type of cache to use and the tolerance level. Set the sub disciplines statuses. Set the jacobian approximation method. set_optimal_fd_step([outputs, inputs, …]) Compute the optimal finite-difference step. store_local_data(**kwargs) Store discipline data in local data.
APPROX_MODES = ['finite_differences', 'complex_step']
AVAILABLE_MODES = ('auto', 'direct', 'adjoint', 'reverse', 'finite_differences', 'complex_step')
COMPLEX_STEP = 'complex_step'
FINITE_DIFFERENCES = 'finite_differences'
HDF5_CACHE = 'HDF5Cache'
JSON_GRAMMAR_TYPE = 'JSON'
MEMORY_FULL_CACHE = 'MemoryFullCache'
N_CPUS = 2
RE_EXECUTE_DONE_POLICY = 'RE_EXEC_DONE'
RE_EXECUTE_NEVER_POLICY = 'RE_EXEC_NEVER'
SIMPLE_CACHE = 'SimpleCache'
SIMPLE_GRAMMAR_TYPE = 'Simple'
STATUS_DONE = 'DONE'
STATUS_FAILED = 'FAILED'
STATUS_PENDING = 'PENDING'
STATUS_RUNNING = 'RUNNING'
STATUS_VIRTUAL = 'VIRTUAL'
classmethod activate_time_stamps()

Activate the time stamps.

For storing start and end times of execution and linearizations.

Add inputs to the differentiation list.

This method updates self._differentiated_inputs with inputs

Parameters

inputs – list of input variables to differentiate; if None, all inputs of the discipline are used (Default value = None)

Add outputs to the differentiation list.

Update self._differentiated_outputs with outputs.

Parameters

outputs – list of output variables to differentiate; if None, all outputs of the discipline are used

Add an observer for the status.

Add an observer for the status, to be notified when the status of self changes.

Parameters

obs – the observer to add

auto_get_grammar_file(is_input=True, name=None, comp_dir=None)

Use a naming convention to associate a grammar file to a discipline.

This method searches in the “comp_dir” directory containing the discipline source file for files with basenames self.name + “_input.json” and self.name + “_output.json”.

Parameters
• is_input – if True, searches for _input.json, otherwise _output.json (Default value = True)

• name – the name of the discipline (Default value = None)

• comp_dir – the containing directory if None, use self.comp_dir (Default value = None)

Returns

path to the grammar file

Return type

string

property cache_tol

Accessor to the cache input tolerance.

check_input_data(input_data, raise_exception=True)

Check the input data validity.

Parameters
• input_data – the input data dict

• raise_exception – if True, an exception is raised when the data is invalid (Default value = True)

check_jacobian(input_data=None, derr_approx='finite_differences', step=1e-07, threshold=1e-08, linearization_mode='auto', inputs=None, outputs=None, parallel=False, n_processes=2, use_threading=False, wait_time_between_fork=0, auto_set_step=False, plot_result=False, file_path='jacobian_errors.pdf', show=False, figsize_x=10, figsize_y=10)

Check if the jacobian provided by the linearize() method is correct.

Parameters
• input_data – input data dict (Default value = None)

• derr_approx – derivative approximation method, either COMPLEX_STEP or FINITE_DIFFERENCES (Default value = FINITE_DIFFERENCES)

• threshold – acceptance threshold for the jacobian error (Default value = 1e-8)

• linearization_mode – the mode of linearization: direct, adjoint or automated switch depending on dimensions of inputs and outputs (Default value = ‘auto’)

• inputs – list of inputs wrt which to differentiate (Default value = None)

• outputs – list of outputs to differentiate (Default value = None)

• step – the step for finite differences or complex step

• parallel – if True, executes in parallel

• n_processes – maximum number of processors on which to run

• use_threading – if True, use threads instead of processes to parallelize the execution. Multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. Note that to execute the same discipline multiple times, you shall use multiprocessing.

• wait_time_between_fork – time waited between two forks of the process/thread

• auto_set_step – Compute optimal step for a forward first order finite differences gradient approximation

• plot_result – plot the result of the validation (computed and approximate jacobians)

• file_path – path to the output file if plot_result is True

• show – if True, open the figure

• figsize_x – x size of the figure in inches

• figsize_y – y size of the figure in inches

Returns

True if the check is accepted, False otherwise

check_output_data(raise_exception=True)

Check the output data validity.

Parameters

raise_exception – if True, an exception is raised when the data is invalid (Default value = True)

classmethod compute_y4(x_shared)[source]

Solve the fifth coupling equation in functional form.

Parameters

x_shared (ndarray) – vector of shared design variables

Returns

coupling variable y_4

classmethod compute_y5(x_shared)[source]

Solve the sixth coupling equation in functional form.

Parameters

x_shared (ndarray) – vector of shared design variables

Returns

coupling variable y_5

classmethod compute_y6(x_shared)[source]

Solve the seventh coupling equation in functional form.

Parameters

x_shared (ndarray) – vector of shared design variables

Returns

coupling variable y_6
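From the module overview, Discipline 3 computes (x_5, x_9, x_11) from three governing equations that solve in closed form. A sketch (the mapping of these equations to compute_y4/compute_y5/compute_y6 and the constant R are assumptions for illustration, not taken from the gemseo source):

```python
# Governing equations of Discipline 3 from the module overview:
#   2*x2 + 2*x5 + x6 + x7 - 8 = 0      ->  x5
#   2*x3 + x9 - 4*R = 0                ->  x9
#   x11 - sum_{j=1}^{10} x_j = 0       ->  x11
def solve_discipline_3(x, R):
    """x is a dict of mole numbers x1..x10 (x5 and x9 are overwritten)."""
    x5 = (8.0 - 2.0 * x["x2"] - x["x6"] - x["x7"]) / 2.0
    x9 = 4.0 * R - 2.0 * x["x3"]
    x = dict(x, x5=x5, x9=x9)
    x11 = sum(x[f"x{j}"] for j in range(1, 11))
    return x5, x9, x11
```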

classmethod deactivate_time_stamps()

Deactivate the time stamps for storing start and end times of execution and linearizations.

property default_inputs

Accessor to the default inputs.

static deserialize(in_file)

Deserialize the discipline from a file.

Parameters

in_file – input file for serialization

Returns

a discipline instance

property exec_time

Return the cumulated execution time.

Multiprocessing safe.

execute(input_data=None)

Execute the discipline.

This method executes the discipline:

• Adds default inputs to the input_data if some inputs are not defined in input_data but exist in self._default_data

• Checks whether the last execution of the discipline was called with identical inputs, cached in self.cache; if so, directly returns self.cache.get_output_cache(inputs)

• Caches the inputs

• Checks the input data against self.input_grammar

• if self.data_processor is not None: runs the preprocessor

• updates the status to RUNNING

• calls the _run() method, that shall be defined

• if self.data_processor is not None: runs the postprocessor

• checks the output data

• Caches the outputs

• updates the status to DONE or FAILED

Parameters

input_data (dict) – the input data dict needed to execute the disciplines according to the discipline input grammar (Default value = None)

Returns

the discipline local data after execution

Return type

dict

get_all_inputs()

Accessor for the input data as a list of values.

The order is given by self.get_input_data_names().

Returns

the data

get_all_outputs()

Accessor for the output data as a list of values.

The order is given by self.get_output_data_names().

Returns

the data

get_attributes_to_serialize()

Define the attributes to be serialized.

Returns

the list of attributes names

Return type

list

static get_data_list_from_dict(keys, data_dict)

Filter the dict from a list of keys or a single key.

If keys is a string, the method returns the value associated with the key. If keys is a list of strings, the method returns a generator of the values corresponding to the keys, which can be iterated.

Parameters
• keys – a string key or a list of keys

• data_dict – the dict to get the data from

Returns

a data or a generator of data

get_expected_dataflow()

Return the expected data exchange sequence.

This method is used for the XDSM representation.

Defaults to an empty list. See MDOFormulation.get_expected_dataflow.

Returns

a list representing the data exchange arcs

get_expected_workflow()

Return the expected execution sequence.

This method is used for the XDSM representation. Defaults to the execution of the discipline itself. See MDOFormulation.get_expected_workflow.

get_input_data()

Accessor for the input data as a dict of values.

Returns

the data dict

get_input_data_names()

Accessor for the input names as a list.

Returns

the data names list

get_input_output_data_names()

Accessor for the input and output names as a list.

Returns

the data names list

get_inputs_asarray()

Accessor for the inputs as a large numpy array.

The order is the one of self.get_all_inputs().

Returns

the inputs array

Return type

ndarray

get_inputs_by_name(data_names)

Accessor for the inputs as a list.

Parameters

data_names – the data names list

Returns

the data list

get_local_data_by_name(data_names)

Accessor for the local data of the discipline as a dict of values.

Parameters

data_names – the names of the data which will be the keys of the dictionary

Returns

the data dict

get_output_data()

Accessor for the output data as a dict of values.

Returns

the data dict

get_output_data_names()

Accessor for the output names as a list.

Returns

the data names list

get_outputs_asarray()

Accessor for the outputs as a large numpy array.

The order is the one of self.get_all_outputs()

Returns

the outputs array

Return type

ndarray

get_outputs_by_name(data_names)

Accessor for the outputs as a list.

Parameters

data_names – the data names list

Returns

the data list

get_sub_disciplines()

Get the sub-disciplines of self. By default, empty.

Returns

the list of disciplines

is_all_inputs_existing(data_names)

Test if all the names in data_names are inputs of the discipline.

Parameters

data_names – the names of the inputs

Returns

True if data_names are all in input grammar

Return type

bool

is_all_outputs_existing(data_names)

Test if all the names in data_names are outputs of the discipline.

Parameters

data_names – the names of the outputs

Returns

True if data_names are all in output grammar

Return type

bool

is_input_existing(data_name)

Test if input named data_name is an input of the discipline.

Parameters

data_name – the name of the input

Returns

True if data_name is in input grammar

Return type

bool

is_output_existing(data_name)

Test if output named data_name is an output of the discipline.

Parameters

data_name – the name of the output

Returns

True if data_name is in output grammar

Return type

bool

static is_scenario()

Return True if self is a scenario.

Returns

True if self is a scenario

property linearization_mode

Accessor to the linearization mode.

linearize(input_data=None, force_all=False, force_no_exec=False)

Execute the linearized version of the code.

Parameters
• input_data – the input data dict needed to execute the disciplines according to the discipline input grammar

• force_all – if False, self._differentiated_inputs and self._differentiated_outputs are used to filter the differentiated variables; otherwise, all outputs are differentiated with respect to all inputs (Default value = False)

• force_no_exec – if True, the discipline is not re-executed; the cache is loaded anyway

property n_calls

Return the number of calls to execute() which triggered the _run().

Multiprocessing safe.

property n_calls_linearize

Return the number of calls to linearize() which triggered the _compute_jacobian() method.

Multiprocessing safe.

notify_status_observers()

Notify all status observers that the status has changed.

remove_status_observer(obs)

Remove an observer for the status.

Parameters

obs – the observer to remove

reset_statuses_for_run()

Sets all the statuses to PENDING.

serialize(out_file)

Serialize the discipline.

Parameters

out_file – destination file for serialization

set_cache_policy(cache_type='SimpleCache', cache_tolerance=0.0, cache_hdf_file=None, cache_hdf_node_name=None, is_memory_shared=True)

Set the type of cache to use and the tolerance level.

This method sets the cache policy to cache data whose inputs are close to inputs whose outputs are already cached. The cache can be either a simple cache recording the last execution or a full cache storing all executions. Cached data can be either in-memory, e.g. SimpleCache and MemoryFullCache, or on the disk, e.g. HDF5Cache. CacheFactory.caches provides the list of available types of caches.

Parameters
• cache_type (str) – type of cache to use.

• cache_tolerance (float) – tolerance for the approximate cache maximal relative norm difference to consider that two input arrays are equal

• cache_hdf_file (str) – the file to store the data, mandatory when HDF caching is used

• cache_hdf_node_name (str) – name of the HDF dataset to store the discipline data. If None, self.name is used

• is_memory_shared (bool) – If True, a shared memory dict is used to store the data, which makes the cache compatible with multiprocessing. WARNING: if set to False, and multiple disciplines point to the same cache or the process is multiprocessed, there may be duplicate computations because the cache will not be shared among the processes.

set_disciplines_statuses(status)

Set the sub disciplines statuses.

To be implemented in subclasses.

Parameters

status – the status

Set the jacobian approximation method.

Sets the linearization mode to approx_method, sets the parameters of the approximation for further use when calling self.linearize

Parameters
• jac_approx_type – “complex_step” or “finite_differences”

• jax_approx_step – the step for finite differences or complex step

• jac_approx_n_processes – maximum number of processors on which to run

• jac_approx_use_threading – if True, use threads instead of processes to parallelize the execution. Multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. Note that to execute the same discipline multiple times, you shall use multiprocessing.

• jac_approx_wait_time – time waited between two forks of the process/thread

set_optimal_fd_step(outputs=None, inputs=None, force_all=False, print_errors=False, numerical_error=2.220446049250313e-16)

Compute the optimal finite-difference step.

Compute the optimal step for a forward first-order finite-difference gradient approximation. Requires a first evaluation of the perturbed functions' values. The optimal step is reached when the truncation error (truncation of the Taylor expansion) and the numerical cancellation error (roundoff when computing f(x+step)-f(x)) are approximately equal.

Warning: this calls the discipline execution twice per input variable.

See: https://en.wikipedia.org/wiki/Numerical_differentiation and “Numerical Algorithms and Digital Representation”, Knut Morken, Chapter 11, “Numerical Differentiation”

Parameters
• inputs – inputs with respect to which the linearization is made. If None, use differentiated inputs

• outputs – outputs for which the linearization is made. If None, use differentiated outputs

• force_all – if True, all inputs and outputs are used

• print_errors – if True, displays the estimated errors

• numerical_error – numerical error associated with the calculation of f. By default machine epsilon (approximately 1e-16), but can be higher when the calculation of f requires a numerical resolution

Returns

the estimated truncation and cancellation errors.

property status

Status accessor.

store_local_data(**kwargs)

Store discipline data in local data.

Parameters

kwargs – the data as key value pairs

time_stamps = None
class gemseo.problems.propane.propane.PropaneReaction[source]

Propane’s objective and constraints discipline. This discipline’s outputs are the objective function and partial terms used in inequality constraints.

Note: the equations have been decoupled (y_i = y_i(x_shared)). Otherwise, the solvers may find iterates for which discipline analyses are not computable.

Constructor.

Parameters
• name – the name of the discipline

• input_grammar_file – the file for input grammar description, if None, name + “_input.json” is used

• output_grammar_file – the file for output grammar description, if None, name + “_output.json” is used

• auto_detect_grammar_files – if no input and output grammar files are provided, auto_detect_grammar_files uses a naming convention to associate a grammar file to a discipline: searches in the “comp_dir” directory containing the discipline source file for files with basenames self.name + “_input.json” and self.name + “_output.json”

• grammar_type – the type of grammar to use for IO declaration either JSON_GRAMMAR_TYPE or SIMPLE_GRAMMAR_TYPE

• cache_type – type of cache policy, SIMPLE_CACHE or HDF5_CACHE

• cache_file_path – the file to store the data, mandatory when HDF caching is used

Attributes:

 APPROX_MODES AVAILABLE_MODES COMPLEX_STEP FINITE_DIFFERENCES HDF5_CACHE JSON_GRAMMAR_TYPE MEMORY_FULL_CACHE N_CPUS RE_EXECUTE_DONE_POLICY RE_EXECUTE_NEVER_POLICY SIMPLE_CACHE SIMPLE_GRAMMAR_TYPE STATUS_DONE STATUS_FAILED STATUS_PENDING STATUS_RUNNING STATUS_VIRTUAL cache_tol Accessor to the cache input tolerance. default_inputs Accessor to the default inputs. exec_time Return the cumulated execution time. linearization_mode Accessor to the linearization mode. n_calls Return the number of calls to execute() which triggered the _run(). n_calls_linearize Return the number of calls to linearize() which triggered the _compute_jacobian() method. status Status accessor. time_stamps

Methods:

- activate_time_stamps(): Activate the time stamps.
- add_differentiated_inputs([inputs]): Add inputs to the differentiation list.
- add_differentiated_outputs([outputs]): Add outputs to the differentiation list.
- add_status_observer(obs): Add an observer for the status.
- auto_get_grammar_file([is_input, name, comp_dir]): Use a naming convention to associate a grammar file to a discipline.
- check_input_data(input_data[, raise_exception]): Check the input data validity.
- check_jacobian([input_data, derr_approx, …]): Check if the jacobian provided by the linearize() method is correct.
- check_output_data([raise_exception]): Check the output data validity.
- deactivate_time_stamps(): Deactivate the time stamps.
- deserialize(in_file): Deserialize the discipline from a file.
- execute([input_data]): Execute the discipline.
- f_2(x_shared, y_1, y_2, y_3): First term of a sum of four in the objective function.
- f_6(x_shared, y_1, y_3): Second term of a sum of four in the objective function.
- f_7(x_shared, y_1, y_3): Third term of a sum of four in the objective function.
- f_9(x_shared, y_1, y_3): Fourth term of a sum of four in the objective function.
- get_all_inputs(): Accessor for the input data as a list of values.
- get_all_outputs(): Accessor for the output data as a list of values.
- get_attributes_to_serialize(): Define the attributes to be serialized.
- get_data_list_from_dict(keys, data_dict): Filter the dict from a list of keys or a single key.
- get_expected_dataflow(): Return the expected data exchange sequence.
- get_expected_workflow(): Return the expected execution sequence.
- get_input_data(): Accessor for the input data as a dict of values.
- get_input_data_names(): Accessor for the input names as a list.
- get_input_output_data_names(): Accessor for the input and output names as a list.
- get_inputs_asarray(): Accessor for the inputs as a large numpy array.
- get_inputs_by_name(data_names): Accessor for the inputs as a list.
- get_local_data_by_name(data_names): Accessor for the local data of the discipline as a dict of values.
- get_output_data(): Accessor for the output data as a dict of values.
- get_output_data_names(): Accessor for the output names as a list.
- get_outputs_asarray(): Accessor for the outputs as a large numpy array.
- get_outputs_by_name(data_names): Accessor for the outputs as a list.
- get_sub_disciplines(): Get the sub-disciplines of self; empty by default.
- is_all_inputs_existing(data_names): Test if all the names in data_names are inputs of the discipline.
- is_all_outputs_existing(data_names): Test if all the names in data_names are outputs of the discipline.
- is_input_existing(data_name): Test if input named data_name is an input of the discipline.
- is_output_existing(data_name): Test if output named data_name is an output of the discipline.
- is_scenario(): Return True if self is a scenario.
- linearize([input_data, force_all, force_no_exec]): Execute the linearized version of the code.
- notify_status_observers(): Notify all status observers that the status has changed.
- remove_status_observer(obs): Remove an observer for the status.
- reset_statuses_for_run(): Set all the statuses to PENDING.
- serialize(out_file): Serialize the discipline.
- set_cache_policy([cache_type, …]): Set the type of cache to use and the tolerance level.
- set_disciplines_statuses(status): Set the sub-disciplines' statuses.
- set_jacobian_approximation([jac_approx_type, …]): Set the jacobian approximation method.
- set_optimal_fd_step([outputs, inputs, …]): Compute the optimal finite-difference step.
- store_local_data(**kwargs): Store discipline data in local data.
APPROX_MODES = ['finite_differences', 'complex_step']
AVAILABLE_MODES = ('auto', 'direct', 'adjoint', 'reverse', 'finite_differences', 'complex_step')
COMPLEX_STEP = 'complex_step'
FINITE_DIFFERENCES = 'finite_differences'
HDF5_CACHE = 'HDF5Cache'
JSON_GRAMMAR_TYPE = 'JSON'
MEMORY_FULL_CACHE = 'MemoryFullCache'
N_CPUS = 2
RE_EXECUTE_DONE_POLICY = 'RE_EXEC_DONE'
RE_EXECUTE_NEVER_POLICY = 'RE_EXEC_NEVER'
SIMPLE_CACHE = 'SimpleCache'
SIMPLE_GRAMMAR_TYPE = 'Simple'
STATUS_DONE = 'DONE'
STATUS_FAILED = 'FAILED'
STATUS_PENDING = 'PENDING'
STATUS_RUNNING = 'RUNNING'
STATUS_VIRTUAL = 'VIRTUAL'
classmethod activate_time_stamps()

Activate the time stamps.

For storing start and end times of execution and linearizations.

add_differentiated_inputs(inputs=None)

Add inputs to the differentiation list.

This method updates self._differentiated_inputs with inputs.

Parameters

inputs – the list of input variables to differentiate; if None, all inputs of the discipline are used (Default value = None)

add_differentiated_outputs(outputs=None)

Add outputs to the differentiation list.

This method updates self._differentiated_outputs with outputs.

Parameters

outputs – the list of output variables to differentiate; if None, all outputs of the discipline are used (Default value = None)

add_status_observer(obs)

Add an observer for the status.

The observer is notified when the status of self changes.

Parameters

obs – the observer to add

auto_get_grammar_file(is_input=True, name=None, comp_dir=None)

Use a naming convention to associate a grammar file to a discipline.

This method searches the directory comp_dir containing the discipline source file for files named <self.name>_input.json and <self.name>_output.json.

Parameters
• is_input – if True, searches for _input.json, otherwise _output.json (Default value = True)

• name – the name of the discipline (Default value = None)

• comp_dir – the containing directory if None, use self.comp_dir (Default value = None)

Returns

path to the grammar file

Return type

string
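The naming convention described above can be sketched as follows; this is a simplified stand-in for illustration, not the gemseo implementation (`grammar_file_path` is a hypothetical helper name):

```python
import os

def grammar_file_path(comp_dir, name, is_input=True):
    """Build the grammar file path from the documented naming convention:
    <name>_input.json for inputs, <name>_output.json for outputs."""
    suffix = "_input.json" if is_input else "_output.json"
    return os.path.join(comp_dir, name + suffix)

# On POSIX: grammar_file_path("/tmp/disc", "Propane") -> "/tmp/disc/Propane_input.json"
```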

property cache_tol

Accessor to the cache input tolerance.

check_input_data(input_data, raise_exception=True)

Check the input data validity.

Parameters
• input_data – the input data dict

• raise_exception – if True, an exception is raised when data is invalid (Default value = True)

check_jacobian(input_data=None, derr_approx='finite_differences', step=1e-07, threshold=1e-08, linearization_mode='auto', inputs=None, outputs=None, parallel=False, n_processes=2, use_threading=False, wait_time_between_fork=0, auto_set_step=False, plot_result=False, file_path='jacobian_errors.pdf', show=False, figsize_x=10, figsize_y=10)

Check if the jacobian provided by the linearize() method is correct.

Parameters
• input_data – input data dict (Default value = None)

• derr_approx – derivative approximation method, FINITE_DIFFERENCES or COMPLEX_STEP (Default value = FINITE_DIFFERENCES)

• threshold – acceptance threshold for the jacobian error (Default value = 1e-8)

• linearization_mode – the mode of linearization: direct, adjoint or automated switch depending on dimensions of inputs and outputs (Default value = ‘auto’)

• inputs – list of inputs wrt which to differentiate (Default value = None)

• outputs – list of outputs to differentiate (Default value = None)

• step – the step for finite differences or complex step

• parallel – if True, executes in parallel

• n_processes – maximum number of processors on which to run

• use_threading – if True, use threads instead of processes to parallelize the execution. Multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. This is important to note: if you want to execute the same discipline multiple times, you shall use multiprocessing

• wait_time_between_fork – time waited between two forks of the process /Thread

• auto_set_step – Compute optimal step for a forward first order finite differences gradient approximation

• plot_result – plot the result of the validation (computed and approximate jacobians)

• file_path – path to the output file if plot_result is True

• show – if True, open the figure

• figsize_x – x size of the figure in inches

• figsize_y – y size of the figure in inches

Returns

True if the check is accepted, False otherwise
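The acceptance test above can be illustrated on a scalar function: compare the analytic derivative against a forward finite-difference approximation and accept the check when the error stays below the threshold. This is a minimal sketch of the principle, not the gemseo code:

```python
def check_jacobian_fd(func, grad, x, step=1e-7, threshold=1e-8):
    """Compare an analytic derivative against a forward finite-difference
    approximation; return True when the error is within the threshold."""
    approx = (func(x + step) - func(x)) / step
    return abs(grad(x) - approx) <= threshold

# f(x) = x**2 has exact derivative 2*x; the FD error at step 1e-7 is ~1e-7,
# so a threshold of 1e-5 accepts the correct gradient
ok = check_jacobian_fd(lambda x: x * x, lambda x: 2.0 * x, x=3.0, threshold=1e-5)
```

A deliberately wrong gradient (e.g. `2*x + 1`) fails the same check, which is the point of the validation.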

check_output_data(raise_exception=True)

Check the output data validity.

Parameters

raise_exception – if true, an exception is raised when data is invalid (Default value = True)

classmethod deactivate_time_stamps()

Deactivate the time stamps for storing start and end times of execution and linearizations.

property default_inputs

Accessor to the default inputs.

static deserialize(in_file)

Deserialize the discipline from a file.

Parameters

in_file – input file for serialization

Returns

a discipline instance

property exec_time

Return the cumulated execution time.

Multiprocessing safe.

execute(input_data=None)

Execute the discipline.

This method executes the discipline:

• Adds default inputs to input_data for inputs that are not defined in input_data but exist in self._default_data

• Checks whether the last execution of the discipline was called with identical inputs, cached in self.cache; if so, directly returns self.cache.get_output_cache(inputs)

• Caches the inputs

• Checks the input data against self.input_grammar

• If self.data_processor is not None, runs the preprocessor

• Updates the status to RUNNING

• Calls the _run() method, which shall be defined

• If self.data_processor is not None, runs the postprocessor

• Checks the output data

• Caches the outputs

• Updates the status to DONE or FAILED

Parameters

input_data (dict) – the input data dict needed to execute the discipline according to the discipline input grammar (Default value = None)

Returns

the discipline local data after execution

Return type

dict
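The execution flow listed above (defaults merge, cache lookup on identical inputs, run, cache store) follows a common pattern that can be sketched in a few lines. This is an illustration of the pattern only, with hypothetical names (`MiniDiscipline`, `n_runs`), not the gemseo implementation:

```python
class MiniDiscipline:
    """Illustrates the execute() flow: merge defaults, return cached outputs
    for identical inputs, otherwise run and cache the result."""

    def __init__(self, default_inputs):
        self.default_inputs = default_inputs
        self.cache = {}
        self.n_runs = 0  # counts actual _run() calls, like n_calls

    def _run(self, inputs):
        # stand-in for the discipline computation
        return {"y": inputs["x"] * 2}

    def execute(self, input_data=None):
        inputs = dict(self.default_inputs)
        inputs.update(input_data or {})      # defaults complete missing inputs
        key = tuple(sorted(inputs.items()))
        if key in self.cache:                # identical inputs: cached outputs
            return self.cache[key]
        self.n_runs += 1
        outputs = self._run(inputs)
        self.cache[key] = outputs            # cache the outputs
        return outputs
```

Calling `execute` twice with the same inputs triggers `_run` only once; new inputs trigger a fresh run.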

classmethod f_2(x_shared, y_1, y_2, y_3)[source]

First term of a sum of four in the objective function. Is also a nonnegative constraint at system level.

Parameters
• x_shared (ndarray) – vector of shared design variables

• y_1 (ndarray) – first coupling variable

• y_2 (ndarray) – second coupling variable

• y_3 (ndarray) – third coupling variable

Returns

f2(x_shared, y_1, y_2, y_3)

Return type

float

classmethod f_6(x_shared, y_1, y_3)[source]

Second term of a sum of four in the objective function. Is also a nonnegative constraint at system level.

Parameters
• x_shared (ndarray) – vector of shared design variables

• y_1 (ndarray) – first coupling variable

• y_3 (ndarray) – third coupling variable

Returns

f6(x, y)

Return type

float

classmethod f_7(x_shared, y_1, y_3)[source]

Third term of a sum of four in the objective function. Is also a nonnegative constraint at system level.

Parameters
• x_shared (ndarray) – vector of shared design variables

• y_1 (ndarray) – first coupling variable

• y_3 (ndarray) – third coupling variable

Returns

f7(x, y)

Return type

float

classmethod f_9(x_shared, y_1, y_3)[source]

Fourth term of a sum of four in the objective function. Is also a nonnegative constraint at system level.

Parameters
• x_shared (ndarray) – vector of shared design variables

• y_1 (ndarray) – first coupling variable

• y_3 (ndarray) – third coupling variable

Returns

f9(x, y)

Return type

float
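As a concrete reference for the first of these four terms, the expression given in the module overview, f_2(x) = 2x_1 + x_2 + x_4 + x_7 + x_8 + x_9 + 2x_10 - R, can be written out directly. R is left as a parameter here since it is problem data; the flat-vector calling convention below is an illustration, not the discipline's actual interface:

```python
def f_2_expr(x, R):
    """First objective term / constraint from the module overview:
    f_2(x) = 2*x1 + x2 + x4 + x7 + x8 + x9 + 2*x10 - R,
    with x a sequence of the ten combustion products (x[0] is x_1)."""
    return 2 * x[0] + x[1] + x[3] + x[6] + x[7] + x[8] + 2 * x[9] - R

# For x = (1, ..., 1): 2 + 1 + 1 + 1 + 1 + 1 + 2 - R = 9 - R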

get_all_inputs()

Accessor for the input data as a list of values.

The order is given by self.get_input_data_names().

Returns

the data

get_all_outputs()

Accessor for the output data as a list of values.

The order is given by self.get_output_data_names().

Returns

the data

get_attributes_to_serialize()

Define the attributes to be serialized.

Returns

the list of attributes names

Return type

list

static get_data_list_from_dict(keys, data_dict)

Filter the dict from a list of keys or a single key.

If keys is a string, the method returns the value associated with the key. If keys is a list of strings, the method returns a generator over the corresponding values, which can be iterated.

Parameters
• keys – a string key or a list of keys

• data_dict – the dict to get the data from

Returns

a data or a generator of data
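The string-vs-list behavior described here can be sketched as follows (a simplified equivalent for illustration, not the gemseo code):

```python
def data_list_from_dict(keys, data_dict):
    """Return the value for a single string key, or a generator of
    values for a list of keys, as described above."""
    if isinstance(keys, str):
        return data_dict[keys]
    return (data_dict[k] for k in keys)

d = {"a": 1, "b": 2}
# data_list_from_dict("a", d) -> 1
# list(data_list_from_dict(["a", "b"], d)) -> [1, 2]
```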

get_expected_dataflow()

Return the expected data exchange sequence.

This method is used for the XDSM representation.

Defaults to an empty list. See MDOFormulation.get_expected_dataflow.

Returns

a list representing the data exchange arcs

get_expected_workflow()

Return the expected execution sequence.

This method is used for the XDSM representation. Defaults to the execution of the discipline itself. See MDOFormulation.get_expected_workflow.

get_input_data()

Accessor for the input data as a dict of values.

Returns

the data dict

get_input_data_names()

Accessor for the input names as a list.

Returns

the data names list

get_input_output_data_names()

Accessor for the input and output names as a list.

Returns

the data names list

get_inputs_asarray()

Accessor for the inputs as a large numpy array.

The order is the one of self.get_all_inputs().

Returns

the inputs array

Return type

ndarray

get_inputs_by_name(data_names)

Accessor for the inputs as a list.

Parameters

data_names – the data names list

Returns

the data list

get_local_data_by_name(data_names)

Accessor for the local data of the discipline as a dict of values.

Parameters

data_names – the names of the data which will be the keys of the dictionary

Returns

the data list

get_output_data()

Accessor for the output data as a dict of values.

Returns

the data dict

get_output_data_names()

Accessor for the output names as a list.

Returns

the data names list

get_outputs_asarray()

Accessor for the outputs as a large numpy array.

The order is the one of self.get_all_outputs()

Returns

the outputs array

Return type

ndarray

get_outputs_by_name(data_names)

Accessor for the outputs as a list.

Parameters

data_names – the data names list

Returns

the data list

get_sub_disciplines()

Get the sub-disciplines of self. By default, empty.

Returns

the list of disciplines

is_all_inputs_existing(data_names)

Test if all the names in data_names are inputs of the discipline.

Parameters

data_names – the names of the inputs

Returns

True if data_names are all in input grammar

Return type

bool

is_all_outputs_existing(data_names)

Test if all the names in data_names are outputs of the discipline.

Parameters

data_names – the names of the outputs

Returns

True if data_names are all in output grammar

Return type

bool

is_input_existing(data_name)

Test if input named data_name is an input of the discipline.

Parameters

data_name – the name of the input

Returns

True if data_name is in input grammar

Return type

bool

is_output_existing(data_name)

Test if output named data_name is an output of the discipline.

Parameters

data_name – the name of the output

Returns

True if data_name is in output grammar

Return type

bool

static is_scenario()

Return True if self is a scenario.

Returns

True if self is a scenario

property linearization_mode

Accessor to the linearization mode.

linearize(input_data=None, force_all=False, force_no_exec=False)

Execute the linearized version of the code.

Parameters
• input_data – the input data dict needed to execute the discipline according to the discipline input grammar

• force_all – if False, self._differentiated_inputs and self._differentiated_outputs are used to filter the differentiated variables; otherwise, all outputs are differentiated wrt all inputs (Default value = False)

• force_no_exec – if True, the discipline is not re-executed; the cache is loaded anyway

property n_calls

Return the number of calls to execute() which triggered the _run().

Multiprocessing safe.

property n_calls_linearize

Return the number of calls to linearize() which triggered the _compute_jacobian() method.

Multiprocessing safe.

notify_status_observers()

Notify all status observers that the status has changed.

remove_status_observer(obs)

Remove an observer for the status.

Parameters

obs – the observer to remove

reset_statuses_for_run()

Sets all the statuses to PENDING.

serialize(out_file)

Serialize the discipline.

Parameters

out_file – destination file for serialization
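Together with deserialize(in_file), this follows the usual Python persistence round trip. A generic sketch with pickle (the actual gemseo storage format is an implementation detail; the state dict here is a hypothetical stand-in):

```python
import os
import pickle
import tempfile

state = {"name": "some_discipline", "n_calls": 3}  # stand-in for a discipline's state

# serialize(out_file): write the object's state to a file
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as out_file:
    pickle.dump(state, out_file)

# deserialize(in_file): rebuild an equivalent instance from the file
with open(path, "rb") as in_file:
    restored = pickle.load(in_file)
os.remove(path)
```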

set_cache_policy(cache_type='SimpleCache', cache_tolerance=0.0, cache_hdf_file=None, cache_hdf_node_name=None, is_memory_shared=True)

Set the type of cache to use and the tolerance level.

This method sets the cache policy to cache data whose inputs are close to inputs whose outputs are already cached. The cache can be either a simple cache recording the last execution or a full cache storing all executions. Cached data can be either in-memory, e.g. SimpleCache and MemoryFullCache, or on disk, e.g. HDF5Cache. CacheFactory.caches provides the list of available types of caches.

Parameters
• cache_type (str) – type of cache to use.

• cache_tolerance (float) – tolerance for the approximate cache: the maximal relative norm difference under which two input arrays are considered equal

• cache_hdf_file (str) – the file to store the data, mandatory when HDF caching is used

• cache_hdf_node_name (str) – name of the HDF dataset to store the discipline data. If None, self.name is used

• is_memory_shared (bool) – If True, a shared memory dict is used to store the data, which makes the cache compatible with multiprocessing. WARNING: if set to False, and multiple disciplines point to the same cache or the process is multiprocessed, there may be duplicate computations because the cache will not be shared among the processes.
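The approximate cache behavior governed by cache_tolerance can be sketched on scalars: an input within the relative tolerance of a stored entry reuses the cached outputs. This is an illustration of the lookup rule only, not the gemseo cache classes:

```python
def cache_lookup(cache, x, tol):
    """Return cached outputs for any stored input within relative
    tolerance `tol` of `x`, else None."""
    for x_cached, outputs in cache:
        denom = max(abs(x_cached), 1e-300)  # guard against division by zero
        if abs(x - x_cached) / denom <= tol:
            return outputs
    return None

cache = [(1.0, {"y": 2.0})]
# cache_lookup(cache, 1.0 + 1e-9, tol=1e-6) -> {"y": 2.0}  (cache hit)
# cache_lookup(cache, 1.1, tol=1e-6)        -> None        (cache miss)
```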

set_disciplines_statuses(status)

Set the sub-disciplines' statuses.

To be implemented in subclasses.

Parameters

status – the status

set_jacobian_approximation([jac_approx_type, jax_approx_step, jac_approx_n_processes, jac_approx_use_threading, jac_approx_wait_time])

Set the jacobian approximation method.

This sets the linearization mode to approx_method and stores the parameters of the approximation for further use when calling self.linearize.

Parameters
• jac_approx_type – “complex_step” or “finite_differences”

• jax_approx_step – the step for finite differences or complex step

• jac_approx_n_processes – maximum number of processors on which to run

• jac_approx_use_threading – if True, use threads instead of processes to parallelize the execution. Multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. This is important to note: if you want to execute the same discipline multiple times, you shall use multiprocessing

• jac_approx_wait_time – time waited between two forks of the process /Thread

set_optimal_fd_step(outputs=None, inputs=None, force_all=False, print_errors=False, numerical_error=2.220446049250313e-16)

Compute the optimal finite-difference step.

Compute the optimal step for a forward first-order finite-difference gradient approximation. Requires a first evaluation of the perturbed function values. The optimal step is reached when the truncation error (from cutting the Taylor expansion) and the numerical cancellation error (round-off when computing f(x+step)-f(x)) are approximately equal.

Warning: this calls the discipline execution twice per input variable.

See: https://en.wikipedia.org/wiki/Numerical_differentiation and “Numerical Algorithms and Digital Representation”, Knut Morken, Chapter 11, “Numerical Differentiation”.

Parameters
• inputs – inputs wrt which the linearization is made. If None, use differentiated inputs

• outputs – outputs for which the linearization is made. If None, use differentiated outputs

• force_all – if True, all inputs and outputs are used

• print_errors – if True, displays the estimated errors

• numerical_error – numerical error associated to the calculation of f. By default Machine epsilon (appx 1e-16), but can be higher when the calculation of f requires a numerical resolution

Returns

the estimated truncation and cancellation errors
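For well-scaled problems, balancing truncation against round-off puts the forward-difference step near the square root of the numerical error (about sqrt(eps) for machine precision). A quick illustration of why this matters, not the gemseo algorithm:

```python
import math

def forward_diff(f, x, step):
    """First-order forward finite-difference approximation of f'(x)."""
    return (f(x + step) - f(x)) / step

# derivative of exp at x = 1 is e; compare a near-optimal step to a large one
x, exact = 1.0, math.e
err_opt = abs(forward_diff(math.exp, x, math.sqrt(2.2e-16)) - exact)
err_big = abs(forward_diff(math.exp, x, 1e-2) - exact)  # truncation-dominated
```

The near-optimal step is several orders of magnitude more accurate than the large one.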

property status

Status accessor.

store_local_data(**kwargs)

Store discipline data in local data.

Parameters

kwargs – the data as key value pairs

time_stamps = None
gemseo.problems.propane.propane.get_design_space(to_complex=True)[source]

Parameters

to_complex – if True, the current x is a complex vector

Returns

the design space