gemseo / problems / dataset

burgers module

Burgers dataset

This Dataset contains solutions to the Burgers’ equation with periodic boundary conditions on the interval \([0, 2\pi]\) for different time steps:

\[u_t + u u_x = \nu u_{xx}.\]

An analytical expression can be obtained for the solution, using the Cole-Hopf transform:

\[u(t, x) = - 2 \nu \frac{\phi'}{\phi},\]

where \(\phi\) is a solution of the heat equation \(\phi_t = \nu \phi_{xx}\).

This Dataset is based on a full-factorial design of experiments. Each sample corresponds to a given time step \(t\), while each feature corresponds to a given spatial point \(x\).

More information about Burgers’ equation
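
As a quick illustration (a minimal sketch that is not part of the original reference; the keyword values are arbitrary), the dataset can be built and inspected as follows:

    from gemseo.problems.dataset.burgers import BurgersDataset

    # Build the dataset: 50 time samples of the solution discretized
    # on 201 spatial points, with a fluid viscosity of 0.05.
    dataset = BurgersDataset(n_samples=50, n_x=201, fluid_viscosity=0.05)

    print(dataset.n_samples)  # number of time steps stored as samples
    print(dataset.groups)     # e.g. ['inputs', 'outputs'] when categorize=True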

Classes:

BurgersDataset([name, by_group, n_samples, …])

Burgers dataset parametrization.

BurgersDiscipline()

A software component integrated into the workflow.

class gemseo.problems.dataset.burgers.BurgersDataset(name='Burgers', by_group=True, n_samples=30, n_x=501, fluid_viscosity=0.1, categorize=True)[source]

Bases: gemseo.core.dataset.Dataset

Burgers dataset parametrization.

Constructor.

Parameters
  • name (str) – name of the dataset.

  • by_group (bool) – if True, store the data by group. Otherwise, store them by variables. Default: True.

  • n_samples (int) – number of samples. Default: 30.

  • n_x (int) – number of spatial points. Default: 501.

  • fluid_viscosity (float) – fluid viscosity. Default: 0.1.

  • categorize (bool) – distinguish between the different groups of variables. Default: True.

  • opt_naming (bool) – if True, use an optimization naming. Default: True.

Attributes:

DEFAULT_GROUP

DEFAULT_NAMES

DESIGN_GROUP

FUNCTION_GROUP

GRADIENT_GROUP

HDF5_CACHE

INPUT_GROUP

MEMORY_FULL_CACHE

OUTPUT_GROUP

PARAMETER_GROUP

columns_names

The names of the columns of the dataset.

groups

The sorted names of the groups of variables.

n_samples

The number of samples.

n_variables

The number of variables.

row_names

The names of the rows.

variables

The sorted names of the variables.

Methods:

add_group(group, data[, variables, sizes, …])

Add data related to a group.

add_variable(name, data[, group, cache_as_input])

Add data related to a variable.

compare(value_1, logical_operator, value_2)

Compare either a variable and a value or a variable and another variable.

export_to_cache([inputs, outputs, …])

Export the dataset to a cache.

export_to_dataframe([copy])

Export the dataset to a pandas Dataframe.

find(comparison)

Find the entries for which a comparison is satisfied.

get_all_data([by_group, as_dict])

Get all the data stored in the dataset.

get_available_plots()

Return the available plot methods.

get_data_by_group(group[, as_dict])

Get the data for a specific group name.

get_data_by_names(names[, as_dict])

Get the data for specific names of variables.

get_group(variable_name)

Get the name of the group that contains a variable.

get_names(group_name)

Get the names of the variables of a group.

get_normalized_dataset([excluded_variables, …])

Get a normalized copy of the dataset.

is_empty()

Check if the dataset is empty.

is_group(name)

Check if a name is a group name.

is_nan()

Check if an entry contains NaN.

is_variable(name)

Check if a name is a variable name.

n_variables_by_group(group)

The number of variables for a group.

plot(name[, show, save])

Plot the dataset from a DatasetPlot.

remove(entries)

Remove entries.

set_from_array(data[, variables, sizes, …])

Set the dataset from an array.

set_from_file(filename[, variables, sizes, …])

Set the dataset from a file.

set_metadata(name, value)

Set a metadata attribute.

DEFAULT_GROUP = 'parameters'
DEFAULT_NAMES = {'design_parameters': 'dp', 'functions': 'func', 'inputs': 'in', 'outputs': 'out', 'parameters': 'x'}
DESIGN_GROUP = 'design_parameters'
FUNCTION_GROUP = 'functions'
GRADIENT_GROUP = 'gradients'
HDF5_CACHE = 'HDF5Cache'
INPUT_GROUP = 'inputs'
MEMORY_FULL_CACHE = 'MemoryFullCache'
OUTPUT_GROUP = 'outputs'
PARAMETER_GROUP = 'parameters'
add_group(group, data, variables=None, sizes=None, pattern=None, cache_as_input=True)

Add data related to a group.

Parameters
  • group (str) – The name of the group of data to be added.

  • data (numpy.ndarray) – The data to be added.

  • variables (Optional[List[str]]) – The names of the variables. If None, use default names based on a pattern.

  • sizes (Optional[Dict[str, int]]) – The sizes of the variables. If None, assume that all the variables have a size equal to 1.

  • pattern (Optional[str]) – The name of the variable to be used as a pattern when variables is None. If None, use the Dataset.DEFAULT_NAMES for this group if it exists. Otherwise, use the group name.

  • cache_as_input (bool) – If True, cache these data as inputs when the dataset is exported to a cache.

Return type

str

add_variable(name, data, group='parameters', cache_as_input=True)

Add data related to a variable.

Parameters
  • name (str) – The name of the variable to be stored.

  • data (numpy.ndarray) – The data to be stored.

  • group (str) – The name of the group related to this variable.

  • cache_as_input (bool) – If True, cache these data as inputs when the dataset is exported to a cache.

Return type

None
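
For illustration, a hedged sketch of how add_group() and add_variable() might be used on a plain Dataset; the group and variable names ("inputs", "outputs", "a", "b", "y") are made up for the example:

    import numpy as np
    from gemseo.core.dataset import Dataset

    ds = Dataset(name="doe_data")

    # Store a 10x3 array as an "inputs" group made of two variables:
    # "a" with 1 component and "b" with 2 components.
    ds.add_group("inputs", np.random.rand(10, 3),
                 variables=["a", "b"], sizes={"a": 1, "b": 2})

    # Store a 10x1 array as a single variable "y" in the "outputs" group.
    ds.add_variable("y", np.random.rand(10, 1), group="outputs")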

property columns_names

The names of the columns of the dataset.

compare(value_1, logical_operator, value_2, component_1=0, component_2=0)

Compare either a variable and a value or a variable and another variable.

Parameters
  • value_1 (Union[str, float]) – The first value, either a variable name or a numeric value.

  • logical_operator (str) – The logical operator, either “==”, “<”, “<=”, “>” or “>=”.

  • value_2 (Union[str, float]) – The second value, either a variable name or a numeric value.

  • component_1 (int) – If value_1 is a variable name, component_1 corresponds to its component used in the comparison.

  • component_2 (int) – If value_2 is a variable name, component_2 corresponds to its component used in the comparison.

Returns

Whether the comparison is valid for the different entries.

Return type

numpy.ndarray
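
A possible usage sketch, continuing the hypothetical ds built above and assuming its variable "y" exists:

    # Boolean mask over the samples: True where the first component of "y" > 0.5.
    mask = ds.compare("y", ">", 0.5)

    # Indices of the entries satisfying the comparison (see find() below).
    indices = ds.find(mask)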

export_to_cache(inputs=None, outputs=None, cache_type='MemoryFullCache', cache_hdf_file=None, cache_hdf_node_name=None, **options)

Export the dataset to a cache.

Parameters
  • inputs (Optional[Iterable[str]]) – The names of the inputs to cache. If None, use all inputs.

  • outputs (Optional[Iterable[str]]) – The names of the outputs to cache. If None, use all outputs.

  • cache_type (str) – The type of cache to use.

  • cache_hdf_file (Optional[str]) – The name of the HDF file to store the data. Required if the type of the cache is ‘HDF5Cache’.

  • cache_hdf_node_name (Optional[str]) – The name of the HDF node to store the discipline. If None, use the name of the dataset.

Returns

A cache containing the dataset.

Return type

gemseo.core.cache.AbstractFullCache
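
A hedged sketch; the cache types come from the class constants above and the HDF file name is arbitrary:

    # In-memory cache (default MemoryFullCache).
    cache = dataset.export_to_cache()

    # HDF5-backed cache; the file name is only an example.
    hdf_cache = dataset.export_to_cache(cache_type="HDF5Cache",
                                        cache_hdf_file="burgers_cache.h5")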

export_to_dataframe(copy=True)

Export the dataset to a pandas Dataframe.

Parameters

copy (bool) – If True, copy the data. Otherwise, use a reference to them.

Returns

A pandas DataFrame containing the dataset.

Return type

DataFrame
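
For example (a sketch, assuming pandas is available, as it is a GEMSEO dependency):

    df = dataset.export_to_dataframe()
    print(df.shape)  # (number of samples, total number of variable components)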

static find(comparison)

Find the entries for which a comparison is satisfied.

This search uses a boolean 1D array whose length is equal to the length of the dataset.

Parameters

comparison (numpy.ndarray) – A boolean vector whose length is equal to the number of samples.

Returns

The indices of the entries for which the comparison is satisfied.

Return type

List[int]

get_all_data(by_group=True, as_dict=False)

Get all the data stored in the dataset.

The data can be returned either as a dictionary indexed by the names of the variables, or as an array concatenating them, accompanied with the names and sizes of the variables.

The data can also be classified by groups of variables.

Parameters
  • by_group – If True, sort the data by group.

  • as_dict – If True, return the data as a dictionary.

Returns

All the data stored in the dataset.

Return type

Union[Dict[str, Union[Dict[str, numpy.ndarray], numpy.ndarray]], Tuple[Union[numpy.ndarray, Dict[str, numpy.ndarray]], List[str], Dict[str, int]]]
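
A sketch of the two access modes; the unpacking of the tuple follows the return type above:

    # As a dictionary indexed by group, then by variable name.
    data_by_group = dataset.get_all_data(by_group=True, as_dict=True)

    # As a single array plus the variable names and sizes.
    array_data, names, sizes = dataset.get_all_data(by_group=False, as_dict=False)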

get_available_plots()

Return the available plot methods.

Return type

List[str]

get_data_by_group(group, as_dict=False)

Get the data for a specific group name.

Parameters
  • group (str) – The name of the group.

  • as_dict (bool) – If True, return the values as a dictionary.

Returns

The data related to the group.

Return type

Union[numpy.ndarray, Dict[str, numpy.ndarray]]

get_data_by_names(names, as_dict=True)

Get the data for specific names of variables.

Parameters
  • names (Union[str, Iterable[str]]) – The names of the variables.

  • as_dict (bool) – If True, return the values as a dictionary.

Returns

The data related to the variables.

Return type

Union[numpy.ndarray, Dict[str, numpy.ndarray]]
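
For instance, continuing the hypothetical ds defined earlier:

    # Dictionary mapping each requested variable name to its data.
    values = ds.get_data_by_names(["a", "y"], as_dict=True)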

get_group(variable_name)

Get the name of the group that contains a variable.

Parameters

variable_name (str) – The name of the variable.

Returns

The group to which the variable belongs.

Return type

str

get_names(group_name)

Get the names of the variables of a group.

Parameters

group_name (str) – The name of the group.

Returns

The names of the variables of the group.

Return type

List[str]

get_normalized_dataset(excluded_variables=None, excluded_groups=None)

Get a normalized copy of the dataset.

Parameters
  • excluded_variables (Optional[Sequence[str]]) – The names of the variables not to be normalized. If None, normalize all the variables.

  • excluded_groups (Optional[Sequence[str]]) – The names of the groups not to be normalized. If None, normalize all the groups.

Returns

A normalized dataset.

Return type

gemseo.core.dataset.Dataset
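
A sketch; the excluded group name "inputs" is illustrative:

    # Normalize everything except the variables of the "inputs" group.
    normalized = dataset.get_normalized_dataset(excluded_groups=["inputs"])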

property groups

The sorted names of the groups of variables.

is_empty()

Check if the dataset is empty.

Returns

Whether the dataset is empty.

Return type

bool

is_group(name)

Check if a name is a group name.

Parameters

name (str) – A name of a group.

Returns

Whether the name is a group name.

Return type

bool

is_nan()

Check if an entry contains NaN.

Returns

Whether each entry contains NaN.

Return type

numpy.ndarray

is_variable(name)

Check if a name is a variable name.

Parameters

name (str) – A name of a variable.

Returns

Whether the name is a variable name.

Return type

bool

property n_samples

The number of samples.

property n_variables

The number of variables.

n_variables_by_group(group)

The number of variables for a group.

Parameters

group (str) – The name of a group.

Returns

The group dimension.

Return type

int

plot(name, show=True, save=False, **options)

Plot the dataset from a DatasetPlot.

See Dataset.get_available_plots()

Parameters
  • name (str) – The name of the post-processing, which is the name of a class inheriting from DatasetPlot.

  • show (bool) – If True, display the figure.

  • save (bool) – If True, save the figure.

  • options – The options for the post-processing.

Return type

None
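
A hedged example; the plot name must be one of those returned by get_available_plots(), so the "ScatterMatrix" name below is only an assumption:

    print(dataset.get_available_plots())
    # For instance, if "ScatterMatrix" is available:
    dataset.plot("ScatterMatrix", show=False, save=True)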

remove(entries)

Remove entries.

Parameters

entries (Union[List[int], numpy.ndarray]) – The entries to be removed, either indices or a boolean 1D array whose length is equal to the length of the dataset and elements to delete are coded True.

Return type

None

property row_names

The names of the rows.

set_from_array(data, variables=None, sizes=None, groups=None, default_name=None)

Set the dataset from an array.

Parameters
  • data (numpy.ndarray) – The data to be stored.

  • variables (Optional[List[str]]) – The names of the variables. If None, use one default name per column of the array based on the pattern ‘default_name’.

  • sizes (Optional[Dict[str, int]]) – The sizes of the variables. If None, assume that all the variables have a size equal to 1.

  • groups (Optional[Dict[str, str]]) – The groups of the variables. If None, use Dataset.DEFAULT_GROUP for all the variables.

  • default_name (Optional[str]) – The name of the variable to be used as a pattern when variables is None. If None, use the Dataset.DEFAULT_NAMES for this group if it exists. Otherwise, use the group name.

Return type

None
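
A minimal sketch; the dataset name, variable names and sizes are made up:

    import numpy as np
    from gemseo.core.dataset import Dataset

    ds2 = Dataset("from_array")
    # Five samples of two variables: "x" with 2 components and "z" with 1 component.
    ds2.set_from_array(np.random.rand(5, 3),
                       variables=["x", "z"], sizes={"x": 2, "z": 1})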

set_from_file(filename, variables=None, sizes=None, groups=None, delimiter=',', header=True)

Set the dataset from a file.

Parameters
  • filename (str) – The name of the file containing the data.

  • variables (Optional[List[str]]) – The names of the variables. If None and header is True, read the names from the first line of the file. If None and header is False, use default names based on the patterns the Dataset.DEFAULT_NAMES associated with the different groups.

  • sizes (Optional[Dict[str, int]]) – The sizes of the variables. If None, assume that all the variables have a size equal to 1.

  • groups (Optional[Dict[str, str]]) – The groups of the variables. If None, use Dataset.DEFAULT_GROUP for all the variables.

  • delimiter (str) – The field delimiter.

  • header (bool) – If True, read the names of the variables on the first line of the file.

Return type

None

set_metadata(name, value)

Set a metadata attribute.

Parameters
  • name (str) – The name of the metadata attribute.

  • value (Any) – The value of the metadata attribute.

Return type

None

property variables

The sorted names of the variables.

class gemseo.problems.dataset.burgers.BurgersDiscipline[source]

Bases: gemseo.core.discipline.MDODiscipline

A software component integrated into the workflow.

The inputs and outputs are defined in a grammar, which can be either a SimpleGrammar or a JSONGrammar, or your own grammar class deriving from the Grammar abstract class.

To use it, create a subclass and implement the _run method, which defines the execution of the software. Typically, in the _run method, get the inputs from the input grammar, call your software, and write the outputs to the output grammar.

The JSON grammars are automatically detected when they are located in the same folder as your subclass module and named “CLASSNAME_input.json”; use auto_detect_grammar_files=True to activate this option.

Constructor.

Parameters
  • name – the name of the discipline

  • input_grammar_file – the file for input grammar description, if None, name + “_input.json” is used

  • output_grammar_file – the file for output grammar description, if None, name + “_output.json” is used

  • auto_detect_grammar_files – if no input and output grammar files are provided, use a naming convention to associate a grammar file to the discipline: search the “comp_dir” directory containing the discipline source file for files whose basenames are “<self.name>_input.json” and “<self.name>_output.json”

  • grammar_type – the type of grammar to use for IO declaration, either JSON_GRAMMAR_TYPE or SIMPLE_GRAMMAR_TYPE

  • cache_type – type of cache policy, SIMPLE_CACHE or HDF5_CACHE

  • cache_file_path – the file to store the data, mandatory when HDF caching is used
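
To make this concrete, here is a minimal, hedged sketch of a custom discipline; the class name MyDiscipline, the variables x and y, and the use of initialize_from_data_names() on the grammars are assumptions, not part of this reference:

    from numpy import array
    from gemseo.core.discipline import MDODiscipline

    class MyDiscipline(MDODiscipline):
        def __init__(self):
            super().__init__()
            # Declare one input "x" and one output "y" in the grammars.
            self.input_grammar.initialize_from_data_names(["x"])
            self.output_grammar.initialize_from_data_names(["y"])
            self.default_inputs = {"x": array([0.0])}

        def _run(self):
            # Read the input from the local data and write the output back.
            x = self.get_inputs_by_name(["x"])[0]
            self.store_local_data(y=2.0 * x)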

Attributes:

APPROX_MODES

AVAILABLE_MODES

COMPLEX_STEP

FINITE_DIFFERENCES

HDF5_CACHE

JSON_GRAMMAR_TYPE

MEMORY_FULL_CACHE

N_CPUS

RE_EXECUTE_DONE_POLICY

RE_EXECUTE_NEVER_POLICY

SIMPLE_CACHE

SIMPLE_GRAMMAR_TYPE

STATUS_DONE

STATUS_FAILED

STATUS_PENDING

STATUS_RUNNING

STATUS_VIRTUAL

cache_tol

Accessor to the cache input tolerance.

default_inputs

Accessor to the default inputs.

exec_time

Return the cumulated execution time.

linearization_mode

Accessor to the linearization mode.

n_calls

Return the number of calls to execute() which triggered the _run().

n_calls_linearize

Return the number of calls to linearize() which triggered the _compute_jacobian() method.

status

Status accessor.

time_stamps

Methods:

activate_time_stamps()

Activate the time stamps.

add_differentiated_inputs([inputs])

Add inputs to the differentiation list.

add_differentiated_outputs([outputs])

Add outputs to the differentiation list.

add_status_observer(obs)

Add an observer for the status.

auto_get_grammar_file([is_input, name, comp_dir])

Use a naming convention to associate a grammar file to a discipline.

check_input_data(input_data[, raise_exception])

Check the input data validity.

check_jacobian([input_data, derr_approx, …])

Check if the jacobian provided by the linearize() method is correct.

check_output_data([raise_exception])

Check the output data validity.

deactivate_time_stamps()

Deactivate the time stamps for storing start and end times of execution and linearizations.

deserialize(in_file)

Deserialize the discipline from a file.

execute([input_data])

Execute the discipline.

get_all_inputs()

Accessor for the input data as a list of values.

get_all_outputs()

Accessor for the output data as a list of values.

get_attributes_to_serialize()

Define the attributes to be serialized.

get_data_list_from_dict(keys, data_dict)

Filter the dict from a list of keys or a single key.

get_expected_dataflow()

Return the expected data exchange sequence.

get_expected_workflow()

Return the expected execution sequence.

get_input_data()

Accessor for the input data as a dict of values.

get_input_data_names()

Accessor for the input names as a list.

get_input_output_data_names()

Accessor for the input and output names as a list.

get_inputs_asarray()

Accessor for the outputs as a large numpy array.

get_inputs_by_name(data_names)

Accessor for the inputs as a list.

get_local_data_by_name(data_names)

Accessor for the local data of the discipline as a dict of values.

get_output_data()

Accessor for the output data as a dict of values.

get_output_data_names()

Accessor for the output names as a list.

get_outputs_asarray()

Accessor for the outputs as a large numpy array.

get_outputs_by_name(data_names)

Accessor for the outputs as a list.

get_sub_disciplines()

Get the sub-disciplines of self. By default, empty.

is_all_inputs_existing(data_names)

Test if all the names in data_names are inputs of the discipline.

is_all_outputs_existing(data_names)

Test if all the names in data_names are outputs of the discipline.

is_input_existing(data_name)

Test if input named data_name is an input of the discipline.

is_output_existing(data_name)

Test if output named data_name is an output of the discipline.

is_scenario()

Return True if self is a scenario.

linearize([input_data, force_all, force_no_exec])

Execute the linearized version of the code.

notify_status_observers()

Notify all status observers that the status has changed.

remove_status_observer(obs)

Remove an observer for the status.

reset_statuses_for_run()

Sets all the statuses to PENDING.

serialize(out_file)

Serialize the discipline.

set_cache_policy([cache_type, …])

Set the type of cache to use and the tolerance level.

set_disciplines_statuses(status)

Set the sub disciplines statuses.

set_jacobian_approximation([…])

Set the jacobian approximation method.

set_optimal_fd_step([outputs, inputs, …])

Compute the optimal finite-difference step.

store_local_data(**kwargs)

Store discipline data in local data.

APPROX_MODES = ['finite_differences', 'complex_step']
AVAILABLE_MODES = ('auto', 'direct', 'adjoint', 'reverse', 'finite_differences', 'complex_step')
COMPLEX_STEP = 'complex_step'
FINITE_DIFFERENCES = 'finite_differences'
HDF5_CACHE = 'HDF5Cache'
JSON_GRAMMAR_TYPE = 'JSON'
MEMORY_FULL_CACHE = 'MemoryFullCache'
N_CPUS = 2
RE_EXECUTE_DONE_POLICY = 'RE_EXEC_DONE'
RE_EXECUTE_NEVER_POLICY = 'RE_EXEC_NEVER'
SIMPLE_CACHE = 'SimpleCache'
SIMPLE_GRAMMAR_TYPE = 'Simple'
STATUS_DONE = 'DONE'
STATUS_FAILED = 'FAILED'
STATUS_PENDING = 'PENDING'
STATUS_RUNNING = 'RUNNING'
STATUS_VIRTUAL = 'VIRTUAL'
classmethod activate_time_stamps()

Activate the time stamps.

For storing start and end times of execution and linearizations.

add_differentiated_inputs(inputs=None)

Add inputs to the differentiation list.

This method updates self._differentiated_inputs with inputs

Parameters

inputs – the list of input variables to differentiate; if None, all the inputs of the discipline are used (Default value = None)

add_differentiated_outputs(outputs=None)

Add outputs to the differentiation list.

This method updates self._differentiated_outputs with outputs.

Parameters

outputs – the list of output variables to differentiate; if None, all the outputs of the discipline are used

add_status_observer(obs)

Add an observer for the status.

Add an observer to be notified when the status of self changes.

Parameters

obs – the observer to add

auto_get_grammar_file(is_input=True, name=None, comp_dir=None)

Use a naming convention to associate a grammar file to a discipline.

This method searches the “comp_dir” directory containing the discipline source file for files whose basenames are “<self.name>_input.json” and “<self.name>_output.json”.

Parameters
  • is_input – if True, searches for _input.json, otherwise _output.json (Default value = True)

  • name – the name of the discipline (Default value = None)

  • comp_dir – the containing directory; if None, use self.comp_dir (Default value = None)

Returns

path to the grammar file

Return type

string

property cache_tol

Accessor to the cache input tolerance.

check_input_data(input_data, raise_exception=True)

Check the input data validity.

Parameters
  • input_data – the input data dict

  • raise_exception – if True, raise an exception when the data is invalid (Default value = True)

check_jacobian(input_data=None, derr_approx='finite_differences', step=1e-07, threshold=1e-08, linearization_mode='auto', inputs=None, outputs=None, parallel=False, n_processes=2, use_threading=False, wait_time_between_fork=0, auto_set_step=False, plot_result=False, file_path='jacobian_errors.pdf', show=False, figsize_x=10, figsize_y=10)

Check if the jacobian provided by the linearize() method is correct.

Parameters
  • input_data – input data dict (Default value = None)

  • derr_approx – the derivative approximation method, either FINITE_DIFFERENCES or COMPLEX_STEP (Default value = FINITE_DIFFERENCES)

  • threshold – acceptance threshold for the jacobian error (Default value = 1e-8)

  • linearization_mode – the mode of linearization: direct, adjoint or automated switch depending on dimensions of inputs and outputs (Default value = ‘auto’)

  • inputs – list of inputs wrt which to differentiate (Default value = None)

  • outputs – list of outputs to differentiate (Default value = None)

  • step – the step for finite differences or complex step

  • parallel – if True, executes in parallel

  • n_processes – maximum number of processors on which to run

  • use_threading – if True, use threads instead of processes to parallelize the execution; multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. This is important to note: if you want to execute the same discipline multiple times, you should use multiprocessing.

  • wait_time_between_fork – time waited between two forks of the process /Thread

  • auto_set_step – Compute optimal step for a forward first order finite differences gradient approximation

  • plot_result – plot the result of the validation (computed and approximate jacobians)

  • file_path – path to the output file if plot_result is True

  • show – if True, open the figure

  • figsize_x – x size of the figure in inches

  • figsize_y – y size of the figure in inches

Returns

True if the check is accepted, False otherwise

check_output_data(raise_exception=True)

Check the output data validity.

Parameters

raise_exception – if True, an exception is raised when the data is invalid (Default value = True)

classmethod deactivate_time_stamps()

Deactivate the time stamps for storing start and end times of execution and linearizations.

property default_inputs

Accessor to the default inputs.

static deserialize(in_file)

Deserialize the discipline from a file.

Parameters

in_file – input file for serialization

Returns

a discipline instance

property exec_time

Return the cumulated execution time.

Multiprocessing safe.

execute(input_data=None)

Execute the discipline.

This method executes the discipline:

  • Adds the default inputs to input_data if some inputs are not defined in input_data but exist in self._default_data.

  • Checks whether the last execution of the discipline was called with identical inputs, cached in self.cache; if so, directly returns self.cache.get_output_cache(inputs).

  • Caches the inputs.

  • Checks the input data against self.input_grammar.

  • If self.data_processor is not None, runs the preprocessor.

  • Updates the status to RUNNING.

  • Calls the _run() method, which shall be defined.

  • If self.data_processor is not None, runs the postprocessor.

  • Checks the output data.

  • Caches the outputs.

  • Updates the status to DONE or FAILED.

  • Updates the summed execution time.

Parameters

input_data (dict) – the input data dict needed to execute the disciplines according to the discipline input grammar (Default value = None)

Returns

the discipline local data after execution

Return type

dict
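
Continuing the hypothetical MyDiscipline sketch given earlier:

    from numpy import array

    disc = MyDiscipline()
    local_data = disc.execute({"x": array([3.0])})
    print(local_data["y"])  # expected: array([6.])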

get_all_inputs()

Accessor for the input data as a list of values.

The order is given by self.get_input_data_names().

Returns

the data

get_all_outputs()

Accessor for the output data as a list of values.

The order is given by self.get_output_data_names().

Returns

the data

get_attributes_to_serialize()

Define the attributes to be serialized.

Shall be overloaded by disciplines

Returns

the list of attributes names

Return type

list

static get_data_list_from_dict(keys, data_dict)

Filter the dict from a list of keys or a single key.

If keys is a string, the method returns the value associated with the key. If keys is a list of strings, the method returns a generator of values corresponding to the keys, which can be iterated.

Parameters
  • keys – a string key or a list of keys

  • data_dict – the dict to get the data from

Returns

a data or a generator of data

get_expected_dataflow()

Return the expected data exchange sequence.

This method is used for the XDSM representation.

Defaults to an empty list. See MDOFormulation.get_expected_dataflow.

Returns

a list representing the data exchange arcs

get_expected_workflow()

Return the expected execution sequence.

This method is used for the XDSM representation. Defaults to the execution of the discipline itself. See MDOFormulation.get_expected_workflow.

get_input_data()

Accessor for the input data as a dict of values.

Returns

the data dict

get_input_data_names()

Accessor for the input names as a list.

Returns

the data names list

get_input_output_data_names()

Accessor for the input and output names as a list.

Returns

the data names list

get_inputs_asarray()

Accessor for the inputs as a large numpy array.

The order is the one of self.get_all_inputs().

Returns

the inputs array

Return type

ndarray

get_inputs_by_name(data_names)

Accessor for the inputs as a list.

Parameters

data_names – the data names list

Returns

the data list

get_local_data_by_name(data_names)

Accessor for the local data of the discipline as a dict of values.

Parameters

data_names – the names of the data which will be the keys of the dictionary

Returns

the data list

get_output_data()

Accessor for the output data as a dict of values.

Returns

the data dict

get_output_data_names()

Accessor for the output names as a list.

Returns

the data names list

get_outputs_asarray()

Accessor for the outputs as a large numpy array.

The order is the one of self.get_all_outputs()

Returns

the outputs array

Return type

ndarray

get_outputs_by_name(data_names)

Accessor for the outputs as a list.

Parameters

data_names – the data names list

Returns

the data list

get_sub_disciplines()

Get the sub-disciplines of self. By default, empty.

Returns

the list of disciplines

is_all_inputs_existing(data_names)

Test if all the names in data_names are inputs of the discipline.

Parameters

data_names – the names of the inputs

Returns

True if data_names are all in input grammar

Return type

bool

is_all_outputs_existing(data_names)

Test if all the names in data_names are outputs of the discipline.

Parameters

data_names – the names of the outputs

Returns

True if data_names are all in output grammar

Return type

bool

is_input_existing(data_name)

Test if input named data_name is an input of the discipline.

Parameters

data_name – the name of the input

Returns

True if data_name is in input grammar

Return type

bool

is_output_existing(data_name)

Test if output named data_name is an output of the discipline.

Parameters

data_name – the name of the output

Returns

True if data_name is in output grammar

Return type

bool

static is_scenario()

Return True if self is a scenario.

Returns

True if self is a scenario

property linearization_mode

Accessor to the linearization mode.

linearize(input_data=None, force_all=False, force_no_exec=False)

Execute the linearized version of the code.

Parameters
  • input_data – the input data dict needed to execute the disciplines according to the discipline input grammar

  • force_all – if False, self._differentiated_inputs and self._differentiated_outputs are used to filter the differentiated variables; otherwise, all outputs are differentiated wrt all inputs (Default value = False)

  • force_no_exec – if True, the discipline is not re-executed; the cache is loaded anyway

property n_calls

Return the number of calls to execute() which triggered the _run().

Multiprocessing safe.

property n_calls_linearize

Return the number of calls to linearize() which triggered the _compute_jacobian() method.

Multiprocessing safe.

notify_status_observers()

Notify all status observers that the status has changed.

remove_status_observer(obs)

Remove an observer for the status.

Parameters

obs – the observer to remove

reset_statuses_for_run()

Sets all the statuses to PENDING.

serialize(out_file)

Serialize the discipline.

Parameters

out_file – destination file for serialization

set_cache_policy(cache_type='SimpleCache', cache_tolerance=0.0, cache_hdf_file=None, cache_hdf_node_name=None, is_memory_shared=True)

Set the type of cache to use and the tolerance level.

This method sets the cache policy to cache data whose inputs are close to inputs whose outputs are already cached. The cache can be either a simple cache recording the last execution or a full cache storing all executions. Caching data can be either in-memory, e.g. SimpleCache and MemoryFullCache, or on the disk, e.g. HDF5Cache. CacheFactory.caches provides the list of available types of caches.

Parameters
  • cache_type (str) – type of cache to use.

  • cache_tolerance (float) – tolerance for the approximate cache: the maximal relative norm difference below which two input arrays are considered equal

  • cache_hdf_file (str) – the file to store the data, mandatory when HDF caching is used

  • cache_hdf_node_name (str) – name of the HDF dataset to store the discipline data. If None, self.name is used

  • is_memory_shared (bool) – If True, a shared memory dict is used to store the data, which makes the cache compatible with multiprocessing. WARNING: if set to False, and multiple disciplines point to the same cache or the process is multiprocessed, there may be duplicate computations because the cache will not be shared among the processes.
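
A sketch; the cache file name is arbitrary and disc stands for any MDODiscipline instance, such as the MyDiscipline example above:

    # Cache every execution to an HDF5 file instead of the default SimpleCache.
    disc.set_cache_policy(cache_type=MyDiscipline.HDF5_CACHE,
                          cache_hdf_file="my_discipline_cache.h5")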

set_disciplines_statuses(status)

Set the sub disciplines statuses.

To be implemented in subclasses.

Parameters

status – the status

set_jacobian_approximation(jac_approx_type='finite_differences', jax_approx_step=1e-07, jac_approx_n_processes=1, jac_approx_use_threading=False, jac_approx_wait_time=0)

Set the jacobian approximation method.

Sets the linearization mode to the given approximation method and stores the parameters of the approximation for further use when calling self.linearize.

Parameters
  • jac_approx_type – “complex_step” or “finite_differences”

  • jax_approx_step – the step for finite differences or complex step

  • jac_approx_n_processes – maximum number of processors on which to run

  • jac_approx_use_threading – if True, use threads instead of processes to parallelize the execution; multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. This is important to note: if you want to execute the same discipline multiple times, you should use multiprocessing.

  • jac_approx_wait_time – time waited between two forks of the process /Thread
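
A hedged sketch, reusing the hypothetical MyDiscipline instance and assuming that linearize() returns the Jacobian dictionary self.jac:

    # Approximate the Jacobian by forward finite differences, then linearize.
    disc.set_jacobian_approximation(jac_approx_type="finite_differences",
                                    jax_approx_step=1e-7)
    jac = disc.linearize({"x": array([3.0])}, force_all=True)
    # Expected layout: jac["y"]["x"] holds the derivative of "y" with respect to "x".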

set_optimal_fd_step(outputs=None, inputs=None, force_all=False, print_errors=False, numerical_error=2.220446049250313e-16)

Compute the optimal finite-difference step.

Compute the optimal step for a forward first order finite differences gradient approximation. Requires a first evaluation of perturbed functions values. The optimal step is reached when the truncation error (cut in the Taylor development), and the numerical cancellation errors (roundoff when doing f(x+step)-f(x)) are approximately equal.

Warning: this calls the discipline execution two times per input variable.

See: https://en.wikipedia.org/wiki/Numerical_differentiation and “Numerical Algorithms and Digital Representation”, Knut Morken, Chapter 11, “Numerical Differentiation”.

Parameters
  • inputs – the inputs wrt which the linearization is made. If None, use the differentiated inputs

  • outputs – the outputs for which the linearization is made. If None, use the differentiated outputs

  • force_all – if True, all inputs and outputs are used

  • print_errors – if True, displays the estimated errors

  • numerical_error – numerical error associated to the calculation of f. By default Machine epsilon (appx 1e-16), but can be higher when the calculation of f requires a numerical resolution

Returns

the estimated truncation and cancellation errors.

property status

Status accessor.

store_local_data(**kwargs)

Store discipline data in local data.

Parameters

kwargs – the data as key value pairs

time_stamps = None