Built-in datasets

Dataset factory

This module contains a factory to instantiate a Dataset from its class name. The class can be internal to GEMSEO or located in an external module whose path is provided to the constructor. It also provides a list of available dataset types and allows you to test if a dataset type is available.

Classes:

DatasetFactory()

This factory instantiates a Dataset from its class name.

class gemseo.problems.dataset.factory.DatasetFactory[source]

This factory instantiates a Dataset from its class name.

The class can be internal to GEMSEO or located in an external module whose path is provided to the constructor.

Initializes the factory: scans the directories to search for subclasses of Dataset.

Searches in “GEMSEO_PATH” and gemseo.problems.dataset.

Methods:

create(dataset, **options)

Create a dataset.

is_available(dataset)

Checks the availability of a dataset.

Attributes:

datasets

Lists the available datasets.

create(dataset, **options)[source]

Create a dataset.

Parameters
  • dataset (str) – The name of the dataset (its class name).

  • options – The additional options specific to this dataset.

Returns

dataset

Return type

Dataset

property datasets

Lists the available datasets.

Returns

the list of datasets.

Return type

list(str)

is_available(dataset)[source]

Checks the availability of a dataset.

Parameters

dataset (str) – name of the dataset (its class name).

Returns

True if the dataset is available.

Return type

bool
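As an illustration, here is a minimal usage sketch of the factory, assuming the built-in RosenbrockDataset documented further below (the option n_samples is taken from its constructor):

from gemseo.problems.dataset.factory import DatasetFactory

factory = DatasetFactory()
print(factory.datasets)  # the names of the available dataset classes

if factory.is_available("RosenbrockDataset"):
    # Options are forwarded to the dataset constructor.
    dataset = factory.create("RosenbrockDataset", n_samples=100)
    print(dataset.n_samples)  # 100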

Burgers dataset

This Dataset contains solutions to the Burgers’ equation with periodic boundary conditions on the interval \([0, 2\pi]\) for different time steps:

\[u_t + u u_x = \nu u_{xx},\]

An analytical expression can be obtained for the solution, using the Cole-Hopf transform:

\[u(t, x) = - 2 \nu \frac{\phi_x}{\phi},\]

where \(\phi\) is a solution to the heat equation \(\phi_t = \nu \phi_{xx}\).

This Dataset is based on a full-factorial design of experiments. Each sample corresponds to a given time step \(t\), while each feature corresponds to a given spatial point \(x\).

More information about Burgers’ equation

Classes:

BurgersDataset([name, by_group, n_samples, ...])

Burgers dataset parametrization.

BurgersDiscipline()

A piece of software integrated into the workflow.

class gemseo.problems.dataset.burgers.BurgersDataset(name='Burgers', by_group=True, n_samples=30, n_x=501, fluid_viscosity=0.1, categorize=True)[source]

Burgers dataset parametrization.

Constructor.

Parameters
  • name (str) –

    name of the dataset.

    By default it is set to Burgers.

  • by_group (bool) –

    if True, store the data by group. Otherwise, store them by variables. Default: True.

    By default it is set to True.

  • n_samples (int) –

    number of samples. Default: 30.

    By default it is set to 30.

  • n_x (int) –

    number of spatial points. Default: 501.

    By default it is set to 501.

  • fluid_viscosity (float) –

    fluid viscosity. Default: 0.1.

    By default it is set to 0.1.

  • categorize (bool) –

    distinguish between the different groups of variables. Default: True.

    By default it is set to True.

  • opt_naming (bool) –

    if True, use an optimization naming. Default: True.
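As a short usage sketch (the keyword values simply restate the documented defaults):

from gemseo.problems.dataset.burgers import BurgersDataset

# Each of the 30 samples is a snapshot of the solution on 501 spatial points.
dataset = BurgersDataset(n_samples=30, n_x=501, fluid_viscosity=0.1)
print(dataset.n_samples)  # 30
print(dataset.groups)     # the sorted names of the groups of variables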

Methods:

add_group(group, data[, variables, sizes, ...])

Add data related to a group.

add_variable(name, data[, group, cache_as_input])

Add data related to a variable.

compare(value_1, logical_operator, value_2)

Compare either a variable and a value or a variable and another variable.

export_to_cache([inputs, outputs, ...])

Export the dataset to a cache.

export_to_dataframe([copy])

Export the dataset to a pandas Dataframe.

find(comparison)

Find the entries for which a comparison is satisfied.

get_all_data([by_group, as_dict])

Get all the data stored in the dataset.

get_available_plots()

Return the available plot methods.

get_data_by_group(group[, as_dict])

Get the data for a specific group name.

get_data_by_names(names[, as_dict])

Get the data for specific names of variables.

get_group(variable_name)

Get the name of the group that contains a variable.

get_names(group_name)

Get the names of the variables of a group.

get_normalized_dataset([excluded_variables, ...])

Get a normalized copy of the dataset.

is_empty()

Check if the dataset is empty.

is_group(name)

Check if a name is a group name.

is_nan()

Check if an entry contains NaN.

is_variable(name)

Check if a name is a variable name.

n_variables_by_group(group)

The number of variables for a group.

plot(name[, show, save])

Plot the dataset from a DatasetPlot.

remove(entries)

Remove entries.

set_from_array(data[, variables, sizes, ...])

Set the dataset from an array.

set_from_file(filename[, variables, sizes, ...])

Set the dataset from a file.

set_metadata(name, value)

Set a metadata attribute.

Attributes:

columns_names

The names of the columns of the dataset.

groups

The sorted names of the groups of variables.

n_samples

The number of samples.

n_variables

The number of variables.

row_names

The names of the rows.

variables

The sorted names of the variables.

add_group(group, data, variables=None, sizes=None, pattern=None, cache_as_input=True)

Add data related to a group.

Parameters
  • group (str) – The name of the group of data to be added.

  • data (numpy.ndarray) – The data to be added.

  • variables (Optional[List[str]]) –

    The names of the variables. If None, use default names based on a pattern.

    By default it is set to None.

  • sizes (Optional[Dict[str, int]]) –

    The sizes of the variables. If None, assume that all the variables have a size equal to 1.

    By default it is set to None.

  • pattern (Optional[str]) –

    The name of the variable to be used as a pattern when variables is None. If None, use the Dataset.DEFAULT_NAMES for this group if it exists. Otherwise, use the group name.

    By default it is set to None.

  • cache_as_input (bool) –

    If True, cache these data as inputs when the dataset is exported to a cache.

    By default it is set to True.

Return type

str

add_variable(name, data, group='parameters', cache_as_input=True)

Add data related to a variable.

Parameters
  • name (str) – The name of the variable to be stored.

  • data (numpy.ndarray) – The data to be stored.

  • group (str) –

    The name of the group related to this variable.

    By default it is set to parameters.

  • cache_as_input (bool) –

    If True, cache these data as inputs when the dataset is exported to a cache.

    By default it is set to True.

Return type

None

property columns_names

The names of the columns of the dataset.

compare(value_1, logical_operator, value_2, component_1=0, component_2=0)

Compare either a variable and a value or a variable and another variable.

Parameters
  • value_1 (Union[str, float]) – The first value, either a variable name or a numeric value.

  • logical_operator (str) – The logical operator, either “==”, “<”, “<=”, “>” or “>=”.

  • value_2 (Union[str, float]) – The second value, either a variable name or a numeric value.

  • component_1 (int) –

    If value_1 is a variable name, component_1 corresponds to its component used in the comparison.

    By default it is set to 0.

  • component_2 (int) –

    If value_2 is a variable name, component_2 corresponds to its component used in the comparison.

    By default it is set to 0.

Returns

Whether the comparison is valid for the different entries.

Return type

numpy.ndarray

export_to_cache(inputs=None, outputs=None, cache_type='MemoryFullCache', cache_hdf_file=None, cache_hdf_node_name=None, **options)

Export the dataset to a cache.

Parameters
  • inputs (Optional[Iterable[str]]) –

    The names of the inputs to cache. If None, use all inputs.

    By default it is set to None.

  • outputs (Optional[Iterable[str]]) –

    The names of the outputs to cache. If None, use all outputs.

    By default it is set to None.

  • cache_type (str) –

    The type of cache to use.

    By default it is set to MemoryFullCache.

  • cache_hdf_file (Optional[str]) –

    The name of the HDF file to store the data. Required if the type of the cache is ‘HDF5Cache’.

    By default it is set to None.

  • cache_hdf_node_name (Optional[str]) –

    The name of the HDF node in which to store the dataset. If None, use the name of the dataset.

    By default it is set to None.

Returns

A cache containing the dataset.

Return type

gemseo.core.cache.AbstractFullCache

export_to_dataframe(copy=True)

Export the dataset to a pandas Dataframe.

Parameters

copy (bool) –

If True, copy data. Otherwise, use reference.

By default it is set to True.

Returns

A pandas DataFrame containing the dataset.

Return type

DataFrame
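For example, a sketch of the two export paths, reusing the dataset instance created in the sketch above (the HDF file name is illustrative):

df = dataset.export_to_dataframe()  # a pandas DataFrame view of the data

# In-memory cache (the default cache type):
cache = dataset.export_to_cache(cache_type="MemoryFullCache")

# HDF5-backed cache; a file name is required for this cache type:
cache = dataset.export_to_cache(
    cache_type="HDF5Cache",
    cache_hdf_file="burgers_cache.h5",
)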

static find(comparison)

Find the entries for which a comparison is satisfied.

This search uses a boolean 1D array whose length is equal to the length of the dataset.

Parameters

comparison (numpy.ndarray) – A boolean vector whose length is equal to the number of samples.

Returns

The indices of the entries for which the comparison is satisfied.

Return type

List[int]
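Together with remove() (documented below), compare() and find() support simple filtering; a hypothetical sketch, assuming the dataset contains a variable named "x":

# Boolean mask with one entry per sample, True where the first component
# of "x" is greater than 0.5 (variable name and threshold are illustrative).
mask = dataset.compare("x", ">", 0.5)
indices = dataset.find(mask)  # indices of the matching entries
dataset.remove(indices)       # drop those entries from the dataset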

get_all_data(by_group=True, as_dict=False)

Get all the data stored in the dataset.

The data can be returned either as a dictionary indexed by the names of the variables, or as an array concatenating them, accompanied by the names and sizes of the variables.

The data can also be classified by groups of variables.

Parameters
  • by_group

    If True, sort the data by group.

    By default it is set to True.

  • as_dict

    If True, return the data as a dictionary.

    By default it is set to False.

Returns

All the data stored in the dataset.

Return type

Union[Dict[str, Union[Dict[str, numpy.ndarray], numpy.ndarray]], Tuple[Union[numpy.ndarray, Dict[str, numpy.ndarray]], List[str], Dict[str, int]]]

get_available_plots()

Return the available plot methods.

Return type

List[str]

get_data_by_group(group, as_dict=False)

Get the data for a specific group name.

Parameters
  • group (str) – The name of the group.

  • as_dict (bool) –

    If True, return values as dictionary.

    By default it is set to False.

Returns

The data related to the group.

Return type

Union[numpy.ndarray, Dict[str, numpy.ndarray]]

get_data_by_names(names, as_dict=True)

Get the data for specific names of variables.

Parameters
  • names (Union[str, Iterable[str]]) – The names of the variables.

  • as_dict (bool) –

    If True, return values as dictionary.

    By default it is set to True.

Returns

The data related to the variables.

Return type

Union[numpy.ndarray, Dict[str, numpy.ndarray]]
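A short sketch of these accessors; the group and variable names are illustrative:

all_data = dataset.get_all_data(by_group=True)  # everything, organized by group
group_array = dataset.get_data_by_group("inputs")               # a single array
group_dict = dataset.get_data_by_group("inputs", as_dict=True)  # {name: array}
x = dataset.get_data_by_names(["x"])["x"]       # one variable by name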

get_group(variable_name)

Get the name of the group that contains a variable.

Parameters

variable_name (str) – The name of the variable.

Returns

The group to which the variable belongs.

Return type

str

get_names(group_name)

Get the names of the variables of a group.

Parameters

group_name (str) – The name of the group.

Returns

The names of the variables of the group.

Return type

List[str]

get_normalized_dataset(excluded_variables=None, excluded_groups=None)

Get a normalized copy of the dataset.

Parameters
  • excluded_variables (Optional[Sequence[str]]) –

    The names of the variables not to be normalized. If None, normalize all the variables.

    By default it is set to None.

  • excluded_groups (Optional[Sequence[str]]) –

    The names of the groups not to be normalized. If None, normalize all the groups.

    By default it is set to None.

Returns

A normalized dataset.

Return type

gemseo.core.dataset.Dataset

property groups

The sorted names of the groups of variables.

is_empty()

Check if the dataset is empty.

Returns

Whether the dataset is empty.

Return type

bool

is_group(name)

Check if a name is a group name.

Parameters

name (str) – A name of a group.

Returns

Whether the name is a group name.

Return type

bool

is_nan()

Check if an entry contains NaN.

Returns

An array indicating, for each entry, whether it contains a NaN.

Return type

numpy.ndarray

is_variable(name)

Check if a name is a variable name.

Parameters

name (str) – A name of a variable.

Returns

Whether the name is a variable name.

Return type

bool

property n_samples

The number of samples.

property n_variables

The number of variables.

n_variables_by_group(group)

The number of variables for a group.

Parameters

group (str) – The name of a group.

Returns

The group dimension.

Return type

int

plot(name, show=True, save=False, **options)

Plot the dataset from a DatasetPlot.

See Dataset.get_available_plots()

Parameters
  • name (str) – The name of the post-processing, which is the name of a class inheriting from DatasetPlot.

  • show (bool) –

    If True, display the figure.

    By default it is set to True.

  • save (bool) –

    If True, save the figure.

    By default it is set to False.

  • options – The options for the post-processing.

Return type

None

remove(entries)

Remove entries.

Parameters

entries (Union[List[int], numpy.ndarray]) – The entries to be removed, either indices or a boolean 1D array whose length is equal to the length of the dataset and whose True elements mark the entries to delete.

Return type

None

property row_names

The names of the rows.

set_from_array(data, variables=None, sizes=None, groups=None, default_name=None)

Set the dataset from an array.

Parameters
  • data (numpy.ndarray) – The data to be stored.

  • variables (Optional[List[str]]) –

    The names of the variables. If None, use one default name per column of the array based on the pattern ‘default_name’.

    By default it is set to None.

  • sizes (Optional[Dict[str, int]]) –

    The sizes of the variables. If None, assume that all the variables have a size equal to 1.

    By default it is set to None.

  • groups (Optional[Dict[str, str]]) –

    The groups of the variables. If None, use Dataset.DEFAULT_GROUP for all the variables.

    By default it is set to None.

  • default_name (Optional[str]) –

    The name of the variable to be used as a pattern when variables is None. If None, use the Dataset.DEFAULT_NAMES for this group if it exists. Otherwise, use the group name.

    By default it is set to None.

Return type

None
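A sketch of set_from_array() on a generic dataset; the Dataset constructor call and its name argument are assumptions:

from numpy import hstack
from numpy.random import rand
from gemseo.core.dataset import Dataset

new_dataset = Dataset("my_dataset")        # assumed constructor signature
data = hstack((rand(10, 2), rand(10, 1)))  # 10 samples of a 2D "x" and a 1D "y"
new_dataset.set_from_array(
    data,
    variables=["x", "y"],
    sizes={"x": 2, "y": 1},
)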

set_from_file(filename, variables=None, sizes=None, groups=None, delimiter=',', header=True)

Set the dataset from a file.

Parameters
  • filename (str) – The name of the file containing the data.

  • variables (Optional[List[str]]) –

    The names of the variables. If None and header is True, read the names from the first line of the file. If None and header is False, use default names based on the patterns the Dataset.DEFAULT_NAMES associated with the different groups.

    By default it is set to None.

  • sizes (Optional[Dict[str, int]]) –

    The sizes of the variables. If None, assume that all the variables have a size equal to 1.

    By default it is set to None.

  • groups (Optional[Dict[str, str]]) –

    The groups of the variables. If None, use Dataset.DEFAULT_GROUP for all the variables.

    By default it is set to None.

  • delimiter (str) –

    The field delimiter.

    By default it is set to ,.

  • header (bool) –

    If True, read the names of the variables on the first line of the file.

    By default it is set to True.

Return type

None

set_metadata(name, value)

Set a metadata attribute.

Parameters
  • name (str) – The name of the metadata attribute.

  • value (Any) – The value of the metadata attribute.

Return type

None

property variables

The sorted names of the variables.

class gemseo.problems.dataset.burgers.BurgersDiscipline[source]

A piece of software integrated into the workflow.

The inputs and outputs are defined in a grammar, which can be either a SimpleGrammar or a JSONGrammar, or your own which derives from the Grammar abstract class.

To use it, subclass it and implement the _run method, which defines the execution of the software. Typically, in the _run method, get the inputs from the input grammar, call your software, and write the outputs to the output grammar.

The JSONGrammar files are automatically detected when they are in the same folder as your subclass module and named “CLASSNAME_input.json”; use auto_detect_grammar_files=True to activate this option.
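As an illustration, a minimal toy subclass is sketched below; the grammar initialization call is an assumption and may differ between GEMSEO versions:

from numpy import array
from gemseo.core.discipline import MDODiscipline

class DoublingDiscipline(MDODiscipline):
    """A toy discipline computing y = 2 * x."""

    def __init__(self):
        super().__init__()
        # Assumed grammar API: declare the names of the inputs and outputs.
        self.input_grammar.initialize_from_data_names(["x"])
        self.output_grammar.initialize_from_data_names(["y"])
        self.default_inputs = {"x": array([1.0])}

    def _run(self):
        x = self.get_inputs_by_name(["x"])[0]  # read the input from the local data
        self.store_local_data(y=2 * x)         # write the output to the local data

discipline = DoublingDiscipline()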

input_grammar

The input grammar.

Type

AbstractGrammar

output_grammar

The output grammar.

Type

AbstractGrammar

grammar_type

The type of grammar to be used for inputs and outputs declaration.

Type

str

comp_dir

The path to the directory of the discipline module file if any.

Type

str

data_processor

A tool to pre- and post-process discipline data.

Type

DataProcessor

re_exec_policy

The policy to re-execute the same discipline.

Type

str

residual_variables

The output variables to be considered as residuals; they shall be equal to zero.

Type

List[str]

jac

The Jacobians of the outputs wrt inputs of the form {output: {input: matrix}}.

Type

Dict[str, Dict[str, ndarray]]

exec_for_lin

Whether the last execution was due to a linearization.

Type

bool

name

The name of the discipline.

Type

str

cache

The cache containing one or several executions of the discipline according to the cache policy.

Type

AbstractCache

local_data

The last input and output data.

Type

Dict[str, Any]

Methods:

activate_time_stamps()

Activate the time stamps.

add_differentiated_inputs([inputs])

Add inputs against which to differentiate the outputs.

add_differentiated_outputs([outputs])

Add outputs to be differentiated.

add_status_observer(obs)

Add an observer for the status.

auto_get_grammar_file([is_input, name, comp_dir])

Use a naming convention to associate a grammar file to a discipline.

check_input_data(input_data[, raise_exception])

Check the input data validity.

check_jacobian([input_data, derr_approx, ...])

Check if the analytical Jacobian is correct with respect to a reference one.

check_output_data([raise_exception])

Check the output data validity.

deactivate_time_stamps()

Deactivate the time stamps.

deserialize(in_file)

Deserialize a discipline from a file.

execute([input_data])

Execute the discipline.

get_all_inputs()

Return the local input data as a list.

get_all_outputs()

Return the local output data as a list.

get_attributes_to_serialize()

Define the names of the attributes to be serialized.

get_data_list_from_dict(keys, data_dict)

Filter the dict from a list of keys or a single key.

get_expected_dataflow()

Return the expected data exchange sequence.

get_expected_workflow()

Return the expected execution sequence.

get_input_data()

Return the local input data as a dictionary.

get_input_data_names()

Return the names of the input variables.

get_input_output_data_names()

Return the names of the input and output variables.

get_inputs_asarray()

Return the local output data as a large NumPy array.

get_inputs_by_name(data_names)

Return the local data associated with input variables.

get_local_data_by_name(data_names)

Return the local data of the discipline associated with variables names.

get_output_data()

Return the local output data as a dictionary.

get_output_data_names()

Return the names of the output variables.

get_outputs_asarray()

Return the local input data as a large NumPy array.

get_outputs_by_name(data_names)

Return the local data associated with output variables.

get_sub_disciplines()

Return the sub-disciplines if any.

is_all_inputs_existing(data_names)

Test if several variables are discipline inputs.

is_all_outputs_existing(data_names)

Test if several variables are discipline outputs.

is_input_existing(data_name)

Test if a variable is a discipline input.

is_output_existing(data_name)

Test if a variable is a discipline output.

is_scenario()

Whether the discipline is a scenario.

linearize([input_data, force_all, force_no_exec])

Execute the linearized version of the code.

notify_status_observers()

Notify all status observers that the status has changed.

remove_status_observer(obs)

Remove an observer for the status.

reset_statuses_for_run()

Set all the statuses to PENDING.

serialize(out_file)

Serialize the discipline and store it in a file.

set_cache_policy([cache_type, ...])

Set the type of cache to use and the tolerance level.

set_disciplines_statuses(status)

Set the sub-disciplines statuses.

set_jacobian_approximation([...])

Set the Jacobian approximation method.

set_optimal_fd_step([outputs, inputs, ...])

Compute the optimal finite-difference step.

store_local_data(**kwargs)

Store discipline data in local data.

Attributes:

cache_tol

The cache input tolerance.

default_inputs

The default inputs.

exec_time

The cumulated execution time of the discipline.

grammar_type

The grammar type.

linearization_mode

The linearization mode among LINEARIZE_MODE_LIST.

n_calls

The number of times the discipline was executed.

n_calls_linearize

The number of times the discipline was linearized.

status

The status of the discipline.

classmethod activate_time_stamps()

Activate the time stamps.

For storing start and end times of execution and linearizations.

Return type

None

add_differentiated_inputs(inputs=None)

Add inputs against which to differentiate the outputs.

This method updates _differentiated_inputs with inputs.

Parameters

inputs (Optional[Iterable[str]]) –

The input variables against which to differentiate the outputs. If None, all the inputs of the discipline are used.

By default it is set to None.

Raises

ValueError – When the inputs wrt which to differentiate the discipline are not inputs of the latter.

Return type

None

add_differentiated_outputs(outputs=None)

Add outputs to be differentiated.

This method updates _differentiated_outputs with outputs.

Parameters

outputs (Optional[Iterable[str]]) –

The output variables to be differentiated. If None, all the outputs of the discipline are used.

By default it is set to None.

Raises

ValueError – When the outputs to differentiate are not discipline outputs.

Return type

None

add_status_observer(obs)

Add an observer for the status.

Add an observer for the status to be notified when self changes of status.

Parameters

obs (Any) – The observer to add.

Return type

None

auto_get_grammar_file(is_input=True, name=None, comp_dir=None)

Use a naming convention to associate a grammar file to a discipline.

This method searches in a directory for either an input grammar file named name + “_input.json” or an output grammar file named name + “_output.json”.

Parameters
  • is_input (bool) –

    If True, autodetect the input grammar file; otherwise, autodetect the output grammar file.

    By default it is set to True.

  • name (Optional[str]) –

    The name to be searched in the file names. If None, use the name of the discipline.

    By default it is set to None.

  • comp_dir (Optional[Union[str, pathlib.Path]]) –

    The directory in which to search for the grammar file. If None, use the comp_dir of the discipline.

    By default it is set to None.

Returns

The grammar file path.

Return type

pathlib.Path

property cache_tol

The cache input tolerance.

This is the tolerance for equality of the inputs in the cache. If norm(stored_input_data-input_data) <= cache_tol * norm(stored_input_data), the cached data for stored_input_data is returned when calling self.execute(input_data).

check_input_data(input_data, raise_exception=True)

Check the input data validity.

Parameters
  • input_data (Dict[str, Any]) – The input data needed to execute the discipline according to the discipline input grammar.

  • raise_exception (bool) –

    Whether to raise an exception when the data is invalid.

    By default it is set to True.

Return type

None

check_jacobian(input_data=None, derr_approx='finite_differences', step=1e-07, threshold=1e-08, linearization_mode='auto', inputs=None, outputs=None, parallel=False, n_processes=2, use_threading=False, wait_time_between_fork=0, auto_set_step=False, plot_result=False, file_path='jacobian_errors.pdf', show=False, figsize_x=10, figsize_y=10, reference_jacobian_path=None, save_reference_jacobian=False, indices=None)

Check if the analytical Jacobian is correct with respect to a reference one.

If reference_jacobian_path is not None and save_reference_jacobian is True, compute the reference Jacobian with the approximation method and save it in reference_jacobian_path.

If reference_jacobian_path is not None and save_reference_jacobian is False, do not compute the reference Jacobian but read it from reference_jacobian_path.

If reference_jacobian_path is None, compute the reference Jacobian without saving it.

Parameters
  • input_data

    The input data needed to execute the discipline according to the discipline input grammar. If None, use the default_inputs.

    By default it is set to None.

  • derr_approx

    The approximation method, either “complex_step” or “finite_differences”.

    By default it is set to finite_differences.

  • threshold

    The acceptance threshold for the Jacobian error.

    By default it is set to 1e-08.

  • linearization_mode

    The linearization mode: “direct”, “adjoint” or “auto” (automatic switch depending on the dimensions of the inputs and outputs).

    By default it is set to auto.

  • inputs

    The names of the inputs wrt which to differentiate the outputs.

    By default it is set to None.

  • outputs

    The names of the outputs to be differentiated.

    By default it is set to None.

  • step

    The differentiation step.

    By default it is set to 1e-07.

  • parallel

    Whether to differentiate the discipline in parallel.

    By default it is set to False.

  • n_processes

    The maximum number of processors on which to run.

    By default it is set to 2.

  • use_threading

    Whether to use threads instead of processes to parallelize the execution; multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. This is important to note: if you want to execute the same discipline multiple times, you shall use multiprocessing.

    By default it is set to False.

  • wait_time_between_fork

    The time waited between two forks of the process / thread.

    By default it is set to 0.

  • auto_set_step

    Whether to compute the optimal step for a forward first order finite differences gradient approximation.

    By default it is set to False.

  • plot_result

    Whether to plot the result of the validation (computed vs approximated Jacobians).

    By default it is set to False.

  • file_path

    The path to the output file if plot_result is True.

    By default it is set to jacobian_errors.pdf.

  • show

    Whether to open the figure.

    By default it is set to False.

  • figsize_x

    The x-size of the figure in inches.

    By default it is set to 10.

  • figsize_y

    The y-size of the figure in inches.

    By default it is set to 10.

  • reference_jacobian_path

    The path of the reference Jacobian file.

    By default it is set to None.

  • save_reference_jacobian

    Whether to save the reference Jacobian.

    By default it is set to False.

  • indices

    The indices of the inputs and outputs for the different sub-Jacobian matrices, formatted as {variable_name: variable_components} where variable_components can be either an integer, e.g. 2, a sequence of integers, e.g. [0, 3], a slice, e.g. slice(0, 3), the ellipsis symbol (...) or None, which is the same as the ellipsis. If a variable name is missing, consider all its components. If None, consider all the components of all the inputs and outputs.

    By default it is set to None.

Returns

Whether the analytical Jacobian is correct with respect to the reference one.
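For example, a sketch of a typical call on a discipline providing an analytical or approximated Jacobian; the argument values restate the documented defaults:

ok = discipline.check_jacobian(
    derr_approx="finite_differences",
    step=1e-7,
    threshold=1e-8,
)
print(ok)  # True if the Jacobian matches the approximated reference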

check_output_data(raise_exception=True)

Check the output data validity.

Parameters

raise_exception (bool) –

Whether to raise an exception when the data is invalid.

By default it is set to True.

Return type

None

classmethod deactivate_time_stamps()

Deactivate the time stamps.

For storing start and end times of execution and linearizations.

Return type

None

property default_inputs

The default inputs.

Raises

TypeError – When the default inputs are not passed as a dictionary.

static deserialize(in_file)

Deserialize a discipline from a file.

Parameters

in_file (Union[str, pathlib.Path]) – The path to the file containing the discipline.

Returns

The discipline instance.

Return type

gemseo.core.discipline.MDODiscipline

property exec_time

The cumulated execution time of the discipline.

Note

This property is multiprocessing safe.

execute(input_data=None)

Execute the discipline.

This method executes the discipline:

  • Adds the default inputs to the input_data if some inputs are not defined in input_data but exist in _default_inputs.

  • Checks whether the last execution of the discipline was called with identical inputs, i.e. whether they are already in the cache; if so, directly returns self.cache.get_output_cache(inputs).

  • Caches the inputs.

  • Checks the input data against input_grammar.

  • If data_processor is not None, runs the preprocessor.

  • Updates the status to RUNNING.

  • Calls the _run() method, that shall be defined.

  • If data_processor is not None, runs the postprocessor.

  • Checks the output data.

  • Caches the outputs.

  • Updates the status to DONE or FAILED.

  • Updates summed execution time.

Parameters

input_data (Optional[Dict[str, Any]]) –

The input data needed to execute the discipline according to the discipline input grammar. If None, use the default_inputs.

By default it is set to None.

Returns

The discipline local data after execution.

Return type

Dict[str, Any]
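A short usage sketch, reusing the toy discipline sketched earlier:

out = discipline.execute({"x": array([2.0])})  # the local data after execution
print(out["y"])                                # array([4.])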

get_all_inputs()

Return the local input data as a list.

The order is given by get_input_data_names().

Returns

The local input data.

Return type

List[Any]

get_all_outputs()

Return the local output data as a list.

The order is given by get_output_data_names().

Returns

The local output data.

Return type

List[Any]

get_attributes_to_serialize()

Define the names of the attributes to be serialized.

Shall be overloaded by the disciplines.

Returns

The names of the attributes to be serialized.

static get_data_list_from_dict(keys, data_dict)

Filter the dict from a list of keys or a single key.

If keys is a string, then the method returns the value associated with the key. If keys is a list of strings, then the method returns a generator of the values corresponding to the keys, which can be iterated over.

Parameters
  • keys (Union[str, Iterable]) – One or several names.

  • data_dict (Dict[str, Any]) – The mapping from which to get the data.

Returns

Either a data or a generator of data.

Return type

Union[Any, Generator[Any]]

get_expected_dataflow()

Return the expected data exchange sequence.

This method is used for the XDSM representation.

The default expected data exchange sequence is an empty list.

See also

MDOFormulation.get_expected_dataflow

Returns

The data exchange arcs.

Return type

List[Tuple[gemseo.core.discipline.MDODiscipline, gemseo.core.discipline.MDODiscipline, List[str]]]

get_expected_workflow()

Return the expected execution sequence.

This method is used for the XDSM representation.

The default expected execution sequence is the execution of the discipline itself.

See also

MDOFormulation.get_expected_workflow

Returns

The expected execution sequence.

Return type

SerialExecSequence

get_input_data()

Return the local input data as a dictionary.

Returns

The local input data.

Return type

Dict[str, Any]

get_input_data_names()

Return the names of the input variables.

Returns

The names of the input variables.

Return type

List[str]

get_input_output_data_names()

Return the names of the input and output variables.

Returns

The name of the input and output variables.

Return type

List[str]

get_inputs_asarray()

Return the local output data as a large NumPy array.

The order is the one of get_all_outputs().

Returns

The local output data.

Return type

numpy.ndarray

get_inputs_by_name(data_names)

Return the local data associated with input variables.

Parameters

data_names (Iterable[str]) – The names of the input variables.

Returns

The local data for the given input variables.

Raises

ValueError – When a variable is not an input of the discipline.

Return type

List[Any]

get_local_data_by_name(data_names)

Return the local data of the discipline associated with variables names.

Parameters

data_names (Iterable[str]) – The names of the variables.

Returns

The local data associated with the variables names.

Raises

ValueError – When a name is not a discipline input name.

Return type

Generator[Any]

get_output_data()

Return the local output data as a dictionary.

Returns

The local output data.

Return type

Dict[str, Any]

get_output_data_names()

Return the names of the output variables.

Returns

The names of the output variables.

Return type

List[str]

get_outputs_asarray()

Return the local input data as a large NumPy array.

The order is the one of get_all_inputs().

Returns

The local input data.

Return type

numpy.ndarray

get_outputs_by_name(data_names)

Return the local data associated with output variables.

Parameters

data_names (Iterable[str]) – The names of the output variables.

Returns

The local data for the given output variables.

Raises

ValueError – When a variable is not an output of the discipline.

Return type

List[Any]

get_sub_disciplines()

Return the sub-disciplines if any.

Returns

The sub-disciplines.

Return type

List[gemseo.core.discipline.MDODiscipline]

property grammar_type

The grammar type.

is_all_inputs_existing(data_names)

Test if several variables are discipline inputs.

Parameters

data_names (Iterable[str]) – The names of the variables.

Returns

Whether all the variables are discipline inputs.

Return type

bool

is_all_outputs_existing(data_names)

Test if several variables are discipline outputs.

Parameters

data_names (Iterable[str]) – The names of the variables.

Returns

Whether all the variables are discipline outputs.

Return type

bool

is_input_existing(data_name)

Test if a variable is a discipline input.

Parameters

data_name (str) – The name of the variable.

Returns

Whether the variable is a discipline input.

Return type

bool

is_output_existing(data_name)

Test if a variable is a discipline output.

Parameters

data_name (str) – The name of the variable.

Returns

Whether the variable is a discipline output.

Return type

bool

static is_scenario()

Whether the discipline is a scenario.

Return type

bool

property linearization_mode

The linearization mode among LINEARIZE_MODE_LIST.

Raises

ValueError – When the linearization mode is unknown.

linearize(input_data=None, force_all=False, force_no_exec=False)

Execute the linearized version of the code.

Parameters
  • input_data (Optional[Dict[str, Any]]) –

    The input data needed to linearize the discipline according to the discipline input grammar. If None, use the default_inputs.

    By default it is set to None.

  • force_all (bool) –

    If False, _differentiated_inputs and _differentiated_outputs are used to filter the differentiated variables. Otherwise, all outputs are differentiated wrt all inputs.

    By default it is set to False.

  • force_no_exec (bool) –

    If True, the discipline is not re-executed; the cache is loaded anyway.

    By default it is set to False.

Returns

The Jacobian of the discipline.

Return type

Dict[str, Dict[str, numpy.ndarray]]
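A sketch combining add_differentiated_inputs(), add_differentiated_outputs() and linearize() with the toy discipline above; since it has no analytical Jacobian, an approximation is set first (see set_jacobian_approximation() below):

discipline.set_jacobian_approximation()      # approximate the Jacobian by finite differences
discipline.add_differentiated_inputs(["x"])
discipline.add_differentiated_outputs(["y"])
jac = discipline.linearize({"x": array([2.0])})
print(jac["y"]["x"])                         # approximately array([[2.]])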

property n_calls

The number of times the discipline was executed.

Note

This property is multiprocessing safe.

property n_calls_linearize

The number of times the discipline was linearized.

Note

This property is multiprocessing safe.

notify_status_observers()

Notify all status observers that the status has changed.

Return type

None

remove_status_observer(obs)

Remove an observer for the status.

Parameters

obs (Any) – The observer to remove.

Return type

None

reset_statuses_for_run()

Set all the statuses to PENDING.

Raises

ValueError – When the discipline cannot be run because of its status.

Return type

None

serialize(out_file)

Serialize the discipline and store it in a file.

Parameters

out_file (Union[str, pathlib.Path]) – The path to the file to store the discipline.

Return type

None

set_cache_policy(cache_type='SimpleCache', cache_tolerance=0.0, cache_hdf_file=None, cache_hdf_node_name=None, is_memory_shared=True)

Set the type of cache to use and the tolerance level.

This method defines when the output data have to be cached according to the distance between the corresponding input data and the input data already cached for which output data are also cached.

The cache can be either a SimpleCache recording the last execution or a cache storing all executions, e.g. MemoryFullCache and HDF5Cache. Caching data can be either in-memory, e.g. SimpleCache and MemoryFullCache, or on the disk, e.g. HDF5Cache.

The attribute CacheFactory.caches provides the available caches types.

Parameters
  • cache_type (str) –

    The type of cache.

    By default it is set to SimpleCache.

  • cache_tolerance (float) –

    The maximum relative norm of the difference between two input arrays to consider that two input arrays are equal.

    By default it is set to 0.0.

  • cache_hdf_file (Optional[Union[str, pathlib.Path]]) –

    The path to the HDF file to store the data; this argument is mandatory when the HDF5Cache policy is used.

    By default it is set to None.

  • cache_hdf_node_name (Optional[str]) –

    The name of the HDF file node to store the discipline data. If None, name is used.

    By default it is set to None.

  • is_memory_shared (bool) –

    Whether to store the data with a shared memory dictionary, which makes the cache compatible with multiprocessing.

    By default it is set to True.

Return type

None
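For example (the file and node names are illustrative):

# Keep only the last execution in memory (the default policy):
discipline.set_cache_policy(cache_type="SimpleCache")

# Or store every execution on disk:
discipline.set_cache_policy(
    cache_type="HDF5Cache",
    cache_hdf_file="runs.h5",
    cache_hdf_node_name="DoublingDiscipline",
)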

set_disciplines_statuses(status)

Set the sub-disciplines statuses.

To be implemented in subclasses.

Parameters

status (str) – The status.

Return type

None

set_jacobian_approximation(jac_approx_type='finite_differences', jax_approx_step=1e-07, jac_approx_n_processes=1, jac_approx_use_threading=False, jac_approx_wait_time=0)

Set the Jacobian approximation method.

Sets the linearization mode to approx_method and sets the parameters of the approximation for further use when calling linearize().

Parameters
  • jac_approx_type (str) –

    The approximation method, either “complex_step” or “finite_differences”.

    By default it is set to finite_differences.

  • jax_approx_step (float) –

    The differentiation step.

    By default it is set to 1e-07.

  • jac_approx_n_processes (int) –

    The maximum number of processors on which to run.

    By default it is set to 1.

  • jac_approx_use_threading (bool) –

    Whether to use threads instead of processes to parallelize the execution; multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. This is important to note: if you want to execute the same discipline multiple times, you shall use multiprocessing.

    By default it is set to False.

  • jac_approx_wait_time (float) –

    The time waited between two forks of the process / thread.

    By default it is set to 0.

Return type

None

set_optimal_fd_step(outputs=None, inputs=None, force_all=False, print_errors=False, numerical_error=2.220446049250313e-16)

Compute the optimal finite-difference step.

Compute the optimal step for a forward first-order finite-difference gradient approximation. This requires a first evaluation of the perturbed function values. The optimal step is reached when the truncation error (the cut in the Taylor expansion) and the numerical cancellation error (the round-off when computing f(x+step)-f(x)) are approximately equal.

Warning

This calls the discipline execution twice per input variable.

See also

https://en.wikipedia.org/wiki/Numerical_differentiation and “Numerical Algorithms and Digital Representation”, Knut Morken, Chapter 11, “Numerical Differentiation”.

Parameters
  • inputs

    The inputs wrt which the outputs are linearized. If None, use the _differentiated_inputs.

    By default it is set to None.

  • outputs

    The outputs to be linearized. If None, use the _differentiated_outputs.

    By default it is set to None.

  • force_all

    Whether to consider all the inputs and outputs of the discipline.

    By default it is set to False.

  • print_errors

    Whether to display the estimated errors.

    By default it is set to False.

  • numerical_error

    The numerical error associated with the calculation of f. By default, this is the machine epsilon (approximately 1e-16), but it can be higher when the calculation of f requires a numerical resolution.

    By default it is set to 2.220446049250313e-16.

Returns

The estimated truncation and cancellation errors.

Raises

ValueError – When the Jacobian approximation method has not been set.
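A short sketch, again with the toy discipline; an approximation method must be set beforehand, otherwise the ValueError above is raised:

discipline.set_jacobian_approximation()  # required before computing the optimal step
errors = discipline.set_optimal_fd_step(force_all=True, print_errors=True)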

property status

The status of the discipline.

store_local_data(**kwargs)

Store discipline data in local data.

Parameters
  • **kwargs (Any) – The data to be stored in local_data.

Return type

None


Iris dataset

This is one of the best-known datasets to be found in the machine learning literature.

It was introduced by the statistician Ronald Fisher in his 1936 paper “The use of multiple measurements in taxonomic problems”, Annals of Eugenics, 7(2): 179–188.

It contains 150 instances of iris plants:

  • 50 Iris Setosa,

  • 50 Iris Versicolour,

  • 50 Iris Virginica.

Each instance is characterized by:

  • its sepal length in cm,

  • its sepal width in cm,

  • its petal length in cm,

  • its petal width in cm.

This Dataset can be used for either clustering or classification purposes.

More information about the Iris dataset

Classes:

IrisDataset([name, by_group, as_io])

Iris dataset parametrization.

class gemseo.problems.dataset.iris.IrisDataset(name='Iris', by_group=True, as_io=False)[source]

Iris dataset parametrization.

Constructor.
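A minimal usage sketch:

from gemseo.problems.dataset.iris import IrisDataset

iris = IrisDataset()   # the 150 samples described above
print(iris.n_samples)  # 150
print(iris.variables)  # the sorted names of the variables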

Methods:

add_group(group, data[, variables, sizes, ...])

Add data related to a group.

add_variable(name, data[, group, cache_as_input])

Add data related to a variable.

compare(value_1, logical_operator, value_2)

Compare either a variable and a value or a variable and another variable.

export_to_cache([inputs, outputs, ...])

Export the dataset to a cache.

export_to_dataframe([copy])

Export the dataset to a pandas Dataframe.

find(comparison)

Find the entries for which a comparison is satisfied.

get_all_data([by_group, as_dict])

Get all the data stored in the dataset.

get_available_plots()

Return the available plot methods.

get_data_by_group(group[, as_dict])

Get the data for a specific group name.

get_data_by_names(names[, as_dict])

Get the data for specific names of variables.

get_group(variable_name)

Get the name of the group that contains a variable.

get_names(group_name)

Get the names of the variables of a group.

get_normalized_dataset([excluded_variables, ...])

Get a normalized copy of the dataset.

is_empty()

Check if the dataset is empty.

is_group(name)

Check if a name is a group name.

is_nan()

Check if an entry contains NaN.

is_variable(name)

Check if a name is a variable name.

n_variables_by_group(group)

The number of variables for a group.

plot(name[, show, save])

Plot the dataset from a DatasetPlot.

remove(entries)

Remove entries.

set_from_array(data[, variables, sizes, ...])

Set the dataset from an array.

set_from_file(filename[, variables, sizes, ...])

Set the dataset from a file.

set_metadata(name, value)

Set a metadata attribute.

Attributes:

columns_names

The names of the columns of the dataset.

groups

The sorted names of the groups of variables.

n_samples

The number of samples.

n_variables

The number of variables.

row_names

The names of the rows.

variables

The sorted names of the variables.

add_group(group, data, variables=None, sizes=None, pattern=None, cache_as_input=True)

Add data related to a group.

Parameters
  • group (str) – The name of the group of data to be added.

  • data (numpy.ndarray) – The data to be added.

  • variables (Optional[List[str]]) –

    The names of the variables. If None, use default names based on a pattern.

    By default it is set to None.

  • sizes (Optional[Dict[str, int]]) –

    The sizes of the variables. If None, assume that all the variables have a size equal to 1.

    By default it is set to None.

  • pattern (Optional[str]) –

    The name of the variable to be used as a pattern when variables is None. If None, use the Dataset.DEFAULT_NAMES for this group if it exists. Otherwise, use the group name.

    By default it is set to None.

  • cache_as_input (bool) –

    If True, cache these data as inputs when the dataset is exported to a cache.

    By default it is set to True.

Return type

str

add_variable(name, data, group='parameters', cache_as_input=True)

Add data related to a variable.

Parameters
  • name (str) – The name of the variable to be stored.

  • data (numpy.ndarray) – The data to be stored.

  • group (str) –

    The name of the group related to this variable.

    By default it is set to parameters.

  • cache_as_input (bool) –

    If True, cache these data as inputs when the dataset is exported to a cache.

    By default it is set to True.

Return type

None

property columns_names

The names of the columns of the dataset.

compare(value_1, logical_operator, value_2, component_1=0, component_2=0)

Compare either a variable and a value or a variable and another variable.

Parameters
  • value_1 (Union[str, float]) – The first value, either a variable name or a numeric value.

  • logical_operator (str) – The logical operator, either “==”, “<”, “<=”, “>” or “>=”.

  • value_2 (Union[str, float]) – The second value, either a variable name or a numeric value.

  • component_1 (int) –

    If value_1 is a variable name, component_1 corresponds to its component used in the comparison.

    By default it is set to 0.

  • component_2 (int) –

    If value_2 is a variable name, component_2 corresponds to its component used in the comparison.

    By default it is set to 0.

Returns

Whether the comparison is valid for the different entries.

Return type

numpy.ndarray

export_to_cache(inputs=None, outputs=None, cache_type='MemoryFullCache', cache_hdf_file=None, cache_hdf_node_name=None, **options)

Export the dataset to a cache.

Parameters
  • inputs (Optional[Iterable[str]]) –

    The names of the inputs to cache. If None, use all inputs.

    By default it is set to None.

  • outputs (Optional[Iterable[str]]) –

    The names of the outputs to cache. If None, use all outputs.

    By default it is set to None.

  • cache_type (str) –

    The type of cache to use.

    By default it is set to MemoryFullCache.

  • cache_hdf_file (Optional[str]) –

    The name of the HDF file to store the data. Required if the type of the cache is ‘HDF5Cache’.

    By default it is set to None.

  • cache_hdf_node_name (Optional[str]) –

    The name of the HDF node in which to store the dataset. If None, use the name of the dataset.

    By default it is set to None.

Returns

A cache containing the dataset.

Return type

gemseo.core.cache.AbstractFullCache

export_to_dataframe(copy=True)

Export the dataset to a pandas Dataframe.

Parameters

copy (bool) –

If True, copy data. Otherwise, use reference.

By default it is set to True.

Returns

A pandas DataFrame containing the dataset.

Return type

DataFrame

static find(comparison)

Find the entries for which a comparison is satisfied.

This search uses a boolean 1D array whose length is equal to the length of the dataset.

Parameters

comparison (numpy.ndarray) – A boolean vector whose length is equal to the number of samples.

Returns

The indices of the entries for which the comparison is satisfied.

Return type

List[int]

get_all_data(by_group=True, as_dict=False)

Get all the data stored in the dataset.

The data can be returned either as a dictionary indexed by the names of the variables, or as an array concatenating them, accompanied by the names and sizes of the variables.

The data can also be classified by groups of variables.

Parameters
  • by_group

    If True, sort the data by group.

    By default it is set to True.

  • as_dict

    If True, return the data as a dictionary.

    By default it is set to False.

Returns

All the data stored in the dataset.

Return type

Union[Dict[str, Union[Dict[str, numpy.ndarray], numpy.ndarray]], Tuple[Union[numpy.ndarray, Dict[str, numpy.ndarray]], List[str], Dict[str, int]]]

get_available_plots()

Return the available plot methods.

Return type

List[str]

get_data_by_group(group, as_dict=False)

Get the data for a specific group name.

Parameters
  • group (str) – The name of the group.

  • as_dict (bool) –

    If True, return values as dictionary.

    By default it is set to False.

Returns

The data related to the group.

Return type

Union[numpy.ndarray, Dict[str, numpy.ndarray]]

get_data_by_names(names, as_dict=True)

Get the data for specific names of variables.

Parameters
  • names (Union[str, Iterable[str]]) – The names of the variables.

  • as_dict (bool) –

    If True, return values as dictionary.

    By default it is set to True.

Returns

The data related to the variables.

Return type

Union[numpy.ndarray, Dict[str, numpy.ndarray]]

get_group(variable_name)

Get the name of the group that contains a variable.

Parameters

variable_name (str) – The name of the variable.

Returns

The group to which the variable belongs.

Return type

str

get_names(group_name)

Get the names of the variables of a group.

Parameters

group_name (str) – The name of the group.

Returns

The names of the variables of the group.

Return type

List[str]

get_normalized_dataset(excluded_variables=None, excluded_groups=None)

Get a normalized copy of the dataset.

Parameters
  • excluded_variables (Optional[Sequence[str]]) –

    The names of the variables not to be normalized. If None, normalize all the variables.

    By default it is set to None.

  • excluded_groups (Optional[Sequence[str]]) –

    The names of the groups not to be normalized. If None, normalize all the groups.

    By default it is set to None.

Returns

A normalized dataset.

Return type

gemseo.core.dataset.Dataset

property groups

The sorted names of the groups of variables.

is_empty()

Check if the dataset is empty.

Returns

Whether the dataset is empty.

Return type

bool

is_group(name)

Check if a name is a group name.

Parameters

name (str) – A name of a group.

Returns

Whether the name is a group name.

Return type

bool

is_nan()

Check if an entry contains NaN.

Returns

An array indicating, for each entry, whether it contains a NaN.

Return type

numpy.ndarray

is_variable(name)

Check if a name is a variable name.

Parameters

name (str) – A name of a variable.

Returns

Whether the name is a variable name.

Return type

bool

property n_samples

The number of samples.

property n_variables

The number of variables.

n_variables_by_group(group)

The number of variables for a group.

Parameters

group (str) – The name of a group.

Returns

The group dimension.

Return type

int

plot(name, show=True, save=False, **options)

Plot the dataset from a DatasetPlot.

See Dataset.get_available_plots()

Parameters
  • name (str) – The name of the post-processing, which is the name of a class inheriting from DatasetPlot.

  • show (bool) –

    If True, display the figure.

    By default it is set to True.

  • save (bool) –

    If True, save the figure.

    By default it is set to False.

  • options – The options for the post-processing.

Return type

None

remove(entries)

Remove entries.

Parameters

entries (Union[List[int], numpy.ndarray]) – The entries to be removed, either indices or a boolean 1D array whose length is equal to the length of the dataset and whose True elements mark the entries to delete.

Return type

None

property row_names

The names of the rows.

set_from_array(data, variables=None, sizes=None, groups=None, default_name=None)

Set the dataset from an array.

Parameters
  • data (numpy.ndarray) – The data to be stored.

  • variables (Optional[List[str]]) –

    The names of the variables. If None, use one default name per column of the array based on the pattern ‘default_name’.

    By default it is set to None.

  • sizes (Optional[Dict[str, int]]) –

    The sizes of the variables. If None, assume that all the variables have a size equal to 1.

    By default it is set to None.

  • groups (Optional[Dict[str, str]]) –

    The groups of the variables. If None, use Dataset.DEFAULT_GROUP for all the variables.

    By default it is set to None.

  • default_name (Optional[str]) –

    The name of the variable to be used as a pattern when variables is None. If None, use the Dataset.DEFAULT_NAMES for this group if it exists. Otherwise, use the group name.

    By default it is set to None.

Return type

None

set_from_file(filename, variables=None, sizes=None, groups=None, delimiter=',', header=True)

Set the dataset from a file.

Parameters
  • filename (str) – The name of the file containing the data.

  • variables (Optional[List[str]]) –

    The names of the variables. If None and header is True, read the names from the first line of the file. If None and header is False, use default names based on the patterns the Dataset.DEFAULT_NAMES associated with the different groups.

    By default it is set to None.

  • sizes (Optional[Dict[str, int]]) –

    The sizes of the variables. If None, assume that all the variables have a size equal to 1.

    By default it is set to None.

  • groups (Optional[Dict[str, str]]) –

    The groups of the variables. If None, use Dataset.DEFAULT_GROUP for all the variables.

    By default it is set to None.

  • delimiter (str) –

    The field delimiter.

    By default it is set to ,.

  • header (bool) –

    If True, read the names of the variables on the first line of the file.

    By default it is set to True.

Return type

None

set_metadata(name, value)

Set a metadata attribute.

Parameters
  • name (str) – The name of the metadata attribute.

  • value (Any) – The value of the metadata attribute.

Return type

None

property variables

The sorted names of the variables.


Rosenbrock dataset

This Dataset contains 100 evaluations of the well-known Rosenbrock function:

\[f(x,y)=(1-x)^2+100(y-x^2)^2\]

This function is known for its global minimum at the point (1, 1), its banana-shaped valley and the difficulty of reaching this minimum.

This Dataset is based on a full-factorial design of experiments.

More information about the Rosenbrock function

Classes:

RosenbrockDataset([name, by_group, ...])

Rosenbrock dataset parametrization.

class gemseo.problems.dataset.rosenbrock.RosenbrockDataset(name='Rosenbrock', by_group=True, n_samples=100, categorize=True, opt_naming=True)[source]

Rosenbrock dataset parametrization.

Constructor.

Parameters
  • name (str) –

    name of the dataset.

    By default it is set to Rosenbrock.

  • by_group (bool) –

    if True, store the data by group. Otherwise, store them by variables. Default: True

    By default it is set to True.

  • n_samples (int) –

    number of samples. Default: 100.

    By default it is set to 100.

  • categorize (bool) –

    distinguish between the different groups of variables. Default: True.

    By default it is set to True.

  • opt_naming (bool) –

    if True, use an optimization naming. Default: True.
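A usage sketch restating the documented defaults; the plot name is an assumption (any name returned by get_available_plots() would do):

from gemseo.problems.dataset.rosenbrock import RosenbrockDataset

rosenbrock = RosenbrockDataset(n_samples=100)
print(rosenbrock.get_available_plots())  # the names of the available DatasetPlot classes
# Plot with one of the returned names (class name assumed here):
rosenbrock.plot("ScatterMatrix", show=False, save=True)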

Methods:

add_group(group, data[, variables, sizes, ...])

Add data related to a group.

add_variable(name, data[, group, cache_as_input])

Add data related to a variable.

compare(value_1, logical_operator, value_2)

Compare either a variable and a value or a variable and another variable.

export_to_cache([inputs, outputs, ...])

Export the dataset to a cache.

export_to_dataframe([copy])

Export the dataset to a pandas Dataframe.

find(comparison)

Find the entries for which a comparison is satisfied.

get_all_data([by_group, as_dict])

Get all the data stored in the dataset.

get_available_plots()

Return the available plot methods.

get_data_by_group(group[, as_dict])

Get the data for a specific group name.

get_data_by_names(names[, as_dict])

Get the data for specific names of variables.

get_group(variable_name)

Get the name of the group that contains a variable.

get_names(group_name)

Get the names of the variables of a group.

get_normalized_dataset([excluded_variables, ...])

Get a normalized copy of the dataset.

is_empty()

Check if the dataset is empty.

is_group(name)

Check if a name is a group name.

is_nan()

Check if an entry contains NaN.

is_variable(name)

Check if a name is a variable name.

n_variables_by_group(group)

The number of variables for a group.

plot(name[, show, save])

Plot the dataset from a DatasetPlot.

remove(entries)

Remove entries.

set_from_array(data[, variables, sizes, ...])

Set the dataset from an array.

set_from_file(filename[, variables, sizes, ...])

Set the dataset from a file.

set_metadata(name, value)

Set a metadata attribute.

Attributes:

columns_names

The names of the columns of the dataset.

groups

The sorted names of the groups of variables.

n_samples

The number of samples.

n_variables

The number of variables.

row_names

The names of the rows.

variables

The sorted names of the variables.

add_group(group, data, variables=None, sizes=None, pattern=None, cache_as_input=True)

Add data related to a group.

Parameters
  • group (str) – The name of the group of data to be added.

  • data (numpy.ndarray) – The data to be added.

  • variables (Optional[List[str]]) –

    The names of the variables. If None, use default names based on a pattern.

    By default it is set to None.

  • sizes (Optional[Dict[str, int]]) –

    The sizes of the variables. If None, assume that all the variables have a size equal to 1.

    By default it is set to None.

  • pattern (Optional[str]) –

    The name of the variable to be used as a pattern when variables is None. If None, use the Dataset.DEFAULT_NAMES for this group if it exists. Otherwise, use the group name.

    By default it is set to None.

  • cache_as_input (bool) –

    If True, cache these data as inputs when the dataset is exported to a cache.

    By default it is set to True.

Return type

str

add_variable(name, data, group='parameters', cache_as_input=True)

Add data related to a variable.

Parameters
  • name (str) – The name of the variable to be stored.

  • data (numpy.ndarray) – The data to be stored.

  • group (str) –

    The name of the group related to this variable.

    By default it is set to parameters.

  • cache_as_input (bool) –

    If True, cache these data as inputs when the dataset is exported to a cache.

    By default it is set to True.

Return type

None
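
A sketch of adding data to a dataset; it assumes the base Dataset class (gemseo.core.dataset.Dataset) can be instantiated empty, and the variable and group names ('noise', 'environment', ...) are purely illustrative:

    import numpy as np
    from gemseo.core.dataset import Dataset

    dataset = Dataset("illustration")

    # Add a single variable to the default 'parameters' group;
    # the data is a 2D array with one row per sample.
    dataset.add_variable("noise", np.zeros((10, 1)))

    # Add a whole group at once, naming its variables and their sizes.
    dataset.add_group(
        "environment",
        np.ones((10, 2)),
        variables=["temperature", "pressure"],
        sizes={"temperature": 1, "pressure": 1},
    )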

property columns_names

The names of the columns of the dataset.

compare(value_1, logical_operator, value_2, component_1=0, component_2=0)

Compare either a variable and a value or a variable and another variable.

Parameters
  • value_1 (Union[str, float]) – The first value, either a variable name or a numeric value.

  • logical_operator (str) – The logical operator, either “==”, “<”, “<=”, “>” or “>=”.

  • value_2 (Union[str, float]) – The second value, either a variable name or a numeric value.

  • component_1 (int) –

    If value_1 is a variable name, component_1 corresponds to its component used in the comparison.

    By default it is set to 0.

  • component_2 (int) –

    If value_2 is a variable name, component_2 corresponds to its component used in the comparison.

    By default it is set to 0.

Returns

Whether the comparison is valid for the different entries.

Return type

numpy.ndarray

export_to_cache(inputs=None, outputs=None, cache_type='MemoryFullCache', cache_hdf_file=None, cache_hdf_node_name=None, **options)

Export the dataset to a cache.

Parameters
  • inputs (Optional[Iterable[str]]) –

    The names of the inputs to cache. If None, use all inputs.

    By default it is set to None.

  • outputs (Optional[Iterable[str]]) –

    The names of the outputs to cache. If None, use all outputs.

    By default it is set to None.

  • cache_type (str) –

    The type of cache to use.

    By default it is set to MemoryFullCache.

  • cache_hdf_file (Optional[str]) –

    The name of the HDF file to store the data. Required if the type of the cache is ‘HDF5Cache’.

    By default it is set to None.

  • cache_hdf_node_name (Optional[str]) –

    The name of the HDF node in which to store the data. If None, use the name of the dataset.

    By default it is set to None.

Returns

A cache containing the dataset.

Return type

gemseo.core.cache.AbstractFullCache
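
A sketch of exporting to the default in-memory cache; an HDF5 cache would additionally require cache_hdf_file:

    from gemseo.problems.dataset.rosenbrock import RosenbrockDataset

    dataset = RosenbrockDataset()

    # MemoryFullCache by default, storing all inputs and outputs.
    cache = dataset.export_to_cache()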

export_to_dataframe(copy=True)

Export the dataset to a pandas DataFrame.

Parameters

copy (bool) –

If True, copy the data. Otherwise, use a reference to them.

By default it is set to True.

Returns

A pandas DataFrame containing the dataset.

Return type

DataFrame
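
A sketch of exporting to pandas:

    from gemseo.problems.dataset.rosenbrock import RosenbrockDataset

    dataset = RosenbrockDataset()

    # One row per sample, one column per variable component; copies by default.
    df = dataset.export_to_dataframe()
    print(df.shape)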

static find(comparison)

Find the entries for which a comparison is satisfied.

This search uses a boolean 1D array whose length is equal to the length of the dataset.

Parameters

comparison (numpy.ndarray) – A boolean vector whose length is equal to the number of samples.

Returns

The indices of the entries for which the comparison is satisfied.

Return type

List[int]
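
A sketch chaining compare() and find(); the variable name is read from the dataset rather than hard-coded:

    from gemseo.problems.dataset.rosenbrock import RosenbrockDataset

    dataset = RosenbrockDataset()
    name = dataset.variables[0]

    # Boolean vector: True where the first component of this variable is > 0.
    mask = dataset.compare(name, ">", 0.0, component_1=0)

    # Indices of the samples satisfying the comparison.
    indices = dataset.find(mask)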

get_all_data(by_group=True, as_dict=False)

Get all the data stored in the dataset.

The data can be returned either as a dictionary indexed by the names of the variables, or as an array concatenating them, accompanied by the names and sizes of the variables.

The data can also be classified by groups of variables.

Parameters
  • by_group (bool) –

    If True, sort the data by group.

    By default it is set to True.

  • as_dict (bool) –

    If True, return the data as a dictionary.

    By default it is set to False.

Returns

All the data stored in the dataset.

Return type

Union[Dict[str, Union[Dict[str, numpy.ndarray], numpy.ndarray]], Tuple[Union[numpy.ndarray, Dict[str, numpy.ndarray]], List[str], Dict[str, int]]]
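
A sketch of the two output layouts of get_all_data():

    from gemseo.problems.dataset.rosenbrock import RosenbrockDataset

    dataset = RosenbrockDataset()

    # Nested dictionary: {group name: {variable name: array}}.
    data_by_group = dataset.get_all_data(by_group=True, as_dict=True)

    # Single concatenated array plus the variable names and their sizes.
    array, names, sizes = dataset.get_all_data(by_group=False, as_dict=False)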

get_available_plots()

Return the available plot methods.

Return type

List[str]

get_data_by_group(group, as_dict=False)

Get the data for a specific group name.

Parameters
  • group (str) – The name of the group.

  • as_dict (bool) –

    If True, return values as dictionary.

    By default it is set to False.

Returns

The data related to the group.

Return type

Union[numpy.ndarray, Dict[str, numpy.ndarray]]

get_data_by_names(names, as_dict=True)

Get the data for specific names of variables.

Parameters
  • names (Union[str, Iterable[str]]) – The names of the variables.

  • as_dict (bool) –

    If True, return values as dictionary.

    By default it is set to True.

Returns

The data related to the variables.

Return type

Union[numpy.ndarray, Dict[str, numpy.ndarray]]
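
A sketch of group- and name-based access; the group and variable names are read from the dataset itself:

    from gemseo.problems.dataset.rosenbrock import RosenbrockDataset

    dataset = RosenbrockDataset()
    group = dataset.groups[0]

    # All the data of one group, as an array or as {variable name: array}.
    block = dataset.get_data_by_group(group)
    block_as_dict = dataset.get_data_by_group(group, as_dict=True)

    # The data of specific variables, as a dictionary by default.
    values = dataset.get_data_by_names(dataset.variables[:1])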

get_group(variable_name)

Get the name of the group that contains a variable.

Parameters

variable_name (str) – The name of the variable.

Returns

The group to which the variable belongs.

Return type

str

get_names(group_name)

Get the names of the variables of a group.

Parameters

group_name (str) – The name of the group.

Returns

The names of the variables of the group.

Return type

List[str]
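
A sketch of navigating between variables and their groups:

    from gemseo.problems.dataset.rosenbrock import RosenbrockDataset

    dataset = RosenbrockDataset()

    name = dataset.variables[0]
    group = dataset.get_group(name)   # the group containing this variable
    names = dataset.get_names(group)  # all the variables of that group
    assert name in names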

get_normalized_dataset(excluded_variables=None, excluded_groups=None)

Get a normalized copy of the dataset.

Parameters
  • excluded_variables (Optional[Sequence[str]]) –

    The names of the variables not to be normalized. If None, normalize all the variables.

    By default it is set to None.

  • excluded_groups (Optional[Sequence[str]]) –

    The names of the groups not to be normalized. If None, normalize all the groups.

    By default it is set to None.

Returns

A normalized dataset.

Return type

gemseo.core.dataset.Dataset
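
A sketch of normalization, excluding one variable read from the dataset:

    from gemseo.problems.dataset.rosenbrock import RosenbrockDataset

    dataset = RosenbrockDataset()

    # Normalized copy of all the variables.
    normalized = dataset.get_normalized_dataset()

    # Normalized copy leaving one variable untouched.
    partly_normalized = dataset.get_normalized_dataset(
        excluded_variables=[dataset.variables[0]]
    )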

property groups

The sorted names of the groups of variables.

is_empty()

Check if the dataset is empty.

Returns

Whether the dataset is empty.

Return type

bool

is_group(name)

Check if a name is a group name.

Parameters

name (str) – A name of a group.

Returns

Whether the name is a group name.

Return type

bool

is_nan()

Check if an entry contains NaN.

Returns

Whether each entry contains NaN.

Return type

numpy.ndarray

is_variable(name)

Check if a name is a variable name.

Parameters

name (str) – A name of a variable.

Returns

Whether the name is a variable name.

Return type

bool

property n_samples

The number of samples.

property n_variables

The number of variables.

n_variables_by_group(group)

The number of variables for a group.

Parameters

group (str) – The name of a group.

Returns

The group dimension.

Return type

int

plot(name, show=True, save=False, **options)

Plot the dataset from a DatasetPlot.

See Dataset.get_available_plots()

Parameters
  • name (str) – The name of the post-processing, which is the name of a class inheriting from DatasetPlot.

  • show (bool) –

    If True, display the figure.

    By default it is set to True.

  • save (bool) –

    If True, save the figure.

    By default it is set to False.

  • options – The options for the post-processing.

Return type

None
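
A sketch of plotting; "ScatterMatrix" is only an assumption here, so check get_available_plots() for the names actually registered:

    from gemseo.problems.dataset.rosenbrock import RosenbrockDataset

    dataset = RosenbrockDataset()
    print(dataset.get_available_plots())

    # Assuming "ScatterMatrix" appears in the list above; extra options
    # are forwarded to the underlying DatasetPlot.
    dataset.plot("ScatterMatrix", show=False, save=True)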

remove(entries)

Remove entries.

Parameters

entries (Union[List[int], numpy.ndarray]) – The entries to be removed, either a list of indices or a boolean 1D array whose length is equal to the length of the dataset and whose True elements mark the entries to delete.

Return type

None
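
A sketch of removing entries, by indices or by a boolean mask built with compare():

    from gemseo.problems.dataset.rosenbrock import RosenbrockDataset

    dataset = RosenbrockDataset()

    # Remove the first two samples by their indices.
    dataset.remove([0, 1])

    # Remove all the samples satisfying a comparison.
    mask = dataset.compare(dataset.variables[0], ">", 0.0)
    dataset.remove(mask)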

property row_names

The names of the rows.

set_from_array(data, variables=None, sizes=None, groups=None, default_name=None)

Set the dataset from an array.

Parameters
  • data (numpy.ndarray) – The data to be stored.

  • variables (Optional[List[str]]) –

    The names of the variables. If None, use one default name per column of the array based on the pattern ‘default_name’.

    By default it is set to None.

  • sizes (Optional[Dict[str, int]]) –

    The sizes of the variables. If None, assume that all the variables have a size equal to 1.

    By default it is set to None.

  • groups (Optional[Dict[str, str]]) –

    The groups of the variables. If None, use Dataset.DEFAULT_GROUP for all the variables.

    By default it is set to None.

  • default_name (Optional[str]) –

    The name of the variable to be used as a pattern when variables is None. If None, use the Dataset.DEFAULT_NAMES for this group if it exists. Otherwise, use the group name.

    By default it is set to None.

Return type

None
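
A sketch of filling a dataset from an array; it assumes the base Dataset class can be instantiated empty, and the variable names and sizes are illustrative:

    import numpy as np
    from gemseo.core.dataset import Dataset

    dataset = Dataset("from_array")

    # 5 samples of 3 components, split as x (size 2) and y (size 1).
    data = np.random.default_rng(0).random((5, 3))
    dataset.set_from_array(
        data,
        variables=["x", "y"],
        sizes={"x": 2, "y": 1},
    )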

set_from_file(filename, variables=None, sizes=None, groups=None, delimiter=',', header=True)

Set the dataset from a file.

Parameters
  • filename (str) – The name of the file containing the data.

  • variables (Optional[List[str]]) –

    The names of the variables. If None and header is True, read the names from the first line of the file. If None and header is False, use default names based on the Dataset.DEFAULT_NAMES patterns associated with the different groups.

    By default it is set to None.

  • sizes (Optional[Dict[str, int]]) –

    The sizes of the variables. If None, assume that all the variables have a size equal to 1.

    By default it is set to None.

  • groups (Optional[Dict[str, str]]) –

    The groups of the variables. If None, use Dataset.DEFAULT_GROUP for all the variables.

    By default it is set to None.

  • delimiter (str) –

    The field delimiter.

    By default it is set to ','.

  • header (bool) –

    If True, read the names of the variables on the first line of the file.

    By default it is set to True.

Return type

None
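
A sketch of reading a dataset from a delimited text file; "doe.csv" is a placeholder whose first line is assumed to hold the variable names:

    from gemseo.core.dataset import Dataset

    dataset = Dataset("from_file")
    dataset.set_from_file("doe.csv", delimiter=",", header=True)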

set_metadata(name, value)

Set a metadata attribute.

Parameters
  • name (str) – The name of the metadata attribute.

  • value (Any) – The value of the metadata attribute.

Return type

None
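
A sketch of attaching metadata; the attribute name and value are illustrative:

    from gemseo.problems.dataset.rosenbrock import RosenbrockDataset

    dataset = RosenbrockDataset()
    dataset.set_metadata("doe_algorithm", "fullfact")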

property variables

The sorted names of the variables.

Example