Built-in datasets

A factory for datasets.

class gemseo.problems.dataset.factory.DatasetFactory[source]

A factory for Dataset.

create(dataset, **options)[source]

Create a Dataset.

Parameters:
  • dataset (str) – The name of the dataset (its class name).

  • **options (Any) – The options of the dataset.

Returns:

A dataset.

Return type:

Dataset

is_available(dataset)[source]

Check the availability of a dataset.

Parameters:

dataset (str) – The name of the dataset (its class name).

Returns:

Whether the dataset is available.

Return type:

bool

property datasets: list[str]

The names of the available datasets.
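The factory maps a class name to a Dataset subclass and instantiates it with the given options. The lookup can be sketched in plain Python (a toy illustration with placeholder classes, not the real discovery mechanism):

```python
# Toy stand-ins for Dataset subclasses (the real factory discovers them).
class BurgersDataset: ...
class RosenbrockDataset: ...

_CLASSES = {cls.__name__: cls for cls in (BurgersDataset, RosenbrockDataset)}

def create(dataset, **options):
    """Mimic DatasetFactory.create: instantiate the class named ``dataset``."""
    return _CLASSES[dataset](**options)

def is_available(dataset):
    """Mimic DatasetFactory.is_available."""
    return dataset in _CLASSES

print(is_available("BurgersDataset"))  # True
```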

Burgers dataset

This Dataset contains solutions to the Burgers’ equation with periodic boundary conditions on the interval \([0, 2\pi]\) for different time steps:

\[u_t + u u_x = \nu u_{xx}.\]

An analytical expression can be obtained for the solution, using the Cole-Hopf transform:

\[u(t, x) = - 2 \nu \frac{\phi'}{\phi},\]

where \(\phi\) is a solution to the heat equation \(\phi_t = \nu \phi_{xx}\).
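As a sanity check, the Cole-Hopf relation can be verified symbolically. Here \(\phi = e^{-\nu t} \cos x\) is an arbitrary heat-equation solution chosen for illustration; it is not the solution underlying the dataset:

```python
import sympy as sp

t, x, nu = sp.symbols("t x nu", positive=True)

# A particular solution of the heat equation phi_t = nu * phi_xx.
phi = sp.exp(-nu * t) * sp.cos(x)
assert sp.simplify(sp.diff(phi, t) - nu * sp.diff(phi, x, 2)) == 0

# Cole-Hopf transform: u = -2 * nu * phi_x / phi.
u = -2 * nu * sp.diff(phi, x) / phi

# The residual of Burgers' equation u_t + u*u_x - nu*u_xx vanishes.
residual = sp.diff(u, t) + u * sp.diff(u, x) - nu * sp.diff(u, x, 2)
print(sp.simplify(residual))  # 0
```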

This Dataset is based on a full-factorial design of experiments. Each sample corresponds to a given time step \(t\), while each feature corresponds to a given spatial point \(x\).

More information about Burgers’ equation

class gemseo.problems.dataset.burgers.BurgersDataset(name='Burgers', by_group=True, n_samples=30, n_x=501, fluid_viscosity=0.1, categorize=True)[source]

Burgers dataset parametrization.

Parameters:
  • name (str) –

    The name of the dataset.

    By default it is set to “Burgers”.

  • by_group (bool) –

    Whether to store the data by group. Otherwise, store them by variables.

    By default it is set to True.

  • n_samples (int) –

    The number of samples.

    By default it is set to 30.

  • n_x (int) –

    The number of spatial points.

    By default it is set to 501.

  • fluid_viscosity (float) –

    The fluid viscosity.

    By default it is set to 0.1.

  • categorize (bool) –

    Whether to distinguish between the different groups of variables.

    By default it is set to True.

add_group(group, data, variables=None, sizes=None, pattern=None, cache_as_input=True)

Add data related to a group.

Parameters:
  • group (str) – The name of the group of data to be added.

  • data (ndarray) – The data to be added.

  • variables (list[str] | None) – The names of the variables. If None, use default names based on a pattern.

  • sizes (dict[str, int] | None) – The sizes of the variables. If None, assume that all the variables have a size equal to 1.

  • pattern (str | None) – The name of the variable to be used as a pattern when variables is None. If None, use the Dataset.DEFAULT_NAMES for this group if it exists. Otherwise, use the group name.

  • cache_as_input (bool) –

    If True, cache these data as inputs when the dataset is exported to a cache.

    By default it is set to True.

Return type:

None

add_variable(name, data, group='parameters', cache_as_input=True)

Add data related to a variable.

Parameters:
  • name (str) – The name of the variable to be stored.

  • data (ndarray) – The data to be stored.

  • group (str) –

    The name of the group related to this variable.

    By default it is set to “parameters”.

  • cache_as_input (bool) –

    If True, cache these data as inputs when the dataset is exported to a cache.

    By default it is set to True.

Return type:

None

compare(value_1, logical_operator, value_2, component_1=0, component_2=0)

Compare a variable with either a value or another variable.

Parameters:
  • value_1 (str | float) – The first value, either a variable name or a numeric value.

  • logical_operator (str) – The logical operator, either “==”, “<”, “<=”, “>” or “>=”.

  • value_2 (str | float) – The second value, either a variable name or a numeric value.

  • component_1 (int) –

    If value_1 is a variable name, component_1 corresponds to its component used in the comparison.

    By default it is set to 0.

  • component_2 (int) –

    If value_2 is a variable name, component_2 corresponds to its component used in the comparison.

    By default it is set to 0.

Returns:

Whether the comparison is valid for the different entries.

Return type:

ndarray

export_to_cache(inputs=None, outputs=None, cache_type='MemoryFullCache', cache_hdf_file=None, cache_hdf_node_name=None, **options)

Export the dataset to a cache.

Parameters:
  • inputs (Iterable[str] | None) – The names of the inputs to cache. If None, use all inputs.

  • outputs (Iterable[str] | None) – The names of the outputs to cache. If None, use all outputs.

  • cache_type (str) –

    The type of cache to use.

    By default it is set to “MemoryFullCache”.

  • cache_hdf_file (str | None) – The name of the HDF file to store the data. Required if the type of the cache is ‘HDF5Cache’.

  • cache_hdf_node_name (str | None) – The name of the HDF node to store the dataset. If None, use the name of the dataset.

Returns:

A cache containing the dataset.

Return type:

AbstractFullCache

export_to_dataframe(copy=True, variable_names=None)

Export the dataset to a pandas Dataframe.

Parameters:
  • copy (bool) –

    If True, copy data. Otherwise, use reference.

    By default it is set to True.

  • variable_names (Sequence[str] | None) – The names of the variables to consider. If None, use all the variables.

Returns:

A pandas DataFrame containing the dataset.

Return type:

DataFrame

static find(comparison)

Find the entries for which a comparison is satisfied.

This search uses a boolean 1D array whose length is equal to the length of the dataset.

Parameters:

comparison (ndarray) – A boolean vector whose length is equal to the number of samples.

Returns:

The indices of the entries for which the comparison is satisfied.

Return type:

list[int]
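Together, compare() and find() implement a filter-then-index pattern. The semantics of find() can be sketched without GEMSEO (a hypothetical variable x is assumed):

```python
import numpy as np

# Hypothetical values of a variable "x", one per entry of the dataset.
x = np.array([0.2, 1.5, 0.7, 2.1])

# A call like compare("x", ">", 1.0) yields a boolean vector...
comparison = x > 1.0

# ...and find(comparison) maps it to the indices of the entries
# for which the comparison is satisfied.
indices = [i for i, satisfied in enumerate(comparison) if satisfied]
print(indices)  # [1, 3]
```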

get_all_data(by_group=True, as_dict=False)

Get all the data stored in the dataset.

The data can be returned either as a dictionary indexed by the names of the variables, or as an array concatenating them, accompanied by the names and sizes of the variables.

The data can also be classified by groups of variables.

Parameters:
  • by_group

    If True, sort the data by group.

    By default it is set to True.

  • as_dict

    If True, return the data as a dictionary.

    By default it is set to False.

Returns:

All the data stored in the dataset.

Return type:

Union[Dict[str, Union[Dict[str, ndarray], ndarray]], Tuple[Union[ndarray, Dict[str, ndarray]], List[str], Dict[str, int]]]

static get_available_plots()

Return the available plot methods.

Return type:

list[str]

get_column_names(variables=None, as_tuple=False, start=0)

Return the names of the columns of the dataset.

If dim(x)=1, its column name is ‘x’, while if dim(x)=2, its column names are either ‘x_0’ and ‘x_1’ or ColumnName(group_name, ‘x’, ‘0’) and ColumnName(group_name, ‘x’, ‘1’).

Parameters:
  • variables (Sequence[str]) – The names of the variables. If None, use all the variables.

  • as_tuple (bool) –

    If True, return the names as named tuples. Otherwise, return the names as strings.

    By default it is set to False.

  • start (int) –

    The first index for the components of a variable. E.g. with ‘0’: ‘x_0’, ‘x_1’, …

    By default it is set to 0.

Returns:

The names of the columns of the dataset.

Return type:

list[str | ColumnName]
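The naming convention described above can be sketched as a small helper (an illustration of the string form only, not of the ColumnName tuples):

```python
def column_names(name, size, start=0):
    """Name the columns of a variable: 'x' if 1D, else 'x_0', 'x_1', ..."""
    if size == 1:
        return [name]
    return [f"{name}_{i}" for i in range(start, start + size)]

print(column_names("x", 1))  # ['x']
print(column_names("x", 2))  # ['x_0', 'x_1']
```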

get_data_by_group(group, as_dict=False)

Get the data for a specific group name.

Parameters:
  • group (str) – The name of the group.

  • as_dict (bool) –

    If True, return values as dictionary.

    By default it is set to False.

Returns:

The data related to the group.

Return type:

ndarray | dict[str, ndarray]

get_data_by_names(names, as_dict=True)

Get the data for specific names of variables.

Parameters:
  • names (str | Iterable[str]) – The names of the variables.

  • as_dict (bool) –

    If True, return values as dictionary.

    By default it is set to True.

Returns:

The data related to the variables.

Return type:

ndarray | dict[str, ndarray]

get_group(variable_name)

Get the name of the group that contains a variable.

Parameters:

variable_name (str) – The name of the variable.

Returns:

The group to which the variable belongs.

Return type:

str

get_names(group_name)

Get the names of the variables of a group.

Parameters:

group_name (str) – The name of the group.

Returns:

The names of the variables of the group.

Return type:

list[str]

get_normalized_dataset(excluded_variables=None, excluded_groups=None, use_min_max=True, center=False, scale=False)

Get a normalized copy of the dataset.

Parameters:
  • excluded_variables (Sequence[str] | None) – The names of the variables not to be normalized. If None, normalize all the variables.

  • excluded_groups (Sequence[str] | None) – The names of the groups not to be normalized. If None, normalize all the groups.

  • use_min_max (bool) –

    Whether to use the min-max normalization \((x-\min(x))/(\max(x)-\min(x))\).

    By default it is set to True.

  • center (bool) –

    Whether to center the variables so that they have a zero mean.

    By default it is set to False.

  • scale (bool) –

    Whether to scale the variables so that they have a unit variance.

    By default it is set to False.

Returns:

A normalized dataset.

Return type:

Dataset
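The three transformations combined by get_normalized_dataset() can be sketched column-wise with NumPy (a standalone illustration, not GEMSEO code):

```python
import numpy as np

data = np.array([[1.0, 10.0], [2.0, 20.0], [4.0, 40.0]])

# use_min_max=True: (x - min(x)) / (max(x) - min(x)), per column.
min_max = (data - data.min(0)) / (data.max(0) - data.min(0))

# center=True: subtract the column means, giving zero-mean variables.
centered = data - data.mean(0)

# scale=True: divide by the column standard deviations (unit variance).
scaled = data / data.std(0)

print(min_max[:, 0])  # first column maps to 0, 1/3, 1
```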

is_empty()

Check if the dataset is empty.

Returns:

Whether the dataset is empty.

Return type:

bool

is_group(name)

Check if a name is a group name.

Parameters:

name (str) – A name of a group.

Returns:

Whether the name is a group name.

Return type:

bool

is_nan()

Check whether the entries contain NaN values.

Returns:

A boolean 1D vector indicating, for each entry, whether it contains NaN.

Return type:

ndarray

is_variable(name)

Check if a name is a variable name.

Parameters:

name (str) – A name of a variable.

Returns:

Whether the name is a variable name.

Return type:

bool

n_variables_by_group(group)

Return the number of variables for a group.

Parameters:

group (str) – The name of a group.

Returns:

The number of variables in the group.

Return type:

int

plot(name, show=True, save=False, file_path=None, directory_path=None, file_name=None, file_format=None, properties=None, **options)

Plot the dataset from a DatasetPlot.

See Dataset.get_available_plots().

Parameters:
  • name (str) – The name of the post-processing, which is the name of a class inheriting from DatasetPlot.

  • show (bool) –

    If True, display the figure.

    By default it is set to True.

  • save (bool) –

    If True, save the figure.

    By default it is set to False.

  • file_path (str | Path | None) – The path of the file to save the figures. If None, create a file path from directory_path, file_name and file_format.

  • directory_path (str | Path | None) – The path of the directory to save the figures. If None, use the current working directory.

  • file_name (str | None) – The name of the file to save the figures. If None, use a default one generated by the post-processing.

  • file_format (str | None) – A file format, e.g. ‘png’, ‘pdf’, ‘svg’, … If None, use a default file extension.

  • properties (Mapping[str, DatasetPlotPropertyType] | None) – The general properties of a DatasetPlot.

  • **options – The options for the post-processing.

Return type:

DatasetPlot

remove(entries)

Remove entries.

Parameters:

entries (list[int] | ndarray) – The entries to be removed, either as indices or as a boolean 1D array, of length equal to the length of the dataset, whose True elements mark the entries to delete.

Return type:

None

rename_variable(name, new_name)

Rename a variable.

Parameters:
  • name (str) – The name of the variable.

  • new_name (str) – The new name of the variable.

Return type:

None

set_from_array(data, variables=None, sizes=None, groups=None, default_name=None)

Set the dataset from an array.

Parameters:
  • data (ndarray) – The data to be stored.

  • variables (list[str] | None) – The names of the variables. If None, use one default name per column of the array based on the pattern ‘default_name’.

  • sizes (dict[str, int] | None) – The sizes of the variables. If None, assume that all the variables have a size equal to 1.

  • groups (dict[str, str] | None) – The groups of the variables. If None, use Dataset.DEFAULT_GROUP for all the variables.

  • default_name (str | None) – The name of the variable to be used as a pattern when variables is None. If None, use the Dataset.DEFAULT_NAMES for this group if it exists. Otherwise, use the group name.

Return type:

None
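The column-splitting logic behind set_from_array() can be sketched as follows (an illustration only; the real method also handles groups and default names):

```python
import numpy as np

# A 4-sample array whose 3 columns are split into variables by size.
data = np.arange(12.0).reshape(4, 3)
variables = ["x", "y"]
sizes = {"x": 2, "y": 1}

# Consume the columns left to right, one block of columns per variable.
split, start = {}, 0
for name in variables:
    split[name] = data[:, start : start + sizes[name]]
    start += sizes[name]

print(split["x"].shape, split["y"].shape)  # (4, 2) (4, 1)
```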

set_from_file(filename, variables=None, sizes=None, groups=None, delimiter=',', header=True)

Set the dataset from a file.

Parameters:
  • filename (Path | str) – The name of the file containing the data.

  • variables (list[str] | None) – The names of the variables. If None and header is True, read the names from the first line of the file. If None and header is False, use default names based on the patterns the Dataset.DEFAULT_NAMES associated with the different groups.

  • sizes (dict[str, int] | None) – The sizes of the variables. If None, assume that all the variables have a size equal to 1.

  • groups (dict[str, str] | None) – The groups of the variables. If None, use Dataset.DEFAULT_GROUP for all the variables.

  • delimiter (str) –

    The field delimiter.

    By default it is set to “,”.

  • header (bool) –

    If True, read the names of the variables on the first line of the file.

    By default it is set to True.

Return type:

None

set_metadata(name, value)

Set a metadata attribute.

Parameters:
  • name (str) – The name of the metadata attribute.

  • value (Any) – The value of the metadata attribute.

Return type:

None

transform_variable(name, transformation)

Transform a variable.

Parameters:
  • name (str) – The name of the variable, e.g. "foo".

  • transformation (Callable[[ndarray], ndarray]) – The function transforming the variable, e.g. "lambda x: np.exp(x)".

Return type:

None
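The effect of transform_variable() can be sketched on a plain dictionary of arrays (a mock, not a real Dataset):

```python
import numpy as np

data = {"foo": np.array([0.0, 1.0, 2.0])}

def transform_variable(name, transformation):
    """Apply the transformation in place to the variable's data."""
    data[name] = transformation(data[name])

# Apply exp elementwise to the variable "foo".
transform_variable("foo", lambda x: np.exp(x))
print(data["foo"])  # exp applied to each component
```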

DEFAULT_GROUP: ClassVar[str] = 'parameters'

The default name to group the variables.

DEFAULT_NAMES: ClassVar[dict[str, str]] = {'design_parameters': 'dp', 'functions': 'func', 'inputs': 'in', 'outputs': 'out', 'parameters': 'x'}

The default variable names for the different groups.

DESIGN_GROUP: ClassVar[str] = 'design_parameters'

The group name for the design variables of an OptimizationProblem.

FUNCTION_GROUP: ClassVar[str] = 'functions'

The group name for the functions of an OptimizationProblem.

GRADIENT_GROUP: ClassVar[str] = 'gradients'

The group name for the gradients.

HDF5_CACHE: ClassVar[str] = 'HDF5Cache'

The name of the HDF5Cache.

INPUT_GROUP: ClassVar[str] = 'inputs'

The group name for the input variables.

MEMORY_FULL_CACHE: ClassVar[str] = 'MemoryFullCache'

The name of the MemoryFullCache.

OUTPUT_GROUP: ClassVar[str] = 'outputs'

The group name for the output variables.

PARAMETER_GROUP: ClassVar[str] = 'parameters'

The group name for the parameters.

property columns_names: list[str | ColumnName]

The names of the columns of the dataset.

data: dict[str, ndarray]

The data stored by variable names or group names.

The values are NumPy arrays whose columns are features and rows are observations.

dimension: dict[str, int]

The dimensions of the groups of variables.

property groups: list[str]

The sorted names of the groups of variables.

length: int

The length of the dataset.

metadata: dict[str, Any]

The metadata used to store any kind of information that is not a variable, e.g. the mesh associated with a multi-dimensional variable.

property n_samples: int

The number of samples.

property n_variables: int

The number of variables.

name: str

The name of the dataset.

property row_names: list[str]

The names of the rows.

sizes: dict[str, int]

The sizes of the variables.

strings_encoding: dict[str, dict[int, int]]

The encoding structure mapping the values of the string variables with integers.

The keys are the names of the variables and the values are dictionaries whose keys are the components of the variables and the values are the integer values.

property variables: list[str]

The sorted names of the variables.

class gemseo.problems.dataset.burgers.BurgersDiscipline[source]

A software integrated in the workflow.

To be used, subclass MDODiscipline and implement the _run() method which defines the execution of the software. Typically, _run() gets the input data stored in the local_data, passes them to the callable computing the output data, e.g. a software, and stores these output data in the local_data.

Then, the end-user calls the execute() method with optional input_data; if not, default_inputs are used.

This execute() method uses name grammars to check the variable names and types of both the passed input data before calling _run() and the returned output data before they are stored in the cache. A grammar can be either a SimpleGrammar or a JSONGrammar, or your own which derives from BaseGrammar.

Parameters:
  • name – The name of the discipline. If None, use the class name.

  • input_grammar_file – The input grammar file path. If None and auto_detect_grammar_files=True, look for "ClassName_input.json" in the GRAMMAR_DIRECTORY if any or in the directory of the discipline class module. If None and auto_detect_grammar_files=False, do not initialize the input grammar from a schema file.

  • output_grammar_file – The output grammar file path. If None and auto_detect_grammar_files=True, look for "ClassName_output.json" in the GRAMMAR_DIRECTORY if any or in the directory of the discipline class module. If None and auto_detect_grammar_files=False, do not initialize the output grammar from a schema file.

  • auto_detect_grammar_files – Whether to look for "ClassName_{input,output}.json" in the GRAMMAR_DIRECTORY if any or in the directory of the discipline class module when {input,output}_grammar_file is None.

  • grammar_type – The type of grammar to define the input and output variables, e.g. MDODiscipline.JSON_GRAMMAR_TYPE or MDODiscipline.SIMPLE_GRAMMAR_TYPE.

  • cache_type – The type of policy to cache the discipline evaluations, e.g. MDODiscipline.SIMPLE_CACHE to cache the last one, MDODiscipline.HDF5_CACHE to cache them in an HDF file, or MDODiscipline.MEMORY_FULL_CACHE to cache them in memory. If None or if activate_cache is True, do not cache the discipline evaluations.

  • cache_file_path – The HDF file path when grammar_type is MDODiscipline.HDF5_CACHE.
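The execute()/_run() contract described above can be mimicked in plain Python (a minimal mock for illustration, not the real MDODiscipline with grammars and caching):

```python
class Discipline:
    """A toy discipline: execute() stores inputs, calls _run(), returns data."""

    def __init__(self, default_inputs):
        self.default_inputs = default_inputs
        self.local_data = {}

    def _run(self):
        # Subclasses compute the outputs from self.local_data.
        raise NotImplementedError

    def execute(self, input_data=None):
        # Fall back on the default inputs when none are passed.
        self.local_data = dict(input_data or self.default_inputs)
        self._run()
        return self.local_data

class Doubler(Discipline):
    def _run(self):
        self.local_data["y"] = 2 * self.local_data["x"]

print(Doubler({"x": 3}).execute())  # {'x': 3, 'y': 6}
```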

classmethod activate_time_stamps()

Activate the time stamps.

For storing start and end times of execution and linearizations.

Return type:

None

add_differentiated_inputs(inputs=None)

Add the inputs against which to differentiate the outputs.

If the discipline grammar type is MDODiscipline.JSON_GRAMMAR_TYPE and an input is either a non-numeric array or not an array, it will be ignored. If an input is declared as an array but the type of its items is not defined, it is assumed to be a numeric array.

If the discipline grammar type is MDODiscipline.SIMPLE_GRAMMAR_TYPE and an input is not an array, it will be ignored. Keep in mind that in this case the array subtype is not checked.

Parameters:

inputs (Iterable[str] | None) – The input variables against which to differentiate the outputs. If None, all the inputs of the discipline are used.

Raises:

ValueError – When the inputs wrt which differentiate the discipline are not inputs of the latter.

Return type:

None

add_differentiated_outputs(outputs=None)

Add the outputs to be differentiated.

If the discipline grammar type is MDODiscipline.JSON_GRAMMAR_TYPE and an output is either a non-numeric array or not an array, it will be ignored. If an output is declared as an array but the type of its items is not defined, it is assumed to be a numeric array.

If the discipline grammar type is MDODiscipline.SIMPLE_GRAMMAR_TYPE and an output is not an array, it will be ignored. Keep in mind that in this case the array subtype is not checked.

Parameters:

outputs (Iterable[str] | None) – The output variables to be differentiated. If None, all the outputs of the discipline are used.

Raises:

ValueError – When the outputs to differentiate are not discipline outputs.

Return type:

None

add_namespace_to_input(name, namespace)

Add a namespace prefix to an existing input grammar element.

The updated input grammar element name will be namespace + namespaces_separator + name.

Parameters:
  • name (str) – The element name to rename.

  • namespace (str) – The name of the namespace.

add_namespace_to_output(name, namespace)

Add a namespace prefix to an existing output grammar element.

The updated output grammar element name will be namespace + namespaces_separator + name.

Parameters:
  • name (str) – The element name to rename.

  • namespace (str) – The name of the namespace.

add_status_observer(obs)

Add an observer for the status.

The observer is notified whenever the status of self changes.

Parameters:

obs (Any) – The observer to add.

Return type:

None

auto_get_grammar_file(is_input=True, name=None, comp_dir=None)

Use a naming convention to associate a grammar file to the discipline.

Search in the directory comp_dir for either an input grammar file named name + "_input.json" or an output grammar file named name + "_output.json".

Parameters:
  • is_input (bool) –

    Whether to search for an input or output grammar file.

    By default it is set to True.

  • name (str | None) – The name to be searched in the file names. If None, use the name of the discipline class.

  • comp_dir (str | Path | None) – The directory in which to search the grammar file. If None, use the GRAMMAR_DIRECTORY if any, or the directory of the discipline class module.

Returns:

The grammar file path.

Return type:

str

check_input_data(input_data, raise_exception=True)

Check the input data validity.

Parameters:
  • input_data (dict[str, Any]) – The input data needed to execute the discipline according to the discipline input grammar.

  • raise_exception (bool) –

    Whether to raise on error.

    By default it is set to True.

Return type:

None

check_jacobian(input_data=None, derr_approx='finite_differences', step=1e-07, threshold=1e-08, linearization_mode='auto', inputs=None, outputs=None, parallel=False, n_processes=2, use_threading=False, wait_time_between_fork=0, auto_set_step=False, plot_result=False, file_path='jacobian_errors.pdf', show=False, fig_size_x=10, fig_size_y=10, reference_jacobian_path=None, save_reference_jacobian=False, indices=None)

Check if the analytical Jacobian is correct with respect to a reference one.

If reference_jacobian_path is not None and save_reference_jacobian is True, compute the reference Jacobian with the approximation method and save it in reference_jacobian_path.

If reference_jacobian_path is not None and save_reference_jacobian is False, do not compute the reference Jacobian but read it from reference_jacobian_path.

If reference_jacobian_path is None, compute the reference Jacobian without saving it.

Parameters:
  • input_data (dict[str, ndarray] | None) – The input data needed to execute the discipline according to the discipline input grammar. If None, use the MDODiscipline.default_inputs.

  • derr_approx (str) –

    The approximation method, either “complex_step” or “finite_differences”.

    By default it is set to “finite_differences”.

  • threshold (float) –

    The acceptance threshold for the Jacobian error.

    By default it is set to 1e-08.

  • linearization_mode (str) –

    The mode of linearization: “direct”, “adjoint” or “auto” (automated switch depending on the dimensions of the inputs and outputs).

    By default it is set to “auto”.

  • inputs (Iterable[str] | None) – The names of the inputs wrt which to differentiate the outputs.

  • outputs (Iterable[str] | None) – The names of the outputs to be differentiated.

  • step (float) –

    The differentiation step.

    By default it is set to 1e-07.

  • parallel (bool) –

    Whether to differentiate the discipline in parallel.

    By default it is set to False.

  • n_processes (int) –

    The maximum simultaneous number of threads, if use_threading is True, or processes otherwise, used to parallelize the execution.

    By default it is set to 2.

  • use_threading (bool) –

    Whether to use threads instead of processes to parallelize the execution; multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. Note that if you want to execute the same discipline multiple times, you shall use multiprocessing.

    By default it is set to False.

  • wait_time_between_fork (float) –

    The time waited between two forks of the process / thread.

    By default it is set to 0.

  • auto_set_step (bool) –

    Whether to compute the optimal step for a forward first order finite differences gradient approximation.

    By default it is set to False.

  • plot_result (bool) –

    Whether to plot the result of the validation (computed vs approximated Jacobians).

    By default it is set to False.

  • file_path (str | Path) –

    The path to the output file if plot_result is True.

    By default it is set to “jacobian_errors.pdf”.

  • show (bool) –

    Whether to open the figure.

    By default it is set to False.

  • fig_size_x (float) –

    The x-size of the figure in inches.

    By default it is set to 10.

  • fig_size_y (float) –

    The y-size of the figure in inches.

    By default it is set to 10.

  • reference_jacobian_path (str | Path | None) – The path of the reference Jacobian file.

  • save_reference_jacobian (bool) –

    Whether to save the reference Jacobian.

    By default it is set to False.

  • indices (Iterable[int] | None) – The indices of the inputs and outputs for the different sub-Jacobian matrices, formatted as {variable_name: variable_components} where variable_components can be either an integer, e.g. 2, a sequence of integers, e.g. [0, 3], a slice, e.g. slice(0, 3), the ellipsis symbol (...) or None, which is the same as ellipsis. If a variable name is missing, consider all its components. If None, consider all the components of all the inputs and outputs.

Returns:

Whether the analytical Jacobian is correct with respect to the reference one.
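The underlying idea, comparing an analytical Jacobian against a finite-difference approximation with a given step and threshold, can be sketched with NumPy on a toy function (an illustration, not GEMSEO code):

```python
import numpy as np

def f(x):  # a toy discipline: two outputs of two inputs
    return np.array([x[0] * x[1], x[0] ** 2])

def jac(x):  # its analytical Jacobian
    return np.array([[x[1], x[0]], [2 * x[0], 0.0]])

x, step = np.array([1.0, 2.0]), 1e-7

# Forward finite differences, one column per perturbed input.
approx = np.column_stack([(f(x + step * e) - f(x)) / step for e in np.eye(2)])

# The check passes when the worst-case error is below the threshold.
error = np.abs(approx - jac(x)).max()
print(error < 1e-6)  # True
```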

check_output_data(raise_exception=True)

Check the output data validity.

Parameters:

raise_exception (bool) –

Whether to raise an exception when the data is invalid.

By default it is set to True.

Return type:

None

classmethod deactivate_time_stamps()

Deactivate the time stamps.

For storing start and end times of execution and linearizations.

Return type:

None

static deserialize(file_path)

Deserialize a discipline from a file.

Parameters:

file_path (str | Path) – The path to the file containing the discipline.

Returns:

The discipline instance.

Return type:

MDODiscipline

execute(input_data=None)

Execute the discipline.

This method executes the discipline:

Parameters:

input_data (Mapping[str, Any] | None) – The input data needed to execute the discipline according to the discipline input grammar. If None, use the MDODiscipline.default_inputs.

Returns:

The discipline local data after execution.

Raises:

RuntimeError – When residual_variables are declared but self.run_solves_residuals is False. This is not supported yet.

Return type:

dict[str, Any]

get_all_inputs()

Return the local input data as a list.

The order is given by MDODiscipline.get_input_data_names().

Returns:

The local input data.

Return type:

list[Any]

get_all_outputs()

Return the local output data as a list.

The order is given by MDODiscipline.get_output_data_names().

Returns:

The local output data.

Return type:

list[Any]

get_attributes_to_serialize()

Define the names of the attributes to be serialized.

Shall be overloaded by the disciplines.

Returns:

The names of the attributes to be serialized.

Return type:

list[str]

static get_data_list_from_dict(keys, data_dict)

Filter a dictionary by a single key or a list of keys.

If keys is a string, the method returns the value associated with this key. If keys is a list of strings, the method returns a generator yielding the values corresponding to these keys.

Parameters:
  • keys (str | Iterable) – One or several names.

  • data_dict (dict[str, Any]) – The mapping from which to get the data.

Returns:

Either a data or a generator of data.

Return type:

Any | Generator[Any]
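The behavior of get_data_list_from_dict() can be sketched as (an illustration only):

```python
def get_data_list_from_dict(keys, data_dict):
    """Return one value for a single key, a generator for several keys."""
    if isinstance(keys, str):
        return data_dict[keys]
    return (data_dict[key] for key in keys)

data = {"a": 1, "b": 2, "c": 3}
print(get_data_list_from_dict("a", data))               # 1
print(list(get_data_list_from_dict(["b", "c"], data)))  # [2, 3]
```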

get_disciplines_in_dataflow_chain()

Return the disciplines that must be shown as blocks in the XDSM.

By default, only the discipline itself is shown. This function can be differently implemented for any type of inherited discipline.

Returns:

The disciplines shown in the XDSM chain.

Return type:

list[gemseo.core.discipline.MDODiscipline]

get_expected_dataflow()

Return the expected data exchange sequence.

This method is used for the XDSM representation.

The default expected data exchange sequence is an empty list.

See also

MDOFormulation.get_expected_dataflow

Returns:

The data exchange arcs.

Return type:

list[tuple[gemseo.core.discipline.MDODiscipline, gemseo.core.discipline.MDODiscipline, list[str]]]

get_expected_workflow()

Return the expected execution sequence.

This method is used for the XDSM representation.

The default expected execution sequence is the execution of the discipline itself.

See also

MDOFormulation.get_expected_workflow

Returns:

The expected execution sequence.

Return type:

SerialExecSequence

get_input_data(with_namespaces=True)

Return the local input data as a dictionary.

Parameters:

with_namespaces

Whether to keep the namespace prefix of the input names, if any.

By default it is set to True.

Returns:

The local input data.

Return type:

dict[str, Any]

get_input_data_names(with_namespaces=True)

Return the names of the input variables.

Parameters:

with_namespaces

Whether to keep the namespace prefix of the input names, if any.

By default it is set to True.

Returns:

The names of the input variables.

Return type:

list[str]

get_input_output_data_names(with_namespaces=True)

Return the names of the input and output variables.

Parameters:

with_namespaces

Whether to keep the namespace prefix of the input and output names, if any.

By default it is set to True.

Returns:

The names of the input and output variables.

Return type:

list[str]

get_inputs_asarray()

Return the local input data as a large NumPy array.

The order is the one of MDODiscipline.get_all_inputs().

Returns:

The local input data.

Return type:

ndarray

get_inputs_by_name(data_names)

Return the local data associated with input variables.

Parameters:

data_names (Iterable[str]) – The names of the input variables.

Returns:

The local data for the given input variables.

Raises:

ValueError – When a variable is not an input of the discipline.

Return type:

list[Any]

get_local_data_by_name(data_names)

Return the local data of the discipline associated with variables names.

Parameters:

data_names (Iterable[str]) – The names of the variables.

Returns:

The local data associated with the variables names.

Raises:

ValueError – When a name is not a discipline input name.

Return type:

Generator[Any]

get_output_data(with_namespaces=True)

Return the local output data as a dictionary.

Parameters:

with_namespaces

Whether to keep the namespace prefix of the output names, if any.

By default it is set to True.

Returns:

The local output data.

Return type:

dict[str, Any]

get_output_data_names(with_namespaces=True)

Return the names of the output variables.

Parameters:

with_namespaces

Whether to keep the namespace prefix of the output names, if any.

By default it is set to True.

Returns:

The names of the output variables.

Return type:

list[str]

get_outputs_asarray()

Return the local output data as a large NumPy array.

The order is the one of MDODiscipline.get_all_outputs().

Returns:

The local output data.

Return type:

ndarray

get_outputs_by_name(data_names)

Return the local data associated with output variables.

Parameters:

data_names (Iterable[str]) – The names of the output variables.

Returns:

The local data for the given output variables.

Raises:

ValueError – When a variable is not an output of the discipline.

Return type:

list[Any]

get_sub_disciplines(recursive=False)

Determine the sub-disciplines.

This method lists the disciplines contained inside this one. Unless the recursive argument is set to True, only one level of nesting is explored.

Parameters:

recursive (bool) –

If True, the method will recurse into any discipline that contains other disciplines until it reaches disciplines without sub-disciplines; in this case the return value will not include any discipline that has sub-disciplines. If False, the method will list only one level of disciplines contained inside another one; in this case the return value may include disciplines that contain sub-disciplines.

By default it is set to False.

Returns:

The sub-disciplines.

Return type:

list[gemseo.core.discipline.MDODiscipline]

is_all_inputs_existing(data_names)

Test if several variables are discipline inputs.

Parameters:

data_names (Iterable[str]) – The names of the variables.

Returns:

Whether all the variables are discipline inputs.

Return type:

bool

is_all_outputs_existing(data_names)

Test if several variables are discipline outputs.

Parameters:

data_names (Iterable[str]) – The names of the variables.

Returns:

Whether all the variables are discipline outputs.

Return type:

bool

is_input_existing(data_name)

Test if a variable is a discipline input.

Parameters:

data_name (str) – The name of the variable.

Returns:

Whether the variable is a discipline input.

Return type:

bool

is_output_existing(data_name)

Test if a variable is a discipline output.

Parameters:

data_name (str) – The name of the variable.

Returns:

Whether the variable is a discipline output.

Return type:

bool

static is_scenario()

Whether the discipline is a scenario.

Return type:

bool

linearize(input_data=None, force_all=False, force_no_exec=False)

Execute the linearized version of the code.

Parameters:
  • input_data (dict[str, Any] | None) – The input data needed to linearize the discipline according to the discipline input grammar. If None, use the MDODiscipline.default_inputs.

  • force_all (bool) –

If False, MDODiscipline._differentiated_inputs and MDODiscipline._differentiated_outputs are used to filter the differentiated variables. Otherwise, all the outputs are differentiated with respect to all the inputs.

    By default it is set to False.

  • force_no_exec (bool) –

If True, the discipline is not re-executed; the cached data are loaded instead.

    By default it is set to False.

Returns:

The Jacobian of the discipline.

Return type:

dict[str, dict[str, ndarray]]
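The returned Jacobian is a nested dictionary mapping each output name to a dictionary that maps input names to matrices. A minimal sketch of this structure, with hypothetical variable names "y" and "x" and plain Python lists standing in for NumPy arrays:

```python
# Hypothetical Jacobian of an output "y" (size 2) with respect to an input "x" (size 2),
# in the {output: {input: matrix}} format returned by linearize().
jac = {"y": {"x": [[2.0, 0.0], [0.0, 3.0]]}}

# Partial derivative of the first component of y
# with respect to the second component of x.
dy0_dx1 = jac["y"]["x"][0][1]
```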

notify_status_observers()

Notify all status observers that the status has changed.

Return type:

None

remove_status_observer(obs)

Remove an observer for the status.

Parameters:

obs (Any) – The observer to remove.

Return type:

None

reset_statuses_for_run()

Set all the statuses to MDODiscipline.STATUS_PENDING.

Raises:

ValueError – When the discipline cannot be run because of its status.

Return type:

None

serialize(file_path)

Serialize the discipline and store it in a file.

Parameters:

file_path (str | Path) – The path to the file to store the discipline.

Return type:

None

set_cache_policy(cache_type='SimpleCache', cache_tolerance=0.0, cache_hdf_file=None, cache_hdf_node_name=None, is_memory_shared=True)

Set the type of cache to use and the tolerance level.

This method defines when the output data must be cached, based on the distance between the current input data and the input data already cached (for which output data are also stored).

The cache can be either a SimpleCache recording the last execution or a cache storing all executions, e.g. MemoryFullCache and HDF5Cache. Caching data can be either in-memory, e.g. SimpleCache and MemoryFullCache, or on the disk, e.g. HDF5Cache.

The attribute CacheFactory.caches provides the available caches types.

Parameters:
  • cache_type (str) –

    The type of cache.

    By default it is set to “SimpleCache”.

  • cache_tolerance (float) –

The maximum relative norm of the difference between two input arrays for them to be considered equal.

    By default it is set to 0.0.

  • cache_hdf_file (str | Path | None) – The path to the HDF file to store the data; this argument is mandatory when the MDODiscipline.HDF5_CACHE policy is used.

  • cache_hdf_node_name (str | None) – The name of the HDF file node to store the discipline data. If None, MDODiscipline.name is used.

  • is_memory_shared (bool) –

    Whether to store the data with a shared memory dictionary, which makes the cache compatible with multiprocessing.

    By default it is set to True.

Return type:

None

set_disciplines_statuses(status)

Set the sub-disciplines statuses.

To be implemented in subclasses.

Parameters:

status (str) – The status.

Return type:

None

set_jacobian_approximation(jac_approx_type='finite_differences', jax_approx_step=1e-07, jac_approx_n_processes=1, jac_approx_use_threading=False, jac_approx_wait_time=0)

Set the Jacobian approximation method.

Set the linearization mode to the given approximation method and store the approximation parameters for later use when calling MDODiscipline.linearize().

Parameters:
  • jac_approx_type (str) –

    The approximation method, either “complex_step” or “finite_differences”.

    By default it is set to “finite_differences”.

  • jax_approx_step (float) –

    The differentiation step.

    By default it is set to 1e-07.

  • jac_approx_n_processes (int) –

The maximum number of simultaneous threads (if jac_approx_use_threading is True) or processes used to parallelize the execution.

    By default it is set to 1.

  • jac_approx_use_threading (bool) –

Whether to use threads instead of processes to parallelize the execution; multiprocessing copies (serializes) all the disciplines, while threading shares all the memory. As a consequence, if you want to execute the same discipline multiple times concurrently, you should use multiprocessing.

    By default it is set to False.

  • jac_approx_wait_time (float) –

    The time waited between two forks of the process / thread.

    By default it is set to 0.

Return type:

None
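To illustrate why the "complex_step" option can be attractive, here is a standalone sketch of the complex-step derivative (a generic illustration, not the GEMSEO implementation), which avoids the subtractive cancellation of finite differences:

```python
def complex_step_derivative(f, x, step=1e-30):
    """Approximate f'(x) as Im(f(x + i*step)) / step.

    No subtraction of nearly equal values occurs, so the step
    can be taken extremely small without round-off cancellation.
    """
    return f(complex(x, step)).imag / step

# Derivative of x**3 at x = 2 is 12.
derivative = complex_step_derivative(lambda z: z ** 3, 2.0)
```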

set_optimal_fd_step(outputs=None, inputs=None, force_all=False, print_errors=False, numerical_error=2.220446049250313e-16)

Compute the optimal finite-difference step.

Compute the optimal step for a first-order forward finite-difference gradient approximation. This requires a first evaluation of the perturbed function values. The optimal step is reached when the truncation error (from cutting the Taylor expansion) and the numerical cancellation error (round-off when computing f(x+step)-f(x)) are approximately equal.

Warning

This calls the discipline execution twice per input variable.

See also

https://en.wikipedia.org/wiki/Numerical_differentiation and “Numerical Algorithms and Digital Representation”, Knut Morken, Chapter 11, “Numerical Differentiation”

Parameters:
  • inputs (Iterable[str] | None) – The inputs wrt which the outputs are linearized. If None, use the MDODiscipline._differentiated_inputs.

  • outputs (Iterable[str] | None) – The outputs to be linearized. If None, use the MDODiscipline._differentiated_outputs.

  • force_all (bool) –

Whether to consider all the inputs and outputs of the discipline.

    By default it is set to False.

  • print_errors (bool) –

    Whether to display the estimated errors.

    By default it is set to False.

  • numerical_error (float) –

The numerical error associated with the calculation of f. By default, this is the machine epsilon (approximately 1e-16), but it can be higher when the calculation of f relies on a numerical solver.

    By default it is set to 2.220446049250313e-16.

Returns:

The estimated truncation and cancellation errors.

Raises:

ValueError – When the Jacobian approximation method has not been set.
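The trade-off between truncation and cancellation errors can be demonstrated in plain Python (a generic illustration, not the GEMSEO implementation): for a first-order forward difference, the classic rule of thumb is a step of about sqrt(eps):

```python
import math

def forward_difference(f, x, step):
    """First-order forward finite-difference approximation of f'(x)."""
    return (f(x + step) - f(x)) / step

eps = 2.220446049250313e-16  # machine epsilon for double precision
step = math.sqrt(eps)        # rule-of-thumb optimal step for forward differences

approx = forward_difference(math.sin, 1.0, step)
error = abs(approx - math.cos(1.0))  # the exact derivative of sin is cos
```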

store_local_data(**kwargs)

Store discipline data in local data.

Parameters:

**kwargs (Any) – The data to be stored in MDODiscipline.local_data.

Return type:

None

GRAMMAR_DIRECTORY: ClassVar[str | None] = None

The directory in which to search for the grammar files if not the class one.

activate_cache: bool = True

Whether to cache the discipline evaluations by default.

activate_counters: ClassVar[bool] = True

Whether to activate the counters (execution time, calls and linearizations).

activate_input_data_check: ClassVar[bool] = True

Whether to check that the input data respect the input grammar.

activate_output_data_check: ClassVar[bool] = True

Whether to check that the output data respect the output grammar.

cache: AbstractCache | None

The cache containing one or several executions of the discipline according to the cache policy.

property cache_tol: float

The cache input tolerance.

This is the tolerance for equality of the inputs in the cache. If norm(stored_input_data-input_data) <= cache_tol * norm(stored_input_data), the cached data for stored_input_data is returned when calling self.execute(input_data).

Raises:

ValueError – When the discipline does not have a cache.
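The relative tolerance test described above can be sketched in plain Python (a generic illustration, not the GEMSEO implementation):

```python
def cache_hit(stored_input_data, input_data, cache_tol):
    """Return True when the cached output for stored_input_data can be reused."""
    def norm(vector):
        return sum(x * x for x in vector) ** 0.5

    difference = [a - b for a, b in zip(stored_input_data, input_data)]
    return norm(difference) <= cache_tol * norm(stored_input_data)

# A perturbation of 1e-7 on a unit-norm input passes a 1e-3 relative tolerance.
hit = cache_hit([1.0, 0.0], [1.0000001, 0.0], 1e-3)
miss = cache_hit([1.0, 0.0], [2.0, 0.0], 1e-3)
```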

data_processor: DataProcessor

A tool to pre- and post-process discipline data.

property default_inputs: dict[str, Any]

The default inputs.

Raises:

TypeError – When the default inputs are not passed as a dictionary.

property disciplines: list[gemseo.core.discipline.MDODiscipline]

The sub-disciplines, if any.

exec_for_lin: bool

Whether the last execution was due to a linearization.

property exec_time: float | None

The cumulated execution time of the discipline.

This property is multiprocessing safe.

Raises:

RuntimeError – When the discipline counters are disabled.

property grammar_type: BaseGrammar

The type of grammar to be used for inputs and outputs declaration.

input_grammar: BaseGrammar

The input grammar.

jac: dict[str, dict[str, ndarray]]

The Jacobians of the outputs wrt inputs of the form {output: {input: matrix}}.

property linearization_mode: str

The linearization mode among MDODiscipline.AVAILABLE_MODES.

Raises:

ValueError – When the linearization mode is unknown.

property local_data: DisciplineData

The current input and output data.

property n_calls: int | None

The number of times the discipline was executed.

This property is multiprocessing safe.

Raises:

RuntimeError – When the discipline counters are disabled.

property n_calls_linearize: int | None

The number of times the discipline was linearized.

This property is multiprocessing safe.

Raises:

RuntimeError – When the discipline counters are disabled.

name: str

The name of the discipline.

output_grammar: BaseGrammar

The output grammar.

re_exec_policy: str

The policy to re-execute the same discipline.

residual_variables: Mapping[str, str]

The output variables mapping to their inputs, to be considered as residuals; they shall be equal to zero.

run_solves_residuals: bool

If True, the run method shall solve the residuals.

property status: str

The status of the discipline.

Example

Iris dataset

This is one of the best-known datasets in the machine learning literature.

It was introduced by the statistician Ronald Fisher in his 1936 paper “The use of multiple measurements in taxonomic problems”, Annals of Eugenics. 7 (2): 179–188.

It contains 150 instances of iris plants:

  • 50 Iris Setosa,

  • 50 Iris Versicolour,

  • 50 Iris Virginica.

Each instance is characterized by:

  • its sepal length in cm,

  • its sepal width in cm,

  • its petal length in cm,

  • its petal width in cm.

This Dataset can be used for both clustering and classification purposes.

More information about the Iris dataset

class gemseo.problems.dataset.iris.IrisDataset(name='Iris', by_group=True, as_io=False)[source]

Iris dataset parametrization.

Constructor.

add_group(group, data, variables=None, sizes=None, pattern=None, cache_as_input=True)

Add data related to a group.

Parameters:
  • group (str) – The name of the group of data to be added.

  • data (ndarray) – The data to be added.

  • variables (list[str] | None) – The names of the variables. If None, use default names based on a pattern.

  • sizes (dict[str, int] | None) – The sizes of the variables. If None, assume that all the variables have a size equal to 1.

  • pattern (str | None) – The name of the variable to be used as a pattern when variables is None. If None, use the Dataset.DEFAULT_NAMES for this group if it exists. Otherwise, use the group name.

  • cache_as_input (bool) –

If True, cache these data as inputs when the dataset is exported to a cache.

    By default it is set to True.

Return type:

None

add_variable(name, data, group='parameters', cache_as_input=True)

Add data related to a variable.

Parameters:
  • name (str) – The name of the variable to be stored.

  • data (ndarray) – The data to be stored.

  • group (str) –

    The name of the group related to this variable.

    By default it is set to “parameters”.

  • cache_as_input (bool) –

If True, cache these data as inputs when the dataset is exported to a cache.

    By default it is set to True.

Return type:

None

compare(value_1, logical_operator, value_2, component_1=0, component_2=0)

Compare either a variable and a value or a variable and another variable.

Parameters:
  • value_1 (str | float) – The first value, either a variable name or a numeric value.

  • logical_operator (str) – The logical operator, either “==”, “<”, “<=”, “>” or “>=”.

  • value_2 (str | float) – The second value, either a variable name or a numeric value.

  • component_1 (int) –

    If value_1 is a variable name, component_1 corresponds to its component used in the comparison.

    By default it is set to 0.

  • component_2 (int) –

    If value_2 is a variable name, component_2 corresponds to its component used in the comparison.

    By default it is set to 0.

Returns:

Whether the comparison is valid for the different entries.

Return type:

ndarray
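The elementwise comparison can be pictured with the standard operator module (a generic sketch operating on a plain list; the actual method works on the dataset's variables):

```python
import operator

# Map the supported logical operators to their functions.
operators = {
    "==": operator.eq,
    "<": operator.lt,
    "<=": operator.le,
    ">": operator.gt,
    ">=": operator.ge,
}

values = [0.5, 1.5, 2.5]  # hypothetical component values of a variable
mask = [operators["<"](value, 2.0) for value in values]
```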

export_to_cache(inputs=None, outputs=None, cache_type='MemoryFullCache', cache_hdf_file=None, cache_hdf_node_name=None, **options)

Export the dataset to a cache.

Parameters:
  • inputs (Iterable[str] | None) – The names of the inputs to cache. If None, use all inputs.

  • outputs (Iterable[str] | None) – The names of the outputs to cache. If None, use all outputs.

  • cache_type (str) –

    The type of cache to use.

    By default it is set to “MemoryFullCache”.

  • cache_hdf_file (str | None) – The name of the HDF file to store the data. Required if the type of the cache is ‘HDF5Cache’.

  • cache_hdf_node_name (str | None) – The name of the HDF node to store the discipline. If None, use the name of the dataset.

Returns:

A cache containing the dataset.

Return type:

AbstractFullCache

export_to_dataframe(copy=True, variable_names=None)

Export the dataset to a pandas Dataframe.

Parameters:
  • copy (bool) –

If True, copy the data; otherwise, use references.

    By default it is set to True.

  • variable_names (Sequence[str] | None) –

Returns:

A pandas DataFrame containing the dataset.

Return type:

DataFrame

static find(comparison)

Find the entries for which a comparison is satisfied.

This search uses a boolean 1D array whose length is equal to the length of the dataset.

Parameters:

comparison (ndarray) – A boolean vector whose length is equal to the number of samples.

Returns:

The indices of the entries for which the comparison is satisfied.

Return type:

list[int]
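Converting such a boolean vector into entry indices can be sketched in plain Python:

```python
comparison = [False, True, True, False, True]  # one boolean per sample

# Keep the indices of the entries satisfying the comparison.
indices = [index for index, satisfied in enumerate(comparison) if satisfied]
```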

get_all_data(by_group=True, as_dict=False)

Get all the data stored in the dataset.

The data can be returned either as a dictionary indexed by the names of the variables, or as an array concatenating them, accompanied by the names and sizes of the variables.

The data can also be classified by groups of variables.

Parameters:
  • by_group

    If True, sort the data by group.

    By default it is set to True.

  • as_dict

    If True, return the data as a dictionary.

    By default it is set to False.

Returns:

All the data stored in the dataset.

Return type:

Union[Dict[str, Union[Dict[str, ndarray], ndarray]], Tuple[Union[ndarray, Dict[str, ndarray]], List[str], Dict[str, int]]]

static get_available_plots()

Return the available plot methods.

Return type:

list[str]

get_column_names(variables=None, as_tuple=False, start=0)

Return the names of the columns of the dataset.

If dim(x)=1, its column name is ‘x’, while if dim(x)=2, its column names are either ‘x_0’ and ‘x_1’ or ColumnName(group_name, ‘x’, ‘0’) and ColumnName(group_name, ‘x’, ‘1’).

Parameters:
  • variables (Sequence[str]) – The names of the variables. If None, use all the variables.

  • as_tuple (bool) –

If True, return the names as named tuples; otherwise, return the names as strings.

    By default it is set to False.

  • start (int) –

    The first index for the components of a variable. E.g. with ‘0’: ‘x_0’, ‘x_1’, …

    By default it is set to 0.

Returns:

The names of the columns of the dataset.

Return type:

list[str | ColumnName]
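The string naming scheme can be sketched as follows (a generic illustration of the ‘x’ vs ‘x_0’, ‘x_1’ convention described above):

```python
def column_names(variable, size, start=0):
    """Build column names: 'x' if the variable has size 1, 'x_0', 'x_1', ... otherwise."""
    if size == 1:
        return [variable]
    return [f"{variable}_{index}" for index in range(start, start + size)]
```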

get_data_by_group(group, as_dict=False)

Get the data for a specific group name.

Parameters:
  • group (str) – The name of the group.

  • as_dict (bool) –

    If True, return values as dictionary.

    By default it is set to False.

Returns:

The data related to the group.

Return type:

ndarray | dict[str, ndarray]

get_data_by_names(names, as_dict=True)

Get the data for specific names of variables.

Parameters:
  • names (str | Iterable[str]) – The names of the variables.

  • as_dict (bool) –

    If True, return values as dictionary.

    By default it is set to True.

Returns:

The data related to the variables.

Return type:

ndarray | dict[str, ndarray]

get_group(variable_name)

Get the name of the group that contains a variable.

Parameters:

variable_name (str) – The name of the variable.

Returns:

The group to which the variable belongs.

Return type:

str

get_names(group_name)

Get the names of the variables of a group.

Parameters:

group_name (str) – The name of the group.

Returns:

The names of the variables of the group.

Return type:

list[str]

get_normalized_dataset(excluded_variables=None, excluded_groups=None, use_min_max=True, center=False, scale=False)

Get a normalized copy of the dataset.

Parameters:
  • excluded_variables (Sequence[str] | None) – The names of the variables not to be normalized. If None, normalize all the variables.

  • excluded_groups (Sequence[str] | None) – The names of the groups not to be normalized. If None, normalize all the groups.

  • use_min_max (bool) –

    Whether to use the geometric normalization \((x-\min(x))/(\max(x)-\min(x))\).

    By default it is set to True.

  • center (bool) –

    Whether to center the variables so that they have a zero mean.

    By default it is set to False.

  • scale (bool) –

    Whether to scale the variables so that they have a unit variance.

    By default it is set to False.

Returns:

A normalized dataset.

Return type:

Dataset
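The rescaling applied when use_min_max is True can be sketched in plain Python:

```python
def min_max_normalize(values):
    """Rescale values to [0, 1] with (x - min(x)) / (max(x) - min(x))."""
    lower, upper = min(values), max(values)
    return [(value - lower) / (upper - lower) for value in values]

normalized = min_max_normalize([0.0, 5.0, 10.0])
```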

is_empty()

Check if the dataset is empty.

Returns:

Whether the dataset is empty.

Return type:

bool

is_group(name)

Check if a name is a group name.

Parameters:

name (str) – A name of a group.

Returns:

Whether the name is a group name.

Return type:

bool

is_nan()

Check if an entry contains NaN.

Returns:

A boolean vector indicating, for each entry, whether it contains NaN.

Return type:

ndarray
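Detecting entries containing NaN can be sketched with the standard math module (the actual method returns a NumPy array rather than a list):

```python
import math

# Hypothetical entries (rows) of a small dataset.
entries = [[1.0, 2.0], [float("nan"), 3.0], [4.0, 5.0]]

# One boolean per entry: True when the entry contains at least one NaN.
has_nan = [any(math.isnan(value) for value in entry) for entry in entries]
```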

is_variable(name)

Check if a name is a variable name.

Parameters:

name (str) – A name of a variable.

Returns:

Whether the name is a variable name.

Return type:

bool

n_variables_by_group(group)

The number of variables for a group.

Parameters:

group (str) – The name of a group.

Returns:

The number of variables in the group.

Return type:

int

plot(name, show=True, save=False, file_path=None, directory_path=None, file_name=None, file_format=None, properties=None, **options)

Plot the dataset from a DatasetPlot.

See Dataset.get_available_plots()

Parameters:
  • name (str) – The name of the post-processing, which is the name of a class inheriting from DatasetPlot.

  • show (bool) –

    If True, display the figure.

    By default it is set to True.

  • save (bool) –

    If True, save the figure.

    By default it is set to False.

  • file_path (str | Path | None) – The path of the file to save the figures. If None, create a file path from directory_path, file_name and file_format.

  • directory_path (str | Path | None) – The path of the directory to save the figures. If None, use the current working directory.

  • file_name (str | None) – The name of the file to save the figures. If None, use a default one generated by the post-processing.

  • file_format (str | None) – A file format, e.g. ‘png’, ‘pdf’, ‘svg’, … If None, use a default file extension.

  • properties (Mapping[str, DatasetPlotPropertyType] | None) – The general properties of a DatasetPlot.

  • **options – The options for the post-processing.

Return type:

DatasetPlot

remove(entries)

Remove entries.

Parameters:

entries (list[int] | ndarray) – The entries to be removed, either indices or a boolean 1D array whose length is equal to the length of the dataset and elements to delete are coded True.

Return type:

None

rename_variable(name, new_name)

Rename a variable.

Parameters:
  • name (str) – The name of the variable.

  • new_name (str) – The new name of the variable.

Return type:

None

set_from_array(data, variables=None, sizes=None, groups=None, default_name=None)

Set the dataset from an array.

Parameters:
  • data (ndarray) – The data to be stored.

  • variables (list[str] | None) – The names of the variables. If None, use one default name per column of the array based on the pattern ‘default_name’.

  • sizes (dict[str, int] | None) – The sizes of the variables. If None, assume that all the variables have a size equal to 1.

  • groups (dict[str, str] | None) – The groups of the variables. If None, use Dataset.DEFAULT_GROUP for all the variables.

  • default_name (str | None) – The name of the variable to be used as a pattern when variables is None. If None, use the Dataset.DEFAULT_NAMES for this group if it exists. Otherwise, use the group name.

Return type:

None

set_from_file(filename, variables=None, sizes=None, groups=None, delimiter=',', header=True)

Set the dataset from a file.

Parameters:
  • filename (Path | str) – The name of the file containing the data.

  • variables (list[str] | None) – The names of the variables. If None and header is True, read the names from the first line of the file. If None and header is False, use default names based on the patterns the Dataset.DEFAULT_NAMES associated with the different groups.

  • sizes (dict[str, int] | None) – The sizes of the variables. If None, assume that all the variables have a size equal to 1.

  • groups (dict[str, str] | None) – The groups of the variables. If None, use Dataset.DEFAULT_GROUP for all the variables.

  • delimiter (str) –

    The field delimiter.

    By default it is set to “,”.

  • header (bool) –

    If True, read the names of the variables on the first line of the file.

    By default it is set to True.

Return type:

None

set_metadata(name, value)

Set a metadata attribute.

Parameters:
  • name (str) – The name of the metadata attribute.

  • value (Any) – The value of the metadata attribute.

Return type:

None

transform_variable(name, transformation)

Transform a variable.

Parameters:
  • name (str) – The name of the variable, e.g. "foo".

  • transformation (Callable[[ndarray], ndarray]) – The function transforming the variable, e.g. "lambda x: np.exp(x)".

Return type:

None

DEFAULT_GROUP: ClassVar[str] = 'parameters'

The default name to group the variables.

DEFAULT_NAMES: ClassVar[dict[str, str]] = {'design_parameters': 'dp', 'functions': 'func', 'inputs': 'in', 'outputs': 'out', 'parameters': 'x'}

The default variable names for the different groups.

DESIGN_GROUP: ClassVar[str] = 'design_parameters'

The group name for the design variables of an OptimizationProblem.

FUNCTION_GROUP: ClassVar[str] = 'functions'

The group name for the functions of an OptimizationProblem.

GRADIENT_GROUP: ClassVar[str] = 'gradients'

The group name for the gradients.

HDF5_CACHE: ClassVar[str] = 'HDF5Cache'

The name of the HDF5Cache.

INPUT_GROUP: ClassVar[str] = 'inputs'

The group name for the input variables.

MEMORY_FULL_CACHE: ClassVar[str] = 'MemoryFullCache'

The name of the MemoryFullCache.

OUTPUT_GROUP: ClassVar[str] = 'outputs'

The group name for the output variables.

PARAMETER_GROUP: ClassVar[str] = 'parameters'

The group name for the parameters.

property columns_names: list[str | ColumnName]

The names of the columns of the dataset.

data: dict[str, ndarray]

The data stored by variable names or group names.

The values are NumPy arrays whose columns are features and rows are observations.

dimension: dict[str, int]

The dimensions of the groups of variables.

property groups: list[str]

The sorted names of the groups of variables.

length: int

The length of the dataset.

metadata: dict[str, Any]

The metadata used to store any kind of information that is not a variable, e.g. the mesh associated with a multi-dimensional variable.

property n_samples: int

The number of samples.

property n_variables: int

The number of variables.

name: str

The name of the dataset.

property row_names: list[str]

The names of the rows.

sizes: dict[str, int]

The sizes of the variables.

strings_encoding: dict[str, dict[int, int]]

The encoding structure mapping the values of the string variables with integers.

The keys are the names of the variables and the values are dictionaries whose keys are the components of the variables and the values are the integer values.
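Such an encoding can be pictured as a mapping from string values to integer codes (a generic sketch, not the GEMSEO internals):

```python
# Hypothetical values of a string variable.
values = ["setosa", "versicolour", "virginica", "setosa"]

# Assign an integer code to each distinct value.
encoding = {name: code for code, name in enumerate(sorted(set(values)))}
encoded = [encoding[value] for value in values]
```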

property variables: list[str]

The sorted names of the variables.

Example

Rosenbrock dataset

This Dataset contains 100 evaluations of the well-known Rosenbrock function:

\[f(x,y)=(1-x)^2+100(y-x^2)^2\]

This function is known for its global minimum at the point (1, 1), its banana-shaped valley and the difficulty of reaching that minimum.

This Dataset is based on a full-factorial design of experiments.
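The function itself is straightforward to evaluate; the following sketch checks its global minimum:

```python
def rosenbrock(x, y):
    """Rosenbrock function f(x, y) = (1 - x)**2 + 100 * (y - x**2)**2."""
    return (1.0 - x) ** 2 + 100.0 * (y - x ** 2) ** 2
```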

More information about the Rosenbrock function

class gemseo.problems.dataset.rosenbrock.RosenbrockDataset(name='Rosenbrock', by_group=True, n_samples=100, categorize=True, opt_naming=True)[source]

Rosenbrock dataset parametrization.

Parameters:
  • name (str) –

    The name of the dataset.

    By default it is set to “Rosenbrock”.

  • by_group (bool) –

    Whether to store the data by group. Otherwise, store them by variables.

    By default it is set to True.

  • n_samples (int) –

    The number of samples.

    By default it is set to 100.

  • categorize (bool) –

    Whether to distinguish between the different groups of variables.

    By default it is set to True.

  • opt_naming (bool) –

    Whether to use an optimization naming.

    By default it is set to True.

add_group(group, data, variables=None, sizes=None, pattern=None, cache_as_input=True)

Add data related to a group.

Parameters:
  • group (str) – The name of the group of data to be added.

  • data (ndarray) – The data to be added.

  • variables (list[str] | None) – The names of the variables. If None, use default names based on a pattern.

  • sizes (dict[str, int] | None) – The sizes of the variables. If None, assume that all the variables have a size equal to 1.

  • pattern (str | None) – The name of the variable to be used as a pattern when variables is None. If None, use the Dataset.DEFAULT_NAMES for this group if it exists. Otherwise, use the group name.

  • cache_as_input (bool) –

If True, cache these data as inputs when the dataset is exported to a cache.

    By default it is set to True.

Return type:

None

add_variable(name, data, group='parameters', cache_as_input=True)

Add data related to a variable.

Parameters:
  • name (str) – The name of the variable to be stored.

  • data (ndarray) – The data to be stored.

  • group (str) –

    The name of the group related to this variable.

    By default it is set to “parameters”.

  • cache_as_input (bool) –

If True, cache these data as inputs when the dataset is exported to a cache.

    By default it is set to True.

Return type:

None

compare(value_1, logical_operator, value_2, component_1=0, component_2=0)

Compare either a variable and a value or a variable and another variable.

Parameters:
  • value_1 (str | float) – The first value, either a variable name or a numeric value.

  • logical_operator (str) – The logical operator, either “==”, “<”, “<=”, “>” or “>=”.

  • value_2 (str | float) – The second value, either a variable name or a numeric value.

  • component_1 (int) –

    If value_1 is a variable name, component_1 corresponds to its component used in the comparison.

    By default it is set to 0.

  • component_2 (int) –

    If value_2 is a variable name, component_2 corresponds to its component used in the comparison.

    By default it is set to 0.

Returns:

Whether the comparison is valid for the different entries.

Return type:

ndarray

export_to_cache(inputs=None, outputs=None, cache_type='MemoryFullCache', cache_hdf_file=None, cache_hdf_node_name=None, **options)

Export the dataset to a cache.

Parameters:
  • inputs (Iterable[str] | None) – The names of the inputs to cache. If None, use all inputs.

  • outputs (Iterable[str] | None) – The names of the outputs to cache. If None, use all outputs.

  • cache_type (str) –

    The type of cache to use.

    By default it is set to “MemoryFullCache”.

  • cache_hdf_file (str | None) – The name of the HDF file to store the data. Required if the type of the cache is ‘HDF5Cache’.

  • cache_hdf_node_name (str | None) – The name of the HDF node to store the discipline. If None, use the name of the dataset.

Returns:

A cache containing the dataset.

Return type:

AbstractFullCache

export_to_dataframe(copy=True, variable_names=None)

Export the dataset to a pandas Dataframe.

Parameters:
  • copy (bool) –

If True, copy the data; otherwise, use references.

    By default it is set to True.

  • variable_names (Sequence[str] | None) –

Returns:

A pandas DataFrame containing the dataset.

Return type:

DataFrame

static find(comparison)

Find the entries for which a comparison is satisfied.

This search uses a boolean 1D array whose length is equal to the length of the dataset.

Parameters:

comparison (ndarray) – A boolean vector whose length is equal to the number of samples.

Returns:

The indices of the entries for which the comparison is satisfied.

Return type:

list[int]

get_all_data(by_group=True, as_dict=False)

Get all the data stored in the dataset.

The data can be returned either as a dictionary indexed by the names of the variables, or as an array concatenating them, accompanied by the names and sizes of the variables.

The data can also be classified by groups of variables.

Parameters:
  • by_group

    If True, sort the data by group.

    By default it is set to True.

  • as_dict

    If True, return the data as a dictionary.

    By default it is set to False.

Returns:

All the data stored in the dataset.

Return type:

Union[Dict[str, Union[Dict[str, ndarray], ndarray]], Tuple[Union[ndarray, Dict[str, ndarray]], List[str], Dict[str, int]]]

static get_available_plots()

Return the available plot methods.

Return type:

list[str]

get_column_names(variables=None, as_tuple=False, start=0)

Return the names of the columns of the dataset.

If dim(x)=1, its column name is ‘x’, while if dim(x)=2, its column names are either ‘x_0’ and ‘x_1’ or ColumnName(group_name, ‘x’, ‘0’) and ColumnName(group_name, ‘x’, ‘1’).

Parameters:
  • variables (Sequence[str]) – The names of the variables. If None, use all the variables.

  • as_tuple (bool) –

    If True, return the names as named tuples. Otherwise, return the names as strings.

    By default it is set to False.

  • start (int) –

    The first index for the components of a variable. E.g. with ‘0’: ‘x_0’, ‘x_1’, …

    By default it is set to 0.

Returns:

The names of the columns of the dataset.

Return type:

list[str | ColumnName]
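
The naming rule above can be sketched as follows: size-1 variables keep their name, while larger ones get indexed suffixes starting at start. This is an illustrative reimplementation, not the GEMSEO source:

```python
def column_names(sizes, start=0):
    """Illustrate the rule: 'x' if dim(x)=1, else 'x_0', 'x_1', ..."""
    names = []
    for name, size in sizes.items():
        if size == 1:
            names.append(name)
        else:
            names.extend(f"{name}_{i}" for i in range(start, start + size))
    return names

names = column_names({"x": 2, "y": 1})  # ['x_0', 'x_1', 'y']
```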

get_data_by_group(group, as_dict=False)

Get the data for a specific group name.

Parameters:
  • group (str) – The name of the group.

  • as_dict (bool) –

    If True, return the values as a dictionary indexed by the variable names.

    By default it is set to False.

Returns:

The data related to the group.

Return type:

ndarray | dict[str, ndarray]
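
With as_dict=True, the concatenated group array is split into one sub-array per variable according to the variable sizes. A NumPy sketch of that splitting (illustrative data and sizes, not the GEMSEO implementation):

```python
import numpy as np

# A group array whose columns concatenate "x" (size 2) and "y" (size 1).
group_data = np.arange(6.0).reshape(2, 3)
sizes = {"x": 2, "y": 1}

# Slice consecutive column blocks, one block per variable.
as_dict, offset = {}, 0
for name, size in sizes.items():
    as_dict[name] = group_data[:, offset:offset + size]
    offset += size
```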

get_data_by_names(names, as_dict=True)

Get the data for specific names of variables.

Parameters:
  • names (str | Iterable[str]) – The names of the variables.

  • as_dict (bool) –

    If True, return the values as a dictionary indexed by the variable names.

    By default it is set to True.

Returns:

The data related to the variables.

Return type:

ndarray | dict[str, ndarray]

get_group(variable_name)

Get the name of the group that contains a variable.

Parameters:

variable_name (str) – The name of the variable.

Returns:

The group to which the variable belongs.

Return type:

str

get_names(group_name)

Get the names of the variables of a group.

Parameters:

group_name (str) – The name of the group.

Returns:

The names of the variables of the group.

Return type:

list[str]

get_normalized_dataset(excluded_variables=None, excluded_groups=None, use_min_max=True, center=False, scale=False)

Get a normalized copy of the dataset.

Parameters:
  • excluded_variables (Sequence[str] | None) – The names of the variables not to be normalized. If None, normalize all the variables.

  • excluded_groups (Sequence[str] | None) – The names of the groups not to be normalized. If None, normalize all the groups.

  • use_min_max (bool) –

    Whether to use the min-max normalization \((x-\min(x))/(\max(x)-\min(x))\).

    By default it is set to True.

  • center (bool) –

    Whether to center the variables so that they have a zero mean.

    By default it is set to False.

  • scale (bool) –

    Whether to scale the variables so that they have a unit variance.

    By default it is set to False.

Returns:

A normalized dataset.

Return type:

Dataset
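
The three options correspond to standard scalings. A NumPy sketch of each, applied column-wise to illustrative data (not the GEMSEO implementation):

```python
import numpy as np

x = np.array([[1.0], [3.0], [5.0]])

# use_min_max=True: rescale each variable to [0, 1].
min_max = (x - x.min(0)) / (x.max(0) - x.min(0))

# center=True: subtract the mean so each variable has zero mean.
centered = x - x.mean(0)

# scale=True: divide by the standard deviation for unit variance.
scaled = x / x.std(0)
```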

is_empty()

Check if the dataset is empty.

Returns:

Whether the dataset is empty.

Return type:

bool

is_group(name)

Check if a name is a group name.

Parameters:

name (str) – A name of a group.

Returns:

Whether the name is a group name.

Return type:

bool

is_nan()

Check if an entry contains NaN.

Returns:

A boolean 1D array indicating, for each entry, whether it contains NaN.

Return type:

ndarray
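
The returned array has one flag per entry, set when any component of that entry is NaN. The equivalent NumPy operation (a sketch of the behavior, with illustrative data):

```python
import numpy as np

data = np.array([[1.0, 2.0], [np.nan, 4.0], [5.0, 6.0]])

# One flag per entry (row): True if any component is NaN.
nan_entries = np.isnan(data).any(axis=1)  # [False, True, False]
```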

is_variable(name)

Check if a name is a variable name.

Parameters:

name (str) – A name of a variable.

Returns:

Whether the name is a variable name.

Return type:

bool

n_variables_by_group(group)

The number of variables for a group.

Parameters:

group (str) – The name of a group.

Returns:

The number of variables in the group.

Return type:

int

plot(name, show=True, save=False, file_path=None, directory_path=None, file_name=None, file_format=None, properties=None, **options)

Plot the dataset from a DatasetPlot.

See Dataset.get_available_plots()

Parameters:
  • name (str) – The name of the post-processing, which is the name of a class inheriting from DatasetPlot.

  • show (bool) –

    If True, display the figure.

    By default it is set to True.

  • save (bool) –

    If True, save the figure.

    By default it is set to False.

  • file_path (str | Path | None) – The path of the file to save the figures. If None, create a file path from directory_path, file_name and file_format.

  • directory_path (str | Path | None) – The path of the directory to save the figures. If None, use the current working directory.

  • file_name (str | None) – The name of the file to save the figures. If None, use a default one generated by the post-processing.

  • file_format (str | None) – A file format, e.g. ‘png’, ‘pdf’, ‘svg’, … If None, use a default file extension.

  • properties (Mapping[str, DatasetPlotPropertyType] | None) – The general properties of a DatasetPlot.

  • **options – The options for the post-processing.

Return type:

DatasetPlot

remove(entries)

Remove entries.

Parameters:

entries (list[int] | ndarray) – The entries to be removed, either indices or a boolean 1D array whose length is equal to the length of the dataset and elements to delete are coded True.

Return type:

None
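
Entries can be designated either by their indices or by a boolean mask whose True elements mark the entries to delete. Both forms can be sketched with NumPy (illustrative data, not the GEMSEO implementation):

```python
import numpy as np

data = np.arange(10.0).reshape(5, 2)

# Remove entries 1 and 3 by their indices ...
by_indices = np.delete(data, [1, 3], axis=0)

# ... or by a boolean mask where True marks the entries to delete.
mask = np.array([False, True, False, True, False])
by_mask = data[~mask]
```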

rename_variable(name, new_name)

Rename a variable.

Parameters:
  • name (str) – The name of the variable.

  • new_name (str) – The new name of the variable.

Return type:

None

set_from_array(data, variables=None, sizes=None, groups=None, default_name=None)

Set the dataset from an array.

Parameters:
  • data (ndarray) – The data to be stored.

  • variables (list[str] | None) – The names of the variables. If None, use one default name per column of the array based on the pattern ‘default_name’.

  • sizes (dict[str, int] | None) – The sizes of the variables. If None, assume that all the variables have a size equal to 1.

  • groups (dict[str, str] | None) – The groups of the variables. If None, use Dataset.DEFAULT_GROUP for all the variables.

  • default_name (str | None) – The name of the variable to be used as a pattern when variables is None. If None, use the Dataset.DEFAULT_NAMES for this group if it exists. Otherwise, use the group name.

Return type:

None
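
When variables is None, one default name is generated per column of the array from the pattern given by default_name. A sketch of that naming scheme (the pattern 'x' is an assumption for illustration):

```python
import numpy as np

data = np.ones((4, 3))

# With variables=None, one default name is built per column
# from the pattern, e.g. 'x' -> 'x_0', 'x_1', 'x_2'.
default_name = "x"
variables = [f"{default_name}_{i}" for i in range(data.shape[1])]
```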

set_from_file(filename, variables=None, sizes=None, groups=None, delimiter=',', header=True)

Set the dataset from a file.

Parameters:
  • filename (Path | str) – The name of the file containing the data.

  • variables (list[str] | None) – The names of the variables. If None and header is True, read the names from the first line of the file. If None and header is False, use default names based on the patterns the Dataset.DEFAULT_NAMES associated with the different groups.

  • sizes (dict[str, int] | None) – The sizes of the variables. If None, assume that all the variables have a size equal to 1.

  • groups (dict[str, str] | None) – The groups of the variables. If None, use Dataset.DEFAULT_GROUP for all the variables.

  • delimiter (str) –

    The field delimiter.

    By default it is set to “,”.

  • header (bool) –

    If True, read the names of the variables on the first line of the file.

    By default it is set to True.

Return type:

None

set_metadata(name, value)

Set a metadata attribute.

Parameters:
  • name (str) – The name of the metadata attribute.

  • value (Any) – The value of the metadata attribute.

Return type:

None

transform_variable(name, transformation)

Transform a variable.

Parameters:
  • name (str) – The name of the variable, e.g. "foo".

  • transformation (Callable[[ndarray], ndarray]) – The function transforming the variable, e.g. "lambda x: np.exp(x)".

Return type:

None
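
The transformation is any callable mapping a NumPy array to a NumPy array, applied to the stored values of the variable. A minimal sketch of applying such a callable (illustrative values, not the GEMSEO internals):

```python
import numpy as np

values = np.array([0.0, 1.0, 2.0])

# Any array-to-array callable is a valid transformation, e.g. the exponential.
transformation = lambda x: np.exp(x)
transformed = transformation(values)
```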

DEFAULT_GROUP: ClassVar[str] = 'parameters'

The default name to group the variables.

DEFAULT_NAMES: ClassVar[dict[str, str]] = {'design_parameters': 'dp', 'functions': 'func', 'inputs': 'in', 'outputs': 'out', 'parameters': 'x'}

The default variable names for the different groups.

DESIGN_GROUP: ClassVar[str] = 'design_parameters'

The group name for the design variables of an OptimizationProblem.

FUNCTION_GROUP: ClassVar[str] = 'functions'

The group name for the functions of an OptimizationProblem.

GRADIENT_GROUP: ClassVar[str] = 'gradients'

The group name for the gradients.

HDF5_CACHE: ClassVar[str] = 'HDF5Cache'

The name of the HDF5Cache.

INPUT_GROUP: ClassVar[str] = 'inputs'

The group name for the input variables.

MEMORY_FULL_CACHE: ClassVar[str] = 'MemoryFullCache'

The name of the MemoryFullCache.

OUTPUT_GROUP: ClassVar[str] = 'outputs'

The group name for the output variables.

PARAMETER_GROUP: ClassVar[str] = 'parameters'

The group name for the parameters.

property columns_names: list[str | ColumnName]

The names of the columns of the dataset.

data: dict[str, ndarray]

The data stored by variable names or group names.

The values are NumPy arrays whose columns are features and rows are observations.

dimension: dict[str, int]

The dimensions of the groups of variables.

property groups: list[str]

The sorted names of the groups of variables.

length: int

The length of the dataset.

metadata: dict[str, Any]

The metadata used to store any kind of information that is not a variable, e.g. the mesh associated with a multi-dimensional variable.

property n_samples: int

The number of samples.

property n_variables: int

The number of variables.

name: str

The name of the dataset.

property row_names: list[str]

The names of the rows.

sizes: dict[str, int]

The sizes of the variables.

strings_encoding: dict[str, dict[int, int]]

The encoding structure mapping the values of the string variables with integers.

The keys are the names of the variables and the values are dictionaries whose keys are the components of the variables and the values are the integer values.
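
One possible way such an encoding could be built is to assign each distinct string a running integer, per component. This is an illustrative sketch only; the exact structure GEMSEO stores may differ, and the sample values are assumptions:

```python
# Illustrative samples of a string variable with two components.
samples = [["red", "low"], ["blue", "high"], ["red", "high"]]

# Map each distinct string value of each component to an integer.
strings_encoding = {}
for component in range(2):
    mapping = {}
    for row in samples:
        mapping.setdefault(row[component], len(mapping))
    strings_encoding[component] = mapping
# {0: {'red': 0, 'blue': 1}, 1: {'low': 0, 'high': 1}}
```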

property variables: list[str]

The sorted names of the variables.

Example