Built-in datasets¶
Dataset factory¶
This module contains a factory
to instantiate a Dataset
from its class name.
The class can be internal to GEMSEO or located in an external module whose path
is provided to the constructor. It also provides a list of the available dataset
types and allows you to test whether a dataset type is available.
Classes:
DatasetFactory – This factory instantiates a Dataset from its class name.
- class gemseo.problems.dataset.factory.DatasetFactory[source]
This factory instantiates a
Dataset
from its class name. The class can be internal to GEMSEO or located in an external module whose path is provided to the constructor.
Initializes the factory: scans the directories to search for subclasses of Dataset.
Searches in “GEMSEO_PATH” and gemseo.mlearning.p_datasets
Methods:
create(dataset, **options) – Create a dataset.
is_available(dataset) – Checks the availability of a dataset.
Attributes:
Lists the available datasets.
- create(dataset, **options)[source]
Create a dataset.
- Parameters
dataset (str) – name of the dataset (its classname).
options – additional options specific to the dataset.
- Returns
the dataset.
- Return type
Dataset
- property datasets
Lists the available datasets.
- Returns
the list of datasets.
- Return type
list(str)
- is_available(dataset)[source]
Checks the availability of a dataset.
- Parameters
dataset (str) – name of the dataset (its class name).
- Returns
True if the dataset is available.
- Return type
bool
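The create/is_available/datasets trio follows a standard name-based factory pattern. A minimal, self-contained sketch of that pattern in plain Python (the registry here is filled manually, whereas GEMSEO scans directories for Dataset subclasses; the classes below are stand-ins, not the real GEMSEO ones):

```python
class Dataset:
    """Minimal stand-in for a dataset base class (illustrative)."""

    def __init__(self, name="Dataset", **options):
        self.name = name


class IrisDataset(Dataset):
    """A registered subclass, discovered via Dataset.__subclasses__()."""


class DatasetFactory:
    """Sketch of a name-based factory: map class names to classes."""

    def __init__(self):
        # GEMSEO fills this registry by scanning directories for
        # subclasses of Dataset; here we rely on __subclasses__().
        self._registry = {cls.__name__: cls for cls in Dataset.__subclasses__()}

    @property
    def datasets(self):
        """List the available dataset class names."""
        return sorted(self._registry)

    def is_available(self, dataset):
        """Check whether a dataset class name is known."""
        return dataset in self._registry

    def create(self, dataset, **options):
        """Instantiate a dataset from its class name."""
        return self._registry[dataset](**options)


factory = DatasetFactory()
```

Here `factory.create("IrisDataset")` returns a new instance, and unknown names can be screened with `is_available()` before calling `create()`.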
Burgers dataset¶
This Dataset
contains solutions to the Burgers’ equation with
periodic boundary conditions on the interval \([0, 2\pi]\) for different
time steps:

\[u_t + u u_x = \nu u_{xx}, \quad x \in [0, 2\pi],\]

where \(\nu\) is the fluid viscosity. An analytical expression can be obtained for the solution, using the Cole-Hopf transform:

\[u = -2\nu \frac{\phi_x}{\phi},\]

where \(\phi\) is solution to the heat equation \(\phi_t = \nu \phi_{xx}\).
This Dataset
is based on a full-factorial
design of experiments. Each sample corresponds to a given time step \(t\),
while each feature corresponds to a given spatial point \(x\).
More information about Burgers’ equation
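The sample/feature layout described above (one row per time step, one column per spatial point) can be sketched with plain Python lists; the field below is a placeholder damped sine wave, not the actual Cole-Hopf solution:

```python
import math

n_samples, n_x, nu = 30, 501, 0.1

# Time steps t (one per sample) and spatial points x in [0, 2*pi]
# (one per feature), forming a full-factorial grid.
times = [i / (n_samples - 1) for i in range(n_samples)]
xs = [2 * math.pi * j / (n_x - 1) for j in range(n_x)]

# data[i][j] holds the field value at time times[i] and point xs[j].
# Placeholder field: a viscously damped sine wave (illustrative only).
data = [[math.exp(-nu * t) * math.sin(x) for x in xs] for t in times]
```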
Classes:
BurgersDataset – Burgers dataset parametrization.
BurgersDiscipline – A software integrated in the workflow.
- class gemseo.problems.dataset.burgers.BurgersDataset(name='Burgers', by_group=True, n_samples=30, n_x=501, fluid_viscosity=0.1, categorize=True)[source]
Burgers dataset parametrization.
Constructor.
- Parameters
name (str) – name of the dataset.
by_group (bool) – if True, store the data by group. Otherwise, store them by variables. Default: True.
n_samples (int) – number of samples. Default: 30.
n_x (int) – number of spatial points. Default: 501.
fluid_viscosity (float) – fluid viscosity. Default: 0.1.
categorize (bool) – distinguish between the different groups of variables. Default: True.
opt_naming (bool) – use an optimization naming. Default: True.
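The by_group and categorize flags only change how the same samples are laid out. A rough stdlib illustration of the two storage layouts (variable names and sizes are hypothetical):

```python
# Three samples of two variables: x (size 2) and y (size 1),
# both belonging to a single "parameters" group.
samples = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]
sizes = {"x": 2, "y": 1}

# by_group=True: one block per group, variable columns concatenated.
by_group = {"parameters": samples}

# by_group=False: one block per variable, sliced column-wise by size.
by_variable, start = {}, 0
for name, size in sizes.items():
    by_variable[name] = [row[start:start + size] for row in samples]
    start += size
```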
Methods:
add_group(group, data[, variables, sizes, …]) – Add data related to a group.
add_variable(name, data[, group, cache_as_input]) – Add data related to a variable.
compare(value_1, logical_operator, value_2) – Compare either a variable and a value or a variable and another variable.
export_to_cache([inputs, outputs, …]) – Export the dataset to a cache.
export_to_dataframe([copy]) – Export the dataset to a pandas DataFrame.
find(comparison) – Find the entries for which a comparison is satisfied.
get_all_data([by_group, as_dict]) – Get all the data stored in the dataset.
get_available_plots() – Return the available plot methods.
get_data_by_group(group[, as_dict]) – Get the data for a specific group name.
get_data_by_names(names[, as_dict]) – Get the data for specific names of variables.
get_group(variable_name) – Get the name of the group that contains a variable.
get_names(group_name) – Get the names of the variables of a group.
get_normalized_dataset([excluded_variables, …]) – Get a normalized copy of the dataset.
is_empty() – Check if the dataset is empty.
is_group(name) – Check if a name is a group name.
is_nan() – Check if an entry contains NaN.
is_variable(name) – Check if a name is a variable name.
n_variables_by_group(group) – The number of variables for a group.
plot(name[, show, save]) – Plot the dataset from a DatasetPlot.
remove(entries) – Remove entries.
set_from_array(data[, variables, sizes, …]) – Set the dataset from an array.
set_from_file(filename[, variables, sizes, …]) – Set the dataset from a file.
set_metadata(name, value) – Set a metadata attribute.
Attributes:
columns_names – The names of the columns of the dataset.
groups – The sorted names of the groups of variables.
n_samples – The number of samples.
n_variables – The number of variables.
row_names – The names of the rows.
variables – The sorted names of the variables.
- add_group(group, data, variables=None, sizes=None, pattern=None, cache_as_input=True)
Add data related to a group.
- Parameters
group (str) – The name of the group of data to be added.
data (numpy.ndarray) – The data to be added.
variables (Optional[List[str]]) – The names of the variables. If None, use default names based on a pattern.
sizes (Optional[Dict[str, int]]) – The sizes of the variables. If None, assume that all the variables have a size equal to 1.
pattern (Optional[str]) – The name of the variable to be used as a pattern when variables is None. If None, use the Dataset.DEFAULT_NAMES for this group if it exists. Otherwise, use the group name.
cache_as_input (bool) – If True, cache these data as inputs when the dataset is exported to a cache.
- Return type
str
- add_variable(name, data, group='parameters', cache_as_input=True)
Add data related to a variable.
- Parameters
name (str) – The name of the variable to be stored.
data (numpy.ndarray) – The data to be stored.
group (str) – The name of the group related to this variable.
cache_as_input (bool) – If True, cache these data as inputs when the dataset is exported to a cache.
- Return type
None
- property columns_names
The names of the columns of the dataset.
- compare(value_1, logical_operator, value_2, component_1=0, component_2=0)
Compare either a variable and a value or a variable and another variable.
- Parameters
value_1 (Union[str, float]) – The first value, either a variable name or a numeric value.
logical_operator (str) – The logical operator, either “==”, “<”, “<=”, “>” or “>=”.
value_2 (Union[str, float]) – The second value, either a variable name or a numeric value.
component_1 (int) – If value_1 is a variable name, component_1 corresponds to its component used in the comparison.
component_2 (int) – If value_2 is a variable name, component_2 corresponds to its component used in the comparison.
- Returns
Whether the comparison is valid for the different entries.
- Return type
numpy.ndarray
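A sketch of these comparison semantics, with a plain Python list standing in for a numpy array (names illustrative):

```python
import operator

# Map the supported operator strings to their functions.
OPERATORS = {"==": operator.eq, "<": operator.lt, "<=": operator.le,
             ">": operator.gt, ">=": operator.ge}


def compare(column, logical_operator, value):
    """Compare each entry of a variable column against a value."""
    op = OPERATORS[logical_operator]
    return [op(entry, value) for entry in column]


mask = compare([0.5, 1.5, 2.5], ">=", 1.0)
```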
- export_to_cache(inputs=None, outputs=None, cache_type='MemoryFullCache', cache_hdf_file=None, cache_hdf_node_name=None, **options)
Export the dataset to a cache.
- Parameters
inputs (Optional[Iterable[str]]) – The names of the inputs to cache. If None, use all inputs.
outputs (Optional[Iterable[str]]) – The names of the outputs to cache. If None, use all outputs.
cache_type (str) – The type of cache to use.
cache_hdf_file (Optional[str]) – The name of the HDF file to store the data. Required if the type of the cache is ‘HDF5Cache’.
cache_hdf_node_name (Optional[str]) – The name of the HDF node to store the discipline. If None, use the name of the dataset.
- Returns
A cache containing the dataset.
- Return type
- export_to_dataframe(copy=True)
Export the dataset to a pandas DataFrame.
- Parameters
copy (bool) – If True, copy data. Otherwise, use reference.
- Returns
A pandas DataFrame containing the dataset.
- Return type
DataFrame
- static find(comparison)
Find the entries for which a comparison is satisfied.
This search uses a boolean 1D array whose length is equal to the length of the dataset.
- Parameters
comparison (numpy.ndarray) – A boolean vector whose length is equal to the number of samples.
- Returns
The indices of the entries for which the comparison is satisfied.
- Return type
List[int]
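find() reduces such a boolean mask to the indices of the matching entries; an equivalent sketch in plain Python:

```python
def find(comparison):
    """Indices of the entries where the boolean mask is True."""
    return [index for index, satisfied in enumerate(comparison) if satisfied]


indices = find([False, True, True, False, True])
```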
- get_all_data(by_group=True, as_dict=False)
Get all the data stored in the dataset.
The data can be returned either as a dictionary indexed by the names of the variables, or as an array concatenating them, accompanied with the names and sizes of the variables.
The data can also be classified by groups of variables.
- Parameters
by_group – If True, sort the data by group.
as_dict – If True, return the data as a dictionary.
- Returns
All the data stored in the dataset.
- Return type
Union[Dict[str, Union[Dict[str, numpy.ndarray], numpy.ndarray]], Tuple[Union[numpy.ndarray, Dict[str, numpy.ndarray]], List[str], Dict[str, int]]]
- get_available_plots()
Return the available plot methods.
- Return type
List[str]
- get_data_by_group(group, as_dict=False)
Get the data for a specific group name.
- Parameters
group (str) – The name of the group.
as_dict (bool) – If True, return values as dictionary.
- Returns
The data related to the group.
- Return type
Union[numpy.ndarray, Dict[str, numpy.ndarray]]
- get_data_by_names(names, as_dict=True)
Get the data for specific names of variables.
- Parameters
names (Union[str, Iterable[str]]) – The names of the variables.
as_dict (bool) – If True, return values as dictionary.
- Returns
The data related to the variables.
- Return type
Union[numpy.ndarray, Dict[str, numpy.ndarray]]
- get_group(variable_name)
Get the name of the group that contains a variable.
- Parameters
variable_name (str) – The name of the variable.
- Returns
The group to which the variable belongs.
- Return type
str
- get_names(group_name)
Get the names of the variables of a group.
- Parameters
group_name (str) – The name of the group.
- Returns
The names of the variables of the group.
- Return type
List[str]
- get_normalized_dataset(excluded_variables=None, excluded_groups=None)
Get a normalized copy of the dataset.
- Parameters
excluded_variables (Optional[Sequence[str]]) – The names of the variables not to be normalized. If None, normalize all the variables.
excluded_groups (Optional[Sequence[str]]) – The names of the groups not to be normalized. If None, normalize all the groups.
- Returns
A normalized dataset.
- Return type
Dataset
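Assuming min–max scaling (an assumption; the docstring does not state which scaling is used), normalizing one variable column maps it onto [0, 1]:

```python
def normalize(column):
    """Min-max scale a column of values onto [0, 1]."""
    low, high = min(column), max(column)
    return [(value - low) / (high - low) for value in column]


scaled = normalize([2.0, 4.0, 6.0])
```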
- property groups
The sorted names of the groups of variables.
- is_empty()
Check if the dataset is empty.
- Returns
Whether the dataset is empty.
- Return type
bool
- is_group(name)
Check if a name is a group name.
- Parameters
name (str) – A name of a group.
- Returns
Whether the name is a group name.
- Return type
bool
- is_nan()
Check if an entry contains NaN.
- Returns
Whether each entry is NaN or not.
- Return type
numpy.ndarray
- is_variable(name)
Check if a name is a variable name.
- Parameters
name (str) – A name of a variable.
- Returns
Whether the name is a variable name.
- Return type
bool
- property n_samples
The number of samples.
- property n_variables
The number of variables.
- n_variables_by_group(group)
The number of variables for a group.
- Parameters
group (str) – The name of a group.
- Returns
The group dimension.
- Return type
int
- plot(name, show=True, save=False, **options)
Plot the dataset from a DatasetPlot. See Dataset.get_available_plots().
- Parameters
name (str) – The name of the post-processing, which is the name of a class inheriting from DatasetPlot.
show (bool) – If True, display the figure.
save (bool) – If True, save the figure.
options – The options for the post-processing.
- Return type
None
- remove(entries)
Remove entries.
- Parameters
entries (Union[List[int], numpy.ndarray]) – The entries to be removed, either indices or a boolean 1D array whose length is equal to the length of the dataset and elements to delete are coded True.
- Return type
None
- property row_names
The names of the rows.
- set_from_array(data, variables=None, sizes=None, groups=None, default_name=None)
Set the dataset from an array.
- Parameters
data (numpy.ndarray) – The data to be stored.
variables (Optional[List[str]]) – The names of the variables. If None, use one default name per column of the array based on the pattern ‘default_name’.
sizes (Optional[Dict[str, int]]) – The sizes of the variables. If None, assume that all the variables have a size equal to 1.
groups (Optional[Dict[str, str]]) – The groups of the variables. If None, use Dataset.DEFAULT_GROUP for all the variables.
default_name (Optional[str]) – The name of the variable to be used as a pattern when variables is None. If None, use the Dataset.DEFAULT_NAMES for this group if it exists. Otherwise, use the group name.
- Return type
None
- set_from_file(filename, variables=None, sizes=None, groups=None, delimiter=',', header=True)
Set the dataset from a file.
- Parameters
filename (str) – The name of the file containing the data.
variables (Optional[List[str]]) – The names of the variables. If None and header is True, read the names from the first line of the file. If None and header is False, use default names based on the patterns of the Dataset.DEFAULT_NAMES associated with the different groups.
sizes (Optional[Dict[str, int]]) – The sizes of the variables. If None, assume that all the variables have a size equal to 1.
groups (Optional[Dict[str, str]]) – The groups of the variables. If None, use Dataset.DEFAULT_GROUP for all the variables.
delimiter (str) – The field delimiter.
header (bool) – If True, read the names of the variables on the first line of the file.
- Return type
None
- set_metadata(name, value)
Set a metadata attribute.
- Parameters
name (str) – The name of the metadata attribute.
value (Any) – The value of the metadata attribute.
- Return type
None
- property variables
The sorted names of the variables.
- class gemseo.problems.dataset.burgers.BurgersDiscipline[source]
A software integrated in the workflow.
The inputs and outputs are defined in a grammar, which can be either a SimpleGrammar or a JSONGrammar, or your own grammar derived from the Grammar abstract class.
To be used, derive a subclass and implement the _run method, which defines the execution of the software. Typically, in the _run method, get the inputs from the input grammar, call your software, and write the outputs to the output grammar.
The JSON grammar files are automatically detected when they are in the same folder as your subclass module and named “CLASSNAME_input.json”; use auto_detect_grammar_files=True to activate this option.
Constructor.
- Parameters
name – the name of the discipline
input_grammar_file – the file for input grammar description, if None, name + “_input.json” is used
output_grammar_file – the file for output grammar description, if None, name + “_output.json” is used
auto_detect_grammar_files – if no input and output grammar files are provided, use a naming convention to associate grammar files to the discipline: search the “comp_dir” directory containing the discipline source file for files named self.name + “_input.json” and self.name + “_output.json”
grammar_type – the type of grammar to use for IO declaration either JSON_GRAMMAR_TYPE or SIMPLE_GRAMMAR_TYPE
cache_type – type of cache policy, SIMPLE_CACHE or HDF5_CACHE
cache_file_path – the file to store the data, mandatory when HDF caching is used
Methods:
activate_time_stamps() – Activate the time stamps.
add_differentiated_inputs([inputs]) – Add inputs to the differentiation list.
add_differentiated_outputs([outputs]) – Add outputs to the differentiation list.
add_status_observer(obs) – Add an observer for the status.
auto_get_grammar_file([is_input, name, comp_dir]) – Use a naming convention to associate a grammar file to a discipline.
check_input_data(input_data[, raise_exception]) – Check the input data validity.
check_jacobian([input_data, derr_approx, …]) – Check if the jacobian provided by the linearize() method is correct.
check_output_data([raise_exception]) – Check the output data validity.
deactivate_time_stamps() – Deactivate the time stamps for storing start and end times of execution and linearizations.
deserialize(in_file) – Deserialize the discipline from a file.
execute([input_data]) – Execute the discipline.
get_all_inputs() – Accessor for the input data as a list of values.
get_all_outputs() – Accessor for the output data as a list of values.
get_attributes_to_serialize() – Define the attributes to be serialized.
get_data_list_from_dict(keys, data_dict) – Filter the dict from a list of keys or a single key.
get_expected_dataflow() – Return the expected data exchange sequence.
get_expected_workflow() – Return the expected execution sequence.
get_input_data() – Accessor for the input data as a dict of values.
get_input_data_names() – Accessor for the input names as a list.
get_input_output_data_names() – Accessor for the input and output names as a list.
get_inputs_asarray() – Accessor for the inputs as a large numpy array.
get_inputs_by_name(data_names) – Accessor for the inputs as a list.
get_local_data_by_name(data_names) – Accessor for the local data of the discipline as a dict of values.
get_output_data() – Accessor for the output data as a dict of values.
get_output_data_names() – Accessor for the output names as a list.
get_outputs_asarray() – Accessor for the outputs as a large numpy array.
get_outputs_by_name(data_names) – Accessor for the outputs as a list.
get_sub_disciplines() – Gets the sub-disciplines of self. By default, empty.
is_all_inputs_existing(data_names) – Test if all the names in data_names are inputs of the discipline.
is_all_outputs_existing(data_names) – Test if all the names in data_names are outputs of the discipline.
is_input_existing(data_name) – Test if input named data_name is an input of the discipline.
is_output_existing(data_name) – Test if output named data_name is an output of the discipline.
is_scenario() – Return True if self is a scenario.
linearize([input_data, force_all, force_no_exec]) – Execute the linearized version of the code.
notify_status_observers() – Notify all status observers that the status has changed.
remove_status_observer(obs) – Remove an observer for the status.
reset_statuses_for_run() – Sets all the statuses to PENDING.
serialize(out_file) – Serialize the discipline.
set_cache_policy([cache_type, …]) – Set the type of cache to use and the tolerance level.
set_disciplines_statuses(status) – Set the sub disciplines statuses.
set_jacobian_approximation([jac_approx_type, …]) – Set the jacobian approximation method.
set_optimal_fd_step([outputs, inputs, …]) – Compute the optimal finite-difference step.
store_local_data(**kwargs) – Store discipline data in local data.
Attributes:
cache_tol – Accessor to the cache input tolerance.
default_inputs – Accessor to the default inputs.
exec_time – Return the cumulated execution time.
linearization_mode – Accessor to the linearization mode.
n_calls – Return the number of calls to execute() which triggered the _run().
n_calls_linearize – Return the number of calls to linearize() which triggered the _compute_jacobian() method.
status – Status accessor.
- classmethod activate_time_stamps()
Activate the time stamps.
For storing start and end times of execution and linearizations.
- add_differentiated_inputs(inputs=None)
Add inputs to the differentiation list.
This method updates self._differentiated_inputs with inputs
- Parameters
inputs – list of input variables to differentiate; if None, all inputs of the discipline are used (Default value = None)
- add_differentiated_outputs(outputs=None)
Add outputs to the differentiation list.
This method updates self._differentiated_outputs with outputs.
- Parameters
outputs – list of output variables to differentiate; if None, all outputs of the discipline are used
- add_status_observer(obs)
Add an observer for the status.
Add an observer to be notified when the status of self changes.
- Parameters
obs – the observer to add
- auto_get_grammar_file(is_input=True, name=None, comp_dir=None)
Use a naming convention to associate a grammar file to a discipline.
This method searches the “comp_dir” directory containing the discipline source file for files named self.name + “_input.json” and self.name + “_output.json”.
- Parameters
is_input – if True, searches for _input.json, otherwise _output.json (Default value = True)
name – the name of the discipline (Default value = None)
comp_dir – the containing directory; if None, use self.comp_dir (Default value = None)
- Returns
path to the grammar file
- Return type
string
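The convention boils down to simple path assembly; a sketch (the directory and discipline names are hypothetical):

```python
import os


def grammar_file_path(comp_dir, name, is_input=True):
    """Build the conventional grammar file path for a discipline."""
    suffix = "_input.json" if is_input else "_output.json"
    return os.path.join(comp_dir, name + suffix)


path = grammar_file_path("disciplines", "BurgersDiscipline")
```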
- property cache_tol
Accessor to the cache input tolerance.
- check_input_data(input_data, raise_exception=True)
Check the input data validity.
- Parameters
input_data – the input data dict
raise_exception – if True, an exception is raised when data is invalid (Default value = True)
- check_jacobian(input_data=None, derr_approx='finite_differences', step=1e-07, threshold=1e-08, linearization_mode='auto', inputs=None, outputs=None, parallel=False, n_processes=2, use_threading=False, wait_time_between_fork=0, auto_set_step=False, plot_result=False, file_path='jacobian_errors.pdf', show=False, figsize_x=10, figsize_y=10)
Check if the jacobian provided by the linearize() method is correct.
- Parameters
input_data – input data dict (Default value = None)
derr_approx – derivative approximation method, either ‘finite_differences’ or ‘complex_step’ (Default value = ‘finite_differences’)
threshold – acceptance threshold for the jacobian error (Default value = 1e-8)
linearization_mode – the mode of linearization: direct, adjoint or automated switch depending on dimensions of inputs and outputs (Default value = ‘auto’)
inputs – list of inputs wrt which to differentiate (Default value = None)
outputs – list of outputs to differentiate (Default value = None)
step – the step for finite differences or complex step
parallel – if True, executes in parallel
n_processes – maximum number of processors on which to run
use_threading – if True, use threads instead of processes to parallelize the execution. Multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. This is important to note: if you want to execute the same discipline multiple times, you shall use multiprocessing.
wait_time_between_fork – time waited between two forks of the process /Thread
auto_set_step – Compute optimal step for a forward first order finite differences gradient approximation
plot_result – plot the result of the validation (computed and approximate jacobians)
file_path – path to the output file if plot_result is True
show – if True, open the figure
figsize_x – x size of the figure in inches
figsize_y – y size of the figure in inches
- Returns
True if the check is accepted, False otherwise
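The principle of the check, comparing an analytic derivative against a forward finite-difference approximation within a threshold, can be sketched for a scalar function (an illustration of the idea, not GEMSEO's implementation):

```python
def check_jacobian(func, dfunc, x, step=1e-7, threshold=1e-6):
    """Compare an analytic derivative with a forward finite difference."""
    approx = (func(x + step) - func(x)) / step
    error = abs(dfunc(x) - approx)
    return error <= threshold


# Correct derivative of x**2 passes; a wrong one fails.
ok = check_jacobian(lambda x: x * x, lambda x: 2.0 * x, x=3.0)
bad = check_jacobian(lambda x: x * x, lambda x: 3.0 * x, x=3.0)
```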
- check_output_data(raise_exception=True)
Check the output data validity.
- Parameters
raise_exception – if true, an exception is raised when data is invalid (Default value = True)
- classmethod deactivate_time_stamps()
Deactivate the time stamps for storing start and end times of execution and linearizations.
- property default_inputs
Accessor to the default inputs.
- static deserialize(in_file)
Deserialize the discipline from a file.
- Parameters
in_file – input file for serialization
- Returns
a discipline instance
- property exec_time
Return the cumulated execution time.
Multiprocessing safe.
- execute(input_data=None)
Execute the discipline.
This method executes the discipline:
- Adds default inputs to input_data if some inputs are not defined in input_data but exist in self._default_data
- Checks whether the last execution of the discipline was called with identical inputs, cached in self.cache; if yes, directly returns self.cache.get_output_cache(inputs)
- Caches the inputs
- Checks the input data against self.input_grammar
- If self.data_processor is not None, runs the preprocessor
- Updates the status to RUNNING
- Calls the _run() method, that shall be defined
- If self.data_processor is not None, runs the postprocessor
- Checks the output data
- Caches the outputs
- Updates the status to DONE or FAILED
- Updates the summed execution time
- Parameters
input_data (dict) – the input data dict needed to execute the discipline according to its input grammar (Default value = None)
- Returns
the discipline local data after execution
- Return type
dict
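The caching step above, returning the cached outputs when the inputs are identical to those of the last execution, can be sketched as follows (grammars, statuses and data processors omitted; all names are hypothetical):

```python
def make_executor(run):
    """Wrap a _run-style function with a last-execution cache."""
    cache = {"inputs": None, "outputs": None}

    def execute(input_data):
        if input_data == cache["inputs"]:
            return cache["outputs"]  # identical inputs: reuse the cache
        outputs = run(input_data)    # otherwise, call the wrapped _run()
        cache["inputs"], cache["outputs"] = dict(input_data), outputs
        return outputs

    return execute


calls = []


def run(data):
    calls.append(1)  # count the real executions
    return {"y": data["x"] + 1.0}


execute = make_executor(run)
first = execute({"x": 1.0})
second = execute({"x": 1.0})  # served from the cache, run() not called again
```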
- get_all_inputs()
Accessor for the input data as a list of values.
The order is given by self.get_input_data_names().
- Returns
the data
- get_all_outputs()
Accessor for the output data as a list of values.
The order is given by self.get_output_data_names().
- Returns
the data
- get_attributes_to_serialize()
Define the attributes to be serialized.
Shall be overloaded by disciplines
- Returns
the list of attributes names
- Return type
list
- static get_data_list_from_dict(keys, data_dict)
Filter the dict from a list of keys or a single key.
If keys is a string, then the method returns the value associated with the key. If keys is a list of strings, then the method returns a generator of the values corresponding to the keys, which can be iterated.
- Parameters
keys – a string key or a list of keys
data_dict – the dict to get the data from
- Returns
a data or a generator of data
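Equivalently, in plain Python:

```python
def get_data_list_from_dict(keys, data_dict):
    """Return one value for a string key, a generator for a list of keys."""
    if isinstance(keys, str):
        return data_dict[keys]
    return (data_dict[key] for key in keys)


data = {"x": 1.0, "y": 2.0}
single = get_data_list_from_dict("x", data)
several = list(get_data_list_from_dict(["x", "y"], data))
```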
- get_expected_dataflow()
Return the expected data exchange sequence.
This method is used for the XDSM representation.
Defaults to an empty list. See MDOFormulation.get_expected_dataflow.
- Returns
a list representing the data exchange arcs
- get_expected_workflow()
Return the expected execution sequence.
This method is used for the XDSM representation. Defaults to the execution of the discipline itself. See MDOFormulation.get_expected_workflow.
- get_input_data()
Accessor for the input data as a dict of values.
- Returns
the data dict
- get_input_data_names()
Accessor for the input names as a list.
- Returns
the data names list
- get_input_output_data_names()
Accessor for the input and output names as a list.
- Returns
the data names list
- get_inputs_asarray()
Accessor for the inputs as a large numpy array.
The order is the one of self.get_all_inputs().
- Returns
the inputs array
- Return type
ndarray
- get_inputs_by_name(data_names)
Accessor for the inputs as a list.
- Parameters
data_names – the data names list
- Returns
the data list
- get_local_data_by_name(data_names)
Accessor for the local data of the discipline as a dict of values.
- Parameters
data_names – the names of the data which will be the keys of the dictionary
- Returns
the data list
- get_output_data()
Accessor for the output data as a dict of values.
- Returns
the data dict
- get_output_data_names()
Accessor for the output names as a list.
- Returns
the data names list
- get_outputs_asarray()
Accessor for the outputs as a large numpy array.
The order is the one of self.get_all_outputs()
- Returns
the outputs array
- Return type
ndarray
- get_outputs_by_name(data_names)
Accessor for the outputs as a list.
- Parameters
data_names – the data names list
- Returns
the data list
- get_sub_disciplines()
Gets the sub-disciplines of self. By default, empty.
- Returns
the list of disciplines
- is_all_inputs_existing(data_names)
Test if all the names in data_names are inputs of the discipline.
- Parameters
data_names – the names of the inputs
- Returns
True if data_names are all in input grammar
- Return type
bool
- is_all_outputs_existing(data_names)
Test if all the names in data_names are outputs of the discipline.
- Parameters
data_names – the names of the outputs
- Returns
True if data_names are all in output grammar
- Return type
bool
- is_input_existing(data_name)
Test if input named data_name is an input of the discipline.
- Parameters
data_name – the name of the input
- Returns
True if data_name is in input grammar
- Return type
bool
- is_output_existing(data_name)
Test if output named data_name is an output of the discipline.
- Parameters
data_name – the name of the output
- Returns
True if data_name is in output grammar
- Return type
bool
- static is_scenario()
Return True if self is a scenario.
- Returns
True if self is a scenario
- property linearization_mode
Accessor to the linearization mode.
- linearize(input_data=None, force_all=False, force_no_exec=False)
Execute the linearized version of the code.
- Parameters
input_data – the input data dict needed to execute the disciplines according to the discipline input grammar
force_all – if False, self._differentiated_inputs and self._differentiated_outputs are used to filter the differentiated variables; otherwise, all outputs are differentiated wrt all inputs (Default value = False)
force_no_exec – if True, the discipline is not re executed, cache is loaded anyway
- property n_calls
Return the number of calls to execute() which triggered the _run().
Multiprocessing safe.
- property n_calls_linearize
Return the number of calls to linearize() which triggered the _compute_jacobian() method.
Multiprocessing safe.
- notify_status_observers()
Notify all status observers that the status has changed.
- remove_status_observer(obs)
Remove an observer for the status.
- Parameters
obs – the observer to remove
- reset_statuses_for_run()
Sets all the statuses to PENDING.
- serialize(out_file)
Serialize the discipline.
- Parameters
out_file – destination file for serialization
- set_cache_policy(cache_type='SimpleCache', cache_tolerance=0.0, cache_hdf_file=None, cache_hdf_node_name=None, is_memory_shared=True)
Set the type of cache to use and the tolerance level.
This method sets the cache policy to cache data whose inputs are close to inputs whose outputs are already cached. The cache can be either a simple cache recording the last execution or a full cache storing all executions. Caching data can be either in-memory, e.g. SimpleCache and MemoryFullCache, or on the disk, e.g. HDF5Cache. CacheFactory.caches provides the list of available types of caches.
- Parameters
cache_type (str) – type of cache to use.
cache_tolerance (float) – tolerance for the approximate cache: the maximal relative norm difference under which two input arrays are considered equal
cache_hdf_file (str) – the file to store the data, mandatory when HDF caching is used
cache_hdf_node_name (str) – name of the HDF dataset to store the discipline data. If None, self.name is used
is_memory_shared (bool) – If True, a shared memory dict is used to store the data, which makes the cache compatible with multiprocessing. WARNING: if set to False, and multiple disciplines point to the same cache or the process is multiprocessed, there may be duplicate computations because the cache will not be shared among the processes.
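The approximate-equality test behind cache_tolerance can be sketched with plain lists standing in for numpy arrays (the exact norm used is an assumption):

```python
def inputs_match(cached, new, cache_tolerance):
    """Approximate equality: relative difference below the tolerance."""
    scale = max(max(abs(v) for v in cached), 1e-14)  # avoid division by zero
    difference = max(abs(a - b) for a, b in zip(cached, new))
    return difference / scale <= cache_tolerance


hit = inputs_match([1.0, 2.0], [1.0, 2.0000001], cache_tolerance=1e-6)
miss = inputs_match([1.0, 2.0], [1.0, 2.1], cache_tolerance=1e-6)
```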
- set_disciplines_statuses(status)
Set the sub disciplines statuses.
To be implemented in subclasses.
- Parameters
status – the status
- set_jacobian_approximation(jac_approx_type='finite_differences', jax_approx_step=1e-07, jac_approx_n_processes=1, jac_approx_use_threading=False, jac_approx_wait_time=0)
Set the jacobian approximation method.
Sets the linearization mode to the given approximation method and stores the parameters of the approximation for further use when calling self.linearize.
- Parameters
jac_approx_type – “complex_step” or “finite_differences”
jax_approx_step – the step for finite differences or complex step
jac_approx_n_processes – maximum number of processors on which to run
jac_approx_use_threading – if True, use threads instead of processes to parallelize the execution. Multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. This is important to note: if you want to execute the same discipline multiple times, you shall use multiprocessing.
jac_approx_wait_time – time waited between two forks of the process /Thread
- set_optimal_fd_step(outputs=None, inputs=None, force_all=False, print_errors=False, numerical_error=2.220446049250313e-16)
Compute the optimal finite-difference step.
Compute the optimal step for a forward first order finite differences gradient approximation. Requires a first evaluation of perturbed functions values. The optimal step is reached when the truncation error (cut in the Taylor development), and the numerical cancellation errors (roundoff when doing f(x+step)-f(x)) are approximately equal.
Warning: this calls the discipline execution two times per input variable.
See https://en.wikipedia.org/wiki/Numerical_differentiation and “Numerical Algorithms and Digital Representation”, Knut Morken, Chapter 11, “Numerical Differentiation”.
- Parameters
inputs – inputs wrt which the linearization is made; if None, use the differentiated inputs
outputs – outputs for which the linearization is made; if None, use the differentiated outputs
force_all – if True, all inputs and outputs are used
print_errors – if True, displays the estimated errors
numerical_error – numerical error associated to the calculation of f. By default Machine epsilon (appx 1e-16), but can be higher when the calculation of f requires a numerical resolution
- Returns
the estimated truncation and cancellation errors.
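The balance described above, truncation error ≈ step·|f''|/2 against cancellation error ≈ 2·ε·|f|/step, yields an optimal step of 2·sqrt(ε·|f|/|f''|); a sketch of that formula (an illustration of the principle, not GEMSEO's algorithm):

```python
import math


def optimal_fd_step(f_value, f_second, numerical_error=2.220446049250313e-16):
    """Step where truncation and cancellation errors are roughly equal."""
    return 2.0 * math.sqrt(numerical_error * abs(f_value) / abs(f_second))


# For f(x) = x**2 at x = 1: f = 1, f'' = 2, giving a step near 2e-8.
step = optimal_fd_step(1.0, 2.0)
```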
- property status
Status accessor.
- store_local_data(**kwargs)
Store discipline data in local data.
- Parameters
kwargs – the data as key value pairs
Iris dataset¶
This is one of the best known Dataset
to be found in the machine learning literature.
It was introduced by the statistician Ronald Fisher in his 1936 paper “The use of multiple measurements in taxonomic problems”, Annals of Eugenics. 7 (2): 179–188.
It contains 150 instances of iris plants:
50 Iris Setosa,
50 Iris Versicolour,
50 Iris Virginica.
Each instance is characterized by:
its sepal length in cm,
its sepal width in cm,
its petal length in cm,
its petal width in cm.
This Dataset
can be used for either clustering purposes
or classification ones.
More information about the Iris dataset
Classes:
IrisDataset | Iris dataset parametrization.
- class gemseo.problems.dataset.iris.IrisDataset(name='Iris', by_group=True, as_io=False)[source]
Iris dataset parametrization.
Constructor.
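A minimal usage sketch of the class documented above; the import is guarded so the example degrades gracefully when gemseo is not installed in the current environment.

```python
# A minimal usage sketch; the import is guarded so the example
# degrades gracefully without gemseo installed.
try:
    from gemseo.problems.dataset.iris import IrisDataset

    dataset = IrisDataset()
    n_samples = dataset.n_samples
except ImportError:
    n_samples = 150  # the documented number of iris instances
```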
Methods:
add_group(group, data[, variables, sizes, …]): Add data related to a group.
add_variable(name, data[, group, cache_as_input]): Add data related to a variable.
compare(value_1, logical_operator, value_2): Compare either a variable and a value or a variable and another variable.
export_to_cache([inputs, outputs, …]): Export the dataset to a cache.
export_to_dataframe([copy]): Export the dataset to a pandas DataFrame.
find(comparison): Find the entries for which a comparison is satisfied.
get_all_data([by_group, as_dict]): Get all the data stored in the dataset.
get_available_plots(): Return the available plot methods.
get_data_by_group(group[, as_dict]): Get the data for a specific group name.
get_data_by_names(names[, as_dict]): Get the data for specific names of variables.
get_group(variable_name): Get the name of the group that contains a variable.
get_names(group_name): Get the names of the variables of a group.
get_normalized_dataset([excluded_variables, …]): Get a normalized copy of the dataset.
is_empty(): Check if the dataset is empty.
is_group(name): Check if a name is a group name.
is_nan(): Check if an entry contains NaN.
is_variable(name): Check if a name is a variable name.
n_variables_by_group(group): The number of variables for a group.
plot(name[, show, save]): Plot the dataset from a DatasetPlot.
remove(entries): Remove entries.
set_from_array(data[, variables, sizes, …]): Set the dataset from an array.
set_from_file(filename[, variables, sizes, …]): Set the dataset from a file.
set_metadata(name, value): Set a metadata attribute.
Attributes:
columns_names: The names of the columns of the dataset.
groups: The sorted names of the groups of variables.
n_samples: The number of samples.
n_variables: The number of variables.
row_names: The names of the rows.
variables: The sorted names of the variables.
- add_group(group, data, variables=None, sizes=None, pattern=None, cache_as_input=True)
Add data related to a group.
- Parameters
group (str) – The name of the group of data to be added.
data (numpy.ndarray) – The data to be added.
variables (Optional[List[str]]) – The names of the variables. If None, use default names based on a pattern.
sizes (Optional[Dict[str, int]]) – The sizes of the variables. If None, assume that all the variables have a size equal to 1.
pattern (Optional[str]) – The name of the variable to be used as a pattern when variables is None. If None, use the Dataset.DEFAULT_NAMES for this group if it exists. Otherwise, use the group name.
cache_as_input (bool) – If True, cache these data as inputs when the dataset is exported to a cache.
- Return type
str
- add_variable(name, data, group='parameters', cache_as_input=True)
Add data related to a variable.
- Parameters
name (str) – The name of the variable to be stored.
data (numpy.ndarray) – The data to be stored.
group (str) – The name of the group related to this variable.
cache_as_input (bool) – If True, cache these data as inputs when the dataset is exported to a cache.
- Return type
None
- property columns_names
The names of the columns of the dataset.
- compare(value_1, logical_operator, value_2, component_1=0, component_2=0)
Compare either a variable and a value or a variable and another variable.
- Parameters
value_1 (Union[str, float]) – The first value, either a variable name or a numeric value.
logical_operator (str) – The logical operator, either “==”, “<”, “<=”, “>” or “>=”.
value_2 (Union[str, float]) – The second value, either a variable name or a numeric value.
component_1 (int) – If value_1 is a variable name, component_1 corresponds to its component used in the comparison.
component_2 (int) – If value_2 is a variable name, component_2 corresponds to its component used in the comparison.
- Returns
Whether the comparison is valid for the different entries.
- Return type
numpy.ndarray
- export_to_cache(inputs=None, outputs=None, cache_type='MemoryFullCache', cache_hdf_file=None, cache_hdf_node_name=None, **options)
Export the dataset to a cache.
- Parameters
inputs (Optional[Iterable[str]]) – The names of the inputs to cache. If None, use all inputs.
outputs (Optional[Iterable[str]]) – The names of the outputs to cache. If None, use all outputs.
cache_type (str) – The type of cache to use.
cache_hdf_file (Optional[str]) – The name of the HDF file to store the data. Required if the type of the cache is ‘HDF5Cache’.
cache_hdf_node_name (Optional[str]) – The name of the HDF node to store the dataset. If None, use the name of the dataset.
- Returns
A cache containing the dataset.
- Return type
- export_to_dataframe(copy=True)
Export the dataset to a pandas DataFrame.
- Parameters
copy (bool) – If True, copy data. Otherwise, use reference.
- Returns
A pandas DataFrame containing the dataset.
- Return type
DataFrame
- static find(comparison)
Find the entries for which a comparison is satisfied.
This search uses a boolean 1D array whose length is equal to the length of the dataset.
- Parameters
comparison (numpy.ndarray) – A boolean vector whose length is equal to the number of samples.
- Returns
The indices of the entries for which the comparison is satisfied.
- Return type
List[int]
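A sketch of the mechanics behind compare() and find(): a comparison yields one boolean per sample, and find() turns that mask into the list of matching row indices (pure Python, not GEMSEO code; the values are illustrative).

```python
# One boolean per sample, e.g. the result of comparing a sepal-length
# column against a threshold; the values here are illustrative.
sepal_length = [5.1, 4.9, 6.3, 5.8, 7.1]
comparison = [value > 5.5 for value in sepal_length]

# find(): the indices of the entries satisfying the comparison.
indices = [i for i, satisfied in enumerate(comparison) if satisfied]
```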
- get_all_data(by_group=True, as_dict=False)
Get all the data stored in the dataset.
The data can be returned either as a dictionary indexed by the names of the variables, or as an array concatenating them, accompanied by the names and sizes of the variables.
The data can also be classified by groups of variables.
- Parameters
by_group – If True, sort the data by group.
as_dict – If True, return the data as a dictionary.
- Returns
All the data stored in the dataset.
- Return type
Union[Dict[str, Union[Dict[str, numpy.ndarray], numpy.ndarray]], Tuple[Union[numpy.ndarray, Dict[str, numpy.ndarray]], List[str], Dict[str, int]]]
- get_available_plots()
Return the available plot methods.
- Return type
List[str]
- get_data_by_group(group, as_dict=False)
Get the data for a specific group name.
- Parameters
group (str) – The name of the group.
as_dict (bool) – If True, return values as dictionary.
- Returns
The data related to the group.
- Return type
Union[numpy.ndarray, Dict[str, numpy.ndarray]]
- get_data_by_names(names, as_dict=True)
Get the data for specific names of variables.
- Parameters
names (Union[str, Iterable[str]]) – The names of the variables.
as_dict (bool) – If True, return values as dictionary.
- Returns
The data related to the variables.
- Return type
Union[numpy.ndarray, Dict[str, numpy.ndarray]]
- get_group(variable_name)
Get the name of the group that contains a variable.
- Parameters
variable_name (str) – The name of the variable.
- Returns
The group to which the variable belongs.
- Return type
str
- get_names(group_name)
Get the names of the variables of a group.
- Parameters
group_name (str) – The name of the group.
- Returns
The names of the variables of the group.
- Return type
List[str]
- get_normalized_dataset(excluded_variables=None, excluded_groups=None)
Get a normalized copy of the dataset.
- Parameters
excluded_variables (Optional[Sequence[str]]) – The names of the variables not to be normalized. If None, normalize all the variables.
excluded_groups (Optional[Sequence[str]]) – The names of the groups not to be normalized. If None, normalize all the groups.
- Returns
A normalized dataset.
- Return type
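A sketch of one common normalization convention, min-max scaling per variable; whether Dataset uses min-max or another scaling is not specified here, so take this as an assumption.

```python
def min_max_normalize(column):
    """Rescale a 1D column of values to [0, 1] (an assumed convention)."""
    lo, hi = min(column), max(column)
    return [(value - lo) / (hi - lo) for value in column]

normalized = min_max_normalize([2.0, 4.0, 6.0])  # → [0.0, 0.5, 1.0]
```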
- property groups
The sorted names of the groups of variables.
- is_empty()
Check if the dataset is empty.
- Returns
Whether the dataset is empty.
- Return type
bool
- is_group(name)
Check if a name is a group name.
- Parameters
name (str) – A name of a group.
- Returns
Whether the name is a group name.
- Return type
bool
- is_nan()
Check if an entry contains NaN.
- Returns
Whether the entries contain NaN.
- Return type
numpy.ndarray
- is_variable(name)
Check if a name is a variable name.
- Parameters
name (str) – A name of a variable.
- Returns
Whether the name is a variable name.
- Return type
bool
- property n_samples
The number of samples.
- property n_variables
The number of variables.
- n_variables_by_group(group)
The number of variables for a group.
- Parameters
group (str) – The name of a group.
- Returns
The group dimension.
- Return type
int
- plot(name, show=True, save=False, **options)
Plot the dataset from a DatasetPlot.
See Dataset.get_available_plots().
- Parameters
name (str) – The name of the post-processing, which is the name of a class inheriting from DatasetPlot.
show (bool) – If True, display the figure.
save (bool) – If True, save the figure.
options – The options for the post-processing.
- Return type
None
- remove(entries)
Remove entries.
- Parameters
entries (Union[List[int], numpy.ndarray]) – The entries to be removed, either indices or a boolean 1D array whose length is equal to the length of the dataset, where the entries to delete are marked True.
- Return type
None
- property row_names
The names of the rows.
- set_from_array(data, variables=None, sizes=None, groups=None, default_name=None)
Set the dataset from an array.
- Parameters
data (numpy.ndarray) – The data to be stored.
variables (Optional[List[str]]) – The names of the variables. If None, use one default name per column of the array based on the pattern ‘default_name’.
sizes (Optional[Dict[str, int]]) – The sizes of the variables. If None, assume that all the variables have a size equal to 1.
groups (Optional[Dict[str, str]]) – The groups of the variables. If None, use Dataset.DEFAULT_GROUP for all the variables.
default_name (Optional[str]) – The name of the variable to be used as a pattern when variables is None. If None, use the Dataset.DEFAULT_NAMES for this group if it exists. Otherwise, use the group name.
- Return type
None
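A sketch of how a 2D array can map onto named variables: columns are consumed left to right according to the declared sizes (the variable names and sizes here are illustrative, not GEMSEO code).

```python
# Two samples of three columns; "x" takes the first two columns
# and "y" the third, per the declared sizes.
data = [
    [1.0, 2.0, 3.0],
    [4.0, 5.0, 6.0],
]
sizes = {"x": 2, "y": 1}

variables, start = {}, 0
for name, size in sizes.items():
    variables[name] = [row[start:start + size] for row in data]
    start += size
```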
- set_from_file(filename, variables=None, sizes=None, groups=None, delimiter=',', header=True)
Set the dataset from a file.
- Parameters
filename (str) – The name of the file containing the data.
variables (Optional[List[str]]) – The names of the variables. If None and header is True, read the names from the first line of the file. If None and header is False, use default names based on the patterns of the Dataset.DEFAULT_NAMES associated with the different groups.
sizes (Optional[Dict[str, int]]) – The sizes of the variables. If None, assume that all the variables have a size equal to 1.
groups (Optional[Dict[str, str]]) – The groups of the variables. If None, use Dataset.DEFAULT_GROUP for all the variables.
delimiter (str) – The field delimiter.
header (bool) – If True, read the names of the variables on the first line of the file.
- Return type
None
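A sketch of the file layout set_from_file() expects: a delimited text file with an optional header line of variable names followed by one row per sample (the data here are illustrative and parsed from an in-memory string rather than a file).

```python
import csv
import io

text = "x,y\n1.0,2.0\n3.0,4.0\n"
rows = list(csv.reader(io.StringIO(text)))

header = rows[0]                                      # variable names
data = [[float(v) for v in row] for row in rows[1:]]  # one row per sample
```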
- set_metadata(name, value)
Set a metadata attribute.
- Parameters
name (str) – The name of the metadata attribute.
value (Any) – The value of the metadata attribute.
- Return type
None
- property variables
The sorted names of the variables.
Rosenbrock dataset¶
This Dataset
contains 100 evaluations
of the well-known Rosenbrock function:

\(f(x, y) = (1 - x)^2 + 100(y - x^2)^2\)

This function is known for its global minimum at the point (1, 1), its banana-shaped valley and the difficulty of reaching that minimum.
This Dataset
is based on a full-factorial
design of experiments.
More information about the Rosenbrock function
Classes:
RosenbrockDataset | Rosenbrock dataset parametrization.
- class gemseo.problems.dataset.rosenbrock.RosenbrockDataset(name='Rosenbrock', by_group=True, n_samples=100, categorize=True, opt_naming=True)[source]
Rosenbrock dataset parametrization.
Constructor.
- Parameters
name (str) – The name of the dataset.
by_group (bool) – If True, store the data by group. Otherwise, store them by variables. Default: True.
n_samples (int) – The number of samples.
categorize (bool) – If True, distinguish between the different groups of variables. Default: True.
opt_naming (bool) – If True, use an optimization naming. Default: True.
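A sketch of how such a dataset can be built: the Rosenbrock function evaluated on a 10 × 10 full-factorial grid, matching the default n_samples=100 (the design space [-2, 2]² is an assumption for illustration).

```python
def rosenbrock(x, y):
    """The Rosenbrock function, with its global minimum at (1, 1)."""
    return 100.0 * (y - x**2) ** 2 + (1.0 - x) ** 2

# 10 levels per variable over [-2, 2] -> 10 x 10 = 100 samples,
# a full-factorial design of experiments.
levels = [-2.0 + 4.0 * i / 9 for i in range(10)]
samples = [(x, y, rosenbrock(x, y)) for x in levels for y in levels]
```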
Methods:
add_group(group, data[, variables, sizes, …]): Add data related to a group.
add_variable(name, data[, group, cache_as_input]): Add data related to a variable.
compare(value_1, logical_operator, value_2): Compare either a variable and a value or a variable and another variable.
export_to_cache([inputs, outputs, …]): Export the dataset to a cache.
export_to_dataframe([copy]): Export the dataset to a pandas DataFrame.
find(comparison): Find the entries for which a comparison is satisfied.
get_all_data([by_group, as_dict]): Get all the data stored in the dataset.
get_available_plots(): Return the available plot methods.
get_data_by_group(group[, as_dict]): Get the data for a specific group name.
get_data_by_names(names[, as_dict]): Get the data for specific names of variables.
get_group(variable_name): Get the name of the group that contains a variable.
get_names(group_name): Get the names of the variables of a group.
get_normalized_dataset([excluded_variables, …]): Get a normalized copy of the dataset.
is_empty(): Check if the dataset is empty.
is_group(name): Check if a name is a group name.
is_nan(): Check if an entry contains NaN.
is_variable(name): Check if a name is a variable name.
n_variables_by_group(group): The number of variables for a group.
plot(name[, show, save]): Plot the dataset from a DatasetPlot.
remove(entries): Remove entries.
set_from_array(data[, variables, sizes, …]): Set the dataset from an array.
set_from_file(filename[, variables, sizes, …]): Set the dataset from a file.
set_metadata(name, value): Set a metadata attribute.
Attributes:
columns_names: The names of the columns of the dataset.
groups: The sorted names of the groups of variables.
n_samples: The number of samples.
n_variables: The number of variables.
row_names: The names of the rows.
variables: The sorted names of the variables.
- add_group(group, data, variables=None, sizes=None, pattern=None, cache_as_input=True)
Add data related to a group.
- Parameters
group (str) – The name of the group of data to be added.
data (numpy.ndarray) – The data to be added.
variables (Optional[List[str]]) – The names of the variables. If None, use default names based on a pattern.
sizes (Optional[Dict[str, int]]) – The sizes of the variables. If None, assume that all the variables have a size equal to 1.
pattern (Optional[str]) – The name of the variable to be used as a pattern when variables is None. If None, use the
Dataset.DEFAULT_NAMES
for this group if it exists. Otherwise, use the group name.cache_as_input (bool) – If True, cache these data as inputs when the cache is exported to a cache.
- Return type
str
- add_variable(name, data, group='parameters', cache_as_input=True)
Add data related to a variable.
- Parameters
name (str) – The name of the variable to be stored.
data (numpy.ndarray) – The data to be stored.
group (str) – The name of the group related to this variable.
cache_as_input (bool) – If True, cache these data as inputs when the dataset is exported to a cache.
- Return type
None
- property columns_names
The names of the columns of the dataset.
- compare(value_1, logical_operator, value_2, component_1=0, component_2=0)
Compare either a variable and a value or a variable and another variable.
- Parameters
value_1 (Union[str, float]) – The first value, either a variable name or a numeric value.
logical_operator (str) – The logical operator, either “==”, “<”, “<=”, “>” or “>=”.
value_2 (Union[str, float]) – The second value, either a variable name or a numeric value.
component_1 (int) – If value_1 is a variable name, component_1 corresponds to its component used in the comparison.
component_2 (int) – If value_2 is a variable name, component_2 corresponds to its component used in the comparison.
- Returns
Whether the comparison is valid for the different entries.
- Return type
numpy.ndarray
- export_to_cache(inputs=None, outputs=None, cache_type='MemoryFullCache', cache_hdf_file=None, cache_hdf_node_name=None, **options)
Export the dataset to a cache.
- Parameters
inputs (Optional[Iterable[str]]) – The names of the inputs to cache. If None, use all inputs.
outputs (Optional[Iterable[str]]) – The names of the outputs to cache. If None, use all outputs.
cache_type (str) – The type of cache to use.
cache_hdf_file (Optional[str]) – The name of the HDF file to store the data. Required if the type of the cache is ‘HDF5Cache’.
cache_hdf_node_name (Optional[str]) – The name of the HDF node to store the dataset. If None, use the name of the dataset.
- Returns
A cache containing the dataset.
- Return type
- export_to_dataframe(copy=True)
Export the dataset to a pandas DataFrame.
- Parameters
copy (bool) – If True, copy data. Otherwise, use reference.
- Returns
A pandas DataFrame containing the dataset.
- Return type
DataFrame
- static find(comparison)
Find the entries for which a comparison is satisfied.
This search uses a boolean 1D array whose length is equal to the length of the dataset.
- Parameters
comparison (numpy.ndarray) – A boolean vector whose length is equal to the number of samples.
- Returns
The indices of the entries for which the comparison is satisfied.
- Return type
List[int]
- get_all_data(by_group=True, as_dict=False)
Get all the data stored in the dataset.
The data can be returned either as a dictionary indexed by the names of the variables, or as an array concatenating them, accompanied by the names and sizes of the variables.
The data can also be classified by groups of variables.
- Parameters
by_group – If True, sort the data by group.
as_dict – If True, return the data as a dictionary.
- Returns
All the data stored in the dataset.
- Return type
Union[Dict[str, Union[Dict[str, numpy.ndarray], numpy.ndarray]], Tuple[Union[numpy.ndarray, Dict[str, numpy.ndarray]], List[str], Dict[str, int]]]
- get_available_plots()
Return the available plot methods.
- Return type
List[str]
- get_data_by_group(group, as_dict=False)
Get the data for a specific group name.
- Parameters
group (str) – The name of the group.
as_dict (bool) – If True, return values as dictionary.
- Returns
The data related to the group.
- Return type
Union[numpy.ndarray, Dict[str, numpy.ndarray]]
- get_data_by_names(names, as_dict=True)
Get the data for specific names of variables.
- Parameters
names (Union[str, Iterable[str]]) – The names of the variables.
as_dict (bool) – If True, return values as dictionary.
- Returns
The data related to the variables.
- Return type
Union[numpy.ndarray, Dict[str, numpy.ndarray]]
- get_group(variable_name)
Get the name of the group that contains a variable.
- Parameters
variable_name (str) – The name of the variable.
- Returns
The group to which the variable belongs.
- Return type
str
- get_names(group_name)
Get the names of the variables of a group.
- Parameters
group_name (str) – The name of the group.
- Returns
The names of the variables of the group.
- Return type
List[str]
- get_normalized_dataset(excluded_variables=None, excluded_groups=None)
Get a normalized copy of the dataset.
- Parameters
excluded_variables (Optional[Sequence[str]]) – The names of the variables not to be normalized. If None, normalize all the variables.
excluded_groups (Optional[Sequence[str]]) – The names of the groups not to be normalized. If None, normalize all the groups.
- Returns
A normalized dataset.
- Return type
- property groups
The sorted names of the groups of variables.
- is_empty()
Check if the dataset is empty.
- Returns
Whether the dataset is empty.
- Return type
bool
- is_group(name)
Check if a name is a group name.
- Parameters
name (str) – A name of a group.
- Returns
Whether the name is a group name.
- Return type
bool
- is_nan()
Check if an entry contains NaN.
- Returns
Whether the entries contain NaN.
- Return type
numpy.ndarray
- is_variable(name)
Check if a name is a variable name.
- Parameters
name (str) – A name of a variable.
- Returns
Whether the name is a variable name.
- Return type
bool
- property n_samples
The number of samples.
- property n_variables
The number of variables.
- n_variables_by_group(group)
The number of variables for a group.
- Parameters
group (str) – The name of a group.
- Returns
The group dimension.
- Return type
int
- plot(name, show=True, save=False, **options)
Plot the dataset from a DatasetPlot.
See Dataset.get_available_plots().
- Parameters
name (str) – The name of the post-processing, which is the name of a class inheriting from DatasetPlot.
show (bool) – If True, display the figure.
save (bool) – If True, save the figure.
options – The options for the post-processing.
- Return type
None
- remove(entries)
Remove entries.
- Parameters
entries (Union[List[int], numpy.ndarray]) – The entries to be removed, either indices or a boolean 1D array whose length is equal to the length of the dataset, where the entries to delete are marked True.
- Return type
None
- property row_names
The names of the rows.
- set_from_array(data, variables=None, sizes=None, groups=None, default_name=None)
Set the dataset from an array.
- Parameters
data (numpy.ndarray) – The data to be stored.
variables (Optional[List[str]]) – The names of the variables. If None, use one default name per column of the array based on the pattern ‘default_name’.
sizes (Optional[Dict[str, int]]) – The sizes of the variables. If None, assume that all the variables have a size equal to 1.
groups (Optional[Dict[str, str]]) – The groups of the variables. If None, use Dataset.DEFAULT_GROUP for all the variables.
default_name (Optional[str]) – The name of the variable to be used as a pattern when variables is None. If None, use the Dataset.DEFAULT_NAMES for this group if it exists. Otherwise, use the group name.
- Return type
None
- set_from_file(filename, variables=None, sizes=None, groups=None, delimiter=',', header=True)
Set the dataset from a file.
- Parameters
filename (str) – The name of the file containing the data.
variables (Optional[List[str]]) – The names of the variables. If None and header is True, read the names from the first line of the file. If None and header is False, use default names based on the patterns of the Dataset.DEFAULT_NAMES associated with the different groups.
sizes (Optional[Dict[str, int]]) – The sizes of the variables. If None, assume that all the variables have a size equal to 1.
groups (Optional[Dict[str, str]]) – The groups of the variables. If None, use Dataset.DEFAULT_GROUP for all the variables.
delimiter (str) – The field delimiter.
header (bool) – If True, read the names of the variables on the first line of the file.
- Return type
None
- set_metadata(name, value)
Set a metadata attribute.
- Parameters
name (str) – The name of the metadata attribute.
value (Any) – The value of the metadata attribute.
- Return type
None
- property variables
The sorted names of the variables.