Built-in datasets¶
A factory for datasets.
- class gemseo.problems.dataset.factory.DatasetFactory[source]
A factory for Dataset.
- Return type
None
- create(dataset, **options)[source]
Create a Dataset.
- Parameters
dataset (str) – The name of the dataset (its classname).
**options (Any) – The options of the dataset.
- Returns
A dataset.
- Return type
Dataset
- is_available(dataset)[source]
Check the availability of a dataset.
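For instance, a minimal usage sketch of the factory; the class name and option below come from the dataset entries documented further down this page:

from gemseo.problems.dataset.factory import DatasetFactory

factory = DatasetFactory()
# Check that the dataset class is registered before instantiating it.
if factory.is_available("RosenbrockDataset"):
    # Keyword options are forwarded to the dataset constructor.
    dataset = factory.create("RosenbrockDataset", n_samples=100)
    print(dataset.n_samples)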
Burgers dataset¶
This Dataset contains solutions to the Burgers' equation with periodic boundary conditions on the interval \([0, 2\pi]\) for different time steps:
\[u_t + u u_x = \nu u_{xx}.\]
An analytical expression can be obtained for the solution, using the Cole-Hopf transform:
\[u(t, x) = -2\nu \frac{\phi_x}{\phi},\]
where \(\phi\) is solution to the heat equation \(\phi_t = \nu \phi_{xx}\).
This Dataset is based on a full-factorial design of experiments. Each sample corresponds to a given time step \(t\), while each feature corresponds to a given spatial point \(x\).
More information about Burgers’ equation
- class gemseo.problems.dataset.burgers.BurgersDataset(name='Burgers', by_group=True, n_samples=30, n_x=501, fluid_viscosity=0.1, categorize=True)[source]
Burgers dataset parametrization.
- Parameters
name (str) –
The name of the dataset.
By default it is set to Burgers.
by_group (bool) –
Whether to store the data by group. Otherwise, store them by variables.
By default it is set to True.
n_samples (int) –
The number of samples.
By default it is set to 30.
n_x (int) –
The number of spatial points.
By default it is set to 501.
fluid_viscosity (float) –
The fluid viscosity.
By default it is set to 0.1.
categorize (bool) –
Whether to distinguish between the different groups of variables.
By default it is set to True.
- Return type
None
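A short instantiation sketch using the documented constructor arguments:

from gemseo.problems.dataset.burgers import BurgersDataset

# 50 time steps, 201 spatial points, and a lower fluid viscosity than the default.
dataset = BurgersDataset(n_samples=50, n_x=201, fluid_viscosity=0.05)
print(dataset.n_samples)    # number of time steps stored as samples
print(dataset.n_variables)  # number of variables in the dataset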
- add_group(group, data, variables=None, sizes=None, pattern=None, cache_as_input=True)
Add data related to a group.
- Parameters
group (str) – The name of the group of data to be added.
data (ndarray) – The data to be added.
variables (list[str] | None) –
The names of the variables. If None, use default names based on a pattern.
By default it is set to None.
sizes (dict[str, int] | None) –
The sizes of the variables. If None, assume that all the variables have a size equal to 1.
By default it is set to None.
pattern (str | None) –
The name of the variable to be used as a pattern when variables is None. If None, use the
Dataset.DEFAULT_NAMES
for this group if it exists. Otherwise, use the group name. By default it is set to None.
cache_as_input (bool) –
If True, cache these data as inputs when the dataset is exported to a cache.
By default it is set to True.
- Return type
None
- add_variable(name, data, group='parameters', cache_as_input=True)
Add data related to a variable.
- Parameters
name (str) – The name of the variable to be stored.
data (numpy.ndarray) – The data to be stored.
group (str) –
The name of the group related to this variable.
By default it is set to parameters.
cache_as_input (bool) –
If True, cache these data as inputs when the dataset is exported to a cache.
By default it is set to True.
- Return type
None
- compare(value_1, logical_operator, value_2, component_1=0, component_2=0)
Compare either a variable and a value or a variable and another variable.
- Parameters
value_1 (str | float) – The first value, either a variable name or a numeric value.
logical_operator (str) – The logical operator, either “==”, “<”, “<=”, “>” or “>=”.
value_2 (str | float) – The second value, either a variable name or a numeric value.
component_1 (int) –
If value_1 is a variable name, component_1 corresponds to its component used in the comparison.
By default it is set to 0.
component_2 (int) –
If value_2 is a variable name, component_2 corresponds to its component used in the comparison.
By default it is set to 0.
- Returns
Whether the comparison is valid for the different entries.
- Return type
ndarray
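A sketch combining compare() with find() (documented below); the base-class import path and the variable name "x" are assumptions of this example:

from numpy import array
from gemseo.core.dataset import Dataset  # assumed import path of the base Dataset class

dataset = Dataset("demo")
dataset.set_from_array(array([[0.2], [0.8], [0.5]]), variables=["x"])
# compare() returns one boolean per sample...
mask = dataset.compare("x", ">", 0.4)
# ...and find() turns that boolean vector into the indices of the matching entries.
print(dataset.find(mask))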
- export_to_cache(inputs=None, outputs=None, cache_type='MemoryFullCache', cache_hdf_file=None, cache_hdf_node_name=None, **options)
Export the dataset to a cache.
- Parameters
inputs (Iterable[str] | None) –
The names of the inputs to cache. If None, use all inputs.
By default it is set to None.
outputs (Iterable[str] | None) –
The names of the outputs to cache. If None, use all outputs.
By default it is set to None.
cache_type (str) –
The type of cache to use.
By default it is set to MemoryFullCache.
cache_hdf_file (str | None) –
The name of the HDF file to store the data. Required if the type of the cache is ‘HDF5Cache’.
By default it is set to None.
cache_hdf_node_name (str | None) –
The name of the HDF node to store the discipline. If None, use the name of the dataset.
By default it is set to None.
- Returns
A cache containing the dataset.
- Return type
- export_to_dataframe(copy=True, variable_names=None)
Export the dataset to a pandas DataFrame.
- static find(comparison)
Find the entries for which a comparison is satisfied.
This search uses a boolean 1D array whose length is equal to the length of the dataset.
- Parameters
comparison (numpy.ndarray) – A boolean vector whose length is equal to the number of samples.
- Returns
The indices of the entries for which the comparison is satisfied.
- Return type
- get_all_data(by_group=True, as_dict=False)
Get all the data stored in the dataset.
The data can be returned either as a dictionary indexed by the names of the variables, or as an array concatenating them, accompanied by the names and sizes of the variables.
The data can also be classified by groups of variables.
- Parameters
by_group –
If True, sort the data by group.
By default it is set to True.
as_dict –
If True, return the data as a dictionary.
By default it is set to False.
- Returns
All the data stored in the dataset.
- Return type
Union[Dict[str, Union[Dict[str, numpy.ndarray], numpy.ndarray]], Tuple[Union[numpy.ndarray, Dict[str, numpy.ndarray]], List[str], Dict[str, int]]]
- get_column_names(variables=None, as_tuple=False, start=0)
Return the names of the columns of the dataset.
If dim(x)=1, its column name is 'x', while if dim(x)=2, its column names are either 'x_0' and 'x_1' or ColumnName(group_name, 'x', '0') and ColumnName(group_name, 'x', '1').
- Parameters
variables (Sequence[str]) –
The names of the variables. If None, use all the variables. By default it is set to None.
as_tuple (bool) –
If True, return the names as named tuples. Otherwise, return the names as strings.
By default it is set to False.
start (int) –
The first index for the components of a variable. E.g. with ‘0’: ‘x_0’, ‘x_1’, …
By default it is set to 0.
- Returns
The names of the columns of the dataset.
- Return type
list[str | ColumnName]
- get_data_by_group(group, as_dict=False)
Get the data for a specific group name.
- get_data_by_names(names, as_dict=True)
Get the data for specific names of variables.
- get_group(variable_name)
Get the name of the group that contains a variable.
- get_names(group_name)
Get the names of the variables of a group.
- get_normalized_dataset(excluded_variables=None, excluded_groups=None)
Get a normalized copy of the dataset.
- Parameters
excluded_variables (Sequence[str] | None) –
The names of the variables not to be normalized. If None, normalize all the variables.
By default it is set to None.
excluded_groups (Sequence[str] | None) –
The names of the groups not to be normalized. If None, normalize all the groups.
By default it is set to None.
- Returns
A normalized dataset.
- Return type
- is_empty()
Check if the dataset is empty.
- Returns
Whether the dataset is empty.
- Return type
- is_group(name)
Check if a name is a group name.
- is_nan()
Check if an entry contains NaN.
- Returns
Whether any entries are NaN or not.
- Return type
- is_variable(name)
Check if a name is a variable name.
- n_variables_by_group(group)
The number of variables for a group.
- plot(name, show=True, save=False, file_path=None, directory_path=None, file_name=None, file_format=None, properties=None, **options)
Plot the dataset from a DatasetPlot. See Dataset.get_available_plots().
- Parameters
name (str) – The name of the post-processing, which is the name of a class inheriting from DatasetPlot.
show (bool) –
If True, display the figure.
By default it is set to True.
save (bool) –
If True, save the figure.
By default it is set to False.
file_path (str | Path | None) –
The path of the file to save the figures. If None, create a file path from directory_path, file_name and file_format. By default it is set to None.
directory_path (str | Path | None) –
The path of the directory to save the figures. If None, use the current working directory.
By default it is set to None.
file_name (str | None) –
The name of the file to save the figures. If None, use a default one generated by the post-processing.
By default it is set to None.
file_format (str | None) –
A file format, e.g. ‘png’, ‘pdf’, ‘svg’, … If None, use a default file extension.
By default it is set to None.
properties (Mapping[str, DatasetPlotPropertyType] | None) –
The general properties of a DatasetPlot. By default it is set to None.
**options – The options for the post-processing.
- Return type
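For example, saving a plot to disk instead of displaying it; "ScatterMatrix" is assumed to be one of the available DatasetPlot classes (check Dataset.get_available_plots() first):

from gemseo.problems.dataset.iris import IrisDataset

dataset = IrisDataset()
# List the DatasetPlot classes available in this installation.
print(dataset.get_available_plots())
# The plot name below is an assumption; pick any name returned above.
dataset.plot("ScatterMatrix", show=False, save=True, file_format="png")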
- remove(entries)
Remove entries.
- rename_variable(name, new_name)
Rename a variable.
- set_from_array(data, variables=None, sizes=None, groups=None, default_name=None)
Set the dataset from an array.
- Parameters
data (ndarray) – The data to be stored.
variables (list[str] | None) –
The names of the variables. If None, use one default name per column of the array based on the pattern ‘default_name’.
By default it is set to None.
sizes (dict[str, int] | None) –
The sizes of the variables. If None, assume that all the variables have a size equal to 1.
By default it is set to None.
groups (dict[str, str] | None) –
The groups of the variables. If None, use Dataset.DEFAULT_GROUP for all the variables. By default it is set to None.
default_name (str | None) –
The name of the variable to be used as a pattern when variables is None. If None, use the Dataset.DEFAULT_NAMES for this group if it exists. Otherwise, use the group name. By default it is set to None.
- Return type
None
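A sketch of set_from_array(); the base-class import path is an assumption of this example:

from numpy import arange
from gemseo.core.dataset import Dataset  # assumed import path of the base Dataset class

dataset = Dataset("from_array")
# Three samples of two variables: x of size 2 and y of size 1.
data = arange(9.0).reshape(3, 3)
dataset.set_from_array(data, variables=["x", "y"], sizes={"x": 2, "y": 1})
print(dataset.n_samples)    # 3
print(dataset.n_variables)  # 2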
- set_from_file(filename, variables=None, sizes=None, groups=None, delimiter=',', header=True)
Set the dataset from a file.
- Parameters
filename (Path | str) – The name of the file containing the data.
variables (list[str] | None) –
The names of the variables. If None and header is True, read the names from the first line of the file. If None and header is False, use default names based on the patterns of the Dataset.DEFAULT_NAMES associated with the different groups. By default it is set to None.
sizes (dict[str, int] | None) –
The sizes of the variables. If None, assume that all the variables have a size equal to 1.
By default it is set to None.
groups (dict[str, str] | None) –
The groups of the variables. If None, use Dataset.DEFAULT_GROUP for all the variables. By default it is set to None.
delimiter (str) –
The field delimiter.
By default it is set to ','.
header (bool) –
If True, read the names of the variables on the first line of the file.
By default it is set to True.
- Return type
None
- set_metadata(name, value)
Set a metadata attribute.
- Parameters
name (str) – The name of the metadata attribute.
value (Any) – The value of the metadata attribute.
- Return type
None
- transform_variable(name, transformation)
Transform a variable.
- Parameters
name (str) – The name of the variable, e.g. "foo".
transformation (Callable[[numpy.ndarray], numpy.ndarray]) – The function transforming the variable, e.g. lambda x: np.exp(x).
- Return type
None
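For instance, applying a logarithmic transform in place; the dataset built here reuses the assumed base-class import path from the previous sketch:

from numpy import array, log
from gemseo.core.dataset import Dataset  # assumed import path of the base Dataset class

dataset = Dataset("transform_demo")
dataset.set_from_array(array([[1.0], [10.0], [100.0]]), variables=["y"])
# The callable receives and returns a NumPy array; here y is replaced by log(y).
dataset.transform_variable("y", lambda y: log(y))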
- property columns_names: list[str | ColumnName]
The names of the columns of the dataset.
- property n_samples: int
The number of samples.
- property n_variables: int
The number of variables.
- class gemseo.problems.dataset.burgers.BurgersDiscipline[source]
A software integrated in the workflow.
To be used, subclass MDODiscipline and implement the _run() method which defines the execution of the software. Typically, _run() gets the input data stored in the local_data, passes them to the callable computing the output data, e.g. a software, and stores these output data in the local_data.
Then, the end-user calls the execute() method with optional input_data; if not, default_inputs are used.
This execute() method uses name grammars to check the variable names and types of both the passed input data before calling run() and the returned output data before they are stored in the cache. A grammar can be either a SimpleGrammar or a JSONGrammar, or your own which derives from AbstractGrammar. A minimal subclass sketch is given after the parameter list below.
- Parameters
name – The name of the discipline. If None, use the class name.
input_grammar_file – The input grammar file path. If None and auto_detect_grammar_files=True, look for "ClassName_input.json" in the GRAMMAR_DIRECTORY if any or in the directory of the discipline class module. If None and auto_detect_grammar_files=False, do not initialize the input grammar from a schema file.
output_grammar_file – The output grammar file path. If None and auto_detect_grammar_files=True, look for "ClassName_output.json" in the GRAMMAR_DIRECTORY if any or in the directory of the discipline class module. If None and auto_detect_grammar_files=False, do not initialize the output grammar from a schema file.
auto_detect_grammar_files – Whether to look for "ClassName_{input,output}.json" in the GRAMMAR_DIRECTORY if any or in the directory of the discipline class module when {input,output}_grammar_file is None.
grammar_type – The type of grammar to define the input and output variables, e.g. MDODiscipline.JSON_GRAMMAR_TYPE or MDODiscipline.SIMPLE_GRAMMAR_TYPE.
cache_type – The type of policy to cache the discipline evaluations, e.g. MDODiscipline.SIMPLE_CACHE to cache the last one, MDODiscipline.HDF5_CACHE to cache them in an HDF file, or MDODiscipline.MEMORY_FULL_CACHE to cache them in memory. If None or if MDODiscipline.activate_cache is False, do not cache the discipline evaluations.
cache_file_path – The HDF file path when cache_type is MDODiscipline.HDF5_CACHE.
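A minimal subclass sketch of the pattern described above; the grammar-initialization calls are assumptions that may differ between GEMSEO versions:

from numpy import array
from gemseo.core.discipline import MDODiscipline

class DoublingDiscipline(MDODiscipline):
    """A toy discipline computing y = 2 * x."""

    def __init__(self):
        super().__init__()
        # Assumption: the grammars accept an iterable of variable names here.
        self.input_grammar.update(["x"])
        self.output_grammar.update(["y"])
        self.default_inputs = {"x": array([1.0])}

    def _run(self):
        x = self.local_data["x"]
        # Store the outputs in the local data, as described above.
        self.store_local_data(y=2.0 * x)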
- classmethod activate_time_stamps()
Activate the time stamps.
For storing start and end times of execution and linearizations.
- Return type
None
- add_differentiated_inputs(inputs=None)
Add inputs against which to differentiate the outputs.
This method updates MDODiscipline._differentiated_inputs with inputs.
- Parameters
inputs (Iterable[str] | None) –
The input variables against which to differentiate the outputs. If None, all the inputs of the discipline are used.
By default it is set to None.
- Raises
ValueError – When the inputs with respect to which to differentiate the discipline are not inputs of the latter.
- Return type
None
- add_differentiated_outputs(outputs=None)
Add outputs to be differentiated.
This method updates MDODiscipline._differentiated_outputs with outputs.
- Parameters
outputs (Iterable[str] | None) –
The output variables to be differentiated. If None, all the outputs of the discipline are used.
By default it is set to None.
- Raises
ValueError – When the outputs to differentiate are not discipline outputs.
- Return type
None
- add_namespace_to_input(name, namespace)
Add a namespace prefix to an existing input grammar element.
The updated input grammar element name will be namespace + namespace_separator + name.
- add_namespace_to_output(name, namespace)
Add a namespace prefix to an existing output grammar element.
The updated output grammar element name will be namespace + namespace_separator + name.
- add_status_observer(obs)
Add an observer for the status.
Add an observer to be notified when the status of the discipline changes.
- Parameters
obs (Any) – The observer to add.
- Return type
None
- auto_get_grammar_file(is_input=True, name=None, comp_dir=None)
Use a naming convention to associate a grammar file to the discipline.
Search in the directory comp_dir for either an input grammar file named name + "_input.json" or an output grammar file named name + "_output.json".
- Parameters
is_input (bool) –
Whether to search for an input or output grammar file.
By default it is set to True.
name (str | None) –
The name to be searched in the file names. If None, use the name of the discipline class. By default it is set to None.
comp_dir (str | Path | None) –
The directory in which to search the grammar file. If None, use the GRAMMAR_DIRECTORY if any, or the directory of the discipline class module. By default it is set to None.
- Returns
The grammar file path.
- Return type
- check_input_data(input_data, raise_exception=True)
Check the input data validity.
- check_jacobian(input_data=None, derr_approx='finite_differences', step=1e-07, threshold=1e-08, linearization_mode='auto', inputs=None, outputs=None, parallel=False, n_processes=2, use_threading=False, wait_time_between_fork=0, auto_set_step=False, plot_result=False, file_path='jacobian_errors.pdf', show=False, fig_size_x=10, fig_size_y=10, reference_jacobian_path=None, save_reference_jacobian=False, indices=None)
Check if the analytical Jacobian is correct with respect to a reference one.
If reference_jacobian_path is not None and save_reference_jacobian is True, compute the reference Jacobian with the approximation method and save it in reference_jacobian_path.
If reference_jacobian_path is not None and save_reference_jacobian is False, do not compute the reference Jacobian but read it from reference_jacobian_path.
If reference_jacobian_path is None, compute the reference Jacobian without saving it.
- Parameters
input_data (dict[str, ndarray] | None) –
The input data needed to execute the discipline according to the discipline input grammar. If None, use the MDODiscipline.default_inputs. By default it is set to None.
derr_approx (str) –
The approximation method, either “complex_step” or “finite_differences”.
By default it is set to finite_differences.
threshold (float) –
The acceptance threshold for the Jacobian error.
By default it is set to 1e-08.
linearization_mode (str) –
The linearization mode: 'direct', 'adjoint', or 'auto' for an automatic switch depending on the dimensions of the inputs and outputs.
By default it is set to auto.
inputs (Iterable[str] | None) –
The names of the inputs wrt which to differentiate the outputs.
By default it is set to None.
outputs (Iterable[str] | None) –
The names of the outputs to be differentiated.
By default it is set to None.
step (float) –
The differentiation step.
By default it is set to 1e-07.
parallel (bool) –
Whether to differentiate the discipline in parallel.
By default it is set to False.
n_processes (int) –
The maximum simultaneous number of threads, if use_threading is True, or processes otherwise, used to parallelize the execution. By default it is set to 2.
use_threading (bool) –
Whether to use threads instead of processes to parallelize the execution; multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. This is important to note: if you want to execute the same discipline multiple times, you shall use multiprocessing.
By default it is set to False.
wait_time_between_fork (float) –
The time waited between two forks of the process / thread.
By default it is set to 0.
auto_set_step (bool) –
Whether to compute the optimal step for a forward first order finite differences gradient approximation.
By default it is set to False.
plot_result (bool) –
Whether to plot the result of the validation (computed vs approximated Jacobians).
By default it is set to False.
file_path (str | Path) –
The path to the output file if plot_result is True. By default it is set to jacobian_errors.pdf.
show (bool) –
Whether to open the figure.
By default it is set to False.
fig_size_x (float) –
The x-size of the figure in inches.
By default it is set to 10.
fig_size_y (float) –
The y-size of the figure in inches.
By default it is set to 10.
reference_jacobian_path (str | Path | None) –
The path of the reference Jacobian file.
By default it is set to None.
save_reference_jacobian (bool) –
Whether to save the reference Jacobian.
By default it is set to False.
indices (Iterable[int] | None) –
The indices of the inputs and outputs for the different sub-Jacobian matrices, formatted as {variable_name: variable_components} where variable_components can be either an integer, e.g. 2, a sequence of integers, e.g. [0, 3], a slice, e.g. slice(0, 3), the ellipsis symbol (…) or None, which is the same as ellipsis. If a variable name is missing, consider all its components. If None, consider all the components of all the inputs and outputs. By default it is set to None.
- Returns
Whether the analytical Jacobian is correct with respect to the reference one.
- check_output_data(raise_exception=True)
Check the output data validity.
- Parameters
raise_exception (bool) –
Whether to raise an exception when the data is invalid.
By default it is set to True.
- Return type
None
- classmethod deactivate_time_stamps()
Deactivate the time stamps.
For storing start and end times of execution and linearizations.
- Return type
None
- static deserialize(file_path)
Deserialize a discipline from a file.
- Parameters
file_path (str | Path) – The path to the file containing the discipline.
- Returns
The discipline instance.
- Return type
- execute(input_data=None)
Execute the discipline.
This method executes the discipline:
- Adds the default inputs to the input_data if some inputs are not defined in input_data but exist in MDODiscipline.default_inputs.
- Checks whether the last execution of the discipline was called with identical inputs, i.e. cached in MDODiscipline.cache; if so, directly returns self.cache.get_output_cache(inputs).
- Caches the inputs.
- Checks the input data against MDODiscipline.input_grammar.
- If MDODiscipline.data_processor is not None, runs the preprocessor.
- Updates the status to MDODiscipline.STATUS_RUNNING.
- Calls the MDODiscipline._run() method, that shall be defined.
- If MDODiscipline.data_processor is not None, runs the postprocessor.
- Checks the output data.
- Caches the outputs.
- Updates the status to MDODiscipline.STATUS_DONE or MDODiscipline.STATUS_FAILED.
- Updates the summed execution time.
- Parameters
input_data (Mapping[str, Any] | None) –
The input data needed to execute the discipline according to the discipline input grammar. If None, use the MDODiscipline.default_inputs. By default it is set to None.
- Returns
The discipline local data after execution.
- Raises
RuntimeError – When residual_variables are declared but self.run_solves_residuals is False. This is not supported yet.
- Return type
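Executing the hypothetical toy discipline sketched earlier follows the steps listed above; passing input_data overrides the default inputs:

from numpy import array

disc = DoublingDiscipline()  # the toy discipline sketched after the class parameters
out = disc.execute({"x": array([3.0])})
print(out["y"])  # array([6.])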
- get_all_inputs()
Return the local input data as a list.
The order is given by
MDODiscipline.get_input_data_names()
.
- get_all_outputs()
Return the local output data as a list.
The order is given by
MDODiscipline.get_output_data_names()
.
- get_attributes_to_serialize()
Define the names of the attributes to be serialized.
Shall be overloaded by disciplines
- static get_data_list_from_dict(keys, data_dict)
Filter the dict from a list of keys or a single key.
If keys is a string, the method returns the value associated with the key. If keys is a list of strings, the method returns a generator over the values corresponding to the keys, which can be iterated.
- get_disciplines_in_dataflow_chain()
Return the disciplines that must be shown as blocks within the XDSM representation of a chain.
By default, only the discipline itself is shown. This function can be differently implemented for any type of inherited discipline.
- Returns
The disciplines shown in the XDSM chain.
- Return type
- get_expected_dataflow()
Return the expected data exchange sequence.
This method is used for the XDSM representation.
The default expected data exchange sequence is an empty list.
See also
MDOFormulation.get_expected_dataflow
- Returns
The data exchange arcs.
- Return type
list[tuple[gemseo.core.discipline.MDODiscipline, gemseo.core.discipline.MDODiscipline, list[str]]]
- get_expected_workflow()
Return the expected execution sequence.
This method is used for the XDSM representation.
The default expected execution sequence is the execution of the discipline itself.
See also
MDOFormulation.get_expected_workflow
- Returns
The expected execution sequence.
- Return type
- get_input_data(with_namespaces=True)
Return the local input data as a dictionary.
- get_input_data_names(with_namespaces=True)
Return the names of the input variables.
- get_input_output_data_names(with_namespaces=True)
Return the names of the input and output variables.
- Parameters
with_namespaces – Whether to keep the namespace prefix of the input and output names, if any.
- get_inputs_asarray()
Return the local input data as a large NumPy array.
The order is the one of MDODiscipline.get_all_inputs().
- Returns
The local input data.
- Return type
- get_inputs_by_name(data_names)
Return the local data associated with input variables.
- Parameters
data_names (Iterable[str]) – The names of the input variables.
- Returns
The local data for the given input variables.
- Raises
ValueError – When a variable is not an input of the discipline.
- Return type
- get_local_data_by_name(data_names)
Return the local data of the discipline associated with variables names.
- Parameters
data_names (Iterable[str]) – The names of the variables.
- Returns
The local data associated with the variables names.
- Raises
ValueError – When a name is not a discipline input name.
- Return type
Generator[Any]
- get_output_data(with_namespaces=True)
Return the local output data as a dictionary.
- get_output_data_names(with_namespaces=True)
Return the names of the output variables.
- get_outputs_asarray()
Return the local output data as a large NumPy array.
The order is the one of MDODiscipline.get_all_outputs().
- Returns
The local output data.
- Return type
- get_outputs_by_name(data_names)
Return the local data associated with output variables.
- Parameters
data_names (Iterable[str]) – The names of the output variables.
- Returns
The local data for the given output variables.
- Raises
ValueError – When a variable is not an output of the discipline.
- Return type
- get_sub_disciplines()
Return the sub-disciplines if any.
- Returns
The sub-disciplines.
- Return type
- is_all_inputs_existing(data_names)
Test if several variables are discipline inputs.
- is_all_outputs_existing(data_names)
Test if several variables are discipline outputs.
- is_input_existing(data_name)
Test if a variable is a discipline input.
- is_output_existing(data_name)
Test if a variable is a discipline output.
- static is_scenario()
Whether the discipline is a scenario.
- Return type
- linearize(input_data=None, force_all=False, force_no_exec=False)
Execute the linearized version of the code.
- Parameters
input_data (dict[str, Any] | None) –
The input data needed to linearize the discipline according to the discipline input grammar. If None, use the MDODiscipline.default_inputs. By default it is set to None.
force_all (bool) –
If False, MDODiscipline._differentiated_inputs and MDODiscipline._differentiated_outputs are used to filter the differentiated variables. Otherwise, all the outputs are differentiated with respect to all the inputs. By default it is set to False.
force_no_exec (bool) –
If True, the discipline is not re-executed; the cache is loaded anyway.
By default it is set to False.
- Returns
The Jacobian of the discipline.
- Return type
- notify_status_observers()
Notify all status observers that the status has changed.
- Return type
None
- remove_status_observer(obs)
Remove an observer for the status.
- Parameters
obs (Any) – The observer to remove.
- Return type
None
- reset_statuses_for_run()
Set all the statuses to MDODiscipline.STATUS_PENDING.
- Raises
ValueError – When the discipline cannot be run because of its status.
- Return type
None
- serialize(file_path)
Serialize the discipline and store it in a file.
- Parameters
file_path (str | Path) – The path to the file to store the discipline.
- Return type
None
- set_cache_policy(cache_type='SimpleCache', cache_tolerance=0.0, cache_hdf_file=None, cache_hdf_node_name=None, is_memory_shared=True)
Set the type of cache to use and the tolerance level.
This method defines when the output data have to be cached according to the distance between the corresponding input data and the input data already cached for which output data are also cached.
The cache can be either a SimpleCache recording the last execution or a cache storing all executions, e.g. MemoryFullCache and HDF5Cache. Caching data can be either in-memory, e.g. SimpleCache and MemoryFullCache, or on the disk, e.g. HDF5Cache.
The attribute CacheFactory.caches provides the available cache types.
- Parameters
cache_type (str) –
The type of cache.
By default it is set to SimpleCache.
cache_tolerance (float) –
The maximum relative norm of the difference between two input arrays to consider that two input arrays are equal.
By default it is set to 0.0.
cache_hdf_file (str | Path | None) –
The path to the HDF file to store the data; this argument is mandatory when the MDODiscipline.HDF5_CACHE policy is used. By default it is set to None.
cache_hdf_node_name (str | None) –
The name of the HDF file node to store the discipline data. If None, MDODiscipline.name is used. By default it is set to None.
is_memory_shared (bool) –
Whether to store the data with a shared memory dictionary, which makes the cache compatible with multiprocessing.
By default it is set to True.
- Return type
None
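For example, switching to a full in-memory cache using the documented class constant (disc is any MDODiscipline instance, e.g. the toy discipline sketched earlier):

disc.set_cache_policy(cache_type=disc.MEMORY_FULL_CACHE)
# Executions with already-seen inputs can now be served from the cache
# instead of re-running the wrapped computation.
disc.execute()
disc.execute()  # identical default inputs: the cached outputs are returned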
- set_disciplines_statuses(status)
Set the sub-disciplines statuses.
To be implemented in subclasses.
- Parameters
status (str) – The status.
- Return type
None
- set_jacobian_approximation(jac_approx_type='finite_differences', jax_approx_step=1e-07, jac_approx_n_processes=1, jac_approx_use_threading=False, jac_approx_wait_time=0)
Set the Jacobian approximation method.
Sets the linearization mode to the given approximation method and sets the parameters of the approximation for further use when calling MDODiscipline.linearize().
- Parameters
jac_approx_type (str) –
The approximation method, either “complex_step” or “finite_differences”.
By default it is set to finite_differences.
jax_approx_step (float) –
The differentiation step.
By default it is set to 1e-07.
jac_approx_n_processes (int) –
The maximum simultaneous number of threads, if jac_approx_use_threading is True, or processes otherwise, used to parallelize the execution. By default it is set to 1.
jac_approx_use_threading (bool) –
Whether to use threads instead of processes to parallelize the execution; multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. This is important to note: if you want to execute the same discipline multiple times, you shall use multiprocessing.
By default it is set to False.
jac_approx_wait_time (float) –
The time waited between two forks of the process / thread.
By default it is set to 0.
- Return type
None
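A sketch of approximating the Jacobian by finite differences and then linearizing (disc is any MDODiscipline instance):

disc.set_jacobian_approximation(
    jac_approx_type="finite_differences",
    jax_approx_step=1e-7,
)
# force_all=True differentiates all the outputs with respect to all the inputs.
jacobian = disc.linearize(force_all=True)
# The result is a nested dictionary: jacobian[output_name][input_name] is an ndarray.
print(jacobian)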
- set_optimal_fd_step(outputs=None, inputs=None, force_all=False, print_errors=False, numerical_error=2.220446049250313e-16)
Compute the optimal finite-difference step.
Compute the optimal step for a forward first-order finite differences gradient approximation. Requires a first evaluation of the perturbed function values. The optimal step is reached when the truncation error (cut of the Taylor expansion) and the numerical cancellation error (round-off when computing f(x+step)-f(x)) are approximately equal.
Warning
This calls the discipline execution twice per input variable.
See also
https://en.wikipedia.org/wiki/Numerical_differentiation and "Numerical Algorithms and Digital Representation", Knut Mørken, Chapter 11, "Numerical Differentiation"
- Parameters
inputs (Iterable[str] | None) –
The inputs wrt which the outputs are linearized. If None, use the MDODiscipline._differentiated_inputs. By default it is set to None.
outputs (Iterable[str] | None) –
The outputs to be linearized. If None, use the MDODiscipline._differentiated_outputs. By default it is set to None.
force_all (bool) –
Whether to consider all the inputs and outputs of the discipline.
By default it is set to False.
print_errors (bool) –
Whether to display the estimated errors.
By default it is set to False.
numerical_error (float) –
The numerical error associated with the calculation of f. By default, this is the machine epsilon (approximately 1e-16), but it can be higher when the calculation of f requires a numerical resolution.
By default it is set to 2.220446049250313e-16.
- Returns
The estimated truncation and cancellation errors.
- Raises
ValueError – When the Jacobian approximation method has not been set.
- store_local_data(**kwargs)
Store discipline data in local data.
- Parameters
**kwargs (Any) – The data to be stored in MDODiscipline.local_data.
- Return type
None
- property cache_tol: float
The cache input tolerance.
This is the tolerance for equality of the inputs in the cache. If norm(stored_input_data - input_data) <= cache_tol * norm(stored_input_data), the cached data for stored_input_data is returned when calling self.execute(input_data).
- Raises
ValueError – When the discipline does not have a cache.
- property default_inputs: dict[str, Any]
The default inputs.
- Raises
TypeError – When the default inputs are not passed as a dictionary.
- property exec_time: float | None
The cumulated execution time of the discipline.
This property is multiprocessing safe.
- Raises
RuntimeError – When the discipline counters are disabled.
- property grammar_type: gemseo.core.grammars.base_grammar.BaseGrammar
The type of grammar to be used for inputs and outputs declaration.
- property linearization_mode: str
The linearization mode among MDODiscipline.AVAILABLE_MODES.
- Raises
ValueError – When the linearization mode is unknown.
- property local_data: gemseo.core.discipline_data.DisciplineData
The current input and output data.
- property n_calls: int | None
The number of times the discipline was executed.
This property is multiprocessing safe.
- Raises
RuntimeError – When the discipline counters are disabled.
- property n_calls_linearize: int | None
The number of times the discipline was linearized.
This property is multiprocessing safe.
- Raises
RuntimeError – When the discipline counters are disabled.
- property status: str
The status of the discipline.
Iris dataset¶
This Dataset is one of the best known in the machine learning literature.
It was introduced by the statistician Ronald Fisher in his 1936 paper "The use of multiple measurements in taxonomic problems", Annals of Eugenics, 7 (2): 179–188.
It contains 150 instances of iris plants:
50 Iris Setosa,
50 Iris Versicolour,
50 Iris Virginica.
Each instance is characterized by:
its sepal length in cm,
its sepal width in cm,
its petal length in cm,
its petal width in cm.
This Dataset can be used for either clustering or classification purposes.
More information about the Iris dataset
- class gemseo.problems.dataset.iris.IrisDataset(name='Iris', by_group=True, as_io=False)[source]
Iris dataset parametrization.
Constructor.
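A quick usage sketch relying on the inherited Dataset API documented below:

from gemseo.problems.dataset.iris import IrisDataset

iris = IrisDataset()
print(iris.n_samples)      # 150 instances
print(iris.n_variables)
print(iris.columns_names)  # the column names of the dataset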
- add_group(group, data, variables=None, sizes=None, pattern=None, cache_as_input=True)
Add data related to a group.
- Parameters
group (str) – The name of the group of data to be added.
data (ndarray) – The data to be added.
variables (list[str] | None) –
The names of the variables. If None, use default names based on a pattern.
By default it is set to None.
sizes (dict[str, int] | None) –
The sizes of the variables. If None, assume that all the variables have a size equal to 1.
By default it is set to None.
pattern (str | None) –
The name of the variable to be used as a pattern when variables is None. If None, use the
Dataset.DEFAULT_NAMES
for this group if it exists. Otherwise, use the group name. By default it is set to None.
cache_as_input (bool) –
If True, cache these data as inputs when the dataset is exported to a cache.
By default it is set to True.
- Return type
None
- add_variable(name, data, group='parameters', cache_as_input=True)
Add data related to a variable.
- Parameters
name (str) – The name of the variable to be stored.
data (numpy.ndarray) – The data to be stored.
group (str) –
The name of the group related to this variable.
By default it is set to parameters.
cache_as_input (bool) –
If True, cache these data as inputs when the dataset is exported to a cache.
By default it is set to True.
- Return type
None
- compare(value_1, logical_operator, value_2, component_1=0, component_2=0)
Compare either a variable and a value or a variable and another variable.
- Parameters
value_1 (str | float) – The first value, either a variable name or a numeric value.
logical_operator (str) – The logical operator, either “==”, “<”, “<=”, “>” or “>=”.
value_2 (str | float) – The second value, either a variable name or a numeric value.
component_1 (int) –
If value_1 is a variable name, component_1 corresponds to its component used in the comparison.
By default it is set to 0.
component_2 (int) –
If value_2 is a variable name, component_2 corresponds to its component used in the comparison.
By default it is set to 0.
- Returns
Whether the comparison is valid for the different entries.
- Return type
ndarray
- export_to_cache(inputs=None, outputs=None, cache_type='MemoryFullCache', cache_hdf_file=None, cache_hdf_node_name=None, **options)
Export the dataset to a cache.
- Parameters
inputs (Iterable[str] | None) –
The names of the inputs to cache. If None, use all inputs.
By default it is set to None.
outputs (Iterable[str] | None) –
The names of the outputs to cache. If None, use all outputs.
By default it is set to None.
cache_type (str) –
The type of cache to use.
By default it is set to MemoryFullCache.
cache_hdf_file (str | None) –
The name of the HDF file to store the data. Required if the type of the cache is ‘HDF5Cache’.
By default it is set to None.
cache_hdf_node_name (str | None) –
The name of the HDF node to store the discipline. If None, use the name of the dataset.
By default it is set to None.
- Returns
A cache containing the dataset.
- Return type
- export_to_dataframe(copy=True, variable_names=None)
Export the dataset to a pandas DataFrame.
- static find(comparison)
Find the entries for which a comparison is satisfied.
This search uses a boolean 1D array whose length is equal to the length of the dataset.
- Parameters
comparison (numpy.ndarray) – A boolean vector whose length is equal to the number of samples.
- Returns
The indices of the entries for which the comparison is satisfied.
- Return type
- get_all_data(by_group=True, as_dict=False)
Get all the data stored in the dataset.
The data can be returned either as a dictionary indexed by the names of the variables, or as an array concatenating them, accompanied by the names and sizes of the variables.
The data can also be classified by groups of variables.
- Parameters
by_group –
If True, sort the data by group.
By default it is set to True.
as_dict –
If True, return the data as a dictionary.
By default it is set to False.
- Returns
All the data stored in the dataset.
- Return type
Union[Dict[str, Union[Dict[str, numpy.ndarray], numpy.ndarray]], Tuple[Union[numpy.ndarray, Dict[str, numpy.ndarray]], List[str], Dict[str, int]]]
- get_column_names(variables=None, as_tuple=False, start=0)
Return the names of the columns of the dataset.
If dim(x)=1, its column name is 'x', while if dim(x)=2, its column names are either 'x_0' and 'x_1' or ColumnName(group_name, 'x', '0') and ColumnName(group_name, 'x', '1').
- Parameters
variables (Sequence[str]) –
The names of the variables. If None, use all the variables. By default it is set to None.
as_tuple (bool) –
If True, return the names as named tuples. Otherwise, return the names as strings.
By default it is set to False.
start (int) –
The first index for the components of a variable. E.g. with ‘0’: ‘x_0’, ‘x_1’, …
By default it is set to 0.
- Returns
The names of the columns of the dataset.
- Return type
list[str | ColumnName]
- get_data_by_group(group, as_dict=False)
Get the data for a specific group name.
- get_data_by_names(names, as_dict=True)
Get the data for specific names of variables.
- get_group(variable_name)
Get the name of the group that contains a variable.
- get_names(group_name)
Get the names of the variables of a group.
- get_normalized_dataset(excluded_variables=None, excluded_groups=None)
Get a normalized copy of the dataset.
- Parameters
excluded_variables (Sequence[str] | None) –
The names of the variables not to be normalized. If None, normalize all the variables.
By default it is set to None.
excluded_groups (Sequence[str] | None) –
The names of the groups not to be normalized. If None, normalize all the groups.
By default it is set to None.
- Returns
A normalized dataset.
- Return type
- is_empty()
Check if the dataset is empty.
- Returns
Whether the dataset is empty.
- Return type
- is_group(name)
Check if a name is a group name.
- is_nan()
Check if an entry contains NaN.
- Returns
Whether any entries are NaN or not.
- Return type
- is_variable(name)
Check if a name is a variable name.
- n_variables_by_group(group)
The number of variables for a group.
- plot(name, show=True, save=False, file_path=None, directory_path=None, file_name=None, file_format=None, properties=None, **options)
Plot the dataset from a DatasetPlot. See Dataset.get_available_plots().
- Parameters
name (str) – The name of the post-processing, which is the name of a class inheriting from DatasetPlot.
show (bool) –
If True, display the figure.
By default it is set to True.
save (bool) –
If True, save the figure.
By default it is set to False.
file_path (str | Path | None) –
The path of the file to save the figures. If None, create a file path from directory_path, file_name and file_format. By default it is set to None.
directory_path (str | Path | None) –
The path of the directory to save the figures. If None, use the current working directory.
By default it is set to None.
file_name (str | None) –
The name of the file to save the figures. If None, use a default one generated by the post-processing.
By default it is set to None.
file_format (str | None) –
A file format, e.g. ‘png’, ‘pdf’, ‘svg’, … If None, use a default file extension.
By default it is set to None.
properties (Mapping[str, DatasetPlotPropertyType] | None) –
The general properties of a DatasetPlot. By default it is set to None.
**options – The options for the post-processing.
- Return type
- remove(entries)
Remove entries.
- rename_variable(name, new_name)
Rename a variable.
- set_from_array(data, variables=None, sizes=None, groups=None, default_name=None)
Set the dataset from an array.
- Parameters
data (ndarray) – The data to be stored.
variables (list[str] | None) –
The names of the variables. If None, use one default name per column of the array based on the pattern ‘default_name’.
By default it is set to None.
sizes (dict[str, int] | None) –
The sizes of the variables. If None, assume that all the variables have a size equal to 1.
By default it is set to None.
groups (dict[str, str] | None) –
The groups of the variables. If None, use Dataset.DEFAULT_GROUP for all the variables. By default it is set to None.
default_name (str | None) –
The name of the variable to be used as a pattern when variables is None. If None, use the Dataset.DEFAULT_NAMES for this group if it exists. Otherwise, use the group name. By default it is set to None.
- Return type
None
- set_from_file(filename, variables=None, sizes=None, groups=None, delimiter=',', header=True)
Set the dataset from a file.
- Parameters
filename (Path | str) – The name of the file containing the data.
variables (list[str] | None) –
The names of the variables. If None and header is True, read the names from the first line of the file. If None and header is False, use default names based on the patterns of the Dataset.DEFAULT_NAMES associated with the different groups. By default it is set to None.
sizes (dict[str, int] | None) –
The sizes of the variables. If None, assume that all the variables have a size equal to 1.
By default it is set to None.
groups (dict[str, str] | None) –
The groups of the variables. If None, use Dataset.DEFAULT_GROUP for all the variables. By default it is set to None.
delimiter (str) –
The field delimiter.
By default it is set to ','.
header (bool) –
If True, read the names of the variables on the first line of the file.
By default it is set to True.
- Return type
None
- set_metadata(name, value)
Set a metadata attribute.
- Parameters
name (str) – The name of the metadata attribute.
value (Any) – The value of the metadata attribute.
- Return type
None
- transform_variable(name, transformation)
Transform a variable.
- Parameters
name (str) – The name of the variable, e.g. "foo".
transformation (Callable[[numpy.ndarray], numpy.ndarray]) – The function transforming the variable, e.g. lambda x: np.exp(x).
- Return type
None
- property columns_names: list[str | ColumnName]
The names of the columns of the dataset.
- property n_samples: int
The number of samples.
- property n_variables: int
The number of variables.
Rosenbrock dataset¶
This Dataset contains 100 evaluations of the well-known Rosenbrock function:
\[f(x, y) = (1 - x)^2 + 100(y - x^2)^2.\]
This function is known for its global minimum at the point (1, 1), its banana-shaped valley and the difficulty of reaching this minimum.
This Dataset is based on a full-factorial design of experiments.
More information about the Rosenbrock function
- class gemseo.problems.dataset.rosenbrock.RosenbrockDataset(name='Rosenbrock', by_group=True, n_samples=100, categorize=True, opt_naming=True)[source]
Rosenbrock dataset parametrization.
- Parameters
name (str) –
The name of the dataset.
By default it is set to Rosenbrock.
by_group (bool) –
Whether to store the data by group. Otherwise, store them by variables.
By default it is set to True.
n_samples (int) –
The number of samples.
By default it is set to 100.
categorize (bool) –
Whether to distinguish between the different groups of variables.
By default it is set to True.
opt_naming (bool) –
Whether to use an optimization naming.
By default it is set to True.
- Return type
None
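A short sketch using the documented constructor arguments:

from gemseo.problems.dataset.rosenbrock import RosenbrockDataset

# 100 samples of the Rosenbrock function over a full-factorial design.
dataset = RosenbrockDataset(n_samples=100, opt_naming=False)
print(dataset.n_samples)     # 100
print(dataset.columns_names)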
- add_group(group, data, variables=None, sizes=None, pattern=None, cache_as_input=True)
Add data related to a group.
- Parameters
group (str) – The name of the group of data to be added.
data (ndarray) – The data to be added.
variables (list[str] | None) –
The names of the variables. If None, use default names based on a pattern.
By default it is set to None.
sizes (dict[str, int] | None) –
The sizes of the variables. If None, assume that all the variables have a size equal to 1.
By default it is set to None.
pattern (str | None) –
The name of the variable to be used as a pattern when variables is None. If None, use the
Dataset.DEFAULT_NAMES
for this group if it exists. Otherwise, use the group name. By default it is set to None.
cache_as_input (bool) –
If True, cache these data as inputs when the dataset is exported to a cache.
By default it is set to True.
- Return type
None
- add_variable(name, data, group='parameters', cache_as_input=True)
Add data related to a variable.
- Parameters
name (str) – The name of the variable to be stored.
data (numpy.ndarray) – The data to be stored.
group (str) –
The name of the group related to this variable.
By default it is set to parameters.
cache_as_input (bool) –
If True, cache these data as inputs when the dataset is exported to a cache.
By default it is set to True.
- Return type
None
- compare(value_1, logical_operator, value_2, component_1=0, component_2=0)
Compare either a variable and a value or a variable and another variable.
- Parameters
value_1 (str | float) – The first value, either a variable name or a numeric value.
logical_operator (str) – The logical operator, either “==”, “<”, “<=”, “>” or “>=”.
value_2 (str | float) – The second value, either a variable name or a numeric value.
component_1 (int) –
If value_1 is a variable name, component_1 corresponds to its component used in the comparison.
By default it is set to 0.
component_2 (int) –
If value_2 is a variable name, component_2 corresponds to its component used in the comparison.
By default it is set to 0.
- Returns
Whether the comparison is valid for the different entries.
- Return type
ndarray
- export_to_cache(inputs=None, outputs=None, cache_type='MemoryFullCache', cache_hdf_file=None, cache_hdf_node_name=None, **options)
Export the dataset to a cache.
- Parameters
inputs (Iterable[str] | None) –
The names of the inputs to cache. If None, use all inputs.
By default it is set to None.
outputs (Iterable[str] | None) –
The names of the outputs to cache. If None, use all outputs.
By default it is set to None.
cache_type (str) –
The type of cache to use.
By default it is set to MemoryFullCache.
cache_hdf_file (str | None) –
The name of the HDF file to store the data. Required if the type of the cache is ‘HDF5Cache’.
By default it is set to None.
cache_hdf_node_name (str | None) –
The name of the HDF node to store the discipline. If None, use the name of the dataset.
By default it is set to None.
- Returns
A cache containing the dataset.
- Return type
- export_to_dataframe(copy=True, variable_names=None)
Export the dataset to a pandas DataFrame.
- static find(comparison)
Find the entries for which a comparison is satisfied.
This search uses a boolean 1D array whose length is equal to the length of the dataset.
- Parameters
comparison (numpy.ndarray) – A boolean vector whose length is equal to the number of samples.
- Returns
The indices of the entries for which the comparison is satisfied.
- Return type
- get_all_data(by_group=True, as_dict=False)
Get all the data stored in the dataset.
The data can be returned either as a dictionary indexed by the names of the variables, or as an array concatenating them, accompanied by the names and sizes of the variables.
The data can also be classified by groups of variables.
- Parameters
by_group –
If True, sort the data by group.
By default it is set to True.
as_dict –
If True, return the data as a dictionary.
By default it is set to False.
- Returns
All the data stored in the dataset.
- Return type
Union[Dict[str, Union[Dict[str, numpy.ndarray], numpy.ndarray]], Tuple[Union[numpy.ndarray, Dict[str, numpy.ndarray]], List[str], Dict[str, int]]]
- get_column_names(variables=None, as_tuple=False, start=0)
Return the names of the columns of the dataset.
If dim(x)=1, its column name is 'x', while if dim(x)=2, its column names are either 'x_0' and 'x_1' or ColumnName(group_name, 'x', '0') and ColumnName(group_name, 'x', '1').
- Parameters
variables (Sequence[str]) –
The names of the variables. If None, use all the variables. By default it is set to None.
as_tuple (bool) –
If True, return the names as named tuples. Otherwise, return the names as strings.
By default it is set to False.
start (int) –
The first index for the components of a variable. E.g. with ‘0’: ‘x_0’, ‘x_1’, …
By default it is set to 0.
- Returns
The names of the columns of the dataset.
- Return type
list[str | ColumnName]
- get_data_by_group(group, as_dict=False)
Get the data for a specific group name.
- get_data_by_names(names, as_dict=True)
Get the data for specific names of variables.
- get_group(variable_name)
Get the name of the group that contains a variable.
- get_names(group_name)
Get the names of the variables of a group.
- get_normalized_dataset(excluded_variables=None, excluded_groups=None)
Get a normalized copy of the dataset.
- Parameters
excluded_variables (Sequence[str] | None) –
The names of the variables not to be normalized. If None, normalize all the variables.
By default it is set to None.
excluded_groups (Sequence[str] | None) –
The names of the groups not to be normalized. If None, normalize all the groups.
By default it is set to None.
- Returns
A normalized dataset.
- Return type
- is_empty()
Check if the dataset is empty.
- Returns
Whether the dataset is empty.
- Return type
- is_group(name)
Check if a name is a group name.
- is_nan()
Check if an entry contains NaN.
- Returns
Whether any entries are NaN or not.
- Return type
- is_variable(name)
Check if a name is a variable name.
- n_variables_by_group(group)
The number of variables for a group.
- plot(name, show=True, save=False, file_path=None, directory_path=None, file_name=None, file_format=None, properties=None, **options)
Plot the dataset from a DatasetPlot. See Dataset.get_available_plots().
- Parameters
name (str) – The name of the post-processing, which is the name of a class inheriting from DatasetPlot.
show (bool) –
If True, display the figure.
By default it is set to True.
save (bool) –
If True, save the figure.
By default it is set to False.
file_path (str | Path | None) –
The path of the file to save the figures. If None, create a file path from directory_path, file_name and file_format. By default it is set to None.
directory_path (str | Path | None) –
The path of the directory to save the figures. If None, use the current working directory.
By default it is set to None.
file_name (str | None) –
The name of the file to save the figures. If None, use a default one generated by the post-processing.
By default it is set to None.
file_format (str | None) –
A file format, e.g. ‘png’, ‘pdf’, ‘svg’, … If None, use a default file extension.
By default it is set to None.
properties (Mapping[str, DatasetPlotPropertyType] | None) –
The general properties of a DatasetPlot. By default it is set to None.
**options – The options for the post-processing.
- Return type
- remove(entries)
Remove entries.
- rename_variable(name, new_name)
Rename a variable.
- set_from_array(data, variables=None, sizes=None, groups=None, default_name=None)
Set the dataset from an array.
- Parameters
data (ndarray) – The data to be stored.
variables (list[str] | None) –
The names of the variables. If None, use one default name per column of the array based on the pattern ‘default_name’.
By default it is set to None.
sizes (dict[str, int] | None) –
The sizes of the variables. If None, assume that all the variables have a size equal to 1.
By default it is set to None.
groups (dict[str, str] | None) –
The groups of the variables. If None, use Dataset.DEFAULT_GROUP for all the variables. By default it is set to None.
default_name (str | None) –
The name of the variable to be used as a pattern when variables is None. If None, use the Dataset.DEFAULT_NAMES for this group if it exists. Otherwise, use the group name. By default it is set to None.
- Return type
None
- set_from_file(filename, variables=None, sizes=None, groups=None, delimiter=',', header=True)
Set the dataset from a file.
- Parameters
filename (Path | str) – The name of the file containing the data.
variables (list[str] | None) –
The names of the variables. If None and header is True, read the names from the first line of the file. If None and header is False, use default names based on the patterns of the Dataset.DEFAULT_NAMES associated with the different groups. By default it is set to None.
sizes (dict[str, int] | None) –
The sizes of the variables. If None, assume that all the variables have a size equal to 1.
By default it is set to None.
groups (dict[str, str] | None) –
The groups of the variables. If None, use Dataset.DEFAULT_GROUP for all the variables. By default it is set to None.
delimiter (str) –
The field delimiter.
By default it is set to ','.
header (bool) –
If True, read the names of the variables on the first line of the file.
By default it is set to True.
- Return type
None
- set_metadata(name, value)
Set a metadata attribute.
- Parameters
name (str) – The name of the metadata attribute.
value (Any) – The value of the metadata attribute.
- Return type
None
- transform_variable(name, transformation)
Transform a variable.
- Parameters
name (str) – The name of the variable, e.g. "foo".
transformation (Callable[[numpy.ndarray], numpy.ndarray]) – The function transforming the variable, e.g. lambda x: np.exp(x).
- Return type
None
- property columns_names: list[str | ColumnName]
The names of the columns of the dataset.
- property n_samples: int
The number of samples.
- property n_variables: int
The number of variables.