Scalable models

Scalability study - API

This API facilitates the use of the gemseo.problems.scalable.data_driven.study package, which implements classes to benchmark MDO formulations based on scalable disciplines.

The ScalabilityStudy class implements the concept of scalability study:

  1. By instantiating a ScalabilityStudy, the user defines the MDO problem in terms of design parameters, objective function and constraints.

  2. For each discipline, the user adds a dataset stored in a Dataset and selects a type of ScalableModel to build the ScalableDiscipline associated with this discipline.

  3. The user adds different optimization strategies, defined in terms of both optimization algorithms and MDO formulation.

  4. The user adds different scaling strategies, in terms of sizes of design parameters, coupling variables and equality and inequality constraints. The user can also define a scaling strategy for particular parameters rather than for groups of parameters.

  5. Lastly, the user executes the ScalabilityStudy and the results are written in several files and stored into directories in a hierarchical way, whose names depend on the MDO formulation, the scaling strategy and, when necessary, the replication number. Different kinds of files are stored: optimization graphs, dependency matrix plots and, of course, scalability results by means of a dedicated class: ScalabilityResult. A typical workflow is sketched below.
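
Examples

A minimal sketch of this workflow, using the two functions documented below. The dataset objects (dataset_1, dataset_2) and the study methods add_discipline, add_optimization_strategy, add_scaling_strategies and execute are assumptions about the ScalabilityStudy API, which is not detailed in this section.

from gemseo.problems.scalable.data_driven.api import (
    create_scalability_study,
    plot_scalability_results,
)

# 1-2. Define the MDO problem and attach one dataset per discipline;
# 'dataset_1' and 'dataset_2' are hypothetical Dataset instances.
study = create_scalability_study(
    objective="obj",
    design_variables=["x_shared", "x_1", "x_2"],
    ineq_constraints=["g_1", "g_2"],
)
study.add_discipline(dataset_1)  # method name assumed
study.add_discipline(dataset_2)

# 3. Optimization strategies: optimization algorithm + MDO formulation.
study.add_optimization_strategy("NLOPT_SLSQP", max_iter=100, formulation="MDF")

# 4. Scaling strategies: sizes of design and coupling variables.
study.add_scaling_strategies(design_size=[1, 5, 10], coupling_size=2)

# 5. Run the study and post-process its results.
study.execute(n_replicates=3)
plot_scalability_results("study")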

Functions:

create_scalability_study(objective, …[, …])

This method creates a ScalabilityStudy.

plot_scalability_results(study_directory)

This method plots the set of ScalabilityResult generated by a ScalabilityStudy and located in the directory created by this study.

gemseo.problems.scalable.data_driven.api.create_scalability_study(objective, design_variables, directory='study', prefix='', eq_constraints=None, ineq_constraints=None, maximize_objective=False, fill_factor=0.7, active_probability=0.1, feasibility_level=0.8, start_at_equilibrium=True, early_stopping=True, coupling_variables=None)[source]

This method creates a ScalabilityStudy. It requires two mandatory arguments:

  • the 'objective' name,

  • the list of 'design_variables' names.

Concerning output files, we can specify:

  • the directory which is 'study' by default,

  • the prefix of output file names (default: no prefix).

Regarding optimization parametrization, we can specify:

  • the list of equality constraints names (eq_constraints),

  • the list of inequality constraints names (ineq_constraints),

  • the choice of maximizing the objective function (maximize_objective).

By default, the objective function is minimized and the MDO problem is unconstrained.

Last but not least, with regard to the scalability methodology, we can overwrite:

  • the default fill factor of the input-output dependency matrix (fill_factor),

  • the probability to set the inequality constraints as active at the initial step of the optimization (active_probability),

  • the offset of satisfaction for inequality constraints (feasibility_level),

  • the use of a preliminary MDA to start at equilibrium (start_at_equilibrium),

  • the post-processing of the optimization database to get results earlier than the final step (early_stopping).

Parameters
  • objective (str) – name of the objective

  • design_variables (list(str)) – names of the design variables

  • directory (str) – working directory of the study. Default: ‘study’.

  • prefix (str) – prefix for the output filenames. Default: ‘’.

  • eq_constraints (list(str)) – names of the equality constraints. Default: None.

  • ineq_constraints (list(str)) – names of the inequality constraints. Default: None.

  • maximize_objective (bool) – maximizing objective. Default: False.

  • fill_factor (float) – default fill factor of the input-output dependency matrix. Default: 0.7.

  • active_probability (float) – probability to set the inequality constraints as active at the initial step of the optimization. Default: 0.1.

  • feasibility_level (float) – offset of satisfaction for inequality constraints. Default: 0.8.

  • start_at_equilibrium (bool) – start at equilibrium using a preliminary MDA. Default: True.

  • early_stopping (bool) – post-process the optimization database to get results earlier than the final step. Default: True.
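
Examples

A minimal call using only the arguments documented above; the variable names are hypothetical.

from gemseo.problems.scalable.data_driven.api import create_scalability_study

study = create_scalability_study(
    objective="obj",
    design_variables=["x_1", "x_2"],
    eq_constraints=["h"],
    ineq_constraints=["g"],
    directory="my_study",
    fill_factor=0.5,
)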

gemseo.problems.scalable.data_driven.api.plot_scalability_results(study_directory)[source]

This method plots the set of ScalabilityResult generated by a ScalabilityStudy and located in the directory created by this study.

Parameters

study_directory (str) – directory of the scalability study.

Scalable MDO problem

This module implements the concept of scalable problem by means of the ScalableProblem class.

Given

  • an MDO scenario based on a set of sampled disciplines with a particular problem dimension,

  • a new problem dimension (= number of inputs and outputs),

a scalable problem:

  1. makes each discipline scalable based on the new problem dimension,

  2. creates the corresponding MDO scenario.

Then, this MDO scenario can be executed and post-processed.

We can repeat these tasks for different variable sizes and compare the scalability, that is, the dependence of the scenario results on the problem dimension, as sketched below.
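
Examples

A minimal sketch, assuming a list of disciplinary Dataset objects named datasets is available; the variable names, the problem.scenario attribute and the scenario execution call are assumptions based on the standard GEMSEO scenario API.

from gemseo.problems.scalable.data_driven.problem import ScalableProblem

problem = ScalableProblem(
    datasets,  # one Dataset per sampled discipline (assumed available)
    design_variables=["x_shared", "x_1"],
    objective_function="obj",
    ineq_constraints=["g_1"],
    sizes={"x_shared": 10, "y_1": 5},  # new dimensions; missing variables keep their sizes
)
problem.create_scenario(formulation="MDF")
problem.scenario.execute({"algo": "NLOPT_SLSQP", "max_iter": 50})
print(problem.exec_time())  # total execution time per discipline
print(problem.is_feasible)  # feasibility of the scenario results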

Classes:

ScalableProblem(datasets, design_variables, …)

Scalable problem.

class gemseo.problems.scalable.data_driven.problem.ScalableProblem(datasets, design_variables, objective_function, eq_constraints=None, ineq_constraints=None, maximize_objective=False, sizes=None, **parameters)[source]

Scalable problem.

Constructor.

Parameters
  • datasets (list(Dataset)) – disciplinary datasets.

  • design_variables (list(str)) – list of design variable names.

  • objective_function (str) – name of the objective function.

  • eq_constraints (list(str)) – equality constraints. Default: None.

  • ineq_constraints (list(str)) – inequality constraints. Default: None.

  • maximize_objective (bool) – maximize objective. Default: False.

  • sizes (dict) – sizes of input and output variables. If None, use the original sizes. Default: None.

  • parameters – optional parameters for the scalable model.

Methods:

create_scenario([formulation, …])

Create MDO scenario from the scalable disciplines.

exec_time([do_sum])

Get total execution time per discipline.

plot_1d_interpolations([save, show, step, …])

Plot 1d interpolations.

plot_coupling_graph()

Plot a coupling graph.

plot_dependencies([save, show, directory])

Plot dependency matrices.

plot_n2_chart([save, show])

Plot a N2 chart.

Attributes:

is_feasible

Get the feasibility property of the scenario.

n_calls

Get number of disciplinary calls per discipline.

n_calls_linearize

Get number of disciplinary calls per discipline.

n_calls_linearize_top_level

Get number of top level disciplinary calls per discipline.

n_calls_top_level

Get number of top level disciplinary calls per discipline.

status

Get the status of the scenario.

create_scenario(formulation='DisciplinaryOpt', scenario_type='MDO', start_at_equilibrium=False, active_probability=0.1, feasibility_level=0.5, **options)[source]

Create MDO scenario from the scalable disciplines.

Parameters
  • formulation (str) – MDO formulation. Default: ‘DisciplinaryOpt’.

  • scenario_type (str) – type of scenario (‘MDO’ or ‘DOE’). Default: ‘MDO’.

  • start_at_equilibrium (bool) – start at equilibrium using a preliminary MDA. Default: False.

  • active_probability (float) – probability to set the inequality constraints as active at initial step of the optimization. Default: 0.1.

  • feasibility_level (float) – offset of satisfaction for inequality constraints. Default: 0.5.

  • options – formulation options.
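
Examples

A hedged variant creating a DOE scenario instead of the default MDO one, where problem is a ScalableProblem built as above.

problem.create_scenario(
    formulation="DisciplinaryOpt",
    scenario_type="DOE",
    start_at_equilibrium=True,
)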

exec_time(do_sum=True)[source]

Get total execution time per discipline.

Parameters

do_sum (bool) – sum over disciplines (default: True)

Returns

execution time

Return type

list(float) or float

property is_feasible

Get the feasibility property of the scenario.

property n_calls

Get number of disciplinary calls per discipline.

Returns

number of disciplinary calls per discipline

Return type

list(int) or int

property n_calls_linearize

Get number of disciplinary calls per discipline.

Returns

number of disciplinary calls per discipline

Return type

list(int) or int

property n_calls_linearize_top_level

Get number of top level disciplinary calls per discipline.

Returns

number of top level disciplinary calls per discipline

Return type

list(int) or int

property n_calls_top_level

Get number of top level disciplinary calls per discipline.

Returns

number of top level disciplinary calls per discipline

Return type

list(int) or int

plot_1d_interpolations(save=True, show=False, step=0.01, varnames=None, directory='.', png=False)[source]

Plot 1d interpolations.

Parameters
  • save (bool) – save plot. Default: True.

  • show (bool) – show plot. Default: False.

  • step (float) – step to evaluate the 1d interpolation function. Default: 0.01.

  • varnames (list(str)) – names of the variables to plot; if None, all variables are plotted. Default: None.

  • directory (str) – directory path. Default: ‘.’.

  • png (bool) – if True, the file format is PNG. Otherwise, use PDF. Default: False.

plot_coupling_graph()[source]

Plot a coupling graph.

plot_dependencies(save=True, show=False, directory='.')[source]

Plot dependency matrices.

Parameters
  • save (bool) – save plot (default: True)

  • show (bool) – show plot (default: False)

  • directory (str) – directory path (default: ‘.’)

plot_n2_chart(save=True, show=False)[source]

Plot a N2 chart.

Parameters
  • save (bool) – save plot. Default: True.

  • show (bool) – show plot. Default: False.

property status

Get the status of the scenario.

Scalable discipline

This module implements the concept of scalable discipline. This is a particular discipline built from an input-output learning dataset associated with a function and generalizing its behavior to a new user-defined problem dimension, that is to say new user-defined input and output dimensions.

Alone or in interaction with other objects of the same type, a scalable discipline can be used to compare the efficiency of an algorithm applying to disciplines with respect to the problem dimension, e.g. optimization algorithm, surrogate model, MDO formulation, MDA, …

The ScalableDiscipline class implements this concept. It inherits from the MDODiscipline class in such a way that it can easily be used in a Scenario. It is composed of a ScalableModel.

The user only needs to provide:

  • the name of a class overloading ScalableModel,

  • a dataset as a Dataset,

  • variables sizes as a dictionary whose keys are the names of inputs and outputs and values are their new sizes. If a variable is missing, its original size is considered.

The ScalableModel parameters can also be filled in; otherwise, the model uses default values. A construction sketch follows.
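
Examples

A minimal construction sketch; the dataset variable is an assumed learning Dataset and the fill_factor pass-through to the underlying ScalableDiagonalModel is an assumption about the model parameters.

from gemseo.problems.scalable.data_driven.discipline import ScalableDiscipline

disc = ScalableDiscipline(
    "ScalableDiagonalModel",   # name of a class overloading ScalableModel
    dataset,                   # input-output learning dataset (assumed available)
    sizes={"x": 20, "y": 10},  # resized variables; others keep their original size
    fill_factor=0.6,           # optional model parameter, forwarded via **parameters
)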

Classes:

ScalableDiscipline(name, data[, sizes])

Scalable discipline.

class gemseo.problems.scalable.data_driven.discipline.ScalableDiscipline(name, data, sizes=None, **parameters)[source]

Scalable discipline.

Constructor.

Parameters
  • name (str) – scalable model class name.

  • data (Dataset) – learning dataset.

  • sizes (dict) – sizes of input and output variables. If None, use the original sizes. Default: None.

  • parameters – model parameters

Methods:

activate_time_stamps()

Activate the time stamps.

add_differentiated_inputs([inputs])

Add inputs to the differentiation list.

add_differentiated_outputs([outputs])

Add outputs to the differentiation list.

add_status_observer(obs)

Add an observer for the status.

auto_get_grammar_file([is_input, name, comp_dir])

Use a naming convention to associate a grammar file to a discipline.

check_input_data(input_data[, raise_exception])

Check the input data validity.

check_jacobian([input_data, derr_approx, …])

Check if the jacobian provided by the linearize() method is correct.

check_output_data([raise_exception])

Check the output data validity.

deactivate_time_stamps()

Deactivate the time stamps for storing start and end times of execution and linearizations.

deserialize(in_file)

Deserialize the discipline from a file.

execute([input_data])

Execute the discipline.

get_all_inputs()

Accessor for the input data as a list of values.

get_all_outputs()

Accessor for the output data as a list of values.

get_attributes_to_serialize()

Define the attributes to be serialized.

get_data_list_from_dict(keys, data_dict)

Filter the dict from a list of keys or a single key.

get_expected_dataflow()

Return the expected data exchange sequence.

get_expected_workflow()

Return the expected execution sequence.

get_input_data()

Accessor for the input data as a dict of values.

get_input_data_names()

Accessor for the input names as a list.

get_input_output_data_names()

Accessor for the input and output names as a list.

get_inputs_asarray()

Accessor for the inputs as a large numpy array.

get_inputs_by_name(data_names)

Accessor for the inputs as a list.

get_local_data_by_name(data_names)

Accessor for the local data of the discipline as a dict of values.

get_output_data()

Accessor for the output data as a dict of values.

get_output_data_names()

Accessor for the output names as a list.

get_outputs_asarray()

Accessor for the outputs as a large numpy array.

get_outputs_by_name(data_names)

Accessor for the outputs as a list.

get_sub_disciplines()

Get the sub-disciplines of self. By default, empty.

initialize_grammars(data)

Initialize input and output grammars from data names.

is_all_inputs_existing(data_names)

Test if all the names in data_names are inputs of the discipline.

is_all_outputs_existing(data_names)

Test if all the names in data_names are outputs of the discipline.

is_input_existing(data_name)

Test if input named data_name is an input of the discipline.

is_output_existing(data_name)

Test if output named data_name is an output of the discipline.

is_scenario()

Return True if self is a scenario.

linearize([input_data, force_all, force_no_exec])

Execute the linearized version of the code.

notify_status_observers()

Notify all status observers that the status has changed.

remove_status_observer(obs)

Remove an observer for the status.

reset_statuses_for_run()

Sets all the statuses to PENDING.

serialize(out_file)

Serialize the discipline.

set_cache_policy([cache_type, …])

Set the type of cache to use and the tolerance level.

set_disciplines_statuses(status)

Set the sub disciplines statuses.

set_jacobian_approximation([…])

Set the jacobian approximation method.

set_optimal_fd_step([outputs, inputs, …])

Compute the optimal finite-difference step.

store_local_data(**kwargs)

Store discipline data in local data.

Attributes:

cache_tol

Accessor to the cache input tolerance.

default_inputs

Accessor to the default inputs.

exec_time

Return the cumulated execution time.

linearization_mode

Accessor to the linearization mode.

n_calls

Return the number of calls to execute() which triggered the _run().

n_calls_linearize

Return the number of calls to linearize() which triggered the _compute_jacobian() method.

status

Status accessor.

classmethod activate_time_stamps()

Activate the time stamps.

For storing start and end times of execution and linearizations.

add_differentiated_inputs(inputs=None)

Add inputs to the differentiation list.

This method updates self._differentiated_inputs with inputs.

Parameters

inputs – list of input variables to differentiate; if None, all inputs of the discipline are used (Default value = None)

add_differentiated_outputs(outputs=None)

Add outputs to the differentiation list.

Update self._differentiated_outputs with outputs.

Parameters

outputs – list of output variables to differentiate; if None, all outputs of the discipline are used

add_status_observer(obs)

Add an observer for the status.

Add an observer to be notified when the status of self changes.

Parameters

obs – the observer to add

auto_get_grammar_file(is_input=True, name=None, comp_dir=None)

Use a naming convention to associate a grammar file to a discipline.

This method searches the “comp_dir” directory containing the discipline source file for files whose basenames are self.name + “_input.json” and self.name + “_output.json”.

Parameters
  • is_input – if True, searches for _input.json, otherwise _output.json (Default value = True)

  • name – the name of the discipline (Default value = None)

  • comp_dir – the containing directory; if None, use self.comp_dir (Default value = None)

Returns

path to the grammar file

Return type

string

property cache_tol

Accessor to the cache input tolerance.

check_input_data(input_data, raise_exception=True)

Check the input data validity.

Parameters
  • input_data – the input data dict

  • raise_exception – if True, raise an exception when the input data is invalid (Default value = True)

check_jacobian(input_data=None, derr_approx='finite_differences', step=1e-07, threshold=1e-08, linearization_mode='auto', inputs=None, outputs=None, parallel=False, n_processes=2, use_threading=False, wait_time_between_fork=0, auto_set_step=False, plot_result=False, file_path='jacobian_errors.pdf', show=False, figsize_x=10, figsize_y=10)

Check if the jacobian provided by the linearize() method is correct.

Parameters
  • input_data – input data dict (Default value = None)

  • derr_approx – derivative approximation method, e.g. ‘finite_differences’ or ‘complex_step’ (Default value = ‘finite_differences’)

  • threshold – acceptance threshold for the jacobian error (Default value = 1e-8)

  • linearization_mode – the mode of linearization: direct, adjoint or automated switch depending on dimensions of inputs and outputs (Default value = ‘auto’)

  • inputs – list of inputs wrt which to differentiate (Default value = None)

  • outputs – list of outputs to differentiate (Default value = None)

  • step – the step for finite differences or complex step

  • parallel – if True, executes in parallel

  • n_processes – maximum number of processors on which to run

  • use_threading – if True, use threads instead of processes to parallelize the execution. Multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. This is important to note: if you want to execute the same discipline multiple times, you shall use multiprocessing

  • wait_time_between_fork – time waited between two forks of the process /Thread

  • auto_set_step – compute the optimal step for a forward first-order finite differences gradient approximation

  • plot_result – plot the result of the validation (computed and approximate jacobians)

  • file_path – path to the output file if plot_result is True

  • show – if True, open the figure

  • figsize_x – x size of the figure in inches

  • figsize_y – y size of the figure in inches

Returns

True if the check is accepted, False otherwise
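
Examples

A hedged call checking the Jacobian of a discipline disc by finite differences, using only the documented arguments.

ok = disc.check_jacobian(
    derr_approx="finite_differences",
    step=1e-7,
    threshold=1e-8,
    plot_result=True,
    file_path="jacobian_errors.pdf",
)
print(ok)  # True if the check is accepted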

check_output_data(raise_exception=True)

Check the output data validity.

Parameters

raise_exception – if true, an exception is raised when data is invalid (Default value = True)

classmethod deactivate_time_stamps()

Deactivate the time stamps for storing start and end times of execution and linearizations.

property default_inputs

Accessor to the default inputs.

static deserialize(in_file)

Deserialize the discipline from a file.

Parameters

in_file – input file for serialization

Returns

a discipline instance

property exec_time

Return the cumulated execution time.

Multiprocessing safe.

execute(input_data=None)

Execute the discipline.

This method executes the discipline:

  • Adds default inputs to input_data if some inputs are not defined in input_data but exist in self._default_data

  • Checks whether the last execution of the discipline was not called with identical inputs, cached in self.cache; if so, directly returns self.cache.get_output_cache(inputs)

  • Caches the inputs

  • Checks the input data against self.input_grammar

  • If self.data_processor is not None, runs the preprocessor

  • Updates the status to RUNNING

  • Calls the _run() method, that shall be defined

  • If self.data_processor is not None, runs the postprocessor

  • Checks the output data

  • Caches the outputs

  • Updates the status to DONE or FAILED

  • Updates the summed execution time

Parameters

input_data (dict) – the input data dict needed to execute the disciplines according to the discipline input grammar (Default value = None)

Returns

the discipline local data after execution

Return type

dict
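
Examples

A minimal call sketch, where disc is a ScalableDiscipline (or any MDODiscipline) and 'x' and 'y' are hypothetical input and output names sized consistently with the input grammar.

from numpy import full

local_data = disc.execute({"x": full(20, 0.5)})
print(local_data["y"])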

get_all_inputs()

Accessor for the input data as a list of values.

The order is given by self.get_input_data_names().

Returns

the data

get_all_outputs()

Accessor for the output data as a list of values.

The order is given by self.get_output_data_names().

Returns

the data

get_attributes_to_serialize()

Define the attributes to be serialized.

Shall be overloaded by disciplines

Returns

the list of attributes names

Return type

list

static get_data_list_from_dict(keys, data_dict)

Filter the dict from a list of keys or a single key.

If keys is a string, the method returns the value associated with the key. If keys is a list of strings, the method returns a generator of the values corresponding to the keys, which can be iterated over.

Parameters
  • keys – a string key or a list of keys

  • data_dict – the dict to get the data from

Returns

a data or a generator of data

get_expected_dataflow()

Return the expected data exchange sequence.

This method is used for the XDSM representation.

Defaults to an empty list. See MDOFormulation.get_expected_dataflow.

Returns

a list representing the data exchange arcs

get_expected_workflow()

Return the expected execution sequence.

This method is used for the XDSM representation. Defaults to the execution of the discipline itself. See MDOFormulation.get_expected_workflow.

get_input_data()

Accessor for the input data as a dict of values.

Returns

the data dict

get_input_data_names()

Accessor for the input names as a list.

Returns

the data names list

get_input_output_data_names()

Accessor for the input and output names as a list.

Returns

the data names list

get_inputs_asarray()

Accessor for the inputs as a large numpy array.

The order is the one of self.get_all_inputs().

Returns

the inputs array

Return type

ndarray

get_inputs_by_name(data_names)

Accessor for the inputs as a list.

Parameters

data_names – the data names list

Returns

the data list

get_local_data_by_name(data_names)

Accessor for the local data of the discipline as a dict of values.

Parameters

data_names – the names of the data which will be the keys of the dictionary

Returns

the data list

get_output_data()

Accessor for the output data as a dict of values.

Returns

the data dict

get_output_data_names()

Accessor for the output names as a list.

Returns

the data names list

get_outputs_asarray()

Accessor for the outputs as a large numpy array.

The order is the one of self.get_all_outputs()

Returns

the outputs array

Return type

ndarray

get_outputs_by_name(data_names)

Accessor for the outputs as a list.

Parameters

data_names – the data names list

Returns

the data list

get_sub_disciplines()

Get the sub-disciplines of self. By default, empty.

Returns

the list of disciplines

initialize_grammars(data)[source]

Initialize input and output grammars from data names.

Parameters

data (Dataset) – learning dataset.

is_all_inputs_existing(data_names)

Test if all the names in data_names are inputs of the discipline.

Parameters

data_names – the names of the inputs

Returns

True if data_names are all in input grammar

Return type

bool

is_all_outputs_existing(data_names)

Test if all the names in data_names are outputs of the discipline.

Parameters

data_names – the names of the outputs

Returns

True if data_names are all in output grammar

Return type

bool

is_input_existing(data_name)

Test if input named data_name is an input of the discipline.

Parameters

data_name – the name of the input

Returns

True if data_name is in input grammar

Return type

bool

is_output_existing(data_name)

Test if output named data_name is an output of the discipline.

Parameters

data_name – the name of the output

Returns

True if data_name is in output grammar

Return type

bool

static is_scenario()

Return True if self is a scenario.

Returns

True if self is a scenario

property linearization_mode

Accessor to the linearization mode.

linearize(input_data=None, force_all=False, force_no_exec=False)

Execute the linearized version of the code.

Parameters
  • input_data – the input data dict needed to execute the discipline according to the discipline input grammar

  • force_all – if False, self._differentiated_inputs and self._differentiated_outputs are used to filter the differentiated variables; otherwise, all outputs are differentiated wrt all inputs (Default value = False)

  • force_no_exec – if True, the discipline is not re-executed; the cache is loaded anyway

property n_calls

Return the number of calls to execute() which triggered the _run().

Multiprocessing safe.

property n_calls_linearize

Return the number of calls to linearize() which triggered the _compute_jacobian() method.

Multiprocessing safe.

notify_status_observers()

Notify all status observers that the status has changed.

remove_status_observer(obs)

Remove an observer for the status.

Parameters

obs – the observer to remove

reset_statuses_for_run()

Sets all the statuses to PENDING.

serialize(out_file)

Serialize the discipline.

Parameters

out_file – destination file for serialization

set_cache_policy(cache_type='SimpleCache', cache_tolerance=0.0, cache_hdf_file=None, cache_hdf_node_name=None, is_memory_shared=True)

Set the type of cache to use and the tolerance level.

This method sets the cache policy to cache data whose inputs are close to inputs whose outputs are already cached. The cache can be either a simple cache recording the last execution or a full cache storing all executions. Cached data can be either in-memory, e.g. SimpleCache and MemoryFullCache, or on the disk, e.g. HDF5Cache. CacheFactory.caches provides the list of available types of caches.

Parameters
  • cache_type (str) – type of cache to use.

  • cache_tolerance (float) – tolerance for the approximate cache: maximal relative norm difference under which two input arrays are considered equal

  • cache_hdf_file (str) – the file to store the data, mandatory when HDF caching is used

  • cache_hdf_node_name (str) – name of the HDF dataset to store the discipline data. If None, self.name is used

  • is_memory_shared (bool) – If True, a shared memory dict is used to store the data, which makes the cache compatible with multiprocessing. WARNING: if set to False, and multiple disciplines point to the same cache or the process is multiprocessed, there may be duplicate computations because the cache will not be shared among the processes.
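
Examples

For instance, a sketch persisting executions to disk with an HDF5 cache, using only the documented arguments; disc is any discipline instance.

disc.set_cache_policy(
    cache_type="HDF5Cache",
    cache_hdf_file="discipline_cache.h5",  # mandatory for HDF caching
    cache_tolerance=1e-12,                 # near-identical inputs hit the cache
)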

set_disciplines_statuses(status)

Set the sub disciplines statuses.

To be implemented in subclasses.

Parameters

status – the status

set_jacobian_approximation(jac_approx_type='finite_differences', jax_approx_step=1e-07, jac_approx_n_processes=1, jac_approx_use_threading=False, jac_approx_wait_time=0)

Set the jacobian approximation method.

Sets the linearization mode to the approximation method and sets the parameters of the approximation for further use when calling self.linearize().

Parameters
  • jac_approx_type – “complex_step” or “finite_differences”

  • jax_approx_step – the step for finite differences or complex step

  • jac_approx_n_processes – maximum number of processors on which to run

  • jac_approx_use_threading – if True, use threads instead of processes to parallelize the execution. Multiprocessing will copy (serialize) all the disciplines, while threading will share all the memory. This is important to note: if you want to execute the same discipline multiple times, you shall use multiprocessing

  • jac_approx_wait_time – time waited between two forks of the process /Thread
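
Examples

A hedged call using only the arguments from the signature above; note the jax_approx_step spelling, which is the parameter name as documented.

disc.set_jacobian_approximation(
    jac_approx_type="complex_step",
    jax_approx_step=1e-30,  # typical complex-step size
)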

set_optimal_fd_step(outputs=None, inputs=None, force_all=False, print_errors=False, numerical_error=2.220446049250313e-16)

Compute the optimal finite-difference step.

Compute the optimal step for a forward first-order finite differences gradient approximation. Requires a first evaluation of the perturbed function values. The optimal step is reached when the truncation error (the cut in the Taylor expansion) and the numerical cancellation error (round-off when computing f(x+step)-f(x)) are approximately equal.

Warning: this calls the discipline execution two times per input variable.

See: https://en.wikipedia.org/wiki/Numerical_differentiation and “Numerical Algorithms and Digital Representation”, Knut Morken, Chapter 11, “Numerical Differentiation”.

Parameters
  • inputs – inputs wrt which the linearization is made. If None, use the differentiated inputs

  • outputs – outputs for which the linearization is made. If None, use the differentiated outputs

  • force_all – if True, all inputs and outputs are used

  • print_errors – if True, displays the estimated errors

  • numerical_error – numerical error associated to the calculation of f. By default Machine epsilon (appx 1e-16), but can be higher when the calculation of f requires a numerical resolution

Returns

the estimated truncation and cancellation errors.

property status

Status accessor.

store_local_data(**kwargs)

Store discipline data in local data.

Parameters

kwargs – the data as key value pairs

Scalable model factory

This module contains the ScalableModelFactory, a factory that creates a ScalableModel from its class name by means of the ScalableModelFactory.create() method. It is also possible to get the list of available scalable models (see the ScalableModelFactory.scalable_models attribute) and to check whether a type of scalable model is available (see the ScalableModelFactory.is_available() method), as sketched below.
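
Examples

A minimal sketch of the factory round trip; the dataset variable is an assumed learning Dataset.

from gemseo.problems.scalable.data_driven.factory import ScalableModelFactory

factory = ScalableModelFactory()
print(factory.scalable_models)  # available scalable model class names
if factory.is_available("ScalableDiagonalModel"):
    model = factory.create("ScalableDiagonalModel", data=dataset, sizes={"x": 5})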

Classes:

ScalableModelFactory()

This factory instantiates a ScalableModel from its class name.

class gemseo.problems.scalable.data_driven.factory.ScalableModelFactory[source]

This factory instantiates a ScalableModel from its class name.

The class can be internal to GEMSEO or located in an external module whose path is provided to the constructor.

Initializes the factory: scans the directories to search for subclasses of ScalableModel.

Searches in “GEMSEO_PATH” and the internal GEMSEO modules.

Methods:

create(model_name, data[, sizes])

Create a scalable model.

is_available(model_name)

Checks the availability of a scalable model.

Attributes:

scalable_models

Lists the available classes for scalable models.

create(model_name, data, sizes=None, **parameters)[source]

Create a scalable model.

Parameters
  • model_name (str) – name of the scalable model (its classname)

  • data (Dataset) – learning dataset.

  • sizes (dict) – sizes of input and output variables. If None, use the original sizes. Default: None.

  • parameters – model parameters

Returns

the scalable model of class model_name

is_available(model_name)[source]

Checks the availability of a scalable model.

Parameters

model_name (str) – model_name of the scalable model.

Returns

True if the scalable model is available.

Return type

bool

property scalable_models

Lists the available classes for scalable models.

Returns

the list of classes names.

Return type

list(str)

Scalable model

This module implements the abstract concept of scalable model, which is used by scalable disciplines. A scalable model is built from an input-output learning dataset associated with a function and generalizes its behavior to a new user-defined problem dimension, that is to say new user-defined input and output dimensions.

The concept of scalable model is implemented through ScalableModel, an abstract class which is instantiated from:

  • data provided as a Dataset

  • variables sizes provided as a dictionary whose keys are the names of inputs and outputs and values are their new sizes. If a variable is missing, its original size is considered.

Scalable model parameters can also be filled in; otherwise, the model uses default values.

See also

The ScalableDiagonalModel class overloads ScalableModel.

Classes:

ScalableModel(data[, sizes])

Scalable model.

class gemseo.problems.scalable.data_driven.model.ScalableModel(data, sizes=None, **parameters)[source]

Scalable model.

Constructor.

Parameters
  • data (Dataset) – learning dataset.

  • sizes (dict) – sizes of input and output variables. If None, use the original sizes. Default: None.

  • parameters – model parameters

Methods:

build_model()

Build model with original sizes for input and output variables.

compute_bounds()

Compute lower and upper bounds of both input and output variables.

normalize_data()

Normalize dataset from lower and upper bounds.

scalable_derivatives([input_value])

Evaluate the scalable derivatives.

scalable_function([input_value])

Evaluate the scalable function.

Attributes:

inputs_names

Inputs names.

original_sizes

Original sizes of variables.

outputs_names

Outputs names.

build_model()[source]

Build model with original sizes for input and output variables.

compute_bounds()[source]

Compute lower and upper bounds of both input and output variables.

Returns

lower bounds, upper bounds.

Return type

dict, dict

property inputs_names

Inputs names.

Returns

names of the inputs.

Return type

list(str)

normalize_data()[source]

Normalize dataset from lower and upper bounds.

property original_sizes

Original sizes of variables.

Returns

original sizes of variables.

Return type

dict

property outputs_names

Outputs names.

Returns

names of the outputs.

Return type

list(str)

scalable_derivatives(input_value=None)[source]

Evaluate the scalable derivatives.

Parameters

input_value (dict) – input values. If None, use default inputs. Default: None

Returns

evaluation of the scalable derivatives.

Return type

dict

scalable_function(input_value=None)[source]

Evaluate the scalable function.

Parameters

input_value (dict) – input values. If None, use default inputs. Default: None.

Returns

evaluation of the scalable function.

Return type

dict

Scalable diagonal model

This module implements the concept of scalable diagonal model, which is a particular scalable model built from an input-output dataset relying on a diagonal design of experiments (DOE) where inputs vary proportionally from their lower bounds to their upper bounds, following the diagonal of the input space.

So for every output, the dataset catches its evolution with respect to this proportion, which makes it a monodimensional behavior. Then, for a new user-defined problem dimension, the scalable model extrapolates this monodimensional behavior to the different input directions.

The concept of scalable diagonal model is implemented through the ScalableDiagonalModel class, which is composed of a ScalableDiagonalApproximation. With regard to the diagonal DOE, GEMSEO proposes the DiagonalDOE class. A construction sketch follows.
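
Examples

A construction sketch using the documented constructor and plotting methods; the dataset variable is an assumed Dataset built from a diagonal DOE (see DiagonalDOE).

from gemseo.problems.scalable.data_driven.diagonal import ScalableDiagonalModel

model = ScalableDiagonalModel(dataset, sizes={"x": 10, "y": 4}, fill_factor=0.6)
model.plot_dependency(save=True, directory=".")         # dependency matrix chessboard
model.plot_1d_interpolations(save=True, directory=".")  # basis functions
outputs = model.scalable_function()                     # evaluate at default inputs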

Classes:

ScalableDiagonalApproximation(sizes, …[, seed])

Methodology that captures the trends of a physical problem and extends it into a problem with scalable input and output dimensions. The original and the resulting scalable problem have the same interface:

ScalableDiagonalModel(data[, sizes, …])

Scalable diagonal model.

Functions:

choice(a[, size, replace, p])

Generates a random sample from a given 1-D array

npseed

seed(self, seed=None)

rand(d0, d1, …, dn)

Random values in a given shape.

randint(low[, high, size, dtype])

Return random integers from low (inclusive) to high (exclusive).

class gemseo.problems.scalable.data_driven.diagonal.ScalableDiagonalApproximation(sizes, output_dependency, io_dependency, seed=0)[source]

Methodology that captures the trends of a physical problem and extends it into a problem with scalable input and output dimensions. The original and the resulting scalable problem have the same interface:

all inputs and outputs have the same names; only their dimensions vary.

Constructor:

Parameters
  • sizes (dict) – sizes of both input and output variables.

  • output_dependency (dict) – dependency between old and new outputs.

  • io_dependency (dict) – dependency between new inputs and new outputs.

Methods:

build_scalable_function(function_name, …)

Build an interpolation from a 1D input and output function.

get_scalable_derivative(output_function)

Retrieve the (scalable) gradient of the scalable function generated from the original discipline.

get_scalable_function(output_function)

Retrieve the scalable function generated from the original discipline.

scale_samples(samples)

Scale samples of array into [0, 1]

build_scalable_function(function_name, dataset, input_names, degree=3)[source]

Build an interpolation from a 1D input and output function and add the model to the local dictionary.

Parameters
  • function_name (str) – name of the output function

  • dataset (Dataset) – the input-output dataset

  • input_names (list(str)) – names of the input variables

  • degree (int) – degree of interpolation (Default value = 3)

get_scalable_derivative(output_function)[source]

Retrieve the (scalable) gradient of the scalable function generated from the original discipline.

Parameters

output_function (str) – name of the output function

get_scalable_function(output_function)[source]

Retrieve the scalable function generated from the original discipline.

Parameters

output_function (str) – name of the output function

static scale_samples(samples)[source]

Scale samples of array into [0, 1]

Parameters

samples (list(array)) – samples of multivariate array

Returns

samples of multivariate array

Return type

array

class gemseo.problems.scalable.data_driven.diagonal.ScalableDiagonalModel(data, sizes=None, fill_factor=-1, comp_dep=None, inpt_dep=None, force_input_dependency=False, allow_unused_inputs=True, seed=1, group_dep=None)[source]

Scalable diagonal model.

Constructor.

Parameters
  • data (Dataset) – learning dataset.

  • sizes (dict) – sizes of input and output variables. If None, use the original sizes. Default: None.

  • fill_factor – degree of sparsity of the dependency matrix. Default: -1.

  • comp_dep – matrix that establishes the selection of a single original component for each scalable component

  • inpt_dep – dependency matrix that establishes the dependency of outputs wrt inputs

  • force_input_dependency (bool) – for any output, force dependency with at least one input.

  • allow_unused_inputs (bool) – possibility to have an input with no dependency with any output.

  • seed (int) – seed

  • group_dep (dict(list(str))) – dependency between inputs and outputs

Methods:

build_model()

Build model with original sizes for input and output variables.

compute_bounds()

Compute lower and upper bounds of both input and output variables.

generate_random_dependency()

Generate a random dependency structure for use in the scalable discipline.

normalize_data()

Normalize dataset from lower and upper bounds.

plot_1d_interpolations([save, show, step, …])

Plot the scaled 1D interpolations, a.k.a. basis functions.

plot_dependency([add_levels, save, show, …])

This method plots the dependency matrix of a discipline in the form of a chessboard, where rows represent inputs, columns represent output and gray scale represent the dependency level between inputs and outputs.

scalable_derivatives([input_value])

Evaluate the scalable derivatives.

scalable_function([input_value])

Evaluate the scalable functions.

Attributes:

inputs_names

Inputs names.

original_sizes

Original sizes of variables.

outputs_names

Outputs names.

build_model()[source]

Build model with original sizes for input and output variables.

Returns

scalable approximation.

Return type

ScalableDiagonalApproximation

compute_bounds()

Compute lower and upper bounds of both input and output variables.

Returns

lower bounds, upper bounds.

Return type

dict, dict

generate_random_dependency()[source]

Generate a random dependency structure for use in the scalable discipline.

Returns

output component dependency and input-output dependency

Return type

dict(int), dict(dict(array))

property inputs_names

Inputs names.

Returns

names of the inputs.

Return type

list(str)

normalize_data()

Normalize dataset from lower and upper bounds.

property original_sizes

Original sizes of variables.

Returns

original sizes of variables.

Return type

dict

property outputs_names

Outputs names.

Returns

names of the outputs.

Return type

list(str)

plot_1d_interpolations(save=False, show=False, step=0.01, varnames=None, directory='.', png=False)[source]

This method plots the scaled 1D interpolations, a.k.a. basis functions.

A basis function is a monodimensional function interpolating the samples of a given output component over the input sampling line \(t\in[0,1]\mapsto \underline{x}+t(\overline{x}-\underline{x})\).

There are as many basis functions as there are output components from the discipline. Thus, for a discipline with a single output in dimension 1, there is 1 basis function. For a discipline with a single output in dimension 2, there are 2 basis functions. For a discipline with an output in dimension 2 and an output in dimension 13, there are 15 basis functions. And so on. This method makes it possible to plot the basis functions associated with all outputs or only some of them, either on screen (show=True), in a file (save=True) or both. We can also specify the discretization step, whose default value is 0.01.

Parameters
  • save (bool) – if True, export the plot as a PDF file (Default value = False)

  • show (bool) – if True, display the plot (Default value = False)

  • step (bool) – Step to evaluate the 1d interpolation function (Default value = 0.01)

  • varnames (list(str)) – names of the variable to plot; if None, all variables are plotted (Default value = None)

  • directory (str) – directory path. Default: ‘.’.

  • png (bool) – if True, the file format is PNG. Otherwise, use PDF. Default: False.

plot_dependency(add_levels=True, save=True, show=False, directory='.', png=False)[source]

This method plots the dependency matrix of a discipline in the form of a chessboard, where rows represent inputs, columns represent outputs and the gray scale represents the dependency level between inputs and outputs.

Parameters
  • add_levels (bool) – add values of dependency levels in percentage. Default: True.

  • save (bool) – if True, export the plot into a file. Default: True.

  • show (bool) – if True, display the plot. Default: False.

  • directory (str) – directory path. Default: ‘.’.

  • png (bool) – if True, the file format is PNG. Otherwise, use PDF. Default: False.

scalable_derivatives(input_value=None)[source]

Evaluate the scalable derivatives.

Parameters

input_value (dict) – input values. If None, use default inputs.

Returns

evaluation of the scalable derivatives.

Return type

dict

scalable_function(input_value=None)[source]

Evaluate the scalable functions.

Parameters

input_value (dict) – input values. If None, use default inputs.

Returns

evaluation of the scalable functions.

Return type

dict

gemseo.problems.scalable.data_driven.diagonal.choice(a, size=None, replace=True, p=None)

Generates a random sample from a given 1-D array

New in version 1.7.0.

Note

New code should use the choice method of a default_rng() instance instead; please see the random-quick-start.

Parameters
  • a (1-D array-like or int) – If an ndarray, a random sample is generated from its elements. If an int, the random sample is generated as if a were np.arange(a)

  • size (int or tuple of ints, optional) – Output shape. If the given shape is, e.g., (m, n, k), then m * n * k samples are drawn. Default is None, in which case a single value is returned.

  • replace (boolean, optional) – Whether the sample is with or without replacement

  • p (1-D array-like, optional) – The probabilities associated with each entry in a. If not given the sample assumes a uniform distribution over all entries in a.

Returns

samples – The generated random samples

Return type

single item or ndarray

Raises

ValueError – If a is an int and less than zero, if a or p are not 1-dimensional, if a is an array-like of size 0, if p is not a vector of probabilities, if a and p have different lengths, or if replace=False and the sample size is greater than the population size

See also

randint, shuffle, permutation

Generator.choice

which should be used in new code

Notes

Sampling random rows from a 2-D array is not possible with this function, but is possible with Generator.choice through its axis keyword.

Examples

Generate a uniform random sample from np.arange(5) of size 3:

>>> np.random.choice(5, 3)
array([0, 3, 4]) # random
>>> #This is equivalent to np.random.randint(0,5,3)

Generate a non-uniform random sample from np.arange(5) of size 3:

>>> np.random.choice(5, 3, p=[0.1, 0, 0.3, 0.6, 0])
array([3, 3, 0]) # random

Generate a uniform random sample from np.arange(5) of size 3 without replacement:

>>> np.random.choice(5, 3, replace=False)
array([3,1,0]) # random
>>> #This is equivalent to np.random.permutation(np.arange(5))[:3]

Generate a non-uniform random sample from np.arange(5) of size 3 without replacement:

>>> np.random.choice(5, 3, replace=False, p=[0.1, 0, 0.3, 0.6, 0])
array([2, 3, 0]) # random

Any of the above can be repeated with an arbitrary array-like instead of just integers. For instance:

>>> aa_milne_arr = ['pooh', 'rabbit', 'piglet', 'Christopher']
>>> np.random.choice(aa_milne_arr, 5, p=[0.5, 0.1, 0.1, 0.3])
array(['pooh', 'pooh', 'pooh', 'Christopher', 'piglet'], # random
      dtype='<U11')
gemseo.problems.scalable.data_driven.diagonal.npseed()

seed(self, seed=None)

Reseed a legacy MT19937 BitGenerator

Notes

This is a convenience, legacy function.

The best practice is to not reseed a BitGenerator, rather to recreate a new one. This method is here for legacy reasons. This example demonstrates best practice.

>>> from numpy.random import MT19937
>>> from numpy.random import RandomState, SeedSequence
>>> rs = RandomState(MT19937(SeedSequence(123456789)))
# Later, you want to restart the stream
>>> rs = RandomState(MT19937(SeedSequence(987654321)))
gemseo.problems.scalable.data_driven.diagonal.rand(d0, d1, ..., dn)

Random values in a given shape.

Note

This is a convenience function for users porting code from Matlab, and wraps random_sample. That function takes a tuple to specify the size of the output, which is consistent with other NumPy functions like numpy.zeros and numpy.ones.

Create an array of the given shape and populate it with random samples from a uniform distribution over [0, 1).

Parameters
  • d0 (int, optional) – The dimensions of the returned array, must be non-negative. If no argument is given a single Python float is returned.

  • d1 (int, optional) – The dimensions of the returned array, must be non-negative. If no argument is given a single Python float is returned.

  • ... (int, optional) – The dimensions of the returned array, must be non-negative. If no argument is given a single Python float is returned.

  • dn (int, optional) – The dimensions of the returned array, must be non-negative. If no argument is given a single Python float is returned.

Returns

out – Random values.

Return type

ndarray, shape (d0, d1, ..., dn)

See also

random

Examples

>>> np.random.rand(3,2)
array([[ 0.14022471,  0.96360618],  #random
       [ 0.37601032,  0.25528411],  #random
       [ 0.49313049,  0.94909878]]) #random
gemseo.problems.scalable.data_driven.diagonal.randint(low, high=None, size=None, dtype=int)

Return random integers from low (inclusive) to high (exclusive).

Return random integers from the “discrete uniform” distribution of the specified dtype in the “half-open” interval [low, high). If high is None (the default), then results are from [0, low).

Note

New code should use the integers method of a default_rng() instance instead; please see the random-quick-start.

Parameters
  • low (int or array-like of ints) – Lowest (signed) integers to be drawn from the distribution (unless high=None, in which case this parameter is one above the highest such integer).

  • high (int or array-like of ints, optional) – If provided, one above the largest (signed) integer to be drawn from the distribution (see above for behavior if high=None). If array-like, must contain integer values

  • size (int or tuple of ints, optional) – Output shape. If the given shape is, e.g., (m, n, k), then m * n * k samples are drawn. Default is None, in which case a single value is returned.

  • dtype (dtype, optional) –

    Desired dtype of the result. Byteorder must be native. The default value is int.

    New in version 1.11.0.

Returns

out – size-shaped array of random integers from the appropriate distribution, or a single such random int if size not provided.

Return type

int or ndarray of ints

See also

random_integers

similar to randint, only for the closed interval [low, high], and 1 is the lowest value if high is omitted.

Generator.integers

which should be used for new code.

Examples

>>> np.random.randint(2, size=10)
array([1, 0, 0, 0, 1, 1, 0, 0, 1, 0]) # random
>>> np.random.randint(1, size=10)
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])

Generate a 2 x 4 array of ints between 0 and 4, inclusive:

>>> np.random.randint(5, size=(2, 4))
array([[4, 0, 2, 1], # random
       [3, 2, 2, 0]])

Generate a 1 x 3 array with 3 different upper bounds

>>> np.random.randint(1, [3, 5, 10])
array([2, 2, 9]) # random

Generate a 1 by 3 array with 3 different lower bounds

>>> np.random.randint([1, 5, 7], 10)
array([9, 8, 7]) # random

Generate a 2 by 4 array using broadcasting with dtype of uint8

>>> np.random.randint([1, 3, 5, 7], [[10], [20]], dtype=np.uint8)
array([[ 8,  6,  9,  7], # random
       [ 1, 16,  9, 12]], dtype=uint8)