# Tutorial: Build a scalable model

In this tutorial, we are going to build a scalable version of the Sellar1 MDODiscipline. As a reminder, its expression reads:

$$y_1(x_{shared},x_{local},y_2)=\sqrt{x_{shared,1}^2+x_{shared,2}+x_{local}-0.2y_2}.$$

## 1. Overview

The expected output sizes are specified in a dictionary whose keys are the output names and whose values are the sizes. Here, we take 5 as the dimension of all outputs (here "y_1", which is of dimension 1 in the standard Sellar1).
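As a minimal sketch of this convention (the variable name `output_sizes` is illustrative):

```python
# Output-sizes dictionary: keys are output names, values are the desired
# dimensions. Here, the single Sellar1 output "y_1" is given dimension 5.
output_sizes = {"y_1": 5}
print(output_sizes["y_1"])
```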

## 2. Creation of the discipline

First of all, we create the reference MDODiscipline with the help of the create_discipline API function and the argument "Sellar1". As a reminder, this argument refers to the class Sellar1, which is known internally to GEMSEO by means of the DisciplinesFactory.

```python
from gemseo.api import create_discipline

sellar = create_discipline("Sellar1")
```


**Tip**

It is possible to implement a new discipline in a directory outside of the GEMSEO directory (see Extending GEMSEO). In that case, the user creates a new class NewMDODiscipline inheriting from MDODiscipline, and the code reads:

```python
from gemseo.api import create_discipline

newmdodiscipline = create_discipline("NewMDODiscipline")
```


Then, the scalable discipline can be created.

## 3. Creation of the scalable discipline

In GEMSEO, scalable disciplines are defined by the class ScalableDiscipline that inherits from MDODiscipline.

Such a scalable discipline takes mandatory arguments, namely those used in the construction example below (hdf_file_path, hdf_node_path and sizes), and optional ones:

• a comp_dep matrix (default: None) establishing the selection of a single original component for each scalable component,

• an inpt_dep matrix (default: None) establishing the dependency of the outputs with respect to the inputs,

• a force_input_dependency flag (default: False) forcing each output to depend on at least one input,

• an allow_unused_inputs flag (default: False) allowing an input to have no dependency with any output,

• a seed (default: 1) for the random number generator.
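To make these dependency structures concrete, here is a hypothetical sketch of what such matrices could look like as boolean NumPy arrays; the shapes and conventions are assumptions for illustration, not taken from the GEMSEO implementation:

```python
import numpy as np

# Illustrative dimensions for a scalable output and its inputs.
n_out, n_in = 5, 5

# inpt_dep[i, j] is read here as: the i-th output component depends on
# the j-th input component (assumed convention).
inpt_dep = np.eye(n_out, n_in, dtype=bool)

# comp_dep[i, k] selects the k-th original component for the i-th scalable
# component; "y_1" has a single component in the standard Sellar1.
comp_dep = np.ones((n_out, 1), dtype=bool)

print(inpt_dep.sum(axis=1))  # each output component depends on one input here
```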

### 3.1. Sample the discipline

The hdf_file_path file is built by applying the create_scenario() API function to the MDODiscipline instance sellar, with a DOE scenario type and the following DesignSpace:

```python
from gemseo.problems.sellar.sellar_design_space import SellarDesignSpace

design_space = SellarDesignSpace()
```


The DOE algorithm is 'DiagonalDOE' and uses a sample of size n_samples=30:

```python
from gemseo.api import create_scenario

sellar.set_cache_policy(cache_type='HDF5_cache', cache_tolerance=1e-6,
                        cache_hdf_file='sellar.hdf5')
output = sellar.get_output_data_names()[0]
scenario = create_scenario([sellar], 'DisciplinaryOpt', output,
                           design_space, scenario_type='DOE')
scenario.execute({'algo': 'DiagonalDOE', 'n_samples': 30})
```


A DiagonalDOE consists of equispaced points located on the diagonal of the design space.
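The sampling pattern can be sketched with plain NumPy (this is an illustration of the idea, not the GEMSEO implementation):

```python
import numpy as np

def diagonal_doe(lower, upper, n_samples):
    """Equispaced points on the diagonal of the box [lower, upper]."""
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    fractions = np.linspace(0.0, 1.0, n_samples)  # 0, 1/(n-1), ..., 1
    # All variables move together from their lower to their upper bounds.
    return lower + np.outer(fractions, upper - lower)

samples = diagonal_doe([0.0, -1.0], [2.0, 1.0], 5)
print(samples)
```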

### 3.2. Define the input and output dimensions

A scalable discipline is a version of a discipline whose inputs and outputs can take arbitrary dimensions:

```python
# Set the size of all input and output variables to 5
# - total number of input components: n_x = number_of_inputs * variables_sizes
# - total number of output components: n_y = number_of_outputs * variables_sizes
variables_sizes = 5
input_names = sellar.get_input_data_names()
output_names = sellar.get_output_data_names()
sizes = {name: variables_sizes for name in input_names + output_names}
```


The sizes of the inputs and outputs are specified in a dictionary at the construction of the ScalableDiscipline instance.

Lastly, we define the density factor for the matrix S describing the dependencies between the inputs and the outputs of the discipline:

```python
# Density factor for the dependency matrix S
fill_factor = 0.6
```
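To give an intuition of what a density factor means, here is a hypothetical sketch of how a random dependency matrix with a given fill factor could be drawn; this is for illustration only and is not GEMSEO's internal code:

```python
import numpy as np

fill_factor = 0.6
rng = np.random.default_rng(1)  # fixed seed, so the draw is reproducible

# Each entry of S is True with probability fill_factor, so on average 60%
# of the output/input pairs are coupled.
n_outputs, n_inputs = 5, 15
S = rng.random((n_outputs, n_inputs)) < fill_factor
print(S.mean())
```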


From this, we can create the ScalableDiscipline by means of the API function create_discipline():

```python
# Creation of the scalable discipline
scalable_sellar = create_discipline('ScalableDiscipline',
                                    hdf_file_path='sellar.hdf5',
                                    hdf_node_path='Sellar1',
                                    sizes=sizes,
                                    fill_factor=fill_factor)
```


## 4. Run the scalable discipline

After its creation, the scalable discipline can be executed by means of the MDODiscipline.execute() method. For this, we build an input dictionary. Remember that the inputs and outputs shall all lie in $(0,1)$ (see The scalable problem). Here, we take $(0., 0.2, 0.4, 0.6, 0.8)$ for all inputs of the discipline ("x_shared", "x_local" and "y_2").

```python
from numpy import arange

input_data = {name: arange(variables_sizes) / float(variables_sizes)
              for name in input_names}
print(scalable_sellar.execute(input_data)['y_1'])
```


The output of the discipline is:

```
[0.64353709  0.3085585   0.36497918  0.48043751  0.56740874]
```


of dimension 5, as expected.

Input arrays of arbitrary dimension can be provided. Here, only three components per input and output are considered:

```python
variables_sizes = 3
sizes = {name: variables_sizes for name in input_names + output_names}
scalable_sellar = create_discipline('ScalableDiscipline',
                                    hdf_file_path='sellar.hdf5',
                                    hdf_node_path='Sellar1',
                                    sizes=sizes,
                                    fill_factor=fill_factor)
input_data = {name: arange(variables_sizes) / float(variables_sizes)
              for name in input_names}

print(scalable_sellar.execute(input_data)['y_1'])
```


The scalable discipline outputs different values:

```
[ 0.45727936  0.45727936  0.52084604]
```


We can see that multiple components of the output may be identical, because the original Sellar problem has very low dimensions (1 or 2). Therefore, the combinatorial effects that the scalable methodology uses to generate the outputs are not fully exploited (see The scalable problem). In higher dimensions, we obtain distinct output components.

## 5. Perspectives

This ScalableDiscipline can now be included, like any other discipline, in an MDOScenario to compare the scalability of MDO or coupling strategies.

Such a ScalableDiscipline has two main advantages:

• The execution time remains very small, even for thousands of inputs and outputs.

• Analytical derivatives (Jacobian matrices) are available, even if the original discipline has no analytical derivatives.