The scalable problem


In this section we describe the scalable problem, or scalable discipline, feature of GEMSEO, based on the paper [VGM17]:

@conference{VGM2017,
   title = {On the Consequences of the "No Free Lunch" Theorem for Optimization on the Choice of {MDO} Architecture},
   booktitle = {Proceedings of the AIAA SciTech Conference},
   year = {2017},
   month = {January},
   author = {Charlie Vanaret and Francois Gallard and Joaquim R. R. A. Martins}
}

See also

The scalable model is implemented in the ScalableDiscipline class, which inherits from MDODiscipline.

See also

The scalable model is illustrated in several examples, including Scalable diagonal discipline and Scalable problem.

Based on computationally cheap disciplines, the scalable problem makes it possible to choose an MDO formulation:

  • for the problem from which the scalable problem is derived, or

  • for a family of problems having:

    • a greater number of design and coupling variables and

    • common properties with the original problem.

According to the authors, this scalable problem “preserve[s] the functional characteristics of the original problem and they proved useful in performing a rapid benchmarking of MDO formulation”. This “provides insights on the scalability of MDO architectures with respect to the dimensions of the problem. This may be achieved without having to execute the MDO processes with the original models. Our methodology thus requires a limited number of evaluations of the original models that is independent of the desired dimensions of the design and the coupling variables of the scalable problem.”


The proposed methodology

  1. builds a surrogate model \(\Phi^{(int)}\) for each discipline \(\Phi\) of the initial problem, from a limited set \(T\) of evaluation points,

  2. extrapolates this surrogate model to an arbitrary dimension, yielding \(\Phi^{(ext)}\).

The methodology preserves the interface of the initial problem, that is the names of the inputs (design variables) and the outputs (coupling and state variables). Any high-fidelity discipline of the initial problem may therefore be replaced by a cheap scalable component generated by the methodology. Strong properties are guaranteed by the methodology.

One-dimensional restriction

The original model \(\Phi:\mathbb{R}^n\rightarrow\mathbb{R}^m\) is restricted to a one-dimensional function \(\Phi^{(1d)}:[0,1]\rightarrow\mathbb{R}^m\) by evaluating it along the diagonal of the domain \([\underline{x}_1,\overline{x}_1]\times\ldots\times[\underline{x}_n,\overline{x}_n]\):

\[\Phi^{(1d)}(t)=\Phi\left(\underline{x}+t(\overline{x}-\underline{x})\right)\]

For any component \(i\in\{1,\ldots,m\}\) of \(\Phi^{(1d)}\), the direct image of \(T\), a finite subset of \([0,1]\) with cardinality \(|T|\), is:

\[\Phi_i^{(1d)}(T) = \left\{\Phi_i^{(1d)}(t)\,\middle|\,t\in T\right\}\]

mapping from \([0,1]\) to \([m_i, M_i]\), where \(m_i\) and \(M_i\) are respectively the minimal and maximal values reached by \(\Phi_i^{(1d)}\) over \(T\).

The scaled version of \(\Phi_i^{(1d)}(T)\) is

\[\Phi_i^{(s1d)}(T) = \left\{\frac{\Phi_i^{(1d)}(t)-m_i}{M_i-m_i}\,\middle|\,t\in T\right\}\]

mapping from \([0,1]\) to \([0,1]\).

Then, each component \(i\) of \(\Phi^{(1d)}\) is approximated by a polynomial interpolation \(\Phi_i^{(int)}\) over the data \(\left(T,\Phi_i^{(s1d)}(T)\right)\).
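The restriction, scaling and interpolation steps above can be sketched with NumPy. This is an illustrative example only, not the GEMSEO implementation: the toy discipline `phi`, its bounds and the polynomial degree are all assumptions.

```python
import numpy as np

# Illustrative sketch only (not the GEMSEO implementation).

def phi(x):
    """A toy discipline Phi: R^2 -> R^2 (hypothetical example)."""
    return np.array([x[0] ** 2 + x[1], np.sin(x[0]) * (1.0 + x[1])])

lower = np.array([0.0, -1.0])  # hypothetical lower bounds of the input box
upper = np.array([2.0, 3.0])   # hypothetical upper bounds of the input box

def phi_1d(t):
    """One-dimensional restriction of phi along the diagonal of the box."""
    return phi(lower + t * (upper - lower))

# Evaluate on a finite subset T of [0, 1].
T = np.linspace(0.0, 1.0, 11)
values = np.array([phi_1d(t) for t in T])  # shape (|T|, m)

# Min-max scaling of each component i from [m_i, M_i] to [0, 1].
m_i = values.min(axis=0)
M_i = values.max(axis=0)
scaled = (values - m_i) / (M_i - m_i)

# Polynomial interpolation of each scaled component over (T, scaled).
interpolants = [np.polynomial.Polynomial.fit(T, scaled[:, i], deg=4)
                for i in range(scaled.shape[1])]
```

Each `interpolants[i]` then plays the role of \(\Phi_i^{(int)}\), a cheap map from \([0,1]\) to \([0,1]\).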

Input-output dependency

Dependencies between inputs and outputs can be represented by a sparse dependency matrix \(S\) where:

  • each block row represents a function of the problem (constraint or coupling),

  • each block column represents an input (design variable or coupling),

  • a nonzero element represents the dependency of a particular component of a function with respect to a particular component of an input.

In practice, the dependencies between inputs and outputs are not precisely known. Consequently, the matrix \(S\) is randomly generated block by block by means of a density factor (the filling of a block is proportional to this density factor).

Furthermore, while initially taken in \(\mathcal{M}_{m,n}(\mathbb{R})\), this matrix \(S\) can be taken in \(\mathcal{M}_{n_y,n_x}(\mathbb{R})\), where the number of inputs \(n_x\) and the number of outputs \(n_y\) of the scalable model are freely chosen by the user.
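A minimal sketch of such a block-random dependency matrix, assuming binary blocks whose filling is driven by the density factor; the block sizes and the repair of empty rows are illustrative choices, not the GEMSEO implementation:

```python
import numpy as np

def random_dependency_matrix(row_sizes, col_sizes, density, rng):
    """Build a random binary dependency matrix S block by block.

    row_sizes: number of components of each output function,
    col_sizes: number of components of each input variable,
    density:   filling ratio of every block, in [0, 1].
    """
    S = np.zeros((sum(row_sizes), sum(col_sizes)), dtype=int)
    r0 = 0
    for nr in row_sizes:
        c0 = 0
        for nc in col_sizes:
            # Each entry of the block is nonzero with probability `density`.
            S[r0:r0 + nr, c0:c0 + nc] = rng.random((nr, nc)) < density
            c0 += nc
        r0 += nr
    # Ensure every row has at least one nonzero entry so that the
    # extrapolation average is well defined.
    for i in range(S.shape[0]):
        if not S[i].any():
            S[i, rng.integers(S.shape[1])] = 1
    return S

rng = np.random.default_rng(0)
S = random_dependency_matrix([3, 2], [2, 4], density=0.5, rng=rng)
```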


Once \(n_x\) and \(n_y\) are chosen, we build the function \(\Phi^{(ext)}:[0,1]^{n_x}\rightarrow[0,1]^{n_y}\) which extrapolates \(\Phi^{(int)}:[0,1]\rightarrow[0,1]^{m}\) to \(n_y\) dimensions:

\[\Phi_i^{(ext)}(x)=\frac{1}{|S_{i.}|}\sum_{j\in S_{i.}} \Phi_{k_i}^{(int)}(x_j)\]


where:

  • \(S_{i.}\) denotes the set of indices of the nonzero elements of the \(i\)-th row of the dependency matrix \(S\),

  • \(k_i\) is a uniform random variable over \(\left\{1,\ldots,m\right\}\).
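The extrapolation formula can be sketched as follows. This is illustrative: `make_extrapolation`, its arguments and the toy interpolants in the usage example are assumptions, not the GEMSEO API.

```python
import numpy as np

def make_extrapolation(interpolants, S, rng):
    """Build Phi^(ext): component i averages a randomly drawn 1D
    interpolant Phi_{k_i}^(int) over the inputs x_j on which row i of
    the dependency matrix S depends."""
    m = len(interpolants)
    n_y = S.shape[0]
    # Draw k_i uniformly in {0, ..., m-1}, once per output component.
    k = rng.integers(0, m, size=n_y)
    # Nonzero column indices of each row, i.e. S_{i.}.
    rows = [np.flatnonzero(S[i]) for i in range(n_y)]

    def phi_ext(x):
        return np.array([
            np.mean([interpolants[k[i]](x[j]) for j in rows[i]])
            for i in range(n_y)
        ])

    return phi_ext

# Usage with toy interpolants mapping [0, 1] to [0, 1].
rng = np.random.default_rng(1)
interpolants = [lambda t: t, lambda t: t ** 2]
S = np.array([[1, 0, 1],
              [0, 1, 0]])
phi_ext = make_extrapolation(interpolants, S, rng)
y = phi_ext(np.array([0.0, 0.5, 1.0]))
```

Because each output is an average of values in \([0,1]\), the extrapolation stays in \([0,1]^{n_y}\) whatever \(n_x\) and \(n_y\).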


The methodology guarantees the following properties:

  • Existence of a solution to the coupling problem. An equilibrium between all disciplines exists for any value of the design variables \(x\).

  • Preservation of ratio. When \(n_y\) approaches \(+\infty\), the ratio of components of the original functions is preserved.

  • Existence of a minimum. There exists a feasible solution to the scalable problem, for any dimension of inputs and outputs.

  • Existence of derivatives. The scalable extrapolations are continuously differentiable with respect to their inputs.

  • Existence of bounds on the target coupling variables. All inputs and outputs belong to \([0; 1]\), which ensures that all optimization variables are bounded, in particular coupling variables in IDF.
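The last property can be checked numerically with a toy setup; all names and values below are illustrative assumptions. With 1D interpolants mapping into \([0,1]\), the averaged extrapolation stays in \([0,1]\) for any chosen dimensions \(n_x\) and \(n_y\):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 1D interpolants clipped to [0, 1].
interpolants = [lambda t: np.clip(3.0 * t * (1.0 - t), 0.0, 1.0),
                lambda t: np.clip(t ** 3, 0.0, 1.0)]

n_x, n_y = 7, 5                                 # freely chosen dimensions
S = (rng.random((n_y, n_x)) < 0.4).astype(int)  # random dependency matrix
S[S.sum(axis=1) == 0, 0] = 1                    # no empty row
k = rng.integers(0, len(interpolants), size=n_y)

def phi_ext(x):
    """Average the drawn interpolant over the inputs row i depends on."""
    return np.array([np.mean([interpolants[k[i]](x[j])
                              for j in np.flatnonzero(S[i])])
                     for i in range(n_y)])

# Sample random inputs in [0, 1]^{n_x}; every output lies in [0, 1]^{n_y},
# so all optimization variables, including IDF couplings, remain bounded.
samples = rng.random((100, n_x))
outputs = np.array([phi_ext(x) for x in samples])
```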