
# GEMSEO in 10 minutes

## Introduction

This is a short introduction to GEMSEO, geared mainly toward new users. In this example, we will set up a simple Multi-disciplinary Design Optimization (MDO) problem based on an analytic test case.

## Imports

First, we will import all the classes and functions needed for this tutorial. The initial `__future__` import makes the tutorial run with either a Python 2 or a Python 3 interpreter.

```
from __future__ import division, unicode_literals
from math import exp
from matplotlib import pyplot as plt
from numpy import array, ones
```

Finally, the following functions are imported from the GEMSEO API. They will be used later to instantiate GEMSEO objects.

```
from gemseo.api import (
    configure_logger,
    create_design_space,
    create_discipline,
    create_scenario,
)

configure_logger()
```

Out:

```
<RootLogger root (INFO)>
```

These imports enable us to compute mathematical expressions and to instantiate NumPy arrays, which GEMSEO uses to store numerical data at a low level. If you are not comfortable with NumPy, please have a look at the NumPy Quickstart tutorial.
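For instance, the design variable values passed to GEMSEO later in this tutorial are one-dimensional NumPy arrays; a minimal illustration:

```python
from numpy import array, ones

# A one-element array holding the initial value of a design variable.
x_local = ones(1)          # array([1.])
x_shared_1 = array([4.0])  # array([4.])

# GEMSEO stores every numerical input and output in this form,
# so scalar quantities are wrapped in length-1 arrays.
print(x_local.shape, x_shared_1[0])
```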

## A simple MDO test case: the Sellar problem

We will consider in this example Sellar's problem:

$$
\min \; obj = x_{local}^2 + x_{shared,2} + y_1^2 + e^{-y_2}
$$

where the coupling variables are

$$
y_1 = \sqrt{x_{shared,1}^2 + x_{shared,2} + x_{local} - 0.2\, y_2}
$$

and

$$
y_2 = |y_1| + x_{shared,1} + x_{shared,2},
$$

and where the general constraints are

$$
c_1 = 3.16 - y_1^2 \le 0, \qquad c_2 = y_2 - 24 \le 0.
$$

## Definition of the disciplines using Python functions

Sellar's problem is composed of two disciplines and an objective function. As they are expressed analytically, it is possible to write them as simple Python functions whose parameters are the design variables and the coupling variables. The returned values may be the outputs of a discipline, the values of the constraints, or the value of the objective function. Their definitions read:

```
def f_sellar_system(x_local=1.0, x_shared_2=3.0, y_1=1.0, y_2=1.0):
    """Objective function."""
    obj = x_local ** 2 + x_shared_2 + y_1 ** 2 + exp(-y_2)
    c_1 = 3.16 - y_1 ** 2
    c_2 = y_2 - 24.0
    return obj, c_1, c_2


def f_sellar_1(x_local=1.0, y_2=1.0, x_shared_1=1.0, x_shared_2=3.0):
    """Function for discipline 1."""
    y_1 = (x_shared_1 ** 2 + x_shared_2 + x_local - 0.2 * y_2) ** 0.5
    return y_1


def f_sellar_2(y_1=1.0, x_shared_1=1.0, x_shared_2=3.0):
    """Function for discipline 2."""
    y_2 = abs(y_1) + x_shared_1 + x_shared_2
    return y_2
```
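As a quick sanity check (not part of the original tutorial), we can evaluate the two coupled disciplines in sequence at their default inputs; assuming the definitions above:

```python
def f_sellar_1(x_local=1.0, y_2=1.0, x_shared_1=1.0, x_shared_2=3.0):
    """Discipline 1: computes the coupling variable y_1."""
    return (x_shared_1 ** 2 + x_shared_2 + x_local - 0.2 * y_2) ** 0.5


def f_sellar_2(y_1=1.0, x_shared_1=1.0, x_shared_2=3.0):
    """Discipline 2: computes the coupling variable y_2."""
    return abs(y_1) + x_shared_1 + x_shared_2


# Chain the disciplines once, starting from the default y_2 = 1.0.
y_1 = f_sellar_1()     # sqrt(1 + 3 + 1 - 0.2) = sqrt(4.8) ≈ 2.19
y_2 = f_sellar_2(y_1)  # ≈ 2.19 + 1 + 3 ≈ 6.19
print(y_1, y_2)
```

Note that the y_2 produced by discipline 2 (≈ 6.19) differs from the y_2 = 1.0 fed into discipline 1: a single pass does not make the coupling consistent, which is precisely what the MDA introduced later takes care of.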

These Python functions can easily be converted into GEMSEO `MDODiscipline` objects by using the `AutoPyDiscipline` discipline. It enables the automatic wrapping of a Python function into a GEMSEO `MDODiscipline` by simply passing a reference to the function to be wrapped; GEMSEO handles the wrapping and the grammar creation under the hood. The `AutoPyDiscipline` discipline can be instantiated using the `create_discipline()` function from the GEMSEO API:

```
disc_sellar_system = create_discipline("AutoPyDiscipline", py_func=f_sellar_system)
disc_sellar_1 = create_discipline("AutoPyDiscipline", py_func=f_sellar_1)
disc_sellar_2 = create_discipline("AutoPyDiscipline", py_func=f_sellar_2)
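Under the hood, `AutoPyDiscipline` derives the discipline's input grammar from the wrapped function's signature. As a rough, hypothetical illustration of that idea (not GEMSEO's actual implementation), the standard `inspect` module can recover the input names and default values:

```python
import inspect


def f_sellar_2(y_1=1.0, x_shared_1=1.0, x_shared_2=3.0):
    """Function for discipline 2."""
    return abs(y_1) + x_shared_1 + x_shared_2


# Recover the input names and defaults from the signature,
# much like an automatic wrapper would.
signature = inspect.signature(f_sellar_2)
inputs = {name: p.default for name, p in signature.parameters.items()}
print(inputs)  # {'y_1': 1.0, 'x_shared_1': 1.0, 'x_shared_2': 3.0}
```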
```

Note that it is possible to define the Sellar disciplines by subclassing the `MDODiscipline` class and implementing the constructor and the `_run` method by hand. Although this takes more time, it may also provide more flexibility and more options. This approach is illustrated in the Sellar from scratch tutorial.

```
# We then create a list of disciplines, which will be used later to create an
# MDOScenario:
disciplines = [disc_sellar_system, disc_sellar_1, disc_sellar_2]
```

Note

For the sake of clarity, these disciplines are overly simple. Yet, GEMSEO enables the definition of much more complex disciplines, such as wrappers for complex commercial off-the-shelf (COTS) tools. Check out the other tutorials and our publications list for more information.

## Definition of the design space

In order to define an `MDOScenario`, a design space has to be defined by creating a `DesignSpace` object. The design space definition reads:

```
design_space = create_design_space()
design_space.add_variable("x_local", 1, l_b=0.0, u_b=10.0, value=ones(1))
design_space.add_variable("x_shared_1", 1, l_b=-10, u_b=10.0, value=array([4.0]))
design_space.add_variable("x_shared_2", 1, l_b=0.0, u_b=10.0, value=array([3.0]))
design_space.add_variable("y_1", 1, l_b=-100.0, u_b=100.0, value=ones(1))
design_space.add_variable("y_2", 1, l_b=-100.0, u_b=100.0, value=ones(1))
```

## Definition of the MDO scenario

Once the disciplines and the design space have been defined, we can create our MDO scenario by using the `create_scenario()` API call. In this simple example, we use the Multi-Disciplinary Feasible (MDF) formulation, in which the Multi-Disciplinary Analyses (MDA) are carried out with the Gauss-Seidel method. The scenario definition reads:

```
scenario = create_scenario(
    disciplines,
    formulation="MDF",
    main_mda_class="MDAGaussSeidel",
    objective_name="obj",
    design_space=design_space,
)
```
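To give an intuition of what the Gauss-Seidel MDA does at each design point, here is a standalone fixed-point sketch of the coupling resolution (plain Python, independent of GEMSEO's actual MDA implementation):

```python
def f_sellar_1(x_local=1.0, y_2=1.0, x_shared_1=1.0, x_shared_2=3.0):
    """Discipline 1: computes y_1."""
    return (x_shared_1 ** 2 + x_shared_2 + x_local - 0.2 * y_2) ** 0.5


def f_sellar_2(y_1=1.0, x_shared_1=1.0, x_shared_2=3.0):
    """Discipline 2: computes y_2."""
    return abs(y_1) + x_shared_1 + x_shared_2


# Gauss-Seidel iterations: evaluate the disciplines in sequence,
# feeding each one the most recent value of the other's output,
# until the coupling variables stop changing.
y_2 = 1.0
for _ in range(50):
    y_1 = f_sellar_1(y_2=y_2)
    y_2_new = f_sellar_2(y_1)
    if abs(y_2_new - y_2) < 1e-10:
        break
    y_2 = y_2_new

print(y_1, y_2)  # coupling-consistent values at the default design point
```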

It can be noted that neither a workflow nor a dataflow has been defined. By design, there is no need to explicitly define the workflow and the dataflow in GEMSEO:

- the workflow is determined from the MDO formulation used;
- the dataflow is determined from the variable names used in the disciplines. It is therefore of the utmost importance to be consistent when choosing and using variable names in the disciplines.

Warning

As the workflow and the dataflow are implicitly determined by GEMSEO, set-up errors may easily occur. Although it is not performed in this example, it is strongly advised to:

- check the interfaces between the disciplines using an N2 diagram;
- check the MDO process using an XDSM representation.

## Setting the constraints

Most MDO problems are constrained. Our problem has two inequality constraints, and their declaration reads:

```
scenario.add_constraint("c_1", "ineq")
scenario.add_constraint("c_2", "ineq")
```

## Execution of the scenario

The scenario is now complete and ready to be executed. When running the optimization process, the user can choose the optimization algorithm and the maximum number of iterations to perform. The execution of the scenario reads:

```
scenario.execute(input_data={"max_iter": 10, "algo": "SLSQP"})
```

Out:

```
INFO - 14:43:20:
INFO - 14:43:20: *** Start MDO Scenario execution ***
INFO - 14:43:20: MDOScenario
INFO - 14:43:20: Disciplines: f_sellar_system f_sellar_1 f_sellar_2
INFO - 14:43:20: MDOFormulation: MDF
INFO - 14:43:20: Algorithm: SLSQP
INFO - 14:43:20: Optimization problem:
INFO - 14:43:20: Minimize: obj(x_local, x_shared_1, x_shared_2)
INFO - 14:43:20: With respect to: x_local, x_shared_1, x_shared_2
INFO - 14:43:20: Subject to constraints:
INFO - 14:43:20: c_1(x_local, x_shared_1, x_shared_2) <= 0.0
INFO - 14:43:20: c_2(x_local, x_shared_1, x_shared_2) <= 0.0
INFO - 14:43:20: Design space:
INFO - 14:43:20: +------------+-------------+-------+-------------+-------+
INFO - 14:43:20: | name | lower_bound | value | upper_bound | type |
INFO - 14:43:20: +------------+-------------+-------+-------------+-------+
INFO - 14:43:20: | x_local | 0 | 1 | 10 | float |
INFO - 14:43:20: | x_shared_1 | -10 | 4 | 10 | float |
INFO - 14:43:20: | x_shared_2 | 0 | 3 | 10 | float |
INFO - 14:43:20: +------------+-------------+-------+-------------+-------+
INFO - 14:43:20: Optimization: 0%| | 0/10 [00:00<?, ?it]
/home/docs/checkouts/readthedocs.org/user_builds/gemseo/conda/stable/lib/python3.8/site-packages/scipy/sparse/linalg/dsolve/linsolve.py:407: SparseEfficiencyWarning: splu requires CSC matrix format
warn('splu requires CSC matrix format', SparseEfficiencyWarning)
INFO - 14:43:20: Optimization: 50%|█████ | 5/10 [00:00<00:00, 84.03 it/sec, obj=3.18]
INFO - 14:43:21: Optimization: 70%|███████ | 7/10 [00:00<00:00, 53.63 it/sec, obj=3.18]
INFO - 14:43:21: Optimization result:
INFO - 14:43:21: Objective value = 3.1833939865709007
INFO - 14:43:21: The result is feasible.
INFO - 14:43:21: Status: 8
INFO - 14:43:21: Optimizer message: Positive directional derivative for linesearch
INFO - 14:43:21: Number of calls to the objective function by the optimizer: 8
INFO - 14:43:21: Constraints values:
INFO - 14:43:21: c_1 = 1.5472068071176182e-12
INFO - 14:43:21: c_2 = -20.244723726272028
INFO - 14:43:21: Design space:
INFO - 14:43:21: +------------+-------------+-------------------+-------------+-------+
INFO - 14:43:21: | name | lower_bound | value | upper_bound | type |
INFO - 14:43:21: +------------+-------------+-------------------+-------------+-------+
INFO - 14:43:21: | x_local | 0 | 0 | 10 | float |
INFO - 14:43:21: | x_shared_1 | -10 | 1.977637390265288 | 10 | float |
INFO - 14:43:21: | x_shared_2 | 0 | 0 | 10 | float |
INFO - 14:43:21: +------------+-------------+-------------------+-------------+-------+
INFO - 14:43:21: *** MDO Scenario run terminated in 0:00:00.193028 ***
{'max_iter': 10, 'algo': 'SLSQP'}
```

The scenario converged after 7 iterations. Useful information can be found in the standard output, as shown below:

```
*** Start MDO Scenario execution ***
MDOScenario
Disciplines: f_sellar_system f_sellar_1 f_sellar_2
MDOFormulation: MDF
Algorithm: SLSQP
Optimization problem:
Minimize: obj(x_local, x_shared_1, x_shared_2)
With respect to: x_local, x_shared_1, x_shared_2
Subject to constraints:
c_1(x_local, x_shared_1, x_shared_2) <= 0.0
c_2(x_local, x_shared_1, x_shared_2) <= 0.0
Design Space:
+------------+-------------+-------+-------------+-------+
| name | lower_bound | value | upper_bound | type |
+------------+-------------+-------+-------------+-------+
| x_local | 0 | 1 | 10 | float |
| x_shared_1 | -10 | 4 | 10 | float |
| x_shared_2 | 0 | 3 | 10 | float |
+------------+-------------+-------+-------------+-------+
Optimization: 0%| | 0/10 [00:00<?, ?it]
Optimization: 70%|███████ | 7/10 [00:00<00:00, 109.54 it/sec, obj=3.18]
Optimization result:
Objective value = 3.1833939865673546
The result is feasible.
Status: 8
Optimizer message: Positive directional derivative for linesearch
Number of calls to the objective function by the optimizer: 8
Constraints values w.r.t. 0:
c_1 = 5.140776693224325e-12
c_2 = -20.24472372627405
Design Space:
+------------+-------------+-------------------+-------------+-------+
| name | lower_bound | value | upper_bound | type |
+------------+-------------+-------------------+-------------+-------+
| x_local | 0 | 0 | 10 | float |
| x_shared_1 | -10 | 1.977637390264277 | 10 | float |
| x_shared_2 | 0 | 0 | 10 | float |
+------------+-------------+-------------------+-------------+-------+
*** MDO Scenario run terminated in 0:00:00.099332 ***
```

Note

GEMSEO provides many optimization algorithms and options. An exhaustive list of the algorithms available in GEMSEO can be found in the Options for Optimization algorithms section.

## Post-processing the results

Post-processing tools, such as plots showing the evolution of the objective function, the design variables, or the constraints, can be extremely useful. The convergence of the objective function, of the design variables, and of the inequality constraints can be observed in the following plots. Many other post-processing tools are available in GEMSEO and are described in Post-processing.

```
scenario.post_process("OptHistoryView", save=False, show=False)
# Workaround for HTML rendering, instead of ``show=True``
plt.show()
```

Note

Such post-processings can be exported to PDF format by setting `save` to `True`, potentially with additional settings (see the `Scenario.post_process()` options).

## What's next?

You have completed a short introduction to GEMSEO. You can now look at the tutorials, which exhibit more complex use cases, and at the documentation to discover the many features and options of GEMSEO.

**Total running time of the script:** ( 0 minutes 1.147 seconds)