# Linear regression

We want to approximate a discipline with two inputs and two outputs:

• $$y_1=1+2x_1+3x_2$$

• $$y_2=-1-2x_1-3x_2$$

over the unit square $$[0,1]\times[0,1]$$.
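As a quick sanity check, the two target outputs can be evaluated directly in plain Python (a standalone sketch, independent of GEMSEO):

```python
def y_1(x_1, x_2):
    # y_1 = 1 + 2*x_1 + 3*x_2
    return 1.0 + 2.0 * x_1 + 3.0 * x_2


def y_2(x_1, x_2):
    # y_2 = -1 - 2*x_1 - 3*x_2, i.e. exactly -y_1
    return -1.0 - 2.0 * x_1 - 3.0 * x_2


print(y_1(1.0, 2.0))  # 9.0
print(y_2(1.0, 2.0))  # -9.0
```

These are the same values the learned model should reproduce at the end of the tutorial.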

## Import

from __future__ import division, unicode_literals

from numpy import array

from gemseo.api import (
    configure_logger,
    create_design_space,
    create_discipline,
    create_scenario,
)
from gemseo.mlearning.api import create_regression_model

configure_logger()


Out:

<RootLogger root (INFO)>


## Create the discipline to learn

We can implement this analytic discipline by means of the AnalyticDiscipline class.

expressions_dict = {"y_1": "1+2*x_1+3*x_2", "y_2": "-1-2*x_1-3*x_2"}
discipline = create_discipline(
    "AnalyticDiscipline", name="func", expressions_dict=expressions_dict
)


## Create the input sampling space

We create the input sampling space by adding the variables one by one.

design_space = create_design_space()
design_space.add_variable("x_1", l_b=0.0, u_b=1.0)
design_space.add_variable("x_2", l_b=0.0, u_b=1.0)


## Create the learning set

We can build a learning set by means of a DOEScenario with a full factorial design of experiments. The number of samples can be set to 9, for example, which gives 3 levels per input.

discipline.set_cache_policy(discipline.MEMORY_FULL_CACHE)
scenario = create_scenario(
    [discipline], "DisciplinaryOpt", "y_1", design_space, scenario_type="DOE"
)
scenario.execute({"algo": "fullfact", "n_samples": 9})


Out:

    INFO - 14:41:25:
INFO - 14:41:25: *** Start DOE Scenario execution ***
INFO - 14:41:25: DOEScenario
INFO - 14:41:25:    Disciplines: func
INFO - 14:41:25:    MDOFormulation: DisciplinaryOpt
INFO - 14:41:25:    Algorithm: fullfact
INFO - 14:41:25: Optimization problem:
INFO - 14:41:25:    Minimize: y_1(x_1, x_2)
INFO - 14:41:25:    With respect to: x_1, x_2
INFO - 14:41:25: Full factorial design required. Number of samples along each direction for a design vector of size 2 with 9 samples: 3
INFO - 14:41:25: Final number of samples for DOE = 9 vs 9 requested
INFO - 14:41:25: DOE sampling:   0%|          | 0/9 [00:00<?, ?it]
INFO - 14:41:25: DOE sampling: 100%|██████████| 9/9 [00:00<00:00, 579.57 it/sec, obj=6]
INFO - 14:41:25: Optimization result:
INFO - 14:41:25: Objective value = 1.0
INFO - 14:41:25: The result is feasible.
INFO - 14:41:25: Status: None
INFO - 14:41:25: Optimizer message: None
INFO - 14:41:25: Number of calls to the objective function by the optimizer: 9
INFO - 14:41:25: Design space:
INFO - 14:41:25: +------+-------------+-------+-------------+-------+
INFO - 14:41:25: | name | lower_bound | value | upper_bound | type  |
INFO - 14:41:25: +------+-------------+-------+-------------+-------+
INFO - 14:41:25: | x_1  |      0      |   0   |      1      | float |
INFO - 14:41:25: | x_2  |      0      |   0   |      1      | float |
INFO - 14:41:25: +------+-------------+-------+-------------+-------+
INFO - 14:41:25: *** DOE Scenario run terminated ***

{'eval_jac': False, 'algo': 'fullfact', 'n_samples': 9}
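For reference, the 3×3 grid that a full factorial design produces for 9 samples in dimension 2 can be reproduced with NumPy alone (a sketch of the sampling pattern, not GEMSEO's implementation):

```python
import numpy as np

# 3 evenly spaced levels per variable over [0, 1]
levels = np.linspace(0.0, 1.0, 3)  # [0., 0.5, 1.]

# Cartesian product of the levels: 3 x 3 = 9 design points
g_1, g_2 = np.meshgrid(levels, levels)
samples = np.column_stack([g_1.ravel(), g_2.ravel()])

print(samples.shape)  # (9, 2)
```

This matches the log above: 3 samples along each of the 2 directions, 9 in total.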


## Create the regression model

Then, we build the linear regression model from the discipline cache and display this model.

dataset = discipline.cache.export_to_dataset()
model = create_regression_model("LinearRegression", data=dataset, transformer=None)
model.learn()
print(model)


Out:

/home/docs/checkouts/readthedocs.org/user_builds/gemseo/conda/3.2.2/lib/python3.8/site-packages/sklearn/linear_model/_base.py:148: FutureWarning: 'normalize' was deprecated in version 1.0 and will be removed in 1.2. Please leave the normalize parameter to its default value to silence this warning. The default behavior of this estimator is to not do any normalization. If normalization is needed please use sklearn.preprocessing.StandardScaler instead.
warnings.warn(
LinearRegression(fit_intercept=True, l2_penalty_ratio=1.0, penalty_level=0.0)
based on the scikit-learn library
built from 9 learning samples
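Because the data are exactly linear, ordinary least squares recovers the intercept and coefficients to machine precision. The fit can be reproduced with NumPy's `lstsq` on the same 9-point learning set (a standalone sketch of what the model learns, not GEMSEO's internals):

```python
import numpy as np

# Rebuild the 9-point full-factorial learning set
levels = np.linspace(0.0, 1.0, 3)
g_1, g_2 = np.meshgrid(levels, levels)
X = np.column_stack([g_1.ravel(), g_2.ravel()])        # (9, 2) inputs
Y = np.column_stack([1 + 2 * X[:, 0] + 3 * X[:, 1],    # y_1
                     -1 - 2 * X[:, 0] - 3 * X[:, 1]])  # y_2

# Augment the inputs with a column of ones to fit the intercept
A = np.column_stack([np.ones(len(X)), X])              # (9, 3)
beta, *_ = np.linalg.lstsq(A, Y, rcond=None)           # (3, 2)

intercept, coefficients = beta[0], beta[1:].T
print(intercept)     # approx. [ 1. -1.]
print(coefficients)  # approx. [[ 2.  3.], [-2. -3.]]
```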


## Predict output

Once it is built, we can use it for prediction.

input_value = {"x_1": array([1.0]), "x_2": array([2.0])}
output_value = model.predict(input_value)
print(output_value)


Out:

{'y_1': array([9.]), 'y_2': array([-9.])}
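A prediction of a linear model reduces to $$y = b + Ax$$, where $$b$$ is the intercept vector and $$A$$ the coefficient matrix; the values above can be checked by hand with the exact coefficients of the target functions (a NumPy sketch, independent of the model object):

```python
import numpy as np

intercept = np.array([1.0, -1.0])                 # b
coefficients = np.array([[2.0, 3.0],              # A, row per output
                         [-2.0, -3.0]])
x = np.array([1.0, 2.0])                          # (x_1, x_2)

y = intercept + coefficients @ x
print(y)  # [ 9. -9.]
```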


## Predict the Jacobian

We can also use it to predict the Jacobian of the discipline.

jacobian_value = model.predict_jacobian(input_value)
print(jacobian_value)


Out:

{'y_1': {'x_1': array([[2.]]), 'x_2': array([[3.]])}, 'y_2': {'x_1': array([[-2.]]), 'x_2': array([[-3.]])}}
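For a linear model the Jacobian is constant and equals the coefficient matrix, which is exactly what the dictionary above contains. A central finite-difference check on the exact discipline confirms it (a standalone sketch, not the model's own differentiation):

```python
import numpy as np


def f(x):
    # The two-output linear discipline: (y_1, y_2)
    return np.array([1 + 2 * x[0] + 3 * x[1],
                     -1 - 2 * x[0] - 3 * x[1]])


def finite_difference_jacobian(f, x, eps=1e-6):
    # Central differences, one input component at a time
    jac = np.zeros((f(x).size, x.size))
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = eps
        jac[:, j] = (f(x + e) - f(x - e)) / (2 * eps)
    return jac


jac = finite_difference_jacobian(f, np.array([1.0, 2.0]))
print(jac)  # approx. [[ 2.  3.], [-2. -3.]]
```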


## Get the intercept

In addition, it is possible to access the intercept of the model, either directly or by means of a method returning either a dictionary (default option) or an array.

print(model.intercept)
print(model.get_intercept())


Out:

[ 1. -1.]
{'y_1': [0.9999999999999987], 'y_2': [-0.9999999999999987]}


## Get the coefficients

Similarly, it is possible to access the coefficients of the model, either directly or by means of a method returning either a dictionary (default option) or an array.

print(model.coefficients)
print(model.get_coefficients())


Out:

[[ 2.  3.]
[-2. -3.]]
{'y_1': [{'x_1': [2.000000000000001], 'x_2': [3.0000000000000018]}], 'y_2': [{'x_1': [-2.000000000000001], 'x_2': [-3.0000000000000018]}]}


Total running time of the script: ( 0 minutes 0.077 seconds)
