.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "examples/mlearning/calibration/plot_calibration.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_examples_mlearning_calibration_plot_calibration.py>`
        to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_examples_mlearning_calibration_plot_calibration.py:

Calibration of a polynomial regression
======================================

.. GENERATED FROM PYTHON SOURCE LINES 24-34

.. code-block:: default

    from __future__ import annotations

    import matplotlib.pyplot as plt
    from gemseo import configure_logger
    from gemseo.algos.design_space import DesignSpace
    from gemseo.mlearning.core.calibration import MLAlgoCalibration
    from gemseo.mlearning.quality_measures.mse_measure import MSEMeasure
    from gemseo.problems.dataset.rosenbrock import create_rosenbrock_dataset
    from matplotlib.tri import Triangulation

.. GENERATED FROM PYTHON SOURCE LINES 35-37

Load the dataset
----------------

.. GENERATED FROM PYTHON SOURCE LINES 37-39

.. code-block:: default

    dataset = create_rosenbrock_dataset(opt_naming=False, n_samples=25)

.. GENERATED FROM PYTHON SOURCE LINES 40-42

Define the measure
------------------

.. GENERATED FROM PYTHON SOURCE LINES 42-47

.. code-block:: default

    configure_logger()
    test_dataset = create_rosenbrock_dataset(opt_naming=False)
    measure_evaluation_method_name = "TEST"
    measure_options = {"test_data": test_dataset}

.. GENERATED FROM PYTHON SOURCE LINES 48-52

Calibrate the degree of the polynomial regression
-------------------------------------------------

Define and execute the calibration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. GENERATED FROM PYTHON SOURCE LINES 52-69

.. code-block:: default

    calibration_space = DesignSpace()
    calibration_space.add_variable("degree", 1, "integer", 1, 10, 1)
    calibration = MLAlgoCalibration(
        "PolynomialRegressor",
        dataset,
        ["degree"],
        calibration_space,
        MSEMeasure,
        measure_evaluation_method_name=measure_evaluation_method_name,
        measure_options=measure_options,
    )
    calibration.execute({"algo": "fullfact", "n_samples": 10})
    x_opt = calibration.optimal_parameters
    f_opt = calibration.optimal_criterion
    degree = x_opt["degree"][0]
    f"optimal degree = {degree}; optimal criterion = {f_opt}"

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    INFO - 08:25:07:
    INFO - 08:25:07: *** Start DOEScenario execution ***
    INFO - 08:25:07: DOEScenario
    INFO - 08:25:07:    Disciplines: MLAlgoAssessor
    INFO - 08:25:07:    MDO formulation: DisciplinaryOpt
    INFO - 08:25:07: Optimization problem:
    INFO - 08:25:07:    minimize criterion(degree)
    INFO - 08:25:07:    with respect to degree
    INFO - 08:25:07:    over the design space:
    INFO - 08:25:07:       +--------+-------------+-------+-------------+---------+
    INFO - 08:25:07:       | name   | lower_bound | value | upper_bound | type    |
    INFO - 08:25:07:       +--------+-------------+-------+-------------+---------+
    INFO - 08:25:07:       | degree |      1      |   1   |      10     | integer |
    INFO - 08:25:07:       +--------+-------------+-------+-------------+---------+
    INFO - 08:25:07: Solving optimization problem with algorithm fullfact:
.. code-block:: none

    GROUP       inputs      outputs
    VARIABLE    degree    criterion      learning
    COMPONENT        0            0             0
    0              1.0  5.888317e+05  8.200828e+05
    1              2.0  1.732475e+05  2.404571e+05
    2              3.0  3.001292e+04  1.645714e+04
    3              4.0  1.095763e-24  1.703801e-24
    4              5.0  1.097877e-01  1.391092e-23
    5              6.0  1.183264e+03  2.332471e-24
    6              7.0  6.895919e+03  1.401963e-23
    7              8.0  1.356307e+04  5.192192e-23
    8              9.0  9.180547e+04  8.964290e-23
    9             10.0  1.625259e+05  8.767875e-23


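The history above shows the classic overfitting pattern: the learning error decreases monotonically with the degree, while the test error reaches its minimum at degree 4 (the Rosenbrock function is itself a polynomial of total degree 4) and grows again afterwards. The mechanism can be sketched independently of GEMSEO with plain NumPy least squares; all helper names below are illustrative and not part of the GEMSEO API:

```python
import numpy as np


def rosenbrock(x, y):
    """Rosenbrock function, the model learned in this example."""
    return (1 - x) ** 2 + 100 * (y - x**2) ** 2


rng = np.random.default_rng(0)


def sample(n_samples):
    """Random points in [-2, 2]^2 together with their Rosenbrock outputs."""
    x = rng.uniform(-2.0, 2.0, n_samples)
    y = rng.uniform(-2.0, 2.0, n_samples)
    return np.column_stack([x, y]), rosenbrock(x, y)


def monomials(points, degree):
    """All monomials x^i * y^j with i + j <= degree."""
    x, y = points[:, 0], points[:, 1]
    return np.column_stack(
        [x**i * y**j for i in range(degree + 1) for j in range(degree + 1 - i)]
    )


def errors_for(degree, learn, test):
    """Learning and test mean squared errors of a least-squares polynomial fit."""
    coefficients, *_ = np.linalg.lstsq(monomials(learn[0], degree), learn[1], rcond=None)
    return tuple(
        float(np.mean((monomials(points, degree) @ coefficients - outputs) ** 2))
        for points, outputs in (learn, test)
    )


# 25 learning samples (as in the example) and 100 test samples.
learn, test = sample(25), sample(100)
errors = {degree: errors_for(degree, learn, test) for degree in range(1, 7)}
```

As in the history above, the learning error at degree 4 drops to machine precision, since the degree-4 monomial basis contains the Rosenbrock function exactly.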
.. GENERATED FROM PYTHON SOURCE LINES 75-77

Visualize the results
^^^^^^^^^^^^^^^^^^^^^

.. GENERATED FROM PYTHON SOURCE LINES 77-89

.. code-block:: default

    degree = calibration.get_history("degree")
    criterion = calibration.get_history("criterion")
    learning = calibration.get_history("learning")
    plt.plot(degree, criterion, "-o", label="test", color="red")
    plt.plot(degree, learning, "-o", label="learning", color="blue")
    plt.xlabel("polynomial degree")
    plt.ylabel("quality")
    plt.axvline(x_opt["degree"], color="red", ls="--")
    plt.legend()
    plt.show()

.. image-sg:: /examples/mlearning/calibration/images/sphx_glr_plot_calibration_001.png
   :alt: plot calibration
   :srcset: /examples/mlearning/calibration/images/sphx_glr_plot_calibration_001.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 90-94

Calibrate the ridge penalty of the polynomial regression
--------------------------------------------------------

Define and execute the calibration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. GENERATED FROM PYTHON SOURCE LINES 94-111

.. code-block:: default

    calibration_space = DesignSpace()
    calibration_space.add_variable("penalty_level", 1, "float", 0.0, 100.0, 0.0)
    calibration = MLAlgoCalibration(
        "PolynomialRegressor",
        dataset,
        ["penalty_level"],
        calibration_space,
        MSEMeasure,
        measure_evaluation_method_name=measure_evaluation_method_name,
        measure_options=measure_options,
        degree=10,
    )
    calibration.execute({"algo": "fullfact", "n_samples": 10})
    x_opt = calibration.optimal_parameters
    f_opt = calibration.optimal_criterion
    x_opt["penalty_level"][0], f_opt

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    INFO - 08:25:07:
    INFO - 08:25:07: *** Start DOEScenario execution ***
    INFO - 08:25:07: DOEScenario
    INFO - 08:25:07:    Disciplines: MLAlgoAssessor
    INFO - 08:25:07:    MDO formulation: DisciplinaryOpt
    INFO - 08:25:07: Optimization problem:
    INFO - 08:25:07:    minimize criterion(penalty_level)
    INFO - 08:25:07:    with respect to penalty_level
    INFO - 08:25:07:    over the design space:
    INFO - 08:25:07:       +---------------+-------------+-------+-------------+-------+
    INFO - 08:25:07:       | name          | lower_bound | value | upper_bound | type  |
    INFO - 08:25:07:       +---------------+-------------+-------+-------------+-------+
    INFO - 08:25:07:       | penalty_level |      0      |   0   |     100     | float |
    INFO - 08:25:07:       +---------------+-------------+-------+-------------+-------+
    INFO - 08:25:07: Solving optimization problem with algorithm fullfact:
.. code-block:: none

    GROUP            inputs        outputs
    VARIABLE  penalty_level      criterion      learning
    COMPONENT             0              0             0
    0              0.000000  162525.860760  8.767875e-23
    1             11.111111   32506.221289  1.087801e+03
    2             22.222222   17820.599507  1.982580e+03
    3             33.333333   17189.526493  2.690007e+03
    4             44.444444   19953.420378  3.251453e+03
    5             55.555556   23493.269988  3.703714e+03
    6             66.666667   27024.053276  4.074147e+03
    7             77.777778   30303.486633  4.382362e+03
    8             88.888889   33272.062306  4.642448e+03
    9            100.000000   35934.745536  4.864667e+03


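What "ridge penalty" means here: the polynomial coefficients are fit by minimizing the sum of squared residuals plus ``penalty_level`` times the squared l2-norm of the coefficients, which shrinks the coefficients and damps the degree-10 oscillations responsible for the huge unpenalized test error in the first row above. A minimal closed-form sketch with NumPy; the 1D data set and all names are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Degree-10 polynomial features of 20 noisy 1D samples (illustrative data).
abscissa = rng.uniform(-1.0, 1.0, 20)
features = np.column_stack([abscissa**k for k in range(11)])
target = np.sin(3.0 * abscissa) + rng.normal(0.0, 0.1, abscissa.size)


def ridge_coefficients(features, target, penalty_level):
    """Minimize ||features @ c - target||^2 + penalty_level * ||c||^2 in closed form."""
    gram = features.T @ features
    return np.linalg.solve(
        gram + penalty_level * np.eye(gram.shape[0]), features.T @ target
    )


# The coefficient norm shrinks monotonically as the penalty level grows.
norms = [
    float(np.linalg.norm(ridge_coefficients(features, target, penalty_level)))
    for penalty_level in (1e-6, 1.0, 100.0)
]
```

The shrinkage explains the trade-off in the history: the learning error grows steadily with the penalty while the test error first drops sharply, then slowly deteriorates once the model becomes too constrained.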
.. GENERATED FROM PYTHON SOURCE LINES 117-119

Visualize the results
^^^^^^^^^^^^^^^^^^^^^

.. GENERATED FROM PYTHON SOURCE LINES 119-131

.. code-block:: default

    penalty_level = calibration.get_history("penalty_level")
    criterion = calibration.get_history("criterion")
    learning = calibration.get_history("learning")
    plt.plot(penalty_level, criterion, "-o", label="test", color="red")
    plt.plot(penalty_level, learning, "-o", label="learning", color="blue")
    plt.axvline(x_opt["penalty_level"], color="red", ls="--")
    plt.xlabel("ridge penalty")
    plt.ylabel("quality")
    plt.legend()
    plt.show()

.. image-sg:: /examples/mlearning/calibration/images/sphx_glr_plot_calibration_002.png
   :alt: plot calibration
   :srcset: /examples/mlearning/calibration/images/sphx_glr_plot_calibration_002.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 132-136

Calibrate the lasso penalty of the polynomial regression
--------------------------------------------------------

Define and execute the calibration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. GENERATED FROM PYTHON SOURCE LINES 136-154

.. code-block:: default

    calibration_space = DesignSpace()
    calibration_space.add_variable("penalty_level", 1, "float", 0.0, 100.0, 0.0)
    calibration = MLAlgoCalibration(
        "PolynomialRegressor",
        dataset,
        ["penalty_level"],
        calibration_space,
        MSEMeasure,
        measure_evaluation_method_name=measure_evaluation_method_name,
        measure_options=measure_options,
        degree=10,
        l2_penalty_ratio=0.0,
    )
    calibration.execute({"algo": "fullfact", "n_samples": 10})
    x_opt = calibration.optimal_parameters
    f_opt = calibration.optimal_criterion
    x_opt["penalty_level"][0], f_opt

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    INFO - 08:25:07:
    INFO - 08:25:07: *** Start DOEScenario execution ***
    INFO - 08:25:07: DOEScenario
    INFO - 08:25:07:    Disciplines: MLAlgoAssessor
    INFO - 08:25:07:    MDO formulation: DisciplinaryOpt
    INFO - 08:25:07: Optimization problem:
    INFO - 08:25:07:    minimize criterion(penalty_level)
    INFO - 08:25:07:    with respect to penalty_level
    INFO - 08:25:07:    over the design space:
    INFO - 08:25:07:       +---------------+-------------+-------+-------------+-------+
    INFO - 08:25:07:       | name          | lower_bound | value | upper_bound | type  |
    INFO - 08:25:07:       +---------------+-------------+-------+-------------+-------+
    INFO - 08:25:07:       | penalty_level |      0      |   0   |     100     | float |
    INFO - 08:25:07:       +---------------+-------------+-------+-------------+-------+
    INFO - 08:25:07: Solving optimization problem with algorithm fullfact:
.. code-block:: none

    GROUP            inputs        outputs
    VARIABLE  penalty_level      criterion      learning
    COMPONENT             0              0             0
    0              0.000000  162525.860760  8.767875e-23
    1             11.111111   15775.989581  1.814382e+03
    2             22.222222   31529.584354  4.057302e+03
    3             33.333333   47420.249503  5.792299e+03
    4             44.444444   59358.207437  7.169565e+03
    5             55.555556   62656.171431  7.278397e+03
    6             66.666667   66256.259889  7.410137e+03
    7             77.777778   69336.190346  7.540731e+03
    8             88.888889   72457.378777  7.675963e+03
    9            100.000000   75749.793494  7.816545e+03


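The lasso replaces the squared l2-norm by the l1-norm of the coefficients, which drives small coefficients exactly to zero and thus performs variable selection rather than mere shrinkage; this is one reason its optimal penalty level here (about 11) differs from the ridge case. There is no closed form, but the mechanism can be sketched with proximal gradient descent (ISTA) and soft-thresholding. The synthetic sparse-regression data below are purely illustrative:

```python
import numpy as np


def lasso_ista(features, target, penalty_level, n_iterations=5000):
    """L1-penalized least squares solved by proximal gradient descent (ISTA)."""
    step = 1.0 / np.linalg.norm(features, 2) ** 2  # inverse Lipschitz constant
    coefficients = np.zeros(features.shape[1])
    for _ in range(n_iterations):
        gradient = features.T @ (features @ coefficients - target)
        shifted = coefficients - step * gradient
        # Soft-thresholding: the proximal operator of the l1-norm.
        coefficients = np.sign(shifted) * np.maximum(
            np.abs(shifted) - step * penalty_level, 0.0
        )
    return coefficients


rng = np.random.default_rng(2)
features = rng.normal(size=(50, 10))
true_coefficients = np.zeros(10)
true_coefficients[:3] = [3.0, -2.0, 1.5]  # only three active variables
target = features @ true_coefficients + rng.normal(0.0, 0.01, 50)

coefficients = lasso_ista(features, target, penalty_level=5.0)
```

With this penalty level, the three active coefficients survive (slightly shrunk) while the seven inactive ones are thresholded to zero, which is the selection behaviour the l1-norm is chosen for.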
.. GENERATED FROM PYTHON SOURCE LINES 160-162

Visualize the results
^^^^^^^^^^^^^^^^^^^^^

.. GENERATED FROM PYTHON SOURCE LINES 162-174

.. code-block:: default

    penalty_level = calibration.get_history("penalty_level")
    criterion = calibration.get_history("criterion")
    learning = calibration.get_history("learning")
    plt.plot(penalty_level, criterion, "-o", label="test", color="red")
    plt.plot(penalty_level, learning, "-o", label="learning", color="blue")
    plt.axvline(x_opt["penalty_level"], color="red", ls="--")
    plt.xlabel("lasso penalty")
    plt.ylabel("quality")
    plt.legend()
    plt.show()

.. image-sg:: /examples/mlearning/calibration/images/sphx_glr_plot_calibration_003.png
   :alt: plot calibration
   :srcset: /examples/mlearning/calibration/images/sphx_glr_plot_calibration_003.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 175-179

Calibrate the elasticnet penalty of the polynomial regression
-------------------------------------------------------------

Define and execute the calibration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. GENERATED FROM PYTHON SOURCE LINES 179-197

.. code-block:: default

    calibration_space = DesignSpace()
    calibration_space.add_variable("penalty_level", 1, "float", 0.0, 40.0, 0.0)
    calibration_space.add_variable("l2_penalty_ratio", 1, "float", 0.0, 1.0, 0.5)
    calibration = MLAlgoCalibration(
        "PolynomialRegressor",
        dataset,
        ["penalty_level", "l2_penalty_ratio"],
        calibration_space,
        MSEMeasure,
        measure_evaluation_method_name=measure_evaluation_method_name,
        measure_options=measure_options,
        degree=10,
    )
    calibration.execute({"algo": "fullfact", "n_samples": 100})
    x_opt = calibration.optimal_parameters
    f_opt = calibration.optimal_criterion
    x_opt["penalty_level"][0], x_opt["l2_penalty_ratio"][0], f_opt

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    INFO - 08:25:07:
    INFO - 08:25:07: *** Start DOEScenario execution ***
    INFO - 08:25:07: DOEScenario
    INFO - 08:25:07:    Disciplines: MLAlgoAssessor
    INFO - 08:25:07:    MDO formulation: DisciplinaryOpt
    INFO - 08:25:07: Optimization problem:
    INFO - 08:25:07:    minimize criterion(penalty_level, l2_penalty_ratio)
    INFO - 08:25:07:    with respect to l2_penalty_ratio, penalty_level
    INFO - 08:25:07:    over the design space:
    INFO - 08:25:07:       +------------------+-------------+-------+-------------+-------+
    INFO - 08:25:07:       | name             | lower_bound | value | upper_bound | type  |
    INFO - 08:25:07:       +------------------+-------------+-------+-------------+-------+
    INFO - 08:25:07:       | penalty_level    |      0      |   0   |      40     | float |
    INFO - 08:25:07:       | l2_penalty_ratio |      0      |  0.5  |      1      | float |
    INFO - 08:25:07:       +------------------+-------------+-------+-------------+-------+
    INFO - 08:25:07: Solving optimization problem with algorithm fullfact:
.. code-block:: none

    GROUP            inputs                        outputs
    VARIABLE  penalty_level l2_penalty_ratio      criterion      learning
    COMPONENT             0                0              0             0
    0              0.000000              0.0  162525.860760  8.767875e-23
    1              4.444444              0.0    4136.820827  4.546714e+02
    2              8.888889              0.0   13371.034446  1.375915e+03
    3             13.333333              0.0   17860.819693  2.176736e+03
    4             17.777778              0.0   23914.366014  3.005032e+03
    ...                 ...              ...            ...           ...
    95            22.222222              1.0   17820.599507  1.982580e+03
    96            26.666667              1.0   16816.595592  2.285780e+03
    97            31.111111              1.0   16894.821607  2.561538e+03
    98            35.555556              1.0   17602.178769  2.812662e+03
    99            40.000000              1.0   18674.751406  3.041823e+03

    100 rows × 4 columns



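With two hyperparameters, the "fullfact" DOE used above amounts to evaluating the quality measure on a regular grid over the calibration space and keeping the best point. The core logic fits in a few lines; the quadratic stand-in for the test MSE and the 11×11 grid below are hypothetical, chosen only so the minimum is known:

```python
from itertools import product

import numpy as np


def full_factorial_calibration(measure, levels_1, levels_2):
    """Evaluate the measure on the full grid; return the best point and its value."""
    history = {(a, b): measure(a, b) for a, b in product(levels_1, levels_2)}
    best = min(history, key=history.get)
    return best, history[best]


# Hypothetical quality measure standing in for the test MSE, minimal at (20, 0.5).
def measure(penalty_level, l2_penalty_ratio):
    return (penalty_level - 20.0) ** 2 + (l2_penalty_ratio - 0.5) ** 2


best, f_best = full_factorial_calibration(
    measure, np.linspace(0.0, 40.0, 11), np.linspace(0.0, 1.0, 11)
)
```

A grid search is robust but expensive (its cost is the product of the numbers of levels), which motivates the optimization stage added at the end of this example.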
.. GENERATED FROM PYTHON SOURCE LINES 203-205

Visualize the results
^^^^^^^^^^^^^^^^^^^^^

.. GENERATED FROM PYTHON SOURCE LINES 205-228

.. code-block:: default

    penalty_level = calibration.get_history("penalty_level").flatten()
    l2_penalty_ratio = calibration.get_history("l2_penalty_ratio").flatten()
    criterion = calibration.get_history("criterion").flatten()
    learning = calibration.get_history("learning").flatten()
    triang = Triangulation(penalty_level, l2_penalty_ratio)

    fig = plt.figure()
    ax = fig.add_subplot(1, 2, 1)
    ax.tricontourf(triang, criterion, cmap="Purples")
    ax.scatter(x_opt["penalty_level"][0], x_opt["l2_penalty_ratio"][0])
    ax.set_xlabel("penalty level")
    ax.set_ylabel("l2 penalty ratio")
    ax.set_title("Test measure")
    ax = fig.add_subplot(1, 2, 2)
    ax.tricontourf(triang, learning, cmap="Purples")
    ax.scatter(x_opt["penalty_level"][0], x_opt["l2_penalty_ratio"][0])
    ax.set_xlabel("penalty level")
    ax.set_ylabel("l2 penalty ratio")
    ax.set_title("Learning measure")
    plt.show()

.. image-sg:: /examples/mlearning/calibration/images/sphx_glr_plot_calibration_004.png
   :alt: Test measure, Learning measure
   :srcset: /examples/mlearning/calibration/images/sphx_glr_plot_calibration_004.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 229-231

Add an optimization stage
^^^^^^^^^^^^^^^^^^^^^^^^^

.. GENERATED FROM PYTHON SOURCE LINES 231-269

.. code-block:: default

    calibration_space = DesignSpace()
    calibration_space.add_variable("penalty_level", 1, "float", 0.0, 40.0, 0.0)
    calibration_space.add_variable("l2_penalty_ratio", 1, "float", 0.0, 1.0, 0.5)
    calibration = MLAlgoCalibration(
        "PolynomialRegressor",
        dataset,
        ["penalty_level", "l2_penalty_ratio"],
        calibration_space,
        MSEMeasure,
        measure_evaluation_method_name=measure_evaluation_method_name,
        measure_options=measure_options,
        degree=10,
    )
    calibration.execute({"algo": "NLOPT_COBYLA", "max_iter": 100})
    x_opt2 = calibration.optimal_parameters
    f_opt2 = calibration.optimal_criterion

    fig = plt.figure()
    ax = fig.add_subplot(1, 2, 1)
    ax.tricontourf(triang, criterion, cmap="Purples")
    ax.scatter(x_opt["penalty_level"][0], x_opt["l2_penalty_ratio"][0])
    ax.scatter(x_opt2["penalty_level"][0], x_opt2["l2_penalty_ratio"][0], color="red")
    ax.set_xlabel("penalty level")
    ax.set_ylabel("l2 penalty ratio")
    ax.set_title("Test measure")
    ax = fig.add_subplot(1, 2, 2)
    ax.tricontourf(triang, learning, cmap="Purples")
    ax.scatter(x_opt["penalty_level"][0], x_opt["l2_penalty_ratio"][0])
    ax.scatter(x_opt2["penalty_level"][0], x_opt2["l2_penalty_ratio"][0], color="red")
    ax.set_xlabel("penalty level")
    ax.set_ylabel("l2 penalty ratio")
    ax.set_title("Learning measure")
    plt.show()

    n_iterations = len(calibration.scenario.disciplines[0].cache)
    print(f"MSE with DOE: {f_opt} (100 evaluations)")
    print(f"MSE with OPT: {f_opt2} ({n_iterations} evaluations)")
    print(f"MSE reduction:{round((f_opt2 - f_opt) / f_opt * 100)}%")

.. image-sg:: /examples/mlearning/calibration/images/sphx_glr_plot_calibration_005.png
   :alt: Test measure, Learning measure
   :srcset: /examples/mlearning/calibration/images/sphx_glr_plot_calibration_005.png
   :class: sphx-glr-single-img

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    INFO - 08:25:09:
    INFO - 08:25:09: *** Start MDOScenario execution ***
    INFO - 08:25:09: MDOScenario
    INFO - 08:25:09:    Disciplines: MLAlgoAssessor
    INFO - 08:25:09:    MDO formulation: DisciplinaryOpt
    INFO - 08:25:09: Optimization problem:
    INFO - 08:25:09:    minimize criterion(penalty_level, l2_penalty_ratio)
    INFO - 08:25:09:    with respect to l2_penalty_ratio, penalty_level
    INFO - 08:25:09:    over the design space:
    INFO - 08:25:09:       +------------------+-------------+-------+-------------+-------+
    INFO - 08:25:09:       | name             | lower_bound | value | upper_bound | type  |
    INFO - 08:25:09:       +------------------+-------------+-------+-------------+-------+
    INFO - 08:25:09:       | penalty_level    |      0      |   0   |      40     | float |
    INFO - 08:25:09:       | l2_penalty_ratio |      0      |  0.5  |      1      | float |
    INFO - 08:25:09:       +------------------+-------------+-------+-------------+-------+
    INFO - 08:25:09: Solving optimization problem with algorithm NLOPT_COBYLA:

.. _sphx_glr_download_examples_mlearning_calibration_plot_calibration.py:

.. only:: html

    .. container:: sphx-glr-download sphx-glr-download-jupyter

        :download:`Download Jupyter notebook: plot_calibration.ipynb <plot_calibration.ipynb>`

.. only:: html

    .. rst-class:: sphx-glr-signature

        `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_