Pipeline
from __future__ import annotations
from numpy import allclose
from numpy import linspace
from numpy import newaxis
from gemseo.mlearning.transformers.pipeline import Pipeline
from gemseo.mlearning.transformers.scaler.scaler import Scaler
To illustrate the pipeline, we consider very simple data:
data = linspace(0, 1, 100)[:, newaxis]
First, we create a pipeline of two transformers: the first one shifts the data while the second one reduces their amplitude.
pipeline = Pipeline(transformers=[Scaler(offset=1), Scaler(coefficient=2)])
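The idea behind a transformer pipeline is simply to apply each transformer in order, feeding the output of one into the next. A toy sketch of this mechanism (the `ToyPipeline` class below is a hypothetical illustration, not the gemseo implementation):

```python
class ToyPipeline:
    """Minimal sketch of a pipeline: apply transformers in sequence."""

    def __init__(self, transformers):
        self.transformers = transformers

    def fit_transform(self, data):
        # Each transformer consumes the output of the previous one.
        for transformer in self.transformers:
            data = transformer(data)
        return data


# Two toy "transformers" mirroring Scaler(offset=1) and Scaler(coefficient=2):
shift = lambda x: x + 1
scale = lambda x: 2 * x

result = ToyPipeline([shift, scale]).fit_transform(0.5)
assert result == 3.0  # 2 * (0.5 + 1)
```

Note that the order of the transformers matters: scaling before shifting would give `2 * 0.5 + 1 = 2.0` instead.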
Then, we fit this Pipeline to the data, transform them, and compute the Jacobian:
transformed_data = pipeline.fit_transform(data)
transformed_jac_data = pipeline.compute_jacobian(data)
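Assuming each Scaler applies the affine map x ↦ offset + coefficient · x, the composed transform has the closed form 2 · (x + 1), and, being affine, a constant Jacobian equal to the scaling coefficient. A quick NumPy check of this closed form, independent of gemseo:

```python
from numpy import allclose, linspace, newaxis

data = linspace(0, 1, 100)[:, newaxis]

# Closed form of the composition: shift by 1, then scale by 2.
expected = 2 * (data + 1)

# Endpoints of the transformed interval: x = 0 maps to 2, x = 1 maps to 4.
assert allclose(expected[0], 2.0)
assert allclose(expected[-1], 4.0)
```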
Lastly, we can do the same with two scalers:
shifter = Scaler(offset=1)
shifted_data = shifter.fit_transform(data)
scaler = Scaler(coefficient=2)
data_shifted_then_scaled = scaler.fit_transform(shifted_data)
jac_shifted_then_scaled = scaler.compute_jacobian(
shifted_data
) @ shifter.compute_jacobian(data)
and verify that the results are identical:
assert allclose(transformed_data, data_shifted_then_scaled)
assert allclose(transformed_jac_data, jac_shifted_then_scaled)
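The manual composition above is just the chain rule: for g(x) = x + 1 and f(y) = 2y, the Jacobian of f ∘ g at x is Jf(g(x)) · Jg(x). A small sketch with plain NumPy (no gemseo required; the dimension here is 1, so each Jacobian is a 1×1 matrix):

```python
from numpy import allclose, eye, linspace, newaxis

data = linspace(0, 1, 100)[:, newaxis]


def shift(x):  # g(x) = x + 1, Jacobian = identity
    return x + 1


def scale(y):  # f(y) = 2 * y, Jacobian = 2 * identity
    return 2 * y


# Jacobians of the two affine maps:
jac_shift = eye(1)
jac_scale = 2 * eye(1)

# Chain rule: the Jacobian of f∘g is the product Jf @ Jg.
jac_pipeline = jac_scale @ jac_shift
assert allclose(jac_pipeline, 2 * eye(1))

# Finite-difference check of the composed map: its slope is 2 everywhere.
eps = 1e-6
fd = (scale(shift(data + eps)) - scale(shift(data))) / eps
assert allclose(fd, 2.0)
```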
Note that a Pipeline can compute the Jacobian as long as the Transformer instances that make it up can do so.
Total running time of the script: (0 minutes 0.003 seconds)