gemseo.mlearning.regression.algos.base_regressor module#
The base class for regression algorithms.
- class BaseRegressor(data, settings_model=None, **settings)[source]#
Bases:
BaseMLSupervisedAlgo
The base class for regression algorithms.
- Parameters:
data (Dataset) -- The training dataset.
settings_model (BaseMLAlgoSettings | None) -- The machine learning algorithm settings as a Pydantic model. If None, use **settings.
**settings (Any) -- The machine learning algorithm settings. These arguments are ignored when settings_model is not None.
- Raises:
ValueError -- When both the variable and the group it belongs to have a transformer.
- DataFormatters#
alias of RegressionDataFormatters
- Settings#
alias of BaseRegressorSettings
- predict_jacobian(input_data)[source]#
Predict the Jacobian with respect to the input variables.
The user can specify the input data either as a NumPy array, e.g. array([1., 2., 3.]), or as a dictionary of NumPy arrays indexed by variable names, e.g. {'a': array([1.]), 'b': array([2., 3.])}.
If the NumPy arrays are 2-dimensional, their i-th rows represent the input data of the i-th sample; if the NumPy arrays are 1-dimensional, there is a single sample.
The type of the output data and the dimension of the output arrays are consistent with the type of the input data and the size of the input arrays.
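The shape conventions above can be illustrated with a minimal NumPy sketch (this is not gemseo code; the toy model and its analytic Jacobian are invented for illustration): for a batch of inputs with shape (n_samples, n_inputs), the Jacobian is a stack of per-sample matrices with shape (n_samples, n_outputs, n_inputs).

```python
import numpy as np


def toy_jacobian(x):
    """Analytic Jacobian of a toy model f(x) = [x0 * x1, x0 + x1**2].

    Returns an array with shape (n_samples, n_outputs, n_inputs),
    matching the convention used for 2-dimensional input arrays.
    """
    n_samples = x.shape[0]
    jac = np.empty((n_samples, 2, 2))
    jac[:, 0, 0] = x[:, 1]        # d(x0 * x1)/dx0
    jac[:, 0, 1] = x[:, 0]        # d(x0 * x1)/dx1
    jac[:, 1, 0] = 1.0            # d(x0 + x1**2)/dx0
    jac[:, 1, 1] = 2.0 * x[:, 1]  # d(x0 + x1**2)/dx1
    return jac


# Two samples of a 2-dimensional input.
x = np.array([[1.0, 2.0], [3.0, 4.0]])
print(toy_jacobian(x).shape)  # (2, 2, 2): (n_samples, n_outputs, n_inputs)
```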
- predict_jacobian_wrt_special_variables(input_data)[source]#
Predict the Jacobian with respect to special variables.
The method predict_jacobian() predicts the standard Jacobian, i.e. the matrix of partial derivatives with respect to the input variables.
In some cases, the regressor \(\hat{f}(x)\) is used to approximate a model \(f(x,p)\) at a point \(p\) given a training dataset \(\left(x_i,f(x_i,p),\partial_p f(x_i,p)\right)_{1\leq i \leq N}\) that includes not only the input and output samples \(\left(x_i,f(x_i,p)\right)_{1\leq i \leq N}\) but also samples of the partial derivatives of the outputs with respect to a special variable \(p\) that is not an input variable of the regressor. Since the regressor \(\hat{f}(x)\) is a function of \(\left(f(x_i,p)\right)_{1\leq i \leq N}\), it is also a function of \(p\). Consequently, it can be differentiated with respect to \(p\) using the chain rule, provided the regressor implements this mechanism.
- Parameters:
input_data (RealArray) -- The input data with shape (n_samples, special_variable_dimension).
- Returns:
The predicted Jacobian data with shape (n_samples, n_outputs, special_variable_dimension).
- Raises:
ValueError -- When the training dataset does not include gradient information.
- Return type:
RealArray
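The chain-rule mechanism can be sketched with a hypothetical weighted-average regressor (not gemseo's implementation): if \(\hat{f}(x)=\sum_i w_i(x)\,f(x_i,p)\) with weights depending only on \(x\), then \(\partial_p \hat{f}(x)=\sum_i w_i(x)\,\partial_p f(x_i,p)\), because \(\hat{f}\) depends on \(p\) only through the training outputs. The training function f(x, p) = p * sin(x) below is an invented example.

```python
import numpy as np


def rbf_weights(x, centers, eps=1.0):
    """Normalized Gaussian weights w_i(x), one per training sample."""
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    w = np.exp(-eps * d2)
    return w / w.sum(axis=1, keepdims=True)


# Hypothetical training data for f(x, p) = p * sin(x) at p = 2:
# samples of both f(x_i, p) and its derivative with respect to p.
centers = np.linspace(0.0, np.pi, 5)[:, None]  # x_i, shape (N, 1)
p = 2.0
y = p * np.sin(centers[:, 0])                  # f(x_i, p), shape (N,)
dy_dp = np.sin(centers[:, 0])                  # d f(x_i, p) / dp, shape (N,)

x_new = np.array([[0.5], [1.0]])
w = rbf_weights(x_new, centers)                # shape (n_samples, N)

# Chain rule: the Jacobian of the regressor with respect to the
# special variable p is the same weighted combination of the
# sampled derivatives dy_dp.
jac_wrt_p = w @ dy_dp                          # shape (n_samples,)
print(jac_wrt_p.shape)
```

Here the special variable is scalar, so the result has one entry per sample; in general the shape is (n_samples, n_outputs, special_variable_dimension).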
- predict_raw(input_data)[source]#
Predict output data from input data.
- Parameters:
input_data (RealArray) -- The input data with shape (n_samples, n_inputs).
- Returns:
The predicted output data with shape (n_samples, n_outputs).
- Return type:
RealArray
- DEFAULT_TRANSFORMER: DefaultTransformerType = mappingproxy({'inputs': <gemseo.mlearning.transformers.scaler.min_max_scaler.MinMaxScaler object>, 'outputs': <gemseo.mlearning.transformers.scaler.min_max_scaler.MinMaxScaler object>})#
The default transformer for the input and output data, if any.
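To illustrate what the default min-max transformer does to training data, here is a minimal stand-in written in plain NumPy (the class name and API below are invented for the sketch; gemseo's MinMaxScaler is the real implementation): each column is mapped linearly onto [0, 1] before training, and predictions are mapped back.

```python
import numpy as np


class MinMaxSketch:
    """Minimal stand-in for a min-max scaler: maps each column to [0, 1]."""

    def fit(self, data):
        self.lo = data.min(axis=0)
        self.span = data.max(axis=0) - self.lo
        return self

    def transform(self, data):
        return (data - self.lo) / self.span

    def inverse_transform(self, data):
        return data * self.span + self.lo


data = np.array([[1.0, 10.0], [2.0, 30.0], [3.0, 20.0]])
scaler = MinMaxSketch().fit(data)
scaled = scaler.transform(data)
print(scaled.min(axis=0), scaled.max(axis=0))  # each column spans [0, 1]
```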