gemseo.mlearning.regression.algos.linreg_settings module#
Settings of the linear regressor.
- Settings LinearRegressor_Settings(*, transformer=None, parameters=None, input_names=(), output_names=(), fit_intercept=True, penalty_level=0.0, l2_penalty_ratio=1.0, random_state=0)[source]#
Bases:
BaseRegressorSettings
The settings of the linear regressor.
Create a new model by parsing and validating input data from keyword arguments.
Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- Parameters:
input_names (Sequence[str]) --
By default it is set to ().
output_names (Sequence[str]) --
By default it is set to ().
fit_intercept (bool) --
By default it is set to True.
penalty_level (Annotated[float, Ge(ge=0)]) --
By default it is set to 0.0.
l2_penalty_ratio (Annotated[float, Ge(ge=0)]) --
By default it is set to 1.0.
random_state (Annotated[int, Ge(ge=0)] | None) --
By default it is set to 0.
- Return type:
None
- l2_penalty_ratio: NonNegativeFloat = 1.0#
The penalty ratio related to the l2 regularization.
If 1, use the Ridge penalty. If 0, use the Lasso penalty. If strictly between 0 and 1, use the ElasticNet penalty.
- Constraints:
ge = 0
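The interaction between `penalty_level` and `l2_penalty_ratio` described above can be summarized with an illustrative helper (not part of GEMSEO, named here for the example):

```python
def effective_penalty(penalty_level: float, l2_penalty_ratio: float) -> str:
    """Name the regularization implied by the two settings (illustrative)."""
    if penalty_level < 0 or l2_penalty_ratio < 0:
        raise ValueError("both settings must be non-negative")
    if penalty_level == 0:
        return "none"  # no penalty at all, whatever the ratio
    if l2_penalty_ratio == 1:
        return "ridge"  # pure l2 penalty
    if l2_penalty_ratio == 0:
        return "lasso"  # pure l1 penalty
    return "elastic net"  # mix of l1 and l2 penalties


print(effective_penalty(0.0, 1.0))  # the defaults: no penalty
```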
- penalty_level: NonNegativeFloat = 0.0#
The penalty level, greater than or equal to 0.
If zero, there is no penalty.
- Constraints:
ge = 0
- random_state: NonNegativeInt | None = 0#
The random state parameter in the case of a penalty.
If None, use the global random state instance from numpy.random. Creating the model multiple times will produce different results. If int, use a new random number generator seeded by this integer. This will produce the same results.
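The reproducibility behavior of `random_state` can be illustrated by analogy with the stdlib `random` module (numpy's seeding behaves analogously; this is not GEMSEO code):

```python
import random
from typing import List, Optional


def draw(seed: Optional[int]) -> List[float]:
    """Draw three floats from a generator built like random_state is used.

    seed=None -> generator seeded from OS entropy, so repeated calls
    give different results (analogous to the global numpy.random state).
    seed=int  -> generator seeded by this integer, so repeated calls
    give identical results (analogous to random_state=<int>).
    """
    rng = random.Random(seed)
    return [rng.random() for _ in range(3)]


# Same integer seed, same results on every call.
assert draw(0) == draw(0)
```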