package sklearn

type t
val of_pyobject : Py.Object.t -> t
val to_pyobject : t -> Py.Object.t
val create : ?alpha:float -> ?l1_ratio:float -> ?fit_intercept:bool -> ?normalize:bool -> ?precompute:[ `Bool of bool | `Ndarray of Ndarray.t ] -> ?max_iter:int -> ?copy_X:bool -> ?tol:float -> ?warm_start:bool -> ?positive:bool -> ?random_state:[ `Int of int | `RandomState of Py.Object.t | `None ] -> ?selection:string -> unit -> t

Linear regression with combined L1 and L2 priors as regularizer.

Minimizes the objective function::

1 / (2 * n_samples) * ||y - Xw||^2_2
+ alpha * l1_ratio * ||w||_1
+ 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2

If you are interested in controlling the L1 and L2 penalty separately, keep in mind that this is equivalent to::

a * L1 + b * L2

where::

alpha = a + b and l1_ratio = a / (a + b)

The parameter l1_ratio corresponds to alpha in the glmnet R package while alpha corresponds to the lambda parameter in glmnet. Specifically, l1_ratio = 1 is the lasso penalty. Currently, l1_ratio <= 0.01 is not reliable, unless you supply your own sequence of alpha.
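For example, to target explicit L1 and L2 penalty weights a and b, the corresponding constructor arguments can be computed directly. A minimal OCaml sketch (plain arithmetic, no library calls):

let alpha_and_l1_ratio ~a ~b =
  let alpha = a +. b in
  let l1_ratio = a /. (a +. b) in
  (alpha, l1_ratio)

(* e.g. a = 0.7 (L1 weight) and b = 0.3 (L2 weight) give alpha = 1.0, l1_ratio = 0.7 *)
let alpha, l1_ratio = alpha_and_l1_ratio ~a:0.7 ~b:0.3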

Read more in the :ref:`User Guide <elastic_net>`.

Parameters
----------
alpha : float, optional
    Constant that multiplies the penalty terms. Defaults to 1.0. See the notes
    for the exact mathematical meaning of this parameter. ``alpha = 0`` is
    equivalent to an ordinary least square, solved by the
    :class:`LinearRegression` object. For numerical reasons, using
    ``alpha = 0`` with the ``Lasso`` object is not advised. Given this, you
    should use the :class:`LinearRegression` object.

l1_ratio : float
    The ElasticNet mixing parameter, with ``0 <= l1_ratio <= 1``. For
    ``l1_ratio = 0`` the penalty is an L2 penalty. For ``l1_ratio = 1`` it is
    an L1 penalty. For ``0 < l1_ratio < 1``, the penalty is a combination of
    L1 and L2.

fit_intercept : bool
    Whether the intercept should be estimated or not. If ``False``, the data
    is assumed to be already centered.

normalize : boolean, optional, default False
    This parameter is ignored when ``fit_intercept`` is set to False. If True,
    the regressors X will be normalized before regression by subtracting the
    mean and dividing by the l2-norm. If you wish to standardize, please use
    :class:`sklearn.preprocessing.StandardScaler` before calling ``fit`` on an
    estimator with ``normalize=False``.

precompute : True | False | array-like
    Whether to use a precomputed Gram matrix to speed up calculations. The
    Gram matrix can also be passed as argument. For sparse input this option
    is always ``True`` to preserve sparsity.

max_iter : int, optional
    The maximum number of iterations.

copy_X : boolean, optional, default True
    If ``True``, X will be copied; else, it may be overwritten.

tol : float, optional
    The tolerance for the optimization: if the updates are smaller than
    ``tol``, the optimization code checks the dual gap for optimality and
    continues until it is smaller than ``tol``.

warm_start : bool, optional
    When set to ``True``, reuse the solution of the previous call to fit as
    initialization, otherwise, just erase the previous solution. See
    :term:`the Glossary <warm_start>`.

positive : bool, optional
    When set to ``True``, forces the coefficients to be positive.

random_state : int, RandomState instance or None, optional, default None
    The seed of the pseudo random number generator that selects a random
    feature to update. If int, random_state is the seed used by the random
    number generator; if RandomState instance, random_state is the random
    number generator; if None, the random number generator is the RandomState
    instance used by `np.random`. Used when ``selection`` == 'random'.

selection : str, default 'cyclic'
    If set to 'random', a random coefficient is updated every iteration rather
    than looping over features sequentially by default. This (setting to
    'random') often leads to significantly faster convergence especially when
    tol is higher than 1e-4.

Attributes
----------
coef_ : array, shape (n_features,) | (n_targets, n_features)
    parameter vector (w in the cost function formula)

sparse_coef_ : scipy.sparse matrix, shape (n_features, 1) | (n_targets, n_features)
    ``sparse_coef_`` is a readonly property derived from ``coef_``

intercept_ : float | array, shape (n_targets,)
    independent term in decision function.

n_iter_ : array-like, shape (n_targets,)
    number of iterations run by the coordinate descent solver to reach the
    specified tolerance.

Examples
--------
>>> from sklearn.linear_model import ElasticNet
>>> from sklearn.datasets import make_regression

>>> X, y = make_regression(n_features=2, random_state=0)
>>> regr = ElasticNet(random_state=0)
>>> regr.fit(X, y)
ElasticNet(random_state=0)
>>> print(regr.coef_)
[18.83816048 64.55968825]
>>> print(regr.intercept_)
1.451...
>>> print(regr.predict([[0, 0]]))
[1.451...]
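A rough OCaml counterpart of the doctest above, using the bindings documented on this page. This is a sketch only: the Sklearn.Linear_model.ElasticNet module path and the Sklearn.Ndarray.matrixf / Sklearn.Ndarray.vectorf constructors are assumptions about the surrounding library, and make_regression is replaced by a tiny hand-written dataset.

let () =
  let open Sklearn in
  (* assumed constructors for dense arrays; adapt to your version's Ndarray helpers *)
  let x = Ndarray.matrixf [| [|0.; 0.|]; [|1.; 1.|]; [|2.; 2.|] |] in
  let y = Ndarray.vectorf [| 0.; 1.; 2. |] in
  let regr =
    Linear_model.ElasticNet.(
      create ~random_state:(`Int 0) ()
      |> fit ~x:(`Ndarray x) ~y)
  in
  (* predict a single 1 x n_features sample, as in the Python example above *)
  let _pred =
    Linear_model.ElasticNet.predict
      ~x:(`Ndarray (Ndarray.matrixf [| [|0.; 0.|] |])) regr
  in
  ()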

Notes
-----
To avoid unnecessary memory duplication the X argument of the fit method
should be directly passed as a Fortran-contiguous numpy array.

See also
--------
ElasticNetCV : Elastic net model with best model selection by cross-validation.
SGDRegressor : implements elastic net regression with incremental training.
SGDClassifier : implements logistic regression with elastic net penalty
    (``SGDClassifier(loss="log", penalty="elasticnet")``).

val fit : ?check_input:bool -> x:[ `Ndarray of Ndarray.t | `PyObject of Py.Object.t ] -> y:Ndarray.t -> t -> t

Fit model with coordinate descent.

Parameters
----------
X : ndarray or scipy.sparse matrix, (n_samples, n_features)
    Data

y : ndarray, shape (n_samples,) or (n_samples, n_targets)
    Target. Will be cast to X's dtype if necessary

check_input : boolean, (default=True)
    Allows several input checks to be bypassed. Don't use this parameter
    unless you know what you're doing.

Notes
-----
Coordinate descent is an algorithm that considers each column of data at a
time, hence it will automatically convert the X input to a Fortran-contiguous
numpy array if necessary.

To avoid memory re-allocation it is advised to allocate the initial data in
memory directly using that format.
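For instance, a small helper that builds and fits an estimator in one pass (a sketch; the Sklearn.Linear_model.ElasticNet module path is an assumption):

(* The [t] value is the last argument of [fit], so the fresh estimator from
   [create] can be piped straight into it. *)
let fit_elastic_net ~x ~y =
  Sklearn.Linear_model.ElasticNet.(
    create ~alpha:0.5 ~l1_ratio:0.5 ~max_iter:10_000 ()
    |> fit ~x:(`Ndarray x) ~y)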

val get_params : ?deep:bool -> t -> Py.Object.t

Get parameters for this estimator.

Parameters
----------
deep : bool, default=True
    If True, will return the parameters for this estimator and contained
    subobjects that are estimators.

Returns
-------
params : mapping of string to any
    Parameter names mapped to their values.
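The result is the underlying Python dict of hyperparameters; one way to inspect it is through pyml's generic string conversion (a sketch, assuming Py.Object.to_string from the pyml library these bindings build on, plus the module path used above):

(* Print the estimator's hyperparameters using the Python dict's own str(). *)
let print_params model =
  Sklearn.Linear_model.ElasticNet.get_params ~deep:true model
  |> Py.Object.to_string
  |> print_endline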

val predict : x:[ `Ndarray of Ndarray.t | `SparseMatrix of Csr_matrix.t ] -> t -> Ndarray.t

Predict using the linear model.

Parameters
----------
X : array_like or sparse matrix, shape (n_samples, n_features)
    Samples.

Returns
-------
C : array, shape (n_samples,)
    Returns predicted values.
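A minimal wrapper showing the call shape for dense input (sketch; module path assumed as above):

(* Predict on a dense feature matrix; a CSR matrix could be passed with the
   `SparseMatrix variant instead. *)
let predict_dense model x =
  Sklearn.Linear_model.ElasticNet.predict ~x:(`Ndarray x) model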

val score : ?sample_weight:Ndarray.t -> x:Ndarray.t -> y:Ndarray.t -> t -> float

Return the coefficient of determination R^2 of the prediction.

The coefficient R^2 is defined as (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a R^2 score of 0.0.

Parameters
----------
X : array-like of shape (n_samples, n_features)
    Test samples. For some estimators this may be a precomputed kernel matrix
    or a list of generic objects instead, shape = (n_samples, n_samples_fitted),
    where n_samples_fitted is the number of samples used in the fitting for
    the estimator.

y : array-like of shape (n_samples,) or (n_samples, n_outputs)
    True values for X.

sample_weight : array-like of shape (n_samples,), default=None
    Sample weights.

Returns
-------
score : float
    R^2 of self.predict(X) wrt. y.

Notes
-----
The R2 score used when calling ``score`` on a regressor will use
``multioutput='uniform_average'`` from version 0.23 to keep consistent with
:func:`~sklearn.metrics.r2_score`. This will influence the ``score`` method of
all the multioutput regressors (except for
:class:`~sklearn.multioutput.MultiOutputRegressor`). To specify the default
value manually and avoid the warning, please either call
:func:`~sklearn.metrics.r2_score` directly or make a custom scorer with
:func:`~sklearn.metrics.make_scorer` (the built-in scorer ``'r2'`` uses
``multioutput='uniform_average'``).
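In these bindings the call is a plain function returning a float, for example (sketch; module path assumed as above):

(* R^2 on held-out data. Unlike [fit] and [predict], [score] takes x directly
   as an Ndarray.t, without a polymorphic-variant wrapper. *)
let r2_on_test model ~x_test ~y_test =
  Sklearn.Linear_model.ElasticNet.score ~x:x_test ~y:y_test model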

val set_params : ?params:(string * Py.Object.t) list -> t -> t

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form ``<component>__<parameter>`` so that it's possible to update each component of a nested object.

Parameters
----------
**params : dict
    Estimator parameters.

Returns
-------
self : object
    Estimator instance.
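Values are handed straight to Python, so they are built as Py.Object.t with pyml's converters (a sketch, assuming Py.Float.of_float and Py.Int.of_int from pyml, and the module path used above):

(* Retune alpha and max_iter on an existing estimator, e.g. before a
   warm-started refit. *)
let retune model =
  Sklearn.Linear_model.ElasticNet.set_params
    ~params:[ "alpha", Py.Float.of_float 0.1; "max_iter", Py.Int.of_int 50_000 ]
    model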

val coef_ : t -> Ndarray.t

Attribute coef_: see constructor for documentation

val sparse_coef_ : t -> Py.Object.t

Attribute sparse_coef_: see constructor for documentation

val intercept_ : t -> Ndarray.t

Attribute intercept_: see constructor for documentation

val n_iter_ : t -> Ndarray.t

Attribute n_iter_: see constructor for documentation
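After a successful fit the learned parameters can be read back as Ndarray.t values, for example (sketch; module path assumed as above):

(* Collect the fitted weight vector, intercept and iteration count. *)
let fitted_parameters model =
  let open Sklearn.Linear_model.ElasticNet in
  (coef_ model, intercept_ model, n_iter_ model)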

val to_string : t -> string

Print the object to a human-readable representation.

val show : t -> string

Print the object to a human-readable representation.

val pp : Format.formatter -> t -> unit

Pretty-print the object to a formatter.
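For quick debugging the estimator can be rendered in its Python repr form, for example (sketch; module path assumed as above):

(* Print the estimator's repr, e.g. ElasticNet(alpha=0.5, ...), to stdout. *)
let debug_print model =
  Format.printf "%a@." Sklearn.Linear_model.ElasticNet.pp model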
