package sklearn

val get_py : string -> Py.Object.t

Get an attribute of this module as a Py.Object.t. This is useful to pass a Python function to another function.
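As a hedged sketch (the module path Sklearn.Compose, the attribute name and the pyml calls below are illustrative assumptions, not guarantees made by this page), an attribute fetched with get_py is an ordinary Py.Object.t:

    (* Sketch: fetch an attribute of the wrapped Python module as a raw
       Py.Object.t; module path and attribute name are illustrative. *)
    let () =
      Py.initialize ();  (* start the embedded interpreter if the binding has not already *)
      let f = Sklearn.Compose.get_py "make_column_transformer" in
      (* The value behaves like any other pyml object: it can be checked for
         callability or passed to a function that expects a Python callable. *)
      Printf.printf "callable: %b\n" (Py.Callable.check f)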

module ColumnTransformer : sig ... end
module TransformedTargetRegressor : sig ... end
module Make_column_selector : sig ... end
val make_column_transformer : ?kwargs:(string * Py.Object.t) list -> Py.Object.t list -> Py.Object.t

Construct a ColumnTransformer from the given transformers.

This is a shorthand for the ColumnTransformer constructor; it does not require, and does not permit, naming the transformers. Instead, they will be given names automatically based on their types. It also does not allow weighting with ``transformer_weights``.

Read more in the :ref:`User Guide <make_column_transformer>`.

Parameters
----------
*transformers : tuples
    Tuples of the form (transformer, column(s)) specifying the
    transformer objects to be applied to subsets of the data.

    transformer : estimator, or 'drop' / 'passthrough'
        Estimator must support :term:`fit` and :term:`transform`.
        The special-cased strings 'drop' and 'passthrough' are also
        accepted, indicating respectively that the columns should be
        dropped or passed through untransformed.

    column(s) : string or int, array-like of string or int, slice, boolean mask array or callable
        Indexes the data on its second axis. Integers are interpreted as
        positional columns, while strings can reference DataFrame columns
        by name. A scalar string or int should be used where
        ``transformer`` expects X to be a 1d array-like (vector),
        otherwise a 2d array will be passed to the transformer. A callable
        is passed the input data `X` and can return any of the above.

remainder : 'drop', 'passthrough' or estimator, default='drop'
    By default, only the columns specified in `transformers` are
    transformed and combined in the output, and the non-specified columns
    are dropped. By specifying ``remainder='passthrough'``, all remaining
    columns that were not specified in `transformers` will be
    automatically passed through; this subset of columns is concatenated
    with the output of the transformers. By setting ``remainder`` to an
    estimator, the remaining non-specified columns will be transformed by
    that estimator. The estimator must support :term:`fit` and
    :term:`transform`.

sparse_threshold : float, default=0.3
    If the transformed output consists of a mix of sparse and dense data,
    it will be stacked as a sparse matrix if the density is lower than
    this value. Use ``sparse_threshold=0`` to always return dense.
    When the transformed output consists of all sparse or all dense data,
    the stacked result will be sparse or dense, respectively, and this
    keyword will be ignored.

n_jobs : int or None, optional (default=None)
    Number of jobs to run in parallel. ``None`` means 1 unless in a
    :obj:`joblib.parallel_backend` context. ``-1`` means using all
    processors. See :term:`Glossary <n_jobs>` for more details.

verbose : boolean, optional (default=False)
    If True, the time elapsed while fitting each transformer will be
    printed as it is completed.
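The OCaml signature above exposes these keyword parameters only through the generic ?kwargs argument, so here is a minimal sketch of how such an argument might be built with pyml conversions (the chosen values are purely illustrative):

    (* Sketch: keyword arguments for make_column_transformer expressed as
       (name, Py.Object.t) pairs; values are illustrative only. *)
    let kwargs =
      [ "remainder",        Py.String.of_string "passthrough";
        "sparse_threshold", Py.Float.of_float 0.3;
        "n_jobs",           Py.Int.of_int (-1);
        "verbose",          Py.Bool.of_bool true ]

Such a list would be supplied as the ?kwargs argument of make_column_transformer together with the positional list of transformer tuples.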

Returns
-------
ct : ColumnTransformer

See also
--------
sklearn.compose.ColumnTransformer :
    Class that allows combining the outputs of multiple transformer
    objects used on column subsets of the data into a single feature
    space.

Examples
--------
>>> from sklearn.preprocessing import StandardScaler, OneHotEncoder
>>> from sklearn.compose import make_column_transformer
>>> make_column_transformer(
...     (StandardScaler(), ['numerical_column']),
...     (OneHotEncoder(), ['categorical_column']))
ColumnTransformer(transformers=[('standardscaler', StandardScaler(...),
                                 ['numerical_column']),
                                ('onehotencoder', OneHotEncoder(...),
                                 ['categorical_column'])])
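A hedged OCaml counterpart of the example above; the module paths (Sklearn.Compose, Sklearn.Preprocessing), the use of their get_py accessors to obtain the estimator classes, and the pyml helpers are assumptions about the surrounding API rather than facts taken from this page:

    (* Sketch: the Python example above expressed through this binding.
       Each transformer is paired with its column list as a Python tuple,
       and the pairs are passed as the positional Py.Object.t list. *)
    let () =
      Py.initialize ();
      let instantiate cls = Py.Callable.to_function cls [||] in
      let scaler  = instantiate (Sklearn.Preprocessing.get_py "StandardScaler") in
      let encoder = instantiate (Sklearn.Preprocessing.get_py "OneHotEncoder") in
      let pair transformer column =
        Py.Tuple.of_list
          [transformer; Py.List.of_list [Py.String.of_string column]] in
      let ct =
        Sklearn.Compose.make_column_transformer
          [ pair scaler  "numerical_column";
            pair encoder "categorical_column" ] in
      (* Print the repr of the resulting ColumnTransformer. *)
      print_endline (Py.Object.to_string ct)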
