Preprocessing
==============================

.. py:module:: evalml.preprocessing

.. autoapi-nested-parse::

   Preprocessing utilities.


Subpackages
-----------

.. toctree::
   :titlesonly:
   :maxdepth: 3

   data_splitters/index.rst


Submodules
----------

.. toctree::
   :titlesonly:
   :maxdepth: 1

   utils/index.rst


Package Contents
----------------

Classes Summary
~~~~~~~~~~~~~~~

.. autoapisummary::

   evalml.preprocessing.NoSplit
   evalml.preprocessing.TimeSeriesSplit
   evalml.preprocessing.TrainingValidationSplit


Functions
~~~~~~~~~

.. autoapisummary::
   :nosignatures:

   evalml.preprocessing.load_data
   evalml.preprocessing.number_of_features
   evalml.preprocessing.split_data
   evalml.preprocessing.split_multiseries_data
   evalml.preprocessing.target_distribution


Contents
~~~~~~~~~~~~~~~~~~~

.. py:function:: load_data(path, index, target, n_rows=None, drop=None, verbose=True, **kwargs)

   Load features and target from file.

   :param path: Path to file or a http/ftp/s3 URL.
   :type path: str
   :param index: Column for index.
   :type index: str
   :param target: Column for target.
   :type target: str
   :param n_rows: Number of rows to return. Defaults to None.
   :type n_rows: int
   :param drop: List of columns to drop. Defaults to None.
   :type drop: list
   :param verbose: If True, prints information about features and target. Defaults to True.
   :type verbose: bool
   :param \*\*kwargs: Other keyword arguments that should be passed to pandas' `read_csv` method.

   :returns: Features matrix and target.
   :rtype: pd.DataFrame, pd.Series

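A minimal usage sketch for `load_data` follows; the file name ``customers.csv`` and its column names are hypothetical and only illustrate the signature documented above:

.. code-block:: python

   from evalml.preprocessing import load_data

   # Hypothetical CSV with an "id" column to use as the index and a
   # "churned" column to use as the target.
   X, y = load_data(
       path="customers.csv",   # local path or http/ftp/s3 URL
       index="id",
       target="churned",
       n_rows=1000,            # read at most the first 1000 rows
       drop=["notes"],         # hypothetical column to discard
       verbose=True,           # print a summary of features and target
   )
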
.. py:class:: NoSplit(random_seed=0)

   Does not split the training data into training and validation sets. All data is passed as the training set; test data is simply an array of `None`.

   To be used for future unsupervised learning; should not be used in any of the currently supported pipelines.

   :param random_seed: The seed to use for random sampling. Defaults to 0. Not used.
   :type random_seed: int

   **Methods**

   .. autoapisummary::
      :nosignatures:

      evalml.preprocessing.NoSplit.get_metadata_routing
      evalml.preprocessing.NoSplit.get_n_splits
      evalml.preprocessing.NoSplit.is_cv
      evalml.preprocessing.NoSplit.split

   .. py:method:: get_metadata_routing(self)

      Get metadata routing of this object.

      Please check :ref:`User Guide <metadata_routing>` on how the routing mechanism works.

      :returns: **routing** -- A :class:`~sklearn.utils.metadata_routing.MetadataRequest` encapsulating routing information.
      :rtype: MetadataRequest

   .. py:method:: get_n_splits()
      :staticmethod:

      Return the number of splits of this object.

      :returns: Always returns 0.
      :rtype: int

   .. py:method:: is_cv(self)
      :property:

      Returns whether or not the data splitter is a cross-validation data splitter.

      :returns: If the splitter is a cross-validation data splitter
      :rtype: bool

   .. py:method:: split(self, X, y=None)

      Divide the data into training and testing sets, where the testing set is empty.

      :param X: Dataframe of points to split
      :type X: pd.DataFrame
      :param y: Series of points to split
      :type y: pd.Series

      :returns: Indices to split data into training and test set
      :rtype: list


.. py:function:: number_of_features(dtypes)

   Get the number of features of each specific dtype in a DataFrame.

   :param dtypes: DataFrame.dtypes to get the number of features for.
   :type dtypes: pd.Series

   :returns: dtypes and the number of features for each input type.
   :rtype: pd.Series

   .. rubric:: Example

   >>> X = pd.DataFrame()
   >>> X["integers"] = [i for i in range(10)]
   >>> X["floats"] = [float(i) for i in range(10)]
   >>> X["strings"] = [str(i) for i in range(10)]
   >>> X["booleans"] = [bool(i%2) for i in range(10)]

   Lists the number of columns corresponding to each dtype.

   >>> number_of_features(X.dtypes)
                Number of Features
   Boolean                       1
   Categorical                   1
   Numeric                       2


.. py:function:: split_data(X, y, problem_type, problem_configuration=None, test_size=None, random_seed=0)

   Split data into train and test sets.

   :param X: data of shape [n_samples, n_features]
   :type X: pd.DataFrame or np.ndarray
   :param y: target data of length [n_samples]
   :type y: pd.Series or np.ndarray
   :param problem_type: type of supervised learning problem. See evalml.problem_types.problemtype.all_problem_types for a full list.
   :type problem_type: str or ProblemTypes
   :param problem_configuration: Additional parameters needed to configure the search. For example, in time series problems, values should be passed in for the time_index, gap, and max_delay variables.
   :type problem_configuration: dict
   :param test_size: What percentage of data points should be included in the test set. Defaults to 0.2 (20%) for non-timeseries problems and 0.1 (10%) for timeseries problems.
   :type test_size: float
   :param random_seed: Seed for the random number generator. Defaults to 0.
   :type random_seed: int

   :returns: Feature and target data each split into train and test sets.
   :rtype: pd.DataFrame, pd.DataFrame, pd.Series, pd.Series

   :raises ValueError: If the problem_configuration is missing or does not contain both a time_index and series_id for multiseries problems.

   .. rubric:: Examples

   >>> X = pd.DataFrame([1, 2, 3, 4, 5, 6], columns=["First"])
   >>> y = pd.Series([8, 9, 10, 11, 12, 13])
   ...
   >>> X_train, X_validation, y_train, y_validation = split_data(X, y, "regression", random_seed=42)
   >>> X_train
      First
   5      6
   2      3
   4      5
   3      4
   >>> X_validation
      First
   0      1
   1      2
   >>> y_train
   5    13
   2    10
   4    12
   3    11
   dtype: int64
   >>> y_validation
   0    8
   1    9
   dtype: int64


.. py:function:: split_multiseries_data(X, y, series_id, time_index, **kwargs)

   Split stacked multiseries data into train and test sets. Unstacked data can use `split_data`.

   :param X: The input training data of shape [n_samples*n_series, n_features].
   :type X: pd.DataFrame
   :param y: The target training data of length [n_samples*n_series].
   :type y: pd.Series
   :param series_id: Name of column containing series id.
   :type series_id: str
   :param time_index: Name of column containing time index.
   :type time_index: str
   :param \*\*kwargs: Additional keyword arguments to pass to the split_data function.

   :returns: Feature and target data each split into train and test sets.
   :rtype: pd.DataFrame, pd.DataFrame, pd.Series, pd.Series

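A hedged usage sketch for `split_multiseries_data`; the column names ``series_id`` and ``date`` are illustrative, and the chronological-split behavior noted in the comments is an assumption based on the ``time_index`` parameter rather than guaranteed output:

.. code-block:: python

   import pandas as pd
   from evalml.preprocessing import split_multiseries_data

   # Two series ("a" and "b") stacked into long format, each observed on
   # the same five dates.
   dates = list(pd.date_range("2021-01-01", periods=5))
   X = pd.DataFrame({
       "series_id": ["a"] * 5 + ["b"] * 5,
       "date": dates * 2,
       "feature": range(10),
   })
   y = pd.Series(range(10))

   # Presumably splits each series by time, holding out the most recent
   # rows; extra keyword arguments are forwarded to split_data.
   X_train, X_holdout, y_train, y_holdout = split_multiseries_data(
       X, y, series_id="series_id", time_index="date"
   )
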
.. py:function:: target_distribution(targets)

   Get the target distributions.

   :param targets: Target data.
   :type targets: pd.Series

   :returns: Target data and their frequency distribution as percentages.
   :rtype: pd.Series

   .. rubric:: Examples

   >>> y = pd.Series([1, 2, 4, 1, 3, 3, 1, 2])
   >>> print(target_distribution(y).to_string())
   Targets
   1    37.50%
   2    25.00%
   3    25.00%
   4    12.50%
   >>> y = pd.Series([True, False, False, False, True])
   >>> print(target_distribution(y).to_string())
   Targets
   False    60.00%
   True     40.00%


.. py:class:: TimeSeriesSplit(max_delay=0, gap=0, forecast_horizon=None, time_index=None, n_series=None, n_splits=3)

   Rolling Origin Cross Validation for time series problems.

   The max_delay, gap, and forecast_horizon parameters are only used to validate that the requested split size is not too small given these parameters.

   :param max_delay: Max delay value for feature engineering. Time series pipelines create delayed features from existing features. This process will introduce NaNs into the first max_delay number of rows. The splitter uses the last max_delay number of rows from the previous split as the first max_delay number of rows of the current split to avoid "throwing out" more data than is necessary. Defaults to 0.
   :type max_delay: int
   :param gap: Number of time units separating the data used to generate features and the data to forecast on. Defaults to 0.
   :type gap: int
   :param forecast_horizon: Number of time units to forecast. Used for parameter validation. If an integer, will set the size of the cv splits. Defaults to None.
   :type forecast_horizon: int, None
   :param time_index: Name of the column containing the datetime information used to order the data. Defaults to None.
   :type time_index: str
   :param n_splits: Number of data splits to make. Defaults to 3.
   :type n_splits: int

   .. rubric:: Example

   >>> import numpy as np
   >>> import pandas as pd
   ...
   >>> X = pd.DataFrame([i for i in range(10)], columns=["First"])
   >>> y = pd.Series([i for i in range(10)])
   ...
   >>> ts_split = TimeSeriesSplit(n_splits=4)
   >>> generator_ = ts_split.split(X, y)
   ...
   >>> first_split = next(generator_)
   >>> assert (first_split[0] == np.array([0, 1])).all()
   >>> assert (first_split[1] == np.array([2, 3])).all()
   ...
   ...
   >>> second_split = next(generator_)
   >>> assert (second_split[0] == np.array([0, 1, 2, 3])).all()
   >>> assert (second_split[1] == np.array([4, 5])).all()
   ...
   ...
   >>> third_split = next(generator_)
   >>> assert (third_split[0] == np.array([0, 1, 2, 3, 4, 5])).all()
   >>> assert (third_split[1] == np.array([6, 7])).all()
   ...
   ...
   >>> fourth_split = next(generator_)
   >>> assert (fourth_split[0] == np.array([0, 1, 2, 3, 4, 5, 6, 7])).all()
   >>> assert (fourth_split[1] == np.array([8, 9])).all()

   **Methods**

   .. autoapisummary::
      :nosignatures:

      evalml.preprocessing.TimeSeriesSplit.get_metadata_routing
      evalml.preprocessing.TimeSeriesSplit.get_n_splits
      evalml.preprocessing.TimeSeriesSplit.is_cv
      evalml.preprocessing.TimeSeriesSplit.split

   .. py:method:: get_metadata_routing(self)

      Get metadata routing of this object.

      Please check :ref:`User Guide <metadata_routing>` on how the routing mechanism works.

      :returns: **routing** -- A :class:`~sklearn.utils.metadata_routing.MetadataRequest` encapsulating routing information.
      :rtype: MetadataRequest

   .. py:method:: get_n_splits(self, X=None, y=None, groups=None)

      Get the number of data splits.

      :param X: Features to split.
      :type X: pd.DataFrame, None
      :param y: Target variable to split. Defaults to None.
      :type y: pd.DataFrame, None
      :param groups: Ignored but kept for compatibility with sklearn API. Defaults to None.

      :returns: Number of splits.

   .. py:method:: is_cv(self)
      :property:

      Returns whether or not the data splitter is a cross-validation data splitter.

      :returns: If the splitter is a cross-validation data splitter
      :rtype: bool

   .. py:method:: split(self, X, y=None, groups=None)

      Get the time series splits.

      X and y are assumed to be sorted in ascending time order. This method can handle passing in empty or None X and y data, but note that X and y cannot be None or empty at the same time.

      :param X: Features to split.
      :type X: pd.DataFrame, None
      :param y: Target variable to split. Defaults to None.
      :type y: pd.DataFrame, None
      :param groups: Ignored but kept for compatibility with sklearn API. Defaults to None.

      :Yields: Iterator of (train, test) indices tuples.

      :raises ValueError: If one of the proposed splits would be empty.

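Per the ``split`` docstring above, a ``ValueError`` is raised when a proposed split would be empty. A sketch of that failure mode, assuming three rows is too little data for four splits (the exact error message is not guaranteed):

.. code-block:: python

   import pandas as pd
   from evalml.preprocessing import TimeSeriesSplit

   X = pd.DataFrame({"First": range(3)})   # only 3 rows of data
   ts_split = TimeSeriesSplit(n_splits=4)  # 4 splits need more rows than this

   try:
       splits = list(ts_split.split(X))
   except ValueError as err:
       print(f"Split rejected: {err}")
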
.. py:class:: TrainingValidationSplit(test_size=None, train_size=None, shuffle=False, stratify=None, random_seed=0)

   Split the training data into training and validation sets.

   :param test_size: What percentage of data points should be included in the validation set. Defaults to the complement of `train_size` if `train_size` is set, and 0.25 otherwise.
   :type test_size: float
   :param train_size: What percentage of data points should be included in the training set. Defaults to the complement of `test_size`.
   :type train_size: float
   :param shuffle: Whether to shuffle the data before splitting. Defaults to False.
   :type shuffle: boolean
   :param stratify: Splits the data in a stratified fashion, using this argument as class labels. Defaults to None.
   :type stratify: list
   :param random_seed: The seed to use for random sampling. Defaults to 0.
   :type random_seed: int

   .. rubric:: Examples

   >>> import numpy as np
   >>> import pandas as pd
   ...
   >>> X = pd.DataFrame([i for i in range(10)], columns=["First"])
   >>> y = pd.Series([i for i in range(10)])
   ...
   >>> tv_split = TrainingValidationSplit()
   >>> split_ = next(tv_split.split(X, y))
   >>> assert (split_[0] == np.array([0, 1, 2, 3, 4, 5, 6])).all()
   >>> assert (split_[1] == np.array([7, 8, 9])).all()
   ...
   ...
   >>> tv_split = TrainingValidationSplit(test_size=0.5)
   >>> split_ = next(tv_split.split(X, y))
   >>> assert (split_[0] == np.array([0, 1, 2, 3, 4])).all()
   >>> assert (split_[1] == np.array([5, 6, 7, 8, 9])).all()
   ...
   ...
   >>> tv_split = TrainingValidationSplit(shuffle=True)
   >>> split_ = next(tv_split.split(X, y))
   >>> assert (split_[0] == np.array([9, 1, 6, 7, 3, 0, 5])).all()
   >>> assert (split_[1] == np.array([2, 8, 4])).all()
   ...
   ...
   >>> y = pd.Series([i % 3 for i in range(10)])
   >>> tv_split = TrainingValidationSplit(shuffle=True, stratify=y)
   >>> split_ = next(tv_split.split(X, y))
   >>> assert (split_[0] == np.array([1, 9, 3, 2, 8, 6, 7])).all()
   >>> assert (split_[1] == np.array([0, 4, 5])).all()

   **Methods**

   .. autoapisummary::
      :nosignatures:

      evalml.preprocessing.TrainingValidationSplit.get_metadata_routing
      evalml.preprocessing.TrainingValidationSplit.get_n_splits
      evalml.preprocessing.TrainingValidationSplit.is_cv
      evalml.preprocessing.TrainingValidationSplit.split

   .. py:method:: get_metadata_routing(self)

      Get metadata routing of this object.

      Please check :ref:`User Guide <metadata_routing>` on how the routing mechanism works.

      :returns: **routing** -- A :class:`~sklearn.utils.metadata_routing.MetadataRequest` encapsulating routing information.
      :rtype: MetadataRequest

   .. py:method:: get_n_splits()
      :staticmethod:

      Return the number of splits of this object.

      :returns: Always returns 1.
      :rtype: int

   .. py:method:: is_cv(self)
      :property:

      Returns whether or not the data splitter is a cross-validation data splitter.

      :returns: If the splitter is a cross-validation data splitter
      :rtype: bool

   .. py:method:: split(self, X, y=None)

      Divide the data into training and testing sets.

      :param X: Dataframe of points to split
      :type X: pd.DataFrame
      :param y: Series of points to split
      :type y: pd.Series

      :returns: Indices to split data into training and test set
      :rtype: list

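Because ``split`` yields index arrays rather than the data itself, a final hedged sketch of materializing the subsets with ``iloc`` (the ``train_size=0.8`` choice is illustrative):

.. code-block:: python

   import pandas as pd
   from evalml.preprocessing import TrainingValidationSplit

   X = pd.DataFrame({"First": range(10)})
   y = pd.Series(range(10))

   tv_split = TrainingValidationSplit(train_size=0.8)
   train_idx, valid_idx = next(tv_split.split(X, y))

   # Index into the original data to recover the actual train/validation sets.
   X_train, X_valid = X.iloc[train_idx], X.iloc[valid_idx]
   y_train, y_valid = y.iloc[train_idx], y.iloc[valid_idx]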