data_splitters

Data splitter classes.

Package Contents

Classes Summary

TimeSeriesSplit

Rolling Origin Cross Validation for time series problems.

TrainingValidationSplit

Split the training data into training and validation sets.

Contents

class evalml.preprocessing.data_splitters.TimeSeriesSplit(max_delay=0, gap=0, date_index=None, n_splits=3)[source]

Rolling Origin Cross Validation for time series problems.

This class uses max_delay and gap values to take into account that evalml time series pipelines perform some feature and target engineering, e.g. delaying input features and shifting the target variable by the desired amount. If the data that will be split already has all the features and appropriate target values, then set max_delay and gap to 0.

Parameters
  • max_delay (int) – Max delay value for feature engineering. Time series pipelines create delayed features from existing features. This process will introduce NaNs into the first max_delay number of rows. The splitter uses the last max_delay number of rows from the previous split as the first max_delay number of rows of the current split to avoid “throwing out” more data than is necessary. Defaults to 0.

  • gap (int) – Gap used in time series problem. Time series pipelines shift the target variable by gap rows. Defaults to 0.

  • date_index (str) – Name of the column containing the datetime information used to order the data. Defaults to None.

  • n_splits (int) – Number of data splits to make. Defaults to 3.

Methods

get_n_splits

Get the number of data splits.

split

Get the time series splits.

get_n_splits(self, X=None, y=None, groups=None)[source]

Get the number of data splits.

Parameters
  • X (pd.DataFrame, None) – Features to split.

  • y (pd.DataFrame, None) – Target variable to split. Defaults to None.

  • groups – Ignored but kept for compatibility with sklearn API. Defaults to None.

Returns

Number of splits.

split(self, X, y=None, groups=None)[source]

Get the time series splits.

X and y are assumed to be sorted in ascending time order. This method can handle passing in empty or None X and y data but note that X and y cannot be None or empty at the same time.

Parameters
  • X (pd.DataFrame, None) – Features to split.

  • y (pd.DataFrame, None) – Target variable to split. Defaults to None.

  • groups – Ignored but kept for compatibility with sklearn API. Defaults to None.

Yields

Iterator of (train, test) indices tuples.

Raises

ValueError – If one of the proposed splits would be empty.
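The rolling-origin scheme described above can be illustrated with a short sketch. This is not the evalml implementation, just a minimal pure-Python illustration of the documented idea: each fold trains on an expanding prefix of the data and tests on the next block, with the test window extended backwards by max_delay rows so the rows consumed by delayed-feature NaNs are reused rather than thrown out. The function name and fold-sizing rule here are assumptions for illustration only.

```python
# Conceptual sketch of rolling-origin cross validation (illustrative,
# not the evalml source). Assumes n_samples rows sorted in time order.
def rolling_origin_splits(n_samples, n_splits=3, max_delay=0):
    fold = n_samples // (n_splits + 1)  # size of each test block (assumed rule)
    for i in range(n_splits):
        split_point = (i + 1) * fold
        train = list(range(0, split_point))
        # reuse the last max_delay training rows at the start of the test
        # window, mirroring the behavior described for max_delay above
        test = list(range(split_point - max_delay, split_point + fold))
        yield train, test

for train, test in rolling_origin_splits(12, n_splits=3, max_delay=2):
    print(train, test)
```

Each successive fold's training set grows, and consecutive test windows overlap the end of the previous training set by max_delay rows.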

class evalml.preprocessing.data_splitters.TrainingValidationSplit(test_size=None, train_size=None, shuffle=False, stratify=None, random_seed=0)[source]

Split the training data into training and validation sets.

Parameters
  • test_size (float) – What percentage of data points should be included in the validation set. Defaults to the complement of train_size if train_size is set, and 0.25 otherwise.

  • train_size (float) – What percentage of data points should be included in the training set. Defaults to the complement of test_size.

  • shuffle (boolean) – Whether to shuffle the data before splitting. Defaults to False.

  • stratify (list) – Splits the data in a stratified fashion, using this argument as class labels. Defaults to None.

  • random_seed (int) – The seed to use for random sampling. Defaults to 0.

Methods

get_n_splits

Return the number of splits of this object.

split

Divide the data into training and testing sets.

static get_n_splits()[source]

Return the number of splits of this object.

Returns

Always returns 1.

Return type

int

split(self, X, y=None)[source]

Divide the data into training and testing sets.

Parameters
  • X (pd.DataFrame) – Dataframe of points to split.

  • y (pd.Series) – Series of points to split.

Returns

Indices to split data into training and test sets.

Return type

list