preprocessing ================================================================ .. py:module:: evalml.pipelines.components.transformers.preprocessing .. autoapi-nested-parse:: Preprocessing transformer components. Submodules ---------- .. toctree:: :titlesonly: :maxdepth: 1 datetime_featurizer/index.rst decomposer/index.rst drop_nan_rows_transformer/index.rst drop_null_columns/index.rst drop_rows_transformer/index.rst featuretools/index.rst log_transformer/index.rst lsa/index.rst natural_language_featurizer/index.rst polynomial_decomposer/index.rst replace_nullable_types/index.rst stl_decomposer/index.rst text_transformer/index.rst time_series_featurizer/index.rst time_series_regularizer/index.rst transform_primitive_components/index.rst Package Contents ---------------- Classes Summary ~~~~~~~~~~~~~~~ .. autoapisummary:: evalml.pipelines.components.transformers.preprocessing.DateTimeFeaturizer evalml.pipelines.components.transformers.preprocessing.Decomposer evalml.pipelines.components.transformers.preprocessing.DFSTransformer evalml.pipelines.components.transformers.preprocessing.DropNaNRowsTransformer evalml.pipelines.components.transformers.preprocessing.DropNullColumns evalml.pipelines.components.transformers.preprocessing.DropRowsTransformer evalml.pipelines.components.transformers.preprocessing.EmailFeaturizer evalml.pipelines.components.transformers.preprocessing.LogTransformer evalml.pipelines.components.transformers.preprocessing.LSA evalml.pipelines.components.transformers.preprocessing.NaturalLanguageFeaturizer evalml.pipelines.components.transformers.preprocessing.PolynomialDecomposer evalml.pipelines.components.transformers.preprocessing.ReplaceNullableTypes evalml.pipelines.components.transformers.preprocessing.STLDecomposer evalml.pipelines.components.transformers.preprocessing.TextTransformer evalml.pipelines.components.transformers.preprocessing.TimeSeriesFeaturizer evalml.pipelines.components.transformers.preprocessing.TimeSeriesRegularizer evalml.pipelines.components.transformers.preprocessing.URLFeaturizer Contents ~~~~~~~~~~~~~~~~~~~ .. py:class:: DateTimeFeaturizer(features_to_extract=None, encode_as_categories=False, time_index=None, random_seed=0, **kwargs) Transformer that can automatically extract features from datetime columns. :param features_to_extract: List of features to extract. Valid options include "year", "month", "day_of_week", "hour". Defaults to None. :type features_to_extract: list :param encode_as_categories: Whether day-of-week and month features should be encoded as pandas "category" dtype. This allows OneHotEncoders to encode these features. Defaults to False. :type encode_as_categories: bool :param time_index: Name of the column containing the datetime information used to order the data. Ignored. :type time_index: str :param random_seed: Seed for the random number generator. Defaults to 0. :type random_seed: int **Attributes** .. list-table:: :widths: 15 85 :header-rows: 0 * - **hyperparameter_ranges** - {} * - **modifies_features** - True * - **modifies_target** - False * - **name** - DateTime Featurizer * - **training_only** - False **Methods** .. 
autoapisummary:: :nosignatures: evalml.pipelines.components.transformers.preprocessing.DateTimeFeaturizer.clone evalml.pipelines.components.transformers.preprocessing.DateTimeFeaturizer.default_parameters evalml.pipelines.components.transformers.preprocessing.DateTimeFeaturizer.describe evalml.pipelines.components.transformers.preprocessing.DateTimeFeaturizer.fit evalml.pipelines.components.transformers.preprocessing.DateTimeFeaturizer.fit_transform evalml.pipelines.components.transformers.preprocessing.DateTimeFeaturizer.get_feature_names evalml.pipelines.components.transformers.preprocessing.DateTimeFeaturizer.load evalml.pipelines.components.transformers.preprocessing.DateTimeFeaturizer.needs_fitting evalml.pipelines.components.transformers.preprocessing.DateTimeFeaturizer.parameters evalml.pipelines.components.transformers.preprocessing.DateTimeFeaturizer.save evalml.pipelines.components.transformers.preprocessing.DateTimeFeaturizer.transform .. py:method:: clone(self) Constructs a new component with the same parameters and random state. :returns: A new instance of this component with identical parameters and random state. .. py:method:: default_parameters(cls) Returns the default parameters for this component. Our convention is that Component.default_parameters == Component().parameters. :returns: Default parameters for this component. :rtype: dict .. py:method:: describe(self, print_name=False, return_dict=False) Describe a component and its parameters. :param print_name: whether to print name of component :type print_name: bool, optional :param return_dict: whether to return description as dictionary in the format {"name": name, "parameters": parameters} :type return_dict: bool, optional :returns: Returns dictionary if return_dict is True, else None. :rtype: None or dict .. py:method:: fit(self, X, y=None) Fit the datetime featurizer component. :param X: Input features. :type X: pd.DataFrame :param y: Target data. Ignored. :type y: pd.Series, optional :returns: self .. py:method:: fit_transform(self, X, y=None) Fits on X and transforms X. :param X: Data to fit and transform. :type X: pd.DataFrame :param y: Target data. :type y: pd.Series :returns: Transformed X. :rtype: pd.DataFrame :raises MethodPropertyNotFoundError: If transformer does not have a transform method or a component_obj that implements transform. .. py:method:: get_feature_names(self) Gets the categories of each datetime feature. :returns: Dictionary, where each key-value pair is a column name and a dictionary mapping the unique feature values to their integer encoding. :rtype: dict .. py:method:: load(file_path) :staticmethod: Loads component at file path. :param file_path: Location to load file. :type file_path: str :returns: ComponentBase object .. py:method:: needs_fitting(self) Returns boolean determining if component needs fitting before calling predict, predict_proba, transform, or feature_importances. This can be overridden to False for components that do not need to be fit or whose fit methods do nothing. :returns: True. .. py:method:: parameters(self) :property: Returns the parameters which were used to initialize the component. .. py:method:: save(self, file_path, pickle_protocol=cloudpickle.DEFAULT_PROTOCOL) Saves component at file path. :param file_path: Location to save file. :type file_path: str :param pickle_protocol: The pickle data stream format. :type pickle_protocol: int .. 
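The following is a minimal usage sketch for this featurizer; the toy DataFrame and its ``dates`` column are illustrative and not part of the API.

.. code-block:: python

    import pandas as pd
    from evalml.pipelines.components.transformers.preprocessing import DateTimeFeaturizer

    # Illustrative toy data: a single datetime column named "dates".
    X = pd.DataFrame({"dates": pd.date_range("2021-01-01", periods=14, freq="D")})

    featurizer = DateTimeFeaturizer(encode_as_categories=True)
    X_t = featurizer.fit_transform(X)

    # The "dates" column is replaced by extracted features (year, month,
    # day of week, hour); get_feature_names() maps each categorical feature's
    # values to their integer encodings.
    print(X_t.columns)
    print(featurizer.get_feature_names())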
py:method:: transform(self, X, y=None) Transforms data X by creating new features using existing DateTime columns, and then dropping those DateTime columns. :param X: Input features. :type X: pd.DataFrame :param y: Ignored. :type y: pd.Series, optional :returns: Transformed X :rtype: pd.DataFrame .. py:class:: Decomposer(component_obj=None, random_seed: int = 0, degree: int = 1, seasonal_period: int = -1, time_index: str = None, **kwargs) Component that removes trends and seasonality from time series and returns the decomposed components. :param parameters: Dictionary of parameters to pass to component object. :type parameters: dict :param component_obj: Instance of a detrender/deseasonalizer class. :type component_obj: class :param random_seed: Seed for the random number generator. Defaults to 0. :type random_seed: int :param degree: Currently the degree of the PolynomialDecomposer, not used for STLDecomposer. :type degree: int :param seasonal_period: The best guess, in units, for the period of the seasonal signal. :type seasonal_period: int :param time_index: The column name of the feature matrix (X) that the datetime information should be pulled from. :type time_index: str **Attributes** .. list-table:: :widths: 15 85 :header-rows: 0 * - **hyperparameter_ranges** - None * - **invalid_frequencies** - [] * - **modifies_features** - False * - **modifies_target** - True * - **name** - Decomposer * - **needs_fitting** - True * - **training_only** - False **Methods** .. autoapisummary:: :nosignatures: evalml.pipelines.components.transformers.preprocessing.Decomposer.clone evalml.pipelines.components.transformers.preprocessing.Decomposer.default_parameters evalml.pipelines.components.transformers.preprocessing.Decomposer.describe evalml.pipelines.components.transformers.preprocessing.Decomposer.determine_periodicity evalml.pipelines.components.transformers.preprocessing.Decomposer.fit evalml.pipelines.components.transformers.preprocessing.Decomposer.fit_transform evalml.pipelines.components.transformers.preprocessing.Decomposer.get_trend_dataframe evalml.pipelines.components.transformers.preprocessing.Decomposer.inverse_transform evalml.pipelines.components.transformers.preprocessing.Decomposer.is_freq_valid evalml.pipelines.components.transformers.preprocessing.Decomposer.load evalml.pipelines.components.transformers.preprocessing.Decomposer.parameters evalml.pipelines.components.transformers.preprocessing.Decomposer.plot_decomposition evalml.pipelines.components.transformers.preprocessing.Decomposer.save evalml.pipelines.components.transformers.preprocessing.Decomposer.set_seasonal_period evalml.pipelines.components.transformers.preprocessing.Decomposer.transform .. py:method:: clone(self) Constructs a new component with the same parameters and random state. :returns: A new instance of this component with identical parameters and random state. .. py:method:: default_parameters(cls) Returns the default parameters for this component. Our convention is that Component.default_parameters == Component().parameters. :returns: Default parameters for this component. :rtype: dict .. py:method:: describe(self, print_name=False, return_dict=False) Describe a component and its parameters. :param print_name: whether to print name of component :type print_name: bool, optional :param return_dict: whether to return description as dictionary in the format {"name": name, "parameters": parameters} :type return_dict: bool, optional :returns: Returns dictionary if return_dict is True, else None. 
:rtype: None or dict .. py:method:: determine_periodicity(self, X: pandas.DataFrame, y: pandas.Series, method: str = 'autocorrelation') Function that uses autocorrelative methods to determine the first, significant period of the seasonal signal. :param X: The feature data of the time series problem. :type X: pandas.DataFrame :param y: The target data of a time series problem. :type y: pandas.Series :param method: Either "autocorrelation" or "partial-autocorrelation". The method by which to determine the first period of the seasonal part of the target signal. "partial-autocorrelation" should currently not be used. Defaults to "autocorrelation". :type method: str :returns: The integer numbers of entries in time series data over which the seasonal part of the target data repeats. If the time series data is in days, then this is the number of days that it takes the target's seasonal signal to repeat. Note: the target data can contain multiple seasonal signals. This function will only return the first, and thus shortest, period. E.g. if the target has both weekly and yearly seasonality, the function will only return "7" and not return "365". If no period is detected, returns [None]. :rtype: (list[int]) .. py:method:: fit(self, X, y=None) Fits component to data. :param X: The input training data of shape [n_samples, n_features] :type X: pd.DataFrame :param y: The target training data of length [n_samples] :type y: pd.Series, optional :returns: self :raises MethodPropertyNotFoundError: If component does not have a fit method or a component_obj that implements fit. .. py:method:: fit_transform(self, X: pandas.DataFrame, y: pandas.Series = None) -> tuple[pandas.DataFrame, pandas.Series] Removes fitted trend and seasonality from target variable. :param X: Ignored. :type X: pd.DataFrame, optional :param y: Target variable to detrend and deseasonalize. :type y: pd.Series :returns: The first element is the input features returned without modification. The second element is the target variable y with the fitted trend removed. :rtype: tuple of pd.DataFrame, pd.Series .. py:method:: get_trend_dataframe(self, y: pandas.Series) :abstractmethod: Return a list of dataframes, each with 3 columns: trend, seasonality, residual. .. py:method:: inverse_transform(self, y: pandas.Series) :abstractmethod: Add the trend + seasonality back to y. .. py:method:: is_freq_valid(self, freq: str) :classmethod: Determines if the given string represents a valid frequency for this decomposer. :param freq: A frequency to validate. See the pandas docs at https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases for options. :type freq: str :returns: boolean representing whether the frequency is valid or not. .. py:method:: load(file_path) :staticmethod: Loads component at file path. :param file_path: Location to load file. :type file_path: str :returns: ComponentBase object .. py:method:: parameters(self) :property: Returns the parameters which were used to initialize the component. .. py:method:: plot_decomposition(self, X: pandas.DataFrame, y: pandas.Series, show: bool = False) -> tuple[matplotlib.pyplot.Figure, list] Plots the decomposition of the target signal. :param X: Input data with time series data in index. :type X: pd.DataFrame :param y: Target variable data provided as a Series for univariate problems or a DataFrame for multivariate problems. :type y: pd.Series or pd.DataFrame :param show: Whether to display the plot or not. Defaults to False.
:type show: bool :returns: The figure and axes that have the decompositions plotted on them :rtype: matplotlib.pyplot.Figure, list[matplotlib.pyplot.Axes] .. py:method:: save(self, file_path, pickle_protocol=cloudpickle.DEFAULT_PROTOCOL) Saves component at file path. :param file_path: Location to save file. :type file_path: str :param pickle_protocol: The pickle data stream format. :type pickle_protocol: int .. py:method:: set_seasonal_period(self, X: pandas.DataFrame, y: pandas.Series) Function to set the component's seasonal period based on the target's seasonality. :param X: The feature data of the time series problem. :type X: pandas.DataFrame :param y: The target data of a time series problem. :type y: pandas.Series .. py:method:: transform(self, X, y=None) :abstractmethod: Transforms data X. :param X: Data to transform. :type X: pd.DataFrame :param y: Target data. :type y: pd.Series, optional :returns: Transformed X :rtype: pd.DataFrame :raises MethodPropertyNotFoundError: If transformer does not have a transform method or a component_obj that implements transform. .. py:class:: DFSTransformer(index='index', features=None, random_seed=0, **kwargs) Featuretools DFS component that generates features for the input features. :param index: The name of the column that contains the indices. If no column with this name exists, then featuretools.EntitySet() creates a column with this name to serve as the index column. Defaults to 'index'. :type index: string :param random_seed: Seed for the random number generator. Defaults to 0. :type random_seed: int :param features: List of features to run DFS on. Defaults to None. Features will only be computed if the columns used by the feature exist in the input and if the feature itself is not in input. :type features: list **Attributes** .. list-table:: :widths: 15 85 :header-rows: 0 * - **hyperparameter_ranges** - {} * - **modifies_features** - True * - **modifies_target** - False * - **name** - DFS Transformer * - **training_only** - False **Methods** .. autoapisummary:: :nosignatures: evalml.pipelines.components.transformers.preprocessing.DFSTransformer.clone evalml.pipelines.components.transformers.preprocessing.DFSTransformer.default_parameters evalml.pipelines.components.transformers.preprocessing.DFSTransformer.describe evalml.pipelines.components.transformers.preprocessing.DFSTransformer.fit evalml.pipelines.components.transformers.preprocessing.DFSTransformer.fit_transform evalml.pipelines.components.transformers.preprocessing.DFSTransformer.load evalml.pipelines.components.transformers.preprocessing.DFSTransformer.needs_fitting evalml.pipelines.components.transformers.preprocessing.DFSTransformer.parameters evalml.pipelines.components.transformers.preprocessing.DFSTransformer.save evalml.pipelines.components.transformers.preprocessing.DFSTransformer.transform .. py:method:: clone(self) Constructs a new component with the same parameters and random state. :returns: A new instance of this component with identical parameters and random state. .. py:method:: default_parameters(cls) Returns the default parameters for this component. Our convention is that Component.default_parameters == Component().parameters. :returns: Default parameters for this component. :rtype: dict .. py:method:: describe(self, print_name=False, return_dict=False) Describe a component and its parameters. 
:param print_name: whether to print name of component :type print_name: bool, optional :param return_dict: whether to return description as dictionary in the format {"name": name, "parameters": parameters} :type return_dict: bool, optional :returns: Returns dictionary if return_dict is True, else None. :rtype: None or dict .. py:method:: fit(self, X, y=None) Fits the DFSTransformer Transformer component. :param X: The input data to transform, of shape [n_samples, n_features]. :type X: pd.DataFrame, np.array :param y: The target training data of length [n_samples]. :type y: pd.Series :returns: self .. py:method:: fit_transform(self, X, y=None) Fits on X and transforms X. :param X: Data to fit and transform. :type X: pd.DataFrame :param y: Target data. :type y: pd.Series :returns: Transformed X. :rtype: pd.DataFrame :raises MethodPropertyNotFoundError: If transformer does not have a transform method or a component_obj that implements transform. .. py:method:: load(file_path) :staticmethod: Loads component at file path. :param file_path: Location to load file. :type file_path: str :returns: ComponentBase object .. py:method:: needs_fitting(self) Returns boolean determining if component needs fitting before calling predict, predict_proba, transform, or feature_importances. This can be overridden to False for components that do not need to be fit or whose fit methods do nothing. :returns: True. .. py:method:: parameters(self) :property: Returns the parameters which were used to initialize the component. .. py:method:: save(self, file_path, pickle_protocol=cloudpickle.DEFAULT_PROTOCOL) Saves component at file path. :param file_path: Location to save file. :type file_path: str :param pickle_protocol: The pickle data stream format. :type pickle_protocol: int .. py:method:: transform(self, X, y=None) Computes the feature matrix for the input X using featuretools' dfs algorithm. :param X: The input training data to transform. Has shape [n_samples, n_features] :type X: pd.DataFrame or np.ndarray :param y: Ignored. :type y: pd.Series, optional :returns: Feature matrix :rtype: pd.DataFrame .. py:class:: DropNaNRowsTransformer(parameters=None, component_obj=None, random_seed=0, **kwargs) Transformer to drop rows with NaN values. :param random_seed: Seed for the random number generator. Is not used by this component. Defaults to 0. :type random_seed: int **Attributes** .. list-table:: :widths: 15 85 :header-rows: 0 * - **hyperparameter_ranges** - {} * - **modifies_features** - True * - **modifies_target** - True * - **name** - Drop NaN Rows Transformer * - **training_only** - False **Methods** .. autoapisummary:: :nosignatures: evalml.pipelines.components.transformers.preprocessing.DropNaNRowsTransformer.clone evalml.pipelines.components.transformers.preprocessing.DropNaNRowsTransformer.default_parameters evalml.pipelines.components.transformers.preprocessing.DropNaNRowsTransformer.describe evalml.pipelines.components.transformers.preprocessing.DropNaNRowsTransformer.fit evalml.pipelines.components.transformers.preprocessing.DropNaNRowsTransformer.fit_transform evalml.pipelines.components.transformers.preprocessing.DropNaNRowsTransformer.load evalml.pipelines.components.transformers.preprocessing.DropNaNRowsTransformer.needs_fitting evalml.pipelines.components.transformers.preprocessing.DropNaNRowsTransformer.parameters evalml.pipelines.components.transformers.preprocessing.DropNaNRowsTransformer.save evalml.pipelines.components.transformers.preprocessing.DropNaNRowsTransformer.transform .. 
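A short sketch of the typical call pattern for this transformer, using illustrative toy data; per the ``transform`` docstring below, both the features and the target are returned with NaN rows removed.

.. code-block:: python

    import numpy as np
    import pandas as pd
    from evalml.pipelines.components.transformers.preprocessing import DropNaNRowsTransformer

    X = pd.DataFrame({"a": [1.0, np.nan, 3.0], "b": [4.0, 5.0, np.nan]})
    y = pd.Series([0, 1, 0])

    # Rows 1 and 2 contain NaNs, so only row 0 survives in both X and y.
    dropper = DropNaNRowsTransformer()
    X_t, y_t = dropper.fit(X, y).transform(X, y)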
py:method:: clone(self) Constructs a new component with the same parameters and random state. :returns: A new instance of this component with identical parameters and random state. .. py:method:: default_parameters(cls) Returns the default parameters for this component. Our convention is that Component.default_parameters == Component().parameters. :returns: Default parameters for this component. :rtype: dict .. py:method:: describe(self, print_name=False, return_dict=False) Describe a component and its parameters. :param print_name: whether to print name of component :type print_name: bool, optional :param return_dict: whether to return description as dictionary in the format {"name": name, "parameters": parameters} :type return_dict: bool, optional :returns: Returns dictionary if return_dict is True, else None. :rtype: None or dict .. py:method:: fit(self, X, y=None) Fits component to data. :param X: The input training data of shape [n_samples, n_features]. :type X: pd.DataFrame :param y: The target training data of length [n_samples]. :type y: pd.Series, optional :returns: self .. py:method:: fit_transform(self, X, y=None) Fits on X and transforms X. :param X: Data to fit and transform. :type X: pd.DataFrame :param y: Target data. :type y: pd.Series :returns: Transformed X. :rtype: pd.DataFrame :raises MethodPropertyNotFoundError: If transformer does not have a transform method or a component_obj that implements transform. .. py:method:: load(file_path) :staticmethod: Loads component at file path. :param file_path: Location to load file. :type file_path: str :returns: ComponentBase object .. py:method:: needs_fitting(self) Returns boolean determining if component needs fitting before calling predict, predict_proba, transform, or feature_importances. This can be overridden to False for components that do not need to be fit or whose fit methods do nothing. :returns: True. .. py:method:: parameters(self) :property: Returns the parameters which were used to initialize the component. .. py:method:: save(self, file_path, pickle_protocol=cloudpickle.DEFAULT_PROTOCOL) Saves component at file path. :param file_path: Location to save file. :type file_path: str :param pickle_protocol: The pickle data stream format. :type pickle_protocol: int .. py:method:: transform(self, X, y=None) Transforms data using fitted component. :param X: Features. :type X: pd.DataFrame :param y: Target data. :type y: pd.Series, optional :returns: Data with NaN rows dropped. :rtype: (pd.DataFrame, pd.Series) .. py:class:: DropNullColumns(pct_null_threshold=1.0, random_seed=0, **kwargs) Transformer to drop features whose percentage of NaN values exceeds a specified threshold. :param pct_null_threshold: The percentage of NaN values in an input feature to drop. Must be a value between [0, 1] inclusive. If equal to 0.0, will drop columns with any null values. If equal to 1.0, will drop columns with all null values. Defaults to 1.0. :type pct_null_threshold: float :param random_seed: Seed for the random number generator. Defaults to 0. :type random_seed: int **Attributes** .. list-table:: :widths: 15 85 :header-rows: 0 * - **hyperparameter_ranges** - {} * - **modifies_features** - True * - **modifies_target** - False * - **name** - Drop Null Columns Transformer * - **training_only** - False **Methods** ..
autoapisummary:: :nosignatures: evalml.pipelines.components.transformers.preprocessing.DropNullColumns.clone evalml.pipelines.components.transformers.preprocessing.DropNullColumns.default_parameters evalml.pipelines.components.transformers.preprocessing.DropNullColumns.describe evalml.pipelines.components.transformers.preprocessing.DropNullColumns.fit evalml.pipelines.components.transformers.preprocessing.DropNullColumns.fit_transform evalml.pipelines.components.transformers.preprocessing.DropNullColumns.load evalml.pipelines.components.transformers.preprocessing.DropNullColumns.needs_fitting evalml.pipelines.components.transformers.preprocessing.DropNullColumns.parameters evalml.pipelines.components.transformers.preprocessing.DropNullColumns.save evalml.pipelines.components.transformers.preprocessing.DropNullColumns.transform .. py:method:: clone(self) Constructs a new component with the same parameters and random state. :returns: A new instance of this component with identical parameters and random state. .. py:method:: default_parameters(cls) Returns the default parameters for this component. Our convention is that Component.default_parameters == Component().parameters. :returns: Default parameters for this component. :rtype: dict .. py:method:: describe(self, print_name=False, return_dict=False) Describe a component and its parameters. :param print_name: whether to print name of component :type print_name: bool, optional :param return_dict: whether to return description as dictionary in the format {"name": name, "parameters": parameters} :type return_dict: bool, optional :returns: Returns dictionary if return_dict is True, else None. :rtype: None or dict .. py:method:: fit(self, X, y=None) Fits component to data. :param X: The input training data of shape [n_samples, n_features]. :type X: pd.DataFrame :param y: The target training data of length [n_samples]. :type y: pd.Series, optional :returns: self .. py:method:: fit_transform(self, X, y=None) Fits on X and transforms X. :param X: Data to fit and transform. :type X: pd.DataFrame :param y: Target data. :type y: pd.Series :returns: Transformed X. :rtype: pd.DataFrame :raises MethodPropertyNotFoundError: If transformer does not have a transform method or a component_obj that implements transform. .. py:method:: load(file_path) :staticmethod: Loads component at file path. :param file_path: Location to load file. :type file_path: str :returns: ComponentBase object .. py:method:: needs_fitting(self) Returns boolean determining if component needs fitting before calling predict, predict_proba, transform, or feature_importances. This can be overridden to False for components that do not need to be fit or whose fit methods do nothing. :returns: True. .. py:method:: parameters(self) :property: Returns the parameters which were used to initialize the component. .. py:method:: save(self, file_path, pickle_protocol=cloudpickle.DEFAULT_PROTOCOL) Saves component at file path. :param file_path: Location to save file. :type file_path: str :param pickle_protocol: The pickle data stream format. :type pickle_protocol: int .. py:method:: transform(self, X, y=None) Transforms data X by dropping columns that exceed the threshold of null values. :param X: Data to transform :type X: pd.DataFrame :param y: Ignored. :type y: pd.Series, optional :returns: Transformed X :rtype: pd.DataFrame .. py:class:: DropRowsTransformer(indices_to_drop=None, random_seed=0) Transformer to drop rows specified by row indices. 
:param indices_to_drop: List of indices to drop in the input data. Defaults to None. :type indices_to_drop: list :param random_seed: Seed for the random number generator. Is not used by this component. Defaults to 0. :type random_seed: int **Attributes** .. list-table:: :widths: 15 85 :header-rows: 0 * - **hyperparameter_ranges** - {} * - **modifies_features** - True * - **modifies_target** - True * - **name** - Drop Rows Transformer * - **training_only** - True **Methods** .. autoapisummary:: :nosignatures: evalml.pipelines.components.transformers.preprocessing.DropRowsTransformer.clone evalml.pipelines.components.transformers.preprocessing.DropRowsTransformer.default_parameters evalml.pipelines.components.transformers.preprocessing.DropRowsTransformer.describe evalml.pipelines.components.transformers.preprocessing.DropRowsTransformer.fit evalml.pipelines.components.transformers.preprocessing.DropRowsTransformer.fit_transform evalml.pipelines.components.transformers.preprocessing.DropRowsTransformer.load evalml.pipelines.components.transformers.preprocessing.DropRowsTransformer.needs_fitting evalml.pipelines.components.transformers.preprocessing.DropRowsTransformer.parameters evalml.pipelines.components.transformers.preprocessing.DropRowsTransformer.save evalml.pipelines.components.transformers.preprocessing.DropRowsTransformer.transform .. py:method:: clone(self) Constructs a new component with the same parameters and random state. :returns: A new instance of this component with identical parameters and random state. .. py:method:: default_parameters(cls) Returns the default parameters for this component. Our convention is that Component.default_parameters == Component().parameters. :returns: Default parameters for this component. :rtype: dict .. py:method:: describe(self, print_name=False, return_dict=False) Describe a component and its parameters. :param print_name: whether to print name of component :type print_name: bool, optional :param return_dict: whether to return description as dictionary in the format {"name": name, "parameters": parameters} :type return_dict: bool, optional :returns: Returns dictionary if return_dict is True, else None. :rtype: None or dict .. py:method:: fit(self, X, y=None) Fits component to data. :param X: The input training data of shape [n_samples, n_features]. :type X: pd.DataFrame :param y: The target training data of length [n_samples]. :type y: pd.Series, optional :returns: self :raises ValueError: If indices to drop do not exist in input features or target. .. py:method:: fit_transform(self, X, y=None) Fits on X and transforms X. :param X: Data to fit and transform. :type X: pd.DataFrame :param y: Target data. :type y: pd.Series :returns: Transformed X. :rtype: pd.DataFrame :raises MethodPropertyNotFoundError: If transformer does not have a transform method or a component_obj that implements transform. .. py:method:: load(file_path) :staticmethod: Loads component at file path. :param file_path: Location to load file. :type file_path: str :returns: ComponentBase object .. py:method:: needs_fitting(self) Returns boolean determining if component needs fitting before calling predict, predict_proba, transform, or feature_importances. This can be overridden to False for components that do not need to be fit or whose fit methods do nothing. :returns: True. .. py:method:: parameters(self) :property: Returns the parameters which were used to initialize the component. .. 
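A brief sketch of dropping rows by index with this transformer; the indices and toy data are illustrative.

.. code-block:: python

    import pandas as pd
    from evalml.pipelines.components.transformers.preprocessing import DropRowsTransformer

    X = pd.DataFrame({"a": [1, 2, 3, 4]})
    y = pd.Series([0, 1, 0, 1])

    # Rows with index 0 and 2 are removed from both X and y. Per fit(),
    # indices that do not exist in the input raise a ValueError.
    dropper = DropRowsTransformer(indices_to_drop=[0, 2])
    X_t, y_t = dropper.fit(X, y).transform(X, y)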
py:method:: save(self, file_path, pickle_protocol=cloudpickle.DEFAULT_PROTOCOL) Saves component at file path. :param file_path: Location to save file. :type file_path: str :param pickle_protocol: The pickle data stream format. :type pickle_protocol: int .. py:method:: transform(self, X, y=None) Transforms data using fitted component. :param X: Features. :type X: pd.DataFrame :param y: Target data. :type y: pd.Series, optional :returns: Data with row indices dropped. :rtype: (pd.DataFrame, pd.Series) .. py:class:: EmailFeaturizer(random_seed=0, **kwargs) Transformer that can automatically extract features from emails. :param random_seed: Seed for the random number generator. Defaults to 0. :type random_seed: int **Attributes** .. list-table:: :widths: 15 85 :header-rows: 0 * - **hyperparameter_ranges** - {} * - **modifies_features** - True * - **modifies_target** - False * - **name** - Email Featurizer * - **training_only** - False **Methods** .. autoapisummary:: :nosignatures: evalml.pipelines.components.transformers.preprocessing.EmailFeaturizer.clone evalml.pipelines.components.transformers.preprocessing.EmailFeaturizer.default_parameters evalml.pipelines.components.transformers.preprocessing.EmailFeaturizer.describe evalml.pipelines.components.transformers.preprocessing.EmailFeaturizer.fit evalml.pipelines.components.transformers.preprocessing.EmailFeaturizer.fit_transform evalml.pipelines.components.transformers.preprocessing.EmailFeaturizer.load evalml.pipelines.components.transformers.preprocessing.EmailFeaturizer.needs_fitting evalml.pipelines.components.transformers.preprocessing.EmailFeaturizer.parameters evalml.pipelines.components.transformers.preprocessing.EmailFeaturizer.save evalml.pipelines.components.transformers.preprocessing.EmailFeaturizer.transform .. py:method:: clone(self) Constructs a new component with the same parameters and random state. :returns: A new instance of this component with identical parameters and random state. .. py:method:: default_parameters(cls) Returns the default parameters for this component. Our convention is that Component.default_parameters == Component().parameters. :returns: Default parameters for this component. :rtype: dict .. py:method:: describe(self, print_name=False, return_dict=False) Describe a component and its parameters. :param print_name: whether to print name of component :type print_name: bool, optional :param return_dict: whether to return description as dictionary in the format {"name": name, "parameters": parameters} :type return_dict: bool, optional :returns: Returns dictionary if return_dict is True, else None. :rtype: None or dict .. py:method:: fit(self, X, y=None) Fits component to data. :param X: The input training data of shape [n_samples, n_features] :type X: pd.DataFrame :param y: The target training data of length [n_samples] :type y: pd.Series, optional :returns: self :raises MethodPropertyNotFoundError: If component does not have a fit method or a component_obj that implements fit. .. py:method:: fit_transform(self, X, y=None) Fits on X and transforms X. :param X: Data to fit and transform. :type X: pd.DataFrame :param y: Target data. :type y: pd.Series :returns: Transformed X. :rtype: pd.DataFrame :raises MethodPropertyNotFoundError: If transformer does not have a transform method or a component_obj that implements transform. .. py:method:: load(file_path) :staticmethod: Loads component at file path. :param file_path: Location to load file. :type file_path: str :returns: ComponentBase object .. 
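A hedged usage sketch for this featurizer. It assumes the email column is initialized with the Woodwork ``EmailAddress`` logical type so the component recognizes it; the column name and addresses are illustrative.

.. code-block:: python

    import pandas as pd
    from evalml.pipelines.components.transformers.preprocessing import EmailFeaturizer

    X = pd.DataFrame({"email": ["team@example.com", "info@example.org"]})
    # Assumption: plain string columns are not inferred as emails, so the
    # Woodwork EmailAddress logical type is set explicitly.
    X.ww.init(logical_types={"email": "EmailAddress"})

    featurizer = EmailFeaturizer()
    X_t = featurizer.fit_transform(X)
    print(X_t.columns)  # derived email features (e.g. domain-based columns)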
py:method:: needs_fitting(self) Returns boolean determining if component needs fitting before calling predict, predict_proba, transform, or feature_importances. This can be overridden to False for components that do not need to be fit or whose fit methods do nothing. :returns: True. .. py:method:: parameters(self) :property: Returns the parameters which were used to initialize the component. .. py:method:: save(self, file_path, pickle_protocol=cloudpickle.DEFAULT_PROTOCOL) Saves component at file path. :param file_path: Location to save file. :type file_path: str :param pickle_protocol: The pickle data stream format. :type pickle_protocol: int .. py:method:: transform(self, X, y=None) Transforms data X. :param X: Data to transform. :type X: pd.DataFrame :param y: Target data. :type y: pd.Series, optional :returns: Transformed X :rtype: pd.DataFrame :raises MethodPropertyNotFoundError: If transformer does not have a transform method or a component_obj that implements transform. .. py:class:: LogTransformer(random_seed=0) Applies a log transformation to the target data. **Attributes** .. list-table:: :widths: 15 85 :header-rows: 0 * - **hyperparameter_ranges** - {} * - **modifies_features** - False * - **modifies_target** - True * - **name** - Log Transformer * - **training_only** - False **Methods** .. autoapisummary:: :nosignatures: evalml.pipelines.components.transformers.preprocessing.LogTransformer.clone evalml.pipelines.components.transformers.preprocessing.LogTransformer.default_parameters evalml.pipelines.components.transformers.preprocessing.LogTransformer.describe evalml.pipelines.components.transformers.preprocessing.LogTransformer.fit evalml.pipelines.components.transformers.preprocessing.LogTransformer.fit_transform evalml.pipelines.components.transformers.preprocessing.LogTransformer.inverse_transform evalml.pipelines.components.transformers.preprocessing.LogTransformer.load evalml.pipelines.components.transformers.preprocessing.LogTransformer.needs_fitting evalml.pipelines.components.transformers.preprocessing.LogTransformer.parameters evalml.pipelines.components.transformers.preprocessing.LogTransformer.save evalml.pipelines.components.transformers.preprocessing.LogTransformer.transform .. py:method:: clone(self) Constructs a new component with the same parameters and random state. :returns: A new instance of this component with identical parameters and random state. .. py:method:: default_parameters(cls) Returns the default parameters for this component. Our convention is that Component.default_parameters == Component().parameters. :returns: Default parameters for this component. :rtype: dict .. py:method:: describe(self, print_name=False, return_dict=False) Describe a component and its parameters. :param print_name: whether to print name of component :type print_name: bool, optional :param return_dict: whether to return description as dictionary in the format {"name": name, "parameters": parameters} :type return_dict: bool, optional :returns: Returns dictionary if return_dict is True, else None. :rtype: None or dict .. py:method:: fit(self, X, y=None) Fits the LogTransformer. :param X: Ignored. :type X: pd.DataFrame or np.ndarray :param y: Ignored. :type y: pd.Series, optional :returns: self .. py:method:: fit_transform(self, X, y=None) Log transforms the target variable. :param X: Ignored. :type X: pd.DataFrame, optional :param y: Target variable to log transform. :type y: pd.Series :returns: The input features are returned without modification. 
The target variable y is log transformed. :rtype: tuple of pd.DataFrame, pd.Series .. py:method:: inverse_transform(self, y) Apply exponential to target data. :param y: Target variable. :type y: pd.Series :returns: Target with exponential applied. :rtype: pd.Series .. py:method:: load(file_path) :staticmethod: Loads component at file path. :param file_path: Location to load file. :type file_path: str :returns: ComponentBase object .. py:method:: needs_fitting(self) Returns boolean determining if component needs fitting before calling predict, predict_proba, transform, or feature_importances. This can be overridden to False for components that do not need to be fit or whose fit methods do nothing. :returns: True. .. py:method:: parameters(self) :property: Returns the parameters which were used to initialize the component. .. py:method:: save(self, file_path, pickle_protocol=cloudpickle.DEFAULT_PROTOCOL) Saves component at file path. :param file_path: Location to save file. :type file_path: str :param pickle_protocol: The pickle data stream format. :type pickle_protocol: int .. py:method:: transform(self, X, y=None) Log transforms the target variable. :param X: Ignored. :type X: pd.DataFrame, optional :param y: Target data to log transform. :type y: pd.Series :returns: The input features are returned without modification. The target variable y is log transformed. :rtype: tuple of pd.DataFrame, pd.Series .. py:class:: LSA(random_seed=0, **kwargs) Transformer to calculate the Latent Semantic Analysis Values of text input. :param random_seed: Seed for the random number generator. Defaults to 0. :type random_seed: int **Attributes** .. list-table:: :widths: 15 85 :header-rows: 0 * - **hyperparameter_ranges** - {} * - **modifies_features** - True * - **modifies_target** - False * - **name** - LSA Transformer * - **training_only** - False **Methods** .. autoapisummary:: :nosignatures: evalml.pipelines.components.transformers.preprocessing.LSA.clone evalml.pipelines.components.transformers.preprocessing.LSA.default_parameters evalml.pipelines.components.transformers.preprocessing.LSA.describe evalml.pipelines.components.transformers.preprocessing.LSA.fit evalml.pipelines.components.transformers.preprocessing.LSA.fit_transform evalml.pipelines.components.transformers.preprocessing.LSA.load evalml.pipelines.components.transformers.preprocessing.LSA.needs_fitting evalml.pipelines.components.transformers.preprocessing.LSA.parameters evalml.pipelines.components.transformers.preprocessing.LSA.save evalml.pipelines.components.transformers.preprocessing.LSA.transform .. py:method:: clone(self) Constructs a new component with the same parameters and random state. :returns: A new instance of this component with identical parameters and random state. .. py:method:: default_parameters(cls) Returns the default parameters for this component. Our convention is that Component.default_parameters == Component().parameters. :returns: Default parameters for this component. :rtype: dict .. py:method:: describe(self, print_name=False, return_dict=False) Describe a component and its parameters. :param print_name: whether to print name of component :type print_name: bool, optional :param return_dict: whether to return description as dictionary in the format {"name": name, "parameters": parameters} :type return_dict: bool, optional :returns: Returns dictionary if return_dict is True, else None. :rtype: None or dict .. py:method:: fit(self, X, y=None) Fits the input data. :param X: The data to transform. 
:type X: pd.DataFrame :param y: Ignored. :type y: pd.Series, optional :returns: self .. py:method:: fit_transform(self, X, y=None) Fits on X and transforms X. :param X: Data to fit and transform. :type X: pd.DataFrame :param y: Target data. :type y: pd.Series :returns: Transformed X. :rtype: pd.DataFrame :raises MethodPropertyNotFoundError: If transformer does not have a transform method or a component_obj that implements transform. .. py:method:: load(file_path) :staticmethod: Loads component at file path. :param file_path: Location to load file. :type file_path: str :returns: ComponentBase object .. py:method:: needs_fitting(self) Returns boolean determining if component needs fitting before calling predict, predict_proba, transform, or feature_importances. This can be overridden to False for components that do not need to be fit or whose fit methods do nothing. :returns: True. .. py:method:: parameters(self) :property: Returns the parameters which were used to initialize the component. .. py:method:: save(self, file_path, pickle_protocol=cloudpickle.DEFAULT_PROTOCOL) Saves component at file path. :param file_path: Location to save file. :type file_path: str :param pickle_protocol: The pickle data stream format. :type pickle_protocol: int .. py:method:: transform(self, X, y=None) Transforms data X by applying the LSA pipeline. :param X: The data to transform. :type X: pd.DataFrame :param y: Ignored. :type y: pd.Series, optional :returns: Transformed X. The original column is removed and replaced with two columns of the format `LSA(original_column_name)[feature_number]`, where `feature_number` is 0 or 1. :rtype: pd.DataFrame .. py:class:: NaturalLanguageFeaturizer(random_seed=0, **kwargs) Transformer that can automatically featurize text columns using featuretools' nlp_primitives. Since models cannot handle non-numeric data, any text must be broken down into features that provide useful information about that text. This component splits each text column into several informative features: Diversity Score, Mean Characters per Word, Polarity Score, LSA (Latent Semantic Analysis), Number of Characters, and Number of Words. Calling transform on this component will replace any text columns in the given dataset with these numeric columns. :param random_seed: Seed for the random number generator. Defaults to 0. :type random_seed: int **Attributes** .. list-table:: :widths: 15 85 :header-rows: 0 * - **hyperparameter_ranges** - {} * - **modifies_features** - True * - **modifies_target** - False * - **name** - Natural Language Featurizer * - **training_only** - False **Methods** .. 
autoapisummary:: :nosignatures: evalml.pipelines.components.transformers.preprocessing.NaturalLanguageFeaturizer.clone evalml.pipelines.components.transformers.preprocessing.NaturalLanguageFeaturizer.default_parameters evalml.pipelines.components.transformers.preprocessing.NaturalLanguageFeaturizer.describe evalml.pipelines.components.transformers.preprocessing.NaturalLanguageFeaturizer.fit evalml.pipelines.components.transformers.preprocessing.NaturalLanguageFeaturizer.fit_transform evalml.pipelines.components.transformers.preprocessing.NaturalLanguageFeaturizer.load evalml.pipelines.components.transformers.preprocessing.NaturalLanguageFeaturizer.needs_fitting evalml.pipelines.components.transformers.preprocessing.NaturalLanguageFeaturizer.parameters evalml.pipelines.components.transformers.preprocessing.NaturalLanguageFeaturizer.save evalml.pipelines.components.transformers.preprocessing.NaturalLanguageFeaturizer.transform .. py:method:: clone(self) Constructs a new component with the same parameters and random state. :returns: A new instance of this component with identical parameters and random state. .. py:method:: default_parameters(cls) Returns the default parameters for this component. Our convention is that Component.default_parameters == Component().parameters. :returns: Default parameters for this component. :rtype: dict .. py:method:: describe(self, print_name=False, return_dict=False) Describe a component and its parameters. :param print_name: whether to print name of component :type print_name: bool, optional :param return_dict: whether to return description as dictionary in the format {"name": name, "parameters": parameters} :type return_dict: bool, optional :returns: Returns dictionary if return_dict is True, else None. :rtype: None or dict .. py:method:: fit(self, X, y=None) Fits component to data. :param X: The input training data of shape [n_samples, n_features] :type X: pd.DataFrame or np.ndarray :param y: The target training data of length [n_samples] :type y: pd.Series :returns: self .. py:method:: fit_transform(self, X, y=None) Fits on X and transforms X. :param X: Data to fit and transform. :type X: pd.DataFrame :param y: Target data. :type y: pd.Series :returns: Transformed X. :rtype: pd.DataFrame :raises MethodPropertyNotFoundError: If transformer does not have a transform method or a component_obj that implements transform. .. py:method:: load(file_path) :staticmethod: Loads component at file path. :param file_path: Location to load file. :type file_path: str :returns: ComponentBase object .. py:method:: needs_fitting(self) Returns boolean determining if component needs fitting before calling predict, predict_proba, transform, or feature_importances. This can be overridden to False for components that do not need to be fit or whose fit methods do nothing. :returns: True. .. py:method:: parameters(self) :property: Returns the parameters which were used to initialize the component. .. py:method:: save(self, file_path, pickle_protocol=cloudpickle.DEFAULT_PROTOCOL) Saves component at file path. :param file_path: Location to save file. :type file_path: str :param pickle_protocol: The pickle data stream format. :type pickle_protocol: int .. py:method:: transform(self, X, y=None) Transforms data X by creating new features using existing text columns. :param X: The data to transform. :type X: pd.DataFrame :param y: Ignored. :type y: pd.Series, optional :returns: Transformed X :rtype: pd.DataFrame .. 
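A rough sketch of featurizing a text column with this component. It assumes the column is typed as Woodwork ``NaturalLanguage`` so it is picked up; the column name and review strings are illustrative.

.. code-block:: python

    import pandas as pd
    from evalml.pipelines.components.transformers.preprocessing import NaturalLanguageFeaturizer

    X = pd.DataFrame({
        "review": [
            "The product arrived quickly and works great.",
            "Terrible experience, would not recommend to anyone.",
        ]
    })
    # Assumption: short strings may not be inferred as text, so the
    # NaturalLanguage logical type is set explicitly.
    X.ww.init(logical_types={"review": "NaturalLanguage"})

    featurizer = NaturalLanguageFeaturizer()
    X_t = featurizer.fit_transform(X)
    # "review" is replaced by numeric columns (diversity score, mean characters
    # per word, polarity score, LSA components, etc.).
    print(X_t.columns)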
py:class:: PolynomialDecomposer(time_index: str = None, degree: int = 1, seasonal_period: int = -1, random_seed: int = 0, **kwargs) Removes trends and seasonality from time series by fitting a polynomial and moving average to the data. Scikit-learn's PolynomialForecaster is used to generate the additive trend portion of the target data. A polynomial will be fit to the data during fit. That additive polynomial trend will be removed during fit so that statsmodels' seasonal_decompose can determine the additive seasonality of the data by using rolling averages over the series' inferred periodicity. For example, daily time series data will generate rolling averages over the first week of data, normalize out the mean and return those 7 averages repeated over the entire length of the given series. Those seven averages, repeated as many times as necessary to match the length of the given target data, will be used as the seasonal signal of the data. :param time_index: Specifies the name of the column in X that provides the datetime objects. Defaults to None. :type time_index: str :param degree: Degree for the polynomial. If 1, linear model is fit to the data. If 2, quadratic model is fit, etc. Defaults to 1. :type degree: int :param seasonal_period: The number of entries in the time series data that corresponds to one period of a cyclic signal. For instance, if data is known to possess a weekly seasonal signal, and if the data is daily data, seasonal_period should be 7. For daily data with a yearly seasonal signal, seasonal_period should be 365. Defaults to -1, which uses the statsmodels library's freq_to_period function. https://github.com/statsmodels/statsmodels/blob/main/statsmodels/tsa/tsatools.py :type seasonal_period: int :param random_seed: Seed for the random number generator. Defaults to 0. :type random_seed: int **Attributes** .. list-table:: :widths: 15 85 :header-rows: 0 * - **hyperparameter_ranges** - { "degree": Integer(1, 3)} * - **invalid_frequencies** - [] * - **modifies_features** - False * - **modifies_target** - True * - **name** - Polynomial Decomposer * - **needs_fitting** - True * - **training_only** - False **Methods** .. autoapisummary:: :nosignatures: evalml.pipelines.components.transformers.preprocessing.PolynomialDecomposer.clone evalml.pipelines.components.transformers.preprocessing.PolynomialDecomposer.default_parameters evalml.pipelines.components.transformers.preprocessing.PolynomialDecomposer.describe evalml.pipelines.components.transformers.preprocessing.PolynomialDecomposer.determine_periodicity evalml.pipelines.components.transformers.preprocessing.PolynomialDecomposer.fit evalml.pipelines.components.transformers.preprocessing.PolynomialDecomposer.fit_transform evalml.pipelines.components.transformers.preprocessing.PolynomialDecomposer.get_trend_dataframe evalml.pipelines.components.transformers.preprocessing.PolynomialDecomposer.inverse_transform evalml.pipelines.components.transformers.preprocessing.PolynomialDecomposer.is_freq_valid evalml.pipelines.components.transformers.preprocessing.PolynomialDecomposer.load evalml.pipelines.components.transformers.preprocessing.PolynomialDecomposer.parameters evalml.pipelines.components.transformers.preprocessing.PolynomialDecomposer.plot_decomposition evalml.pipelines.components.transformers.preprocessing.PolynomialDecomposer.save evalml.pipelines.components.transformers.preprocessing.PolynomialDecomposer.set_seasonal_period evalml.pipelines.components.transformers.preprocessing.PolynomialDecomposer.transform ..
py:method:: clone(self) Constructs a new component with the same parameters and random state. :returns: A new instance of this component with identical parameters and random state. .. py:method:: default_parameters(cls) Returns the default parameters for this component. Our convention is that Component.default_parameters == Component().parameters. :returns: Default parameters for this component. :rtype: dict .. py:method:: describe(self, print_name=False, return_dict=False) Describe a component and its parameters. :param print_name: whether to print name of component :type print_name: bool, optional :param return_dict: whether to return description as dictionary in the format {"name": name, "parameters": parameters} :type return_dict: bool, optional :returns: Returns dictionary if return_dict is True, else None. :rtype: None or dict .. py:method:: determine_periodicity(self, X: pandas.DataFrame, y: pandas.Series, method: str = 'autocorrelation') Function that uses autocorrelative methods to determine the first, significant period of the seasonal signal. :param X: The feature data of the time series problem. :type X: pandas.DataFrame :param y: The target data of a time series problem. :type y: pandas.Series :param method: Either "autocorrelation" or "partial-autocorrelation". The method by which to determine the first period of the seasonal part of the target signal. "partial-autocorrelation" should currently not be used. Defaults to "autocorrelation". :type method: str :returns: The integer numbers of entries in time series data over which the seasonal part of the target data repeats. If the time series data is in days, then this is the number of days that it takes the target's seasonal signal to repeat. Note: the target data can contain multiple seasonal signals. This function will only return the first, and thus shortest, period. E.g. if the target has both weekly and yearly seasonality, the function will only return "7" and not return "365". If no period is detected, returns [None]. :rtype: (list[int]) .. py:method:: fit(self, X: pandas.DataFrame, y: pandas.Series = None) -> PolynomialDecomposer Fits the PolynomialDecomposer and determines the seasonal signal. Currently only fits the polynomial detrender. The seasonality is determined by removing the trend from the signal and using statsmodels' seasonal_decompose(). Both the trend and seasonality are currently assumed to be additive. :param X: Conditionally used to build datetime index. :type X: pd.DataFrame, optional :param y: Target variable to detrend and deseasonalize. :type y: pd.Series :returns: self :raises NotImplementedError: If the input data has a frequency of "month-begin". This isn't supported by statsmodels decompose as the freqstr "MS" is misinterpreted as milliseconds. :raises ValueError: If y is None. :raises ValueError: If target data doesn't have DatetimeIndex AND no Datetime features in features data .. py:method:: fit_transform(self, X: pandas.DataFrame, y: pandas.Series = None) -> tuple[pandas.DataFrame, pandas.Series] Removes fitted trend and seasonality from target variable. :param X: Ignored. :type X: pd.DataFrame, optional :param y: Target variable to detrend and deseasonalize. :type y: pd.Series :returns: The first element is the input features returned without modification. The second element is the target variable y with the fitted trend removed. :rtype: tuple of pd.DataFrame, pd.Series ..
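A minimal sketch of detrending and deseasonalizing a synthetic target with a linear trend and weekly seasonality; the data is illustrative, and ``fit_transform`` returns both the untouched features and the transformed target as documented above.

.. code-block:: python

    import numpy as np
    import pandas as pd
    from evalml.pipelines.components.transformers.preprocessing import PolynomialDecomposer

    dates = pd.date_range("2021-01-01", periods=100, freq="D")
    X = pd.DataFrame({"date": dates})
    # Synthetic target: linear trend plus a weekly (period 7) seasonal signal.
    y = pd.Series(0.5 * np.arange(100) + np.sin(2 * np.pi * np.arange(100) / 7))

    decomposer = PolynomialDecomposer(time_index="date", degree=1, seasonal_period=7)
    X_t, y_t = decomposer.fit_transform(X, y)
    # y_t has the fitted linear trend and estimated weekly seasonality removed;
    # inverse_transform(y_t) adds them back.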
py:method:: get_trend_dataframe(self, X: pandas.DataFrame, y: pandas.Series) -> list[pandas.DataFrame] Return a list of dataframes with 4 columns: signal, trend, seasonality, residual. Scikit-learn's PolynomialForecaster is used to generate the trend portion of the target data. statsmodel's seasonal_decompose is used to generate the seasonality of the data. :param X: Input data with time series data in index. :type X: pd.DataFrame :param y: Target variable data provided as a Series for univariate problems or a DataFrame for multivariate problems. :type y: pd.Series or pd.DataFrame :returns: Each DataFrame contains the columns "signal", "trend", "seasonality" and "residual," with the latter 3 column values being the decomposed elements of the target data. The "signal" column is simply the input target signal but reindexed with a datetime index to match the input features. :rtype: list of pd.DataFrame :raises TypeError: If X does not have time-series data in the index. :raises ValueError: If time series index of X does not have an inferred frequency. :raises ValueError: If the forecaster associated with the detrender has not been fit yet. :raises TypeError: If y is not provided as a pandas Series or DataFrame. .. py:method:: inverse_transform(self, y_t: pandas.Series) -> tuple[pandas.DataFrame, pandas.Series] Adds back fitted trend and seasonality to target variable. The polynomial trend is added back into the signal, calling the detrender's inverse_transform(). Then, the seasonality is projected forward to and added back into the signal. :param y_t: Target variable. :type y_t: pd.Series :returns: The first element are the input features returned without modification. The second element is the target variable y with the trend and seasonality added back in. :rtype: tuple of pd.DataFrame, pd.Series :raises ValueError: If y is None. .. py:method:: is_freq_valid(self, freq: str) :classmethod: Determines if the given string represents a valid frequency for this decomposer. :param freq: A frequency to validate. See the pandas docs at https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases for options. :type freq: str :returns: boolean representing whether the frequency is valid or not. .. py:method:: load(file_path) :staticmethod: Loads component at file path. :param file_path: Location to load file. :type file_path: str :returns: ComponentBase object .. py:method:: parameters(self) :property: Returns the parameters which were used to initialize the component. .. py:method:: plot_decomposition(self, X: pandas.DataFrame, y: pandas.Series, show: bool = False) -> tuple[matplotlib.pyplot.Figure, list] Plots the decomposition of the target signal. :param X: Input data with time series data in index. :type X: pd.DataFrame :param y: Target variable data provided as a Series for univariate problems or a DataFrame for multivariate problems. :type y: pd.Series or pd.DataFrame :param show: Whether to display the plot or not. Defaults to False. :type show: bool :returns: The figure and axes that have the decompositions plotted on them :rtype: matplotlib.pyplot.Figure, list[matplotlib.pyplot.Axes] .. py:method:: save(self, file_path, pickle_protocol=cloudpickle.DEFAULT_PROTOCOL) Saves component at file path. :param file_path: Location to save file. :type file_path: str :param pickle_protocol: The pickle data stream format. :type pickle_protocol: int .. 
.. py:method:: set_seasonal_period(self, X: pandas.DataFrame, y: pandas.Series) Function to set the component's seasonal period based on the target's seasonality. :param X: The feature data of the time series problem. :type X: pandas.DataFrame :param y: The target data of a time series problem. :type y: pandas.Series .. py:method:: transform(self, X: pandas.DataFrame, y: pandas.Series = None) -> tuple[pandas.DataFrame, pandas.Series] Transforms the target data by removing the polynomial trend and rolling average seasonality. Applies the fitted polynomial detrender to the target data, removing the additive polynomial trend. Then, it utilizes the first period's worth of seasonal data determined in the .fit() function to extrapolate the seasonal signal of the data to be transformed. This seasonal signal is also assumed to be additive and is removed. :param X: Conditionally used to build datetime index. :type X: pd.DataFrame, optional :param y: Target variable to detrend and deseasonalize. :type y: pd.Series :returns: The input features are returned without modification. The target variable y is detrended and deseasonalized. :rtype: tuple of pd.DataFrame, pd.Series :raises ValueError: If the target data doesn't have a DatetimeIndex AND the features data has no Datetime features. .. py:class:: ReplaceNullableTypes(random_seed=0, **kwargs) Transformer that replaces features that use the new nullable dtypes with a dtype that is compatible with EvalML. **Attributes** .. list-table:: :widths: 15 85 :header-rows: 0 * - **hyperparameter_ranges** - None * - **modifies_features** - True * - **modifies_target** - {} * - **name** - Replace Nullable Types Transformer * - **training_only** - False **Methods** .. autoapisummary:: :nosignatures: evalml.pipelines.components.transformers.preprocessing.ReplaceNullableTypes.clone evalml.pipelines.components.transformers.preprocessing.ReplaceNullableTypes.default_parameters evalml.pipelines.components.transformers.preprocessing.ReplaceNullableTypes.describe evalml.pipelines.components.transformers.preprocessing.ReplaceNullableTypes.fit evalml.pipelines.components.transformers.preprocessing.ReplaceNullableTypes.fit_transform evalml.pipelines.components.transformers.preprocessing.ReplaceNullableTypes.load evalml.pipelines.components.transformers.preprocessing.ReplaceNullableTypes.needs_fitting evalml.pipelines.components.transformers.preprocessing.ReplaceNullableTypes.parameters evalml.pipelines.components.transformers.preprocessing.ReplaceNullableTypes.save evalml.pipelines.components.transformers.preprocessing.ReplaceNullableTypes.transform .. py:method:: clone(self) Constructs a new component with the same parameters and random state. :returns: A new instance of this component with identical parameters and random state. .. py:method:: default_parameters(cls) Returns the default parameters for this component. Our convention is that Component.default_parameters == Component().parameters. :returns: Default parameters for this component. :rtype: dict .. py:method:: describe(self, print_name=False, return_dict=False) Describe a component and its parameters. :param print_name: whether to print name of component :type print_name: bool, optional :param return_dict: whether to return description as dictionary in the format {"name": name, "parameters": parameters} :type return_dict: bool, optional :returns: Returns dictionary if return_dict is True, else None. :rtype: None or dict
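As a quick orientation, here is a minimal, hypothetical sketch of this transformer on a frame that uses pandas nullable extension dtypes; the column names and values are illustrative assumptions.

.. code-block:: python

    import pandas as pd

    from evalml.pipelines.components.transformers.preprocessing import ReplaceNullableTypes

    # Illustrative frame with pandas nullable extension dtypes.
    X = pd.DataFrame({
        "ints": pd.array([1, 2, None], dtype="Int64"),
        "bools": pd.array([True, False, None], dtype="boolean"),
    })
    y = pd.Series([0, 1, 0])

    replacer = ReplaceNullableTypes()
    X_t, y_t = replacer.fit_transform(X, y)
    print(X_t.dtypes)  # nullable integer/boolean columns replaced with EvalML-compatible dtypes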
.. py:method:: fit(self, X, y=None) Fits component to data. :param X: The input training data of shape [n_samples, n_features]. :type X: pd.DataFrame :param y: The target training data of length [n_samples]. :type y: pd.Series, optional :returns: self .. py:method:: fit_transform(self, X, y=None) Substitutes non-nullable types for the new pandas nullable types in the data and target data. :param X: Input features. :type X: pd.DataFrame, optional :param y: Target data. :type y: pd.Series :returns: The input features and target data with the non-nullable types set. :rtype: tuple of pd.DataFrame, pd.Series .. py:method:: load(file_path) :staticmethod: Loads component at file path. :param file_path: Location to load file. :type file_path: str :returns: ComponentBase object .. py:method:: needs_fitting(self) Returns boolean determining if component needs fitting before calling predict, predict_proba, transform, or feature_importances. This can be overridden to False for components that do not need to be fit or whose fit methods do nothing. :returns: True. .. py:method:: parameters(self) :property: Returns the parameters which were used to initialize the component. .. py:method:: save(self, file_path, pickle_protocol=cloudpickle.DEFAULT_PROTOCOL) Saves component at file path. :param file_path: Location to save file. :type file_path: str :param pickle_protocol: The pickle data stream format. :type pickle_protocol: int .. py:method:: transform(self, X, y=None) Transforms data by replacing columns that contain nullable types with the appropriate replacement type: "float64" for nullable integers and "category" for nullable booleans. :param X: Data to transform :type X: pd.DataFrame :param y: Target data to transform :type y: pd.Series, optional :returns: Transformed X and transformed y. :rtype: pd.DataFrame, pd.Series .. py:class:: STLDecomposer(time_index: str = None, degree: int = 1, seasonal_period: int = 7, random_seed: int = 0, **kwargs) Removes trends and seasonality from time series using the STL algorithm. https://www.statsmodels.org/dev/generated/statsmodels.tsa.seasonal.STL.html :param time_index: Specifies the name of the column in X that provides the datetime objects. Defaults to None. :type time_index: str :param degree: Not currently used. STL has three "degree-like" values, none of which can be set at this time. Defaults to 1. :type degree: int :param seasonal_period: The number of entries in the time series data that corresponds to one period of a cyclic signal. For instance, if data is known to possess a weekly seasonal signal, and if the data is daily data, seasonal_period should be 7. For daily data with a yearly seasonal signal, seasonal_period should be 365. For compatibility with the underlying STL algorithm, must be odd. If an even number is provided, the next highest odd number will be used. Defaults to 7. :type seasonal_period: int :param random_seed: Seed for the random number generator. Defaults to 0. :type random_seed: int **Attributes** .. list-table:: :widths: 15 85 :header-rows: 0 * - **hyperparameter_ranges** - None * - **invalid_frequencies** - ['SM', 'BM', 'SMS', 'BMS', 'BQ', 'BQS', 'T', 'S', 'L', 'U', 'N', 'A', 'BA', 'AS', 'BAS', 'BH'] * - **modifies_features** - False * - **modifies_target** - True * - **name** - STL Decomposer * - **needs_fitting** - True * - **training_only** - False
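Before the method reference below, here is a minimal, hypothetical usage sketch of the STLDecomposer on daily data with a weekly cycle; the synthetic index and the shape of the signal are illustrative assumptions.

.. code-block:: python

    import numpy as np
    import pandas as pd

    from evalml.pipelines.components.transformers.preprocessing import STLDecomposer

    # Illustrative daily series with a weekly cycle, so seasonal_period=7.
    dates = pd.date_range("2021-01-01", periods=300, freq="D")
    X = pd.DataFrame({"feature": np.random.rand(300)}, index=dates)
    y = pd.Series(0.1 * np.arange(300) + 5 * np.sin(2 * np.pi * np.arange(300) / 7), index=dates)

    stl = STLDecomposer(seasonal_period=7)
    X_t, y_detrended = stl.fit_transform(X, y)      # X passes through; y has trend and seasonality removed
    y_restored = stl.inverse_transform(y_detrended)  # projects the trend and seasonality back onto the signal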
**Methods** .. autoapisummary:: :nosignatures: evalml.pipelines.components.transformers.preprocessing.STLDecomposer.clone evalml.pipelines.components.transformers.preprocessing.STLDecomposer.default_parameters evalml.pipelines.components.transformers.preprocessing.STLDecomposer.describe evalml.pipelines.components.transformers.preprocessing.STLDecomposer.determine_periodicity evalml.pipelines.components.transformers.preprocessing.STLDecomposer.fit evalml.pipelines.components.transformers.preprocessing.STLDecomposer.fit_transform evalml.pipelines.components.transformers.preprocessing.STLDecomposer.get_trend_dataframe evalml.pipelines.components.transformers.preprocessing.STLDecomposer.inverse_transform evalml.pipelines.components.transformers.preprocessing.STLDecomposer.is_freq_valid evalml.pipelines.components.transformers.preprocessing.STLDecomposer.load evalml.pipelines.components.transformers.preprocessing.STLDecomposer.parameters evalml.pipelines.components.transformers.preprocessing.STLDecomposer.plot_decomposition evalml.pipelines.components.transformers.preprocessing.STLDecomposer.save evalml.pipelines.components.transformers.preprocessing.STLDecomposer.set_seasonal_period evalml.pipelines.components.transformers.preprocessing.STLDecomposer.transform .. py:method:: clone(self) Constructs a new component with the same parameters and random state. :returns: A new instance of this component with identical parameters and random state. .. py:method:: default_parameters(cls) Returns the default parameters for this component. Our convention is that Component.default_parameters == Component().parameters. :returns: Default parameters for this component. :rtype: dict .. py:method:: describe(self, print_name=False, return_dict=False) Describe a component and its parameters. :param print_name: whether to print name of component :type print_name: bool, optional :param return_dict: whether to return description as dictionary in the format {"name": name, "parameters": parameters} :type return_dict: bool, optional :returns: Returns dictionary if return_dict is True, else None. :rtype: None or dict .. py:method:: determine_periodicity(self, X: pandas.DataFrame, y: pandas.Series, method: str = 'autocorrelation') Function that uses autocorrelative methods to determine the first significant period of the seasonal signal. :param X: The feature data of the time series problem. :type X: pandas.DataFrame :param y: The target data of a time series problem. :type y: pandas.Series :param method: Either "autocorrelation" or "partial-autocorrelation". The method by which to determine the first period of the seasonal part of the target signal. "partial-autocorrelation" should currently not be used. Defaults to "autocorrelation". :type method: str :returns: The integer number of entries in the time series data over which the seasonal part of the target data repeats. If the time series data is in days, then this is the number of days that it takes the target's seasonal signal to repeat. Note: the target data can contain multiple seasonal signals. This function will only return the first, and thus shortest, period. E.g., if the target has both weekly and yearly seasonality, the function will only return "7" and not "365". If no period is detected, returns [None]. :rtype: list[int] .. py:method:: fit(self, X: pandas.DataFrame, y: pandas.Series = None) -> STLDecomposer Fits the STLDecomposer and determines the seasonal signal. Instantiates a statsmodels STL decompose object with the component's stored parameters and fits it.
Since the statsmodels object does not conform to the scikit-learn API, it is not saved during __init__() in _component_obj and will be re-instantiated each time fit is called. To emulate the scikit-learn API, when the STL decomposer is fit, the full seasonal component, a single period sample of the seasonal component, the full trend-cycle component, and the residual are saved. y(t) = S(t) + T(t) + R(t) :param X: Conditionally used to build datetime index. :type X: pd.DataFrame, optional :param y: Target variable to detrend and deseasonalize. :type y: pd.Series :returns: self :raises ValueError: If y is None. :raises ValueError: If the target data doesn't have a DatetimeIndex AND the features data has no Datetime features. .. py:method:: fit_transform(self, X: pandas.DataFrame, y: pandas.Series = None) -> tuple[pandas.DataFrame, pandas.Series] Removes fitted trend and seasonality from target variable. :param X: Ignored. :type X: pd.DataFrame, optional :param y: Target variable to detrend and deseasonalize. :type y: pd.Series :returns: The first element is the input features, returned without modification. The second element is the target variable y with the fitted trend removed. :rtype: tuple of pd.DataFrame, pd.Series .. py:method:: get_trend_dataframe(self, X, y) Return a list of dataframes with 4 columns: signal, trend, seasonality, residual. :param X: Input data with time series data in index. :type X: pd.DataFrame :param y: Target variable data provided as a Series for univariate problems or a DataFrame for multivariate problems. :type y: pd.Series or pd.DataFrame :returns: Each DataFrame contains the columns "signal", "trend", "seasonality", and "residual", with the latter 3 columns being the decomposed elements of the target data. The "signal" column is simply the input target signal but reindexed with a datetime index to match the input features. :rtype: list of pd.DataFrame :raises TypeError: If X does not have time-series data in the index. :raises ValueError: If time series index of X does not have an inferred frequency. :raises ValueError: If the forecaster associated with the detrender has not been fit yet. :raises TypeError: If y is not provided as a pandas Series or DataFrame. .. py:method:: inverse_transform(self, y_t: pandas.Series) -> tuple[pandas.DataFrame, pandas.Series] Adds back fitted trend and seasonality to target variable. The STL trend is projected to cover the entire requested target range, then added back into the signal. Then, the seasonality is projected forward to and added back into the signal. :param y_t: Target variable. :type y_t: pd.Series :returns: The first element is the input features, returned without modification. The second element is the target variable y with the trend and seasonality added back in. :rtype: tuple of pd.DataFrame, pd.Series :raises ValueError: If y is None. .. py:method:: is_freq_valid(self, freq: str) :classmethod: Determines if the given string represents a valid frequency for this decomposer. :param freq: A frequency to validate. See the pandas docs at https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases for options. :type freq: str :returns: Boolean representing whether the frequency is valid or not. .. py:method:: load(file_path) :staticmethod: Loads component at file path. :param file_path: Location to load file. :type file_path: str :returns: ComponentBase object .. py:method:: parameters(self) :property: Returns the parameters which were used to initialize the component.
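Continuing the hypothetical STLDecomposer sketch above (variable names carried over), the dominant period and the decomposed components can be inspected directly.

.. code-block:: python

    # Estimate the shortest significant period; for the weekly toy signal above this should typically be [7].
    periods = stl.determine_periodicity(X, y)

    # Inspect the decomposed signal/trend/seasonality/residual components.
    components = stl.get_trend_dataframe(X, y)
    print(components[0][["signal", "trend", "seasonality", "residual"]].head())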
.. py:method:: plot_decomposition(self, X: pandas.DataFrame, y: pandas.Series, show: bool = False) -> tuple[matplotlib.pyplot.Figure, list] Plots the decomposition of the target signal. :param X: Input data with time series data in index. :type X: pd.DataFrame :param y: Target variable data provided as a Series for univariate problems or a DataFrame for multivariate problems. :type y: pd.Series or pd.DataFrame :param show: Whether to display the plot or not. Defaults to False. :type show: bool :returns: The figure and axes that have the decompositions plotted on them. :rtype: matplotlib.pyplot.Figure, list[matplotlib.pyplot.Axes] .. py:method:: save(self, file_path, pickle_protocol=cloudpickle.DEFAULT_PROTOCOL) Saves component at file path. :param file_path: Location to save file. :type file_path: str :param pickle_protocol: The pickle data stream format. :type pickle_protocol: int .. py:method:: set_seasonal_period(self, X: pandas.DataFrame, y: pandas.Series) Function to set the component's seasonal period based on the target's seasonality. :param X: The feature data of the time series problem. :type X: pandas.DataFrame :param y: The target data of a time series problem. :type y: pandas.Series .. py:method:: transform(self, X: pandas.DataFrame, y: pandas.Series = None) -> tuple[pandas.DataFrame, pandas.Series] Transforms the target data by removing the STL trend and seasonality. Uses an ARIMA model to project forward the additive trend and removes it. Then, it utilizes the first period's worth of seasonal data determined in the .fit() function to extrapolate the seasonal signal of the data to be transformed. This seasonal signal is also assumed to be additive and is removed. :param X: Conditionally used to build datetime index. :type X: pd.DataFrame, optional :param y: Target variable to detrend and deseasonalize. :type y: pd.Series :returns: The input features are returned without modification. The target variable y is detrended and deseasonalized. :rtype: tuple of pd.DataFrame, pd.Series :raises ValueError: If the target data doesn't have a DatetimeIndex AND the features data has no Datetime features. .. py:class:: TextTransformer(component_obj=None, random_seed=0, **kwargs) Base class for all transformers working with text features. :param component_obj: Third-party objects useful in component implementation. Defaults to None. :type component_obj: obj :param random_seed: Seed for the random number generator. Defaults to 0. :type random_seed: int **Attributes** .. list-table:: :widths: 15 85 :header-rows: 0 * - **modifies_features** - True * - **modifies_target** - False * - **training_only** - False **Methods** ..
autoapisummary:: :nosignatures: evalml.pipelines.components.transformers.preprocessing.TextTransformer.clone evalml.pipelines.components.transformers.preprocessing.TextTransformer.default_parameters evalml.pipelines.components.transformers.preprocessing.TextTransformer.describe evalml.pipelines.components.transformers.preprocessing.TextTransformer.fit evalml.pipelines.components.transformers.preprocessing.TextTransformer.fit_transform evalml.pipelines.components.transformers.preprocessing.TextTransformer.load evalml.pipelines.components.transformers.preprocessing.TextTransformer.name evalml.pipelines.components.transformers.preprocessing.TextTransformer.needs_fitting evalml.pipelines.components.transformers.preprocessing.TextTransformer.parameters evalml.pipelines.components.transformers.preprocessing.TextTransformer.save evalml.pipelines.components.transformers.preprocessing.TextTransformer.transform .. py:method:: clone(self) Constructs a new component with the same parameters and random state. :returns: A new instance of this component with identical parameters and random state. .. py:method:: default_parameters(cls) Returns the default parameters for this component. Our convention is that Component.default_parameters == Component().parameters. :returns: Default parameters for this component. :rtype: dict .. py:method:: describe(self, print_name=False, return_dict=False) Describe a component and its parameters. :param print_name: whether to print name of component :type print_name: bool, optional :param return_dict: whether to return description as dictionary in the format {"name": name, "parameters": parameters} :type return_dict: bool, optional :returns: Returns dictionary if return_dict is True, else None. :rtype: None or dict .. py:method:: fit(self, X, y=None) Fits component to data. :param X: The input training data of shape [n_samples, n_features] :type X: pd.DataFrame :param y: The target training data of length [n_samples] :type y: pd.Series, optional :returns: self :raises MethodPropertyNotFoundError: If component does not have a fit method or a component_obj that implements fit. .. py:method:: fit_transform(self, X, y=None) Fits on X and transforms X. :param X: Data to fit and transform. :type X: pd.DataFrame :param y: Target data. :type y: pd.Series :returns: Transformed X. :rtype: pd.DataFrame :raises MethodPropertyNotFoundError: If transformer does not have a transform method or a component_obj that implements transform. .. py:method:: load(file_path) :staticmethod: Loads component at file path. :param file_path: Location to load file. :type file_path: str :returns: ComponentBase object .. py:method:: name(cls) :property: Returns string name of this component. .. py:method:: needs_fitting(self) Returns boolean determining if component needs fitting before calling predict, predict_proba, transform, or feature_importances. This can be overridden to False for components that do not need to be fit or whose fit methods do nothing. :returns: True. .. py:method:: parameters(self) :property: Returns the parameters which were used to initialize the component. .. py:method:: save(self, file_path, pickle_protocol=cloudpickle.DEFAULT_PROTOCOL) Saves component at file path. :param file_path: Location to save file. :type file_path: str :param pickle_protocol: The pickle data stream format. :type pickle_protocol: int .. py:method:: transform(self, X, y=None) :abstractmethod: Transforms data X. :param X: Data to transform. :type X: pd.DataFrame :param y: Target data. 
:type y: pd.Series, optional :returns: Transformed X :rtype: pd.DataFrame :raises MethodPropertyNotFoundError: If transformer does not have a transform method or a component_obj that implements transform. .. py:class:: TimeSeriesFeaturizer(time_index=None, max_delay=2, gap=0, forecast_horizon=1, conf_level=0.05, rolling_window_size=0.25, delay_features=True, delay_target=True, random_seed=0, **kwargs) Transformer that delays input features and target variable for time series problems. This component uses an algorithm based on the autocorrelation values of the target variable to determine which lags to select from the set of all possible lags. The algorithm is based on the idea that the local maxima of the autocorrelation function indicate the lags that have the most impact on the present time. The algorithm computes the autocorrelation values and finds the local maxima, called "peaks", that are significant at the given conf_level. Since lags in the range [0, 10] tend to be predictive but not local maxima, the union of the peaks is taken with the significant lags in the range [0, 10]. At the end, only selected lags in the range [0, max_delay] are used. Parametrizing the algorithm by conf_level lets the AutoMLAlgorithm tune the set of lags chosen so that the chances of finding a good set of lags are higher. Using a conf_level value of 1 selects all possible lags. :param time_index: Name of the column containing the datetime information used to order the data. :type time_index: str :param max_delay: Maximum number of time units to delay each feature. Defaults to 2. :type max_delay: int :param forecast_horizon: The number of time periods the pipeline is expected to forecast. :type forecast_horizon: int :param conf_level: Float in range (0, 1] that determines the confidence interval size used to select which lags to compute from the set of [1, max_delay]. A delay of 1 will always be computed. If 1, selects all possible lags in the set of [1, max_delay], inclusive. :type conf_level: float :param rolling_window_size: Float in range (0, 1] that determines the size of the window used for rolling features. Size is computed as rolling_window_size * max_delay. :type rolling_window_size: float :param delay_features: Whether to delay the input features. Defaults to True. :type delay_features: bool :param delay_target: Whether to delay the target. Defaults to True. :type delay_target: bool :param gap: The number of time units between when the features are collected and when the target is collected. For example, if you are predicting the next time step's target, gap=1. This is only needed because when gap=0, we need to be sure to start the lagging of the target variable at 1. Defaults to 0. :type gap: int :param random_seed: Seed for the random number generator. This transformer performs the same regardless of the random seed provided. :type random_seed: int **Attributes** .. list-table:: :widths: 15 85 :header-rows: 0 * - **hyperparameter_ranges** - {"conf_level": Real(0.001, 1.0), "rolling_window_size": Real(0.001, 1.0)} * - **modifies_features** - True * - **modifies_target** - False * - **name** - Time Series Featurizer * - **needs_fitting** - True * - **target_colname_prefix** - target_delay_{} * - **training_only** - False
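Before the method reference below, here is a minimal, hypothetical usage sketch; the synthetic `date`/`feature` columns and the parameter choices are illustrative assumptions rather than recommendations.

.. code-block:: python

    import numpy as np
    import pandas as pd

    from evalml.pipelines.components.transformers.preprocessing import TimeSeriesFeaturizer

    # Illustrative daily data with an explicit time_index column.
    dates = pd.date_range("2021-01-01", periods=100, freq="D")
    X = pd.DataFrame({"date": dates, "feature": np.random.rand(100)})
    y = pd.Series(np.random.rand(100))

    featurizer = TimeSeriesFeaturizer(time_index="date", max_delay=4, gap=1, forecast_horizon=1)
    X_delayed = featurizer.fit_transform(X, y)
    # X_delayed holds the selected lagged features and rolling means; original feature columns are dropped.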
**Methods** .. autoapisummary:: :nosignatures: evalml.pipelines.components.transformers.preprocessing.TimeSeriesFeaturizer.clone evalml.pipelines.components.transformers.preprocessing.TimeSeriesFeaturizer.default_parameters evalml.pipelines.components.transformers.preprocessing.TimeSeriesFeaturizer.describe evalml.pipelines.components.transformers.preprocessing.TimeSeriesFeaturizer.fit evalml.pipelines.components.transformers.preprocessing.TimeSeriesFeaturizer.fit_transform evalml.pipelines.components.transformers.preprocessing.TimeSeriesFeaturizer.load evalml.pipelines.components.transformers.preprocessing.TimeSeriesFeaturizer.parameters evalml.pipelines.components.transformers.preprocessing.TimeSeriesFeaturizer.save evalml.pipelines.components.transformers.preprocessing.TimeSeriesFeaturizer.transform .. py:method:: clone(self) Constructs a new component with the same parameters and random state. :returns: A new instance of this component with identical parameters and random state. .. py:method:: default_parameters(cls) Returns the default parameters for this component. Our convention is that Component.default_parameters == Component().parameters. :returns: Default parameters for this component. :rtype: dict .. py:method:: describe(self, print_name=False, return_dict=False) Describe a component and its parameters. :param print_name: whether to print name of component :type print_name: bool, optional :param return_dict: whether to return description as dictionary in the format {"name": name, "parameters": parameters} :type return_dict: bool, optional :returns: Returns dictionary if return_dict is True, else None. :rtype: None or dict .. py:method:: fit(self, X, y=None) Fits the TimeSeriesFeaturizer. :param X: The input training data of shape [n_samples, n_features] :type X: pd.DataFrame or np.ndarray :param y: The target training data of length [n_samples] :type y: pd.Series, optional :returns: self :raises ValueError: If self.time_index is None. .. py:method:: fit_transform(self, X, y=None) Fit the component and transform the input data. :param X: Data to transform. :type X: pd.DataFrame :param y: Target. :type y: pd.Series, or None :returns: Transformed X. :rtype: pd.DataFrame .. py:method:: load(file_path) :staticmethod: Loads component at file path. :param file_path: Location to load file. :type file_path: str :returns: ComponentBase object .. py:method:: parameters(self) :property: Returns the parameters which were used to initialize the component. .. py:method:: save(self, file_path, pickle_protocol=cloudpickle.DEFAULT_PROTOCOL) Saves component at file path. :param file_path: Location to save file. :type file_path: str :param pickle_protocol: The pickle data stream format. :type pickle_protocol: int .. py:method:: transform(self, X, y=None) Computes the delayed values and rolling means for X and y. The chosen delays are determined by the autocorrelation function of the target variable. See the class docstring for more information on how they are chosen. If y is None, all possible lags are chosen. If y is not None, it will also compute the delayed values for the target variable. The rolling means for all numeric features in X and y, if y is numeric, are also returned. :param X: Data to transform. None is expected when only the target variable is being used. :type X: pd.DataFrame or None :param y: Target. :type y: pd.Series, or None :returns: Transformed X. No original features are returned. :rtype: pd.DataFrame ..
py:class:: TimeSeriesRegularizer(time_index=None, frequency_payload=None, window_length=4, threshold=0.4, random_seed=0, **kwargs) Transformer that regularizes an inconsistently spaced datetime column. If X is passed in to fit/transform, the column `time_index` will be checked for an inferrable offset frequency. If the `time_index` column is perfectly inferrable then this Transformer will do nothing and return the original X and y. If X does not have a perfectly inferrable frequency but one can be estimated, then X and y will be reformatted based on the estimated frequency for `time_index`. In the original X and y passed: - Missing datetime values will be added and will have their corresponding columns in X and y set to None. - Duplicate datetime values will be dropped. - Extra datetime values will be dropped. - If it can be determined that a duplicate or extra value is misaligned, then it will be repositioned to take the place of a missing value. This Transformer should be used before the `TimeSeriesImputer` in order to impute the missing values that were added to X and y (if passed). :param time_index: Name of the column containing the datetime information used to order the data, required. Defaults to None. :type time_index: string :param frequency_payload: Payload returned from Woodwork's infer_frequency function where debug is True. Defaults to None. :type frequency_payload: tuple :param window_length: The size of the rolling window over which inference is conducted to determine the prevalence of uninferrable frequencies. Lower values make this component more sensitive to recognizing numerous faulty datetime values. Defaults to 4. :type window_length: int :param threshold: The minimum percentage of windows that need to have been able to infer a frequency. Lower values make this component more sensitive to recognizing numerous faulty datetime values. Defaults to 0.4. :type threshold: float :param random_seed: Seed for the random number generator. This transformer performs the same regardless of the random seed provided. Defaults to 0. :type random_seed: int :raises ValueError: If the frequency_payload parameter has not been passed a tuple. **Attributes** .. list-table:: :widths: 15 85 :header-rows: 0 * - **hyperparameter_ranges** - {} * - **modifies_features** - True * - **modifies_target** - True * - **name** - Time Series Regularizer * - **training_only** - True **Methods** .. autoapisummary:: :nosignatures: evalml.pipelines.components.transformers.preprocessing.TimeSeriesRegularizer.clone evalml.pipelines.components.transformers.preprocessing.TimeSeriesRegularizer.default_parameters evalml.pipelines.components.transformers.preprocessing.TimeSeriesRegularizer.describe evalml.pipelines.components.transformers.preprocessing.TimeSeriesRegularizer.fit evalml.pipelines.components.transformers.preprocessing.TimeSeriesRegularizer.fit_transform evalml.pipelines.components.transformers.preprocessing.TimeSeriesRegularizer.load evalml.pipelines.components.transformers.preprocessing.TimeSeriesRegularizer.needs_fitting evalml.pipelines.components.transformers.preprocessing.TimeSeriesRegularizer.parameters evalml.pipelines.components.transformers.preprocessing.TimeSeriesRegularizer.save evalml.pipelines.components.transformers.preprocessing.TimeSeriesRegularizer.transform .. py:method:: clone(self) Constructs a new component with the same parameters and random state. :returns: A new instance of this component with identical parameters and random state.
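As a hypothetical sketch of the behavior described above, the following regularizes a daily series with one duplicated and one missing timestamp; the column names and values are illustrative assumptions.

.. code-block:: python

    import pandas as pd

    from evalml.pipelines.components.transformers.preprocessing import TimeSeriesRegularizer

    # Illustrative daily data where one timestamp is duplicated and another is missing.
    dates = list(pd.date_range("2021-01-01", periods=30, freq="D"))
    dates[10] = dates[9]   # duplicate a timestamp
    del dates[20]          # drop a timestamp, leaving a gap
    X = pd.DataFrame({"date": dates, "feature": range(len(dates))})
    y = pd.Series(range(len(dates)), dtype="float64")

    regularizer = TimeSeriesRegularizer(time_index="date")
    regularizer.fit(X, y)
    X_reg, y_reg = regularizer.transform(X, y)  # evenly spaced rows; gaps appear as missing values
    # Follow with TimeSeriesImputer to impute the rows that were added.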
.. py:method:: default_parameters(cls) Returns the default parameters for this component. Our convention is that Component.default_parameters == Component().parameters. :returns: Default parameters for this component. :rtype: dict .. py:method:: describe(self, print_name=False, return_dict=False) Describe a component and its parameters. :param print_name: whether to print name of component :type print_name: bool, optional :param return_dict: whether to return description as dictionary in the format {"name": name, "parameters": parameters} :type return_dict: bool, optional :returns: Returns dictionary if return_dict is True, else None. :rtype: None or dict .. py:method:: fit(self, X, y=None) Fits the TimeSeriesRegularizer. :param X: The input training data of shape [n_samples, n_features]. :type X: pd.DataFrame :param y: The target training data of length [n_samples]. :type y: pd.Series, optional :returns: self :raises ValueError: If self.time_index is None, if X and y have different lengths, or if `time_index` in X does not have an offset frequency that can be estimated. :raises TypeError: If the `time_index` column is not of type Datetime. :raises KeyError: If the `time_index` column doesn't exist. .. py:method:: fit_transform(self, X, y=None) Fits on X and transforms X. :param X: Data to fit and transform. :type X: pd.DataFrame :param y: Target data. :type y: pd.Series :returns: Transformed X. :rtype: pd.DataFrame :raises MethodPropertyNotFoundError: If transformer does not have a transform method or a component_obj that implements transform. .. py:method:: load(file_path) :staticmethod: Loads component at file path. :param file_path: Location to load file. :type file_path: str :returns: ComponentBase object .. py:method:: needs_fitting(self) Returns boolean determining if component needs fitting before calling predict, predict_proba, transform, or feature_importances. This can be overridden to False for components that do not need to be fit or whose fit methods do nothing. :returns: True. .. py:method:: parameters(self) :property: Returns the parameters which were used to initialize the component. .. py:method:: save(self, file_path, pickle_protocol=cloudpickle.DEFAULT_PROTOCOL) Saves component at file path. :param file_path: Location to save file. :type file_path: str :param pickle_protocol: The pickle data stream format. :type pickle_protocol: int .. py:method:: transform(self, X, y=None) Regularizes a dataframe and target data to an inferrable offset frequency. A 'clean' X and y (if y was passed in) are created based on an inferrable offset frequency, and matching datetime values from the original X and y are imputed into the clean X and y. Datetime values identified as misaligned are shifted into their appropriate position. :param X: The input training data of shape [n_samples, n_features]. :type X: pd.DataFrame :param y: The target training data of length [n_samples]. :type y: pd.Series, optional :returns: Data with an inferrable `time_index` offset frequency. :rtype: (pd.DataFrame, pd.Series) .. py:class:: URLFeaturizer(random_seed=0, **kwargs) Transformer that can automatically extract features from URLs. :param random_seed: Seed for the random number generator. Defaults to 0. :type random_seed: int **Attributes** .. list-table:: :widths: 15 85 :header-rows: 0 * - **hyperparameter_ranges** - {} * - **modifies_features** - True * - **modifies_target** - False * - **name** - URL Featurizer * - **training_only** - False **Methods** ..
autoapisummary:: :nosignatures: evalml.pipelines.components.transformers.preprocessing.URLFeaturizer.clone evalml.pipelines.components.transformers.preprocessing.URLFeaturizer.default_parameters evalml.pipelines.components.transformers.preprocessing.URLFeaturizer.describe evalml.pipelines.components.transformers.preprocessing.URLFeaturizer.fit evalml.pipelines.components.transformers.preprocessing.URLFeaturizer.fit_transform evalml.pipelines.components.transformers.preprocessing.URLFeaturizer.load evalml.pipelines.components.transformers.preprocessing.URLFeaturizer.needs_fitting evalml.pipelines.components.transformers.preprocessing.URLFeaturizer.parameters evalml.pipelines.components.transformers.preprocessing.URLFeaturizer.save evalml.pipelines.components.transformers.preprocessing.URLFeaturizer.transform .. py:method:: clone(self) Constructs a new component with the same parameters and random state. :returns: A new instance of this component with identical parameters and random state. .. py:method:: default_parameters(cls) Returns the default parameters for this component. Our convention is that Component.default_parameters == Component().parameters. :returns: Default parameters for this component. :rtype: dict .. py:method:: describe(self, print_name=False, return_dict=False) Describe a component and its parameters. :param print_name: whether to print name of component :type print_name: bool, optional :param return_dict: whether to return description as dictionary in the format {"name": name, "parameters": parameters} :type return_dict: bool, optional :returns: Returns dictionary if return_dict is True, else None. :rtype: None or dict .. py:method:: fit(self, X, y=None) Fits component to data. :param X: The input training data of shape [n_samples, n_features] :type X: pd.DataFrame :param y: The target training data of length [n_samples] :type y: pd.Series, optional :returns: self :raises MethodPropertyNotFoundError: If component does not have a fit method or a component_obj that implements fit. .. py:method:: fit_transform(self, X, y=None) Fits on X and transforms X. :param X: Data to fit and transform. :type X: pd.DataFrame :param y: Target data. :type y: pd.Series :returns: Transformed X. :rtype: pd.DataFrame :raises MethodPropertyNotFoundError: If transformer does not have a transform method or a component_obj that implements transform. .. py:method:: load(file_path) :staticmethod: Loads component at file path. :param file_path: Location to load file. :type file_path: str :returns: ComponentBase object .. py:method:: needs_fitting(self) Returns boolean determining if component needs fitting before calling predict, predict_proba, transform, or feature_importances. This can be overridden to False for components that do not need to be fit or whose fit methods do nothing. :returns: True. .. py:method:: parameters(self) :property: Returns the parameters which were used to initialize the component. .. py:method:: save(self, file_path, pickle_protocol=cloudpickle.DEFAULT_PROTOCOL) Saves component at file path. :param file_path: Location to save file. :type file_path: str :param pickle_protocol: The pickle data stream format. :type pickle_protocol: int .. py:method:: transform(self, X, y=None) Transforms data X. :param X: Data to transform. :type X: pd.DataFrame :param y: Target data. :type y: pd.Series, optional :returns: Transformed X :rtype: pd.DataFrame :raises MethodPropertyNotFoundError: If transformer does not have a transform method or a component_obj that implements transform.
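To close out, here is a minimal, hypothetical sketch of the URLFeaturizer. The `url` column name is an illustrative assumption, and the explicit Woodwork `URL` logical type initialization is an assumption made so the column is recognized as URL data; in practice evalml's type inference may handle this differently.

.. code-block:: python

    import pandas as pd
    import woodwork as ww  # registers the DataFrame.ww accessor

    from evalml.pipelines.components.transformers.preprocessing import URLFeaturizer

    X = pd.DataFrame({
        "url": [
            "https://evalml.alteryx.com/en/stable/",
            "https://github.com/alteryx/evalml",
        ]
    })
    # Assumption: mark the column as Woodwork's URL logical type so the featurizer treats it as URL data.
    X.ww.init(logical_types={"url": "URL"})

    featurizer = URLFeaturizer()
    X_t = featurizer.fit_transform(X)
    print(list(X_t.columns))  # URL-derived features extracted from the original column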