API Reference#
Demo Datasets#
Load breast cancer dataset. Binary classification problem.
Load churn dataset, which can be used for binary classification problems.
Load diabetes dataset. Used for regression problems.
Load credit card fraud dataset.
Load the Australian daily-min-temperatures weather dataset.
Load wine dataset. Multiclass problem.
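Each loader returns a feature matrix X and a target y with Woodwork typing initialized. A minimal sketch, assuming the standard evalml.demos loader names summarized above:

    import evalml

    # Load a binary classification demo dataset; X holds the features, y the target labels.
    X, y = evalml.demos.load_breast_cancer()
    print(X.shape)
    print(y.value_counts())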
Preprocessing#
Preprocessing Utils#
Utilities to preprocess data before using evalml.
Load features and target from file.
Get the number of features of each specific dtype in a DataFrame.
Split data into train and test sets.
Get the target distributions.
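A brief sketch of the train/test split utility, assuming the evalml.preprocessing.split_data signature shown in the EvalML documentation (the problem_type and test_size values here are illustrative):

    from evalml.demos import load_breast_cancer
    from evalml.preprocessing import split_data

    X, y = load_breast_cancer()
    # Hold out 20% of the rows for testing.
    X_train, X_test, y_train, y_test = split_data(X, y, problem_type="binary", test_size=0.2)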
Data Splitters#
Does not split the training data into training and validation sets.
Wrapper class for sklearn's KFold splitter.
Wrapper class for sklearn's Stratified KFold splitter.
Split the training data into training and validation sets.
Rolling Origin Cross Validation for time series problems.
Exceptions#
Exception raised when all pipelines in an automl batch return a score of NaN for the primary objective.
An exception to be raised when predict/predict_proba/transform is called on a component without fitting first.
Exception raised when a data check can't initialize with the parameters given.
Exception to raise when a class does not have an expected method or property.
An exception raised when a component is not found in all_components().
Exception when a particular classification label for the 'positive' class cannot be found in the column index or unique values.
Exception when get_objective tries to instantiate an objective and required args are not provided.
Exception to raise when specified objective does not exist.
Exception raised for all errors that partial dependence can raise.
Exception raised for errors that can be raised when applying a pipeline.
An exception raised when a particular pipeline is not found in automl search results.
An exception to be raised when predict/predict_proba/transform is called on a pipeline without fitting first.
An exception raised when a pipeline errors while scoring any objective in a list of objectives.
Warnings#
Warning thrown when there are null values in the column of interest.
Warning thrown when a pipeline parameter isn't used in a defined pipeline's component graph during initialization.
Error Codes#
Enum identifying the type of error encountered in partial dependence.
Enum identifying the type of error encountered while applying a pipeline.
Enum identifying the type of error encountered in holdout validation.
AutoML#
AutoML Search Interface#
Automated Pipeline search.
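A minimal sketch of a typical search run, reusing the X_train/y_train split from the preprocessing example above; the constructor arguments are an assumption based on common EvalML usage, so check the class entry for the full parameter list:

    from evalml.automl import AutoMLSearch

    # Search candidate pipelines for a binary classification problem.
    automl = AutoMLSearch(
        X_train=X_train,
        y_train=y_train,
        problem_type="binary",
        max_batches=1,  # keep the search short for demonstration
    )
    automl.search()

    print(automl.rankings)            # leaderboard of evaluated pipelines
    best_pipeline = automl.best_pipeline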
AutoML Utils#
Get the default primary search objective for a problem type.
Determine, for a given automl config and pipeline, what the threshold tuning objective should be and whether the training data should be further split to achieve proper threshold tuning.
Given the training data and ML problem parameters, compute a data splitting method to use during AutoML search.
Further split the training data for a given pipeline. This is needed for binary pipelines in order to properly tune the threshold.
Given data and configuration, run an automl search.
Given data and configuration, run an automl search.
Tunes the threshold of a binary pipeline using the given X and y thresholding data.
AutoML Algorithm Classes#
Base class for the AutoML algorithms which power EvalML.
An automl algorithm that consists of two modes: fast and long, where fast is a subset of long.
An automl algorithm which first fits a base round of pipelines with default parameters, then does a round of parameter tuning on each pipeline in order of performance.
AutoML Callbacks#
Logs the exception thrown as an error.
Raises the exception thrown by the AutoMLSearch object.
No-op.
AutoML Engines#
The concurrent.futures (CF) engine.
The dask engine.
Base class for EvalML engines.
The default engine for the AutoML search.
Pipelines#
Pipeline Base Classes#
Pipeline subclass for all binary classification pipelines.
Pipeline subclass for all classification pipelines.
Pipeline subclass for all multiclass classification pipelines.
Machine learning pipeline.
Pipeline subclass for all regression pipelines.
Pipeline base class for time series binary classification problems.
Pipeline base class for time series classification problems.
Pipeline base class for time series multiclass classification problems.
Pipeline base class for time series regression problems.
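A rough sketch of constructing one of these pipelines by hand from named components and scoring it; the component graph shown is an illustrative assumption, and X_train/X_test come from the earlier split:

    from evalml.pipelines import BinaryClassificationPipeline

    # A simple linear component graph: impute, one-hot encode, then fit a classifier.
    pipeline = BinaryClassificationPipeline(
        component_graph=["Imputer", "One Hot Encoder", "Random Forest Classifier"],
    )
    pipeline.fit(X_train, y_train)
    preds = pipeline.predict(X_test)
    scores = pipeline.score(X_test, y_test, objectives=["F1", "Log Loss Binary"])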
Pipeline Utils#
Returns a list of actions based on the default parameters of each option in the input DataCheckActionOption list.
Creates and returns a string that contains the Python imports and code required for running the EvalML pipeline.
Creates and returns a string that contains the Python imports and code required for running the EvalML pipeline.
Given input data, target data, an estimator class, and the problem type, generates a pipeline class with a preprocessing chain which was recommended based on the inputs. The pipeline will be a subclass of the appropriate pipeline base class for the specified problem_type.
Creates a pipeline of components to address the input DataCheckAction list.
Creates a pipeline of components to address warnings and errors output from running data checks. Uses all default suggestions.
Get the row indices of the data that are closest to the threshold. Works only for binary classification problems and pipelines.
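As a hedged sketch of two of the utilities listed above (make_pipeline and generate_pipeline_code; exact signatures may vary between releases):

    from evalml.pipelines.components import RandomForestClassifier
    from evalml.pipelines.utils import generate_pipeline_code, make_pipeline

    # Build a recommended preprocessing + estimator pipeline for the given data.
    pipeline = make_pipeline(X_train, y_train, RandomForestClassifier, "binary")

    # Emit a self-contained Python snippet that reconstructs the same pipeline.
    print(generate_pipeline_code(pipeline))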
Component Graphs#
Component graph for a pipeline as a directed acyclic graph (DAG).
Components#
Component Base Classes#
Components represent a step in a pipeline.
Base class for all components.
A component that may or may not need fitting that transforms data. These components are used before an estimator.
A component that fits and predicts given data.
Component Utils#
List the model types allowed for a particular problem type.
Returns True if the provided estimator class cannot handle NaN values as an input.
Creates and returns a string that contains the Python imports and code required for running the EvalML component.
Returns the estimators allowed for a particular problem type.
Standardizes input from a string name to a ComponentBase subclass if necessary.
Makes a dictionary for oversampler components. Finds the ratio of each class to the majority; if a class's ratio is smaller than the sampling_ratio, that class is oversampled, otherwise the data is left as is.
Transformers#
Transformers are components that take in data as input and output transformed data.
Transformer that can automatically extract features from datetime columns.
Featuretools DFS component that generates features for the input features.
Drops specified columns in input data.
Transformer to drop rows with NaN values.
Transformer to drop features whose percentage of NaN values exceeds a specified threshold.
Transformer to drop rows specified by row indices.
Transformer that can automatically extract features from emails.
Imputes missing data according to a specified imputation strategy.
A transformer that encodes target labels using values between 0 and num_classes - 1.
Reduces the number of features by using Linear Discriminant Analysis.
Applies a log transformation to the target data.
Transformer to calculate the Latent Semantic Analysis Values of text input.
Transformer that can automatically featurize text columns using featuretools' nlp_primitives.
A transformer that encodes categorical features in a one-hot numeric array.
A transformer that encodes ordinal features as an array of ordinal integers representing the relative order of categories.
SMOTE Oversampler component. Will automatically select whether to use SMOTE, SMOTEN, or SMOTENC based on inputs to the component.
Reduces the number of features by using Principal Component Analysis (PCA).
Imputes missing data according to a specified imputation strategy per column.
Removes trends and seasonality from time series by fitting a polynomial and moving average to the data.
Transformer to replace features that use the new nullable dtypes with a dtype that is compatible in EvalML.
Selects relevant features using recursive feature elimination with a Random Forest Classifier.
Selects top features based on importance weights using a Random Forest classifier.
Selects relevant features using recursive feature elimination with a Random Forest Regressor.
Selects top features based on importance weights using a Random Forest regressor.
Selects columns by specified Woodwork logical type or semantic tag in input data.
Selects specified columns in input data.
Imputes missing data according to a specified imputation strategy. Natural language columns are ignored.
A transformer that standardizes input features by removing the mean and scaling to unit variance.
Removes trends and seasonality from time series using the STL algorithm.
A transformer that encodes categorical features into target encodings.
Imputes missing target data according to a specified imputation strategy.
Transformer that delays input features and target variable for time series problems.
Imputes missing data according to a specified timeseries-specific imputation strategy.
Transformer that regularizes an inconsistently spaced datetime column.
Initializes an undersampling transformer to downsample the majority classes in the dataset.
Transformer that can automatically extract features from URLs.
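Transformers share a fit/transform interface. A minimal sketch using two of the components above; the parameter names are assumptions based on common EvalML usage:

    from evalml.pipelines.components import Imputer, OneHotEncoder

    # Fill missing values, then one-hot encode the categorical columns.
    imputer = Imputer(categorical_impute_strategy="most_frequent", numeric_impute_strategy="median")
    X_imputed = imputer.fit_transform(X_train, y_train)

    encoder = OneHotEncoder()
    X_encoded = encoder.fit_transform(X_imputed, y_train)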
Estimators#
Classifiers#
Classifiers are components that output a predicted class label.
Classifier that predicts using the specified strategy.
CatBoost Classifier, a classifier that uses gradient-boosting on decision trees. CatBoost is an open-source library and natively supports categorical features.
Decision Tree Classifier.
Elastic Net Classifier. Uses Logistic Regression with elasticnet penalty as the base estimator.
Extra Trees Classifier.
K-Nearest Neighbors Classifier.
LightGBM Classifier.
Logistic Regression Classifier.
Random Forest Classifier.
Stacked Ensemble Classifier.
Support Vector Machine Classifier.
Vowpal Wabbit Binary Classifier.
Vowpal Wabbit Multiclass Classifier.
XGBoost Classifier.
Regressors#
Regressors are components that output a predicted target value.
Autoregressive Integrated Moving Average Model. The three parameters (p, d, q) are the AR order, the degree of differencing, and the MA order. More information here: https://www.statsmodels.org/devel/generated/statsmodels.tsa.arima.model.ARIMA.html.
Baseline regressor that uses a simple strategy to make predictions. Useful as a baseline to compare against other regressors.
CatBoost Regressor, a regressor that uses gradient-boosting on decision trees. CatBoost is an open-source library and natively supports categorical features.
Decision Tree Regressor.
Elastic Net Regressor.
Holt-Winters Exponential Smoothing Forecaster.
Extra Trees Regressor.
LightGBM Regressor.
Linear Regressor.
Prophet is a procedure for forecasting time series data based on an additive model where non-linear trends are fit with yearly, weekly, and daily seasonality, plus holiday effects. It works best with time series that have strong seasonal effects and several seasons of historical data. Prophet is robust to missing data and shifts in the trend, and typically handles outliers well.
Random Forest Regressor.
Stacked Ensemble Regressor.
Support Vector Machine Regressor.
Time series estimator that predicts using the naive forecasting approach.
Vowpal Wabbit Regressor.
XGBoost Regressor.
Model Understanding#
Metrics#
Computes objective score as a function of potential binary classification decision thresholds for a fitted binary classification pipeline.
Calculates permutation importance for features.
Calculates permutation importance for one column in the original dataframe.
Confusion matrix for binary and multiclass classification.
Gets the confusion matrix and histogram bins for each threshold as well as the best threshold per objective. Only works with Binary Classification Pipelines.
Returns a dataframe showing the features with the greatest predictive power for a linear model.
Combines y_true and y_pred into a single dataframe and adds a column for outliers. Used in graph_prediction_vs_actual().
Get the data needed for the prediction_vs_actual_over_time plot.
Normalizes a confusion matrix.
Calculates one-way or two-way partial dependence.
Given labels and binary classifier predicted probabilities, compute and return the data representing a precision-recall curve.
Given labels and classifier predicted probabilities, compute and return the data representing a Receiver Operating Characteristic (ROC) curve. Works with binary or multiclass problems.
Get the transformed output after fitting X to the embedded space using t-SNE.
Finds the most influential features as well as any detrimental features from a dataframe of feature importances.
Outputs a human-readable explanation of trained pipeline behavior.
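A hedged sketch of computing permutation importance for a fitted pipeline; the function and argument names follow the entries above, and pipeline, X_test, and y_test are assumed to come from the earlier examples:

    from evalml.model_understanding import calculate_permutation_importance

    # Shuffle one column at a time and measure the drop in the pipeline's F1 score.
    importance = calculate_permutation_importance(pipeline, X_test, y_test, objective="F1")
    print(importance.head())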
Visualization Methods#
Generates a plot graphing objective score vs. decision thresholds for a fitted binary classification pipeline.
Generate and display a confusion matrix plot.
Create a one-way or two-way partial dependence plot.
Generate a bar graph of the pipeline's permutation importance.
Generate and display a precision-recall plot.
Generate a scatter plot comparing the true and predicted values. Used for regression plotting.
Plot the target values and predictions against time on the x-axis.
Generate and display a Receiver Operating Characteristic (ROC) plot for binary and multiclass classification problems.
Plot high dimensional data into lower dimensional space using t-SNE.
Prediction Explanations#
Creates a report summarizing the top contributing features for each data point in the input features.
Creates a report summarizing the top contributing features for the best and worst points in the dataset as measured by error to true labels.
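An illustrative sketch of the per-row explanation report; explain_predictions and its arguments reflect the prediction explanations module summarized above, and the exact signature should be treated as an assumption:

    from evalml.model_understanding.prediction_explanations import explain_predictions

    # Explain which features pushed the first two test rows toward their predictions.
    report = explain_predictions(
        pipeline=pipeline,
        input_features=X_test,
        y=y_test,
        indices_to_explain=[0, 1],
        top_k_features=3,
    )
    print(report)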
Objectives#
Objective Base Classes#
Base class for all objectives.
Base class for all binary classification objectives.
Base class for all multiclass classification objectives.
Base class for all regression objectives.
Domain-Specific Objectives#
Score using a cost-benefit matrix. Scores quantify the benefits of a given value, so greater numeric scores represent a better score. Costs and scores can be negative, indicating that a value is not beneficial. For example, in the case of monetary profit, a negative cost and/or score represents loss of cash flow.
Score the percentage of money lost, relative to the total transaction amount processed, due to fraud.
Lead scoring.
Sensitivity at Low Alert Rates.
Classification Objectives#
Accuracy score for binary classification.
Accuracy score for multiclass classification.
AUC score for binary classification.
AUC score for multiclass classification using macro averaging.
AUC score for multiclass classification using micro averaging.
AUC score for multiclass classification using weighted averaging.
Gini coefficient for binary classification.
Balanced accuracy score for binary classification.
Balanced accuracy score for multiclass classification.
F1 score for binary classification.
F1 score for multiclass classification using micro averaging.
F1 score for multiclass classification using macro averaging.
F1 score for multiclass classification using weighted averaging.
Log Loss for binary classification.
Log Loss for multiclass classification.
Matthews correlation coefficient for binary classification.
Matthews correlation coefficient for multiclass classification.
Precision score for binary classification.
Precision score for multiclass classification using micro averaging.
Precision score for multiclass classification using macro averaging.
Precision score for multiclass classification using weighted averaging.
Recall score for binary classification.
Recall score for multiclass classification using micro averaging.
Recall score for multiclass classification using macro averaging.
Recall score for multiclass classification using weighted averaging.
Regression Objectives#
Explained variance score for regression.
Mean absolute error for regression.
Mean absolute scaled error for time series regression.
Mean absolute percentage error for time series regression. Scaled by 100 to return a percentage.
Symmetric mean absolute percentage error for time series regression. Scaled by 100 to return a percentage.
Mean squared error for regression.
Mean squared log error for regression.
Median absolute error for regression.
Maximum residual error for regression.
Coefficient of determination for regression.
Root mean squared error for regression.
Root mean squared log error for regression.
Objective Utils#
Get a list of the names of all objectives.
Returns all core objective instances associated with the given problem type.
Get a list of all valid core objectives.
Get the default recommendation score metrics for the given problem type.
Get non-core objective classes.
Returns the Objective class corresponding to a given objective name.
Get objectives for optimization.
Get objectives for pipeline rankings.
Converts objectives from a [0, inf) scale to [0, 1] given a max and min for each objective.
Generate objectives to consider, with optional modifications to the defaults.
Get ranking-only objective classes.
Computes a recommendation score for a model given scores for a group of objectives.
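A small sketch of looking objectives up by name; get_objective and get_all_objective_names are the utilities listed above, and the return_instance flag is an assumption from common usage:

    from evalml.objectives import get_all_objective_names, get_objective

    print(get_all_objective_names()[:5])  # a few registered objective names

    # Resolve a name to an objective instance that can score predictions.
    f1 = get_objective("f1", return_instance=True)
    print(f1.score(y_true=y_test, y_predicted=preds))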
Problem Types#
Determine the type of problem being solved based on the targets (binary vs. multiclass classification, regression). Ignores missing and null data.
Handles problem_type by either returning the ProblemTypes or converting from a str.
Determines if the provided problem_type is a binary classification problem type.
Determines if the provided problem_type is a classification problem type.
Determines if the provided problem_type is a multiclass classification problem type.
Determines if the provided problem_type is a regression problem type.
Determines if the provided problem_type is a time series problem type.
Enum defining the supported types of machine learning problems.
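A brief sketch of the problem-type helpers; the names match the entries above, and the behavior noted in the comments follows the docstrings:

    from evalml.problem_types import ProblemTypes, detect_problem_type, handle_problem_types

    # Infer the problem type from a target column; a two-class target maps to binary.
    inferred = detect_problem_type(y_train)

    # Normalize a string into the ProblemTypes enum.
    assert handle_problem_types("binary") == ProblemTypes.BINARY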
Model Family#
Handles model_family by either returning the ModelFamily or converting from a string.
Enum for family of machine learning models.
Tuners#
Base Tuner class.
Bayesian Optimizer.
Grid Search Optimizer, which generates all of the possible points to search for using a grid.
Random Search Optimizer.
Data Checks#
Data Check Classes#
Check if any of the target labels are imbalanced, or if the number of values for each target is below 2 times the number of CV folds. Use for classification problems.
Check if the datetime column has equally spaced intervals and is monotonically increasing or decreasing in order to be supported by time series estimators.
Check if any of the features are likely to be ID columns.
Check if the target data is considered invalid.
Check if any of the features are likely to be multicollinear.
Check if the target or any of the features have no variance.
Check if there are any highly-null numerical, boolean, categorical, natural language, and unknown columns and rows in the input.
Checks if there are any outliers in input data by using IQR to determine score anomalies.
Check if there are any columns with sparsely populated values in the input.
Check if the target data contains certain distributions that may need to be transformed prior to training to improve model performance. Uses the Shapiro-Wilk test when the dataset is <= 5000 samples, otherwise uses Jarque-Bera.
Check if any of the features are highly correlated with the target by using mutual information, Pearson correlation, and other correlation metrics.
Checks whether the time series parameters are compatible with data splitting.
Checks whether the time series target data is compatible with splitting.
Check if there are any columns in the input that are either too unique for classification problems or not unique enough for regression problems.
Base class for all data checks.
A collection of data checks.
A collection of basic data checks that is used by AutoML by default.
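A hedged sketch of running the default data checks before an AutoML search; DefaultDataChecks and validate are listed above, and the constructor arguments shown are assumptions from typical usage:

    from evalml.data_checks import DefaultDataChecks

    # Run the standard collection of data checks that AutoML uses by default.
    data_checks = DefaultDataChecks(problem_type="binary", objective="Log Loss Binary")
    messages = data_checks.validate(X_train, y_train)
    print(messages)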
Data Check Messages#
Base class for a message returned by a DataCheck, tagged by name.
DataCheckMessage subclass for errors returned by data checks.
DataCheckMessage subclass for warnings returned by data checks.
Data Check Message Types#
Enum for type of data check message: WARNING or ERROR.
Data Check Message Codes#
Enum for data check message code.
Data Check Actions#
A recommended action returned by a DataCheck.
Enum for data check action code.
A recommended action option returned by a DataCheck.
Utils#
General Utils#
Converts a string describing a length of time to its length in seconds.
Downcasts IntegerNullable, BooleanNullable types to Double, Boolean in order to support certain estimators like ARIMA, CatBoost, and LightGBM.
Drop rows that have any NaNs in all dataframes or series.
Get importable subclasses of a base class. Used to list all of our estimators, transformers, components and pipelines dynamically.
Get the logger with the associated name.
Determines the column in the given data that should be used as the time index.
Attempts to import the requested library by name. If the import fails, raises an ImportError or warning.
Create a Woodwork structure from the given list, pandas, or numpy input, with specified types for columns. If a column's type is not specified, it will be inferred by Woodwork.
Checks if the given DataFrame contains only numeric values.
Generates a numpy.random.RandomState instance using seed.
Given a numpy.random.RandomState object, generate an int representing a seed value for another random number generator. Or, if given an int, return that int.
Pad the beginning num_to_pad rows with NaNs.
Convert the given value into a string that can safely be used for repr.
Saves fig to filepath if specified, or to a default location if not.
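A small sketch of the Woodwork initialization helper; infer_feature_types is the utility listed above, and the feature_types override is illustrative:

    import pandas as pd

    from evalml.utils import infer_feature_types

    df = pd.DataFrame({"age": [25, 31, 47], "plan": ["basic", "pro", "basic"]})

    # Initialize Woodwork typing information, overriding the inferred type for one column.
    df_ww = infer_feature_types(df, feature_types={"plan": "categorical"})
    print(df_ww.ww.schema)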