Automated Machine Learning (AutoML) Search#
Background#
Machine Learning#
Machine learning (ML) is the process of constructing a mathematical model of a system based on a sample dataset collected from that system.
One of the main goals of training an ML model is to teach the model to separate the signal present in the data from the noise inherent in the system and in the data collection process. If this is done effectively, the model can then be used to make accurate predictions about the system when presented with new, similar data. Additionally, introspecting on an ML model can reveal key information about the system being modeled, such as which inputs and transformations of the inputs are most useful to the ML model for learning the signal in the data, and are therefore the most predictive.
There are a variety of ML problem types. Supervised learning describes the case where the collected data contains an output value to be modeled and a set of inputs with which to train the model. EvalML focuses on training supervised learning models.
EvalML supports three common supervised ML problem types. The first is regression, where the target value to model is a continuous numeric value. Next are binary and multiclass classification, where the target value to model consists of two or more discrete values or categories. The choice of which supervised ML problem type is most appropriate depends on domain expertise and on how the model will be evaluated and used.
EvalML is currently building support for supervised time series problems: time series regression, time series binary classification, and time series multiclass classification. While we’ve added some features to tackle these kinds of problems, our functionality is still being actively developed so please be mindful of that before using it.
AutoML and Search#
AutoML is the process of automating the construction, training and evaluation of ML models. Given a dataset and some configuration, AutoML searches for the most effective and accurate ML model or models to fit that dataset. During the search, AutoML will explore different combinations of model type, model parameters and model architecture.
An effective AutoML solution offers several advantages over constructing and tuning ML models by hand. AutoML can assist with many of the difficult aspects of ML, such as avoiding overfitting and underfitting, imbalanced data, detecting data leakage and other potential issues with the problem setup, and automatically applying best-practice data cleaning, feature engineering, feature selection and various modeling techniques. AutoML can also leverage search algorithms to optimally sweep the hyperparameter search space, resulting in model performance which would be difficult to achieve by manual training.
AutoML in EvalML#
EvalML supports all of the above and more.
In its simplest usage, the AutoML search interface requires only the input data, the target data and a problem_type
specifying what kind of supervised ML problem to model.
** Graphing methods, like verbose AutoMLSearch, require ipywidgets to be installed when running on Jupyter Notebook or Jupyter Lab.
** If graphing on Jupyter Lab, jupyterlab-plotly is also required. To install it, make sure you have npm installed.
[1]:
import evalml
from evalml.utils import infer_feature_types
X, y = evalml.demos.load_fraud(n_rows=650)
Number of Features
Boolean 1
Categorical 6
Numeric 5
Number of training examples: 650
Targets
False 86.31%
True 13.69%
Name: fraud, dtype: object
To provide data to EvalML, it is recommended that you initialize a Woodwork accessor on your data. This allows you to easily control how EvalML will treat each of your features before training a model.
EvalML also accepts pandas
input, and will run type inference on top of the input pandas
data. If you’d like to change the types inferred by EvalML, you can use the infer_feature_types
utility method, which takes pandas or numpy input and converts it to a Woodwork data structure. The feature_types
parameter can be used to specify what types specific columns should be.
Feature types such as Natural Language
must be specified in this way; otherwise, Woodwork will infer them as Unknown
type and drop them during AutoMLSearch.
In the example below, we reformat a couple of features to make them easily consumable by the model, and then specify that the provider, which would have otherwise been inferred as a column with natural language, is a categorical column.
[2]:
X.ww["expiration_date"] = X["expiration_date"].apply(
lambda x: "20{}-01-{}".format(x.split("/")[1], x.split("/")[0])
)
X = infer_feature_types(
X,
feature_types={
"store_id": "categorical",
"expiration_date": "datetime",
"lat": "categorical",
"lng": "categorical",
"provider": "categorical",
},
)
In order to validate the results of the pipeline creation and optimization process, we will save some of our data as a holdout set.
[3]:
X_train, X_holdout, y_train, y_holdout = evalml.preprocessing.split_data(
    X, y, problem_type="binary", test_size=0.2
)
Data Checks#
Before calling AutoMLSearch.search
, we should run some sanity checks on our data to ensure that the input data will not run into common issues before we start a potentially time-consuming search. EvalML has various data checks that make this easy. Each data check will return a collection of warnings and errors if it detects potential issues with the input data. This allows users to inspect their data to avoid confusing errors that may arise during the search process. You
can learn about each of the data checks available through our data checks guide.
Here, we will run the DefaultDataChecks
class, which contains a series of data checks that are generally useful.
[4]:
from evalml.data_checks import DefaultDataChecks
data_checks = DefaultDataChecks("binary", "log loss binary")
data_checks.validate(X_train, y_train)
[4]:
[]
Since there were no warnings or errors returned, we can safely continue with the search process.
Holdout Set for Pipeline Ranking#
If the holdout_set_size
parameter is set and the input dataset has more than 500 rows, AutoMLSearch will create a holdout set from a holdout_set_size
fraction of the training data. Alternatively, a holdout set can be specified manually by using the X_holdout
and y_holdout
parameters in AutoMLSearch()
. In this example, the holdout set created previously will be used by AutoML search.
During the AutoML search process, the mean of the objective scores across all cross-validation folds (shown in the “mean_cv_score” column of the pipeline rankings) is calculated. This score is passed to the AutoML search tuner to further optimize the hyperparameters of the next batch of pipelines.
Afterward, the pipeline is fitted on the entire training dataset and scored on this new holdout set. This score appears in the “ranking_score” column of the pipeline rankings and is used to rank pipeline performance.
If a dataset has fewer than 500 rows or holdout_set_size=0
(which is the default setting), the “mean_cv_score” will be used as the ranking_score instead.
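For reference, a minimal sketch of the alternative (letting AutoMLSearch carve the holdout set out of the training data itself rather than passing X_holdout and y_holdout as we do below, assuming holdout_set_size is given as a fraction of the rows) would look like this:
# Sketch: reserve 10% of the training data for ranking instead of passing an explicit holdout set.
automl_auto_holdout = evalml.automl.AutoMLSearch(
    X_train=X_train,
    y_train=y_train,
    problem_type="binary",
    holdout_set_size=0.1,
)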
[5]:
automl = evalml.automl.AutoMLSearch(
    X_train=X_train,
    y_train=y_train,
    X_holdout=X_holdout,
    y_holdout=y_holdout,
    problem_type="binary",
    verbose=True,
)
automl.search(interactive_plot=False)
AutoMLSearch will use the holdout set to score and rank pipelines.
Removing columns ['currency'] because they are of 'Unknown' type
Using default limit of max_batches=3.
*****************************
* Beginning pipeline search *
*****************************
Optimizing for Log Loss Binary.
Lower score is better.
Using SequentialEngine to train and score pipelines.
Searching up to 3 batches for a total of None pipelines.
Allowed model families:
Evaluating Baseline Pipeline: Mode Baseline Binary Classification Pipeline
Mode Baseline Binary Classification Pipeline:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 4.921
Starting holdout set scoring
Finished holdout set scoring - Log Loss Binary: 4.991
*****************************
* Evaluating Batch Number 1 *
*****************************
Random Forest Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + One Hot Encoder + Oversampler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.259
Starting holdout set scoring
Finished holdout set scoring - Log Loss Binary: 0.224
*****************************
* Evaluating Batch Number 2 *
*****************************
Random Forest Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + One Hot Encoder + Oversampler + RF Classifier Select From Model:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.254
Starting holdout set scoring
Finished holdout set scoring - Log Loss Binary: 0.219
*****************************
* Evaluating Batch Number 3 *
*****************************
Decision Tree Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + One Hot Encoder + Oversampler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 1.449
Starting holdout set scoring
Finished holdout set scoring - Log Loss Binary: 1.003
High coefficient of variation (cv >= 0.5) within cross validation scores.
Decision Tree Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + One Hot Encoder + Oversampler may not perform as estimated on unseen data.
LightGBM Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + One Hot Encoder + Oversampler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.300
Starting holdout set scoring
Finished holdout set scoring - Log Loss Binary: 0.161
Extra Trees Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + One Hot Encoder + Oversampler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.361
Starting holdout set scoring
Finished holdout set scoring - Log Loss Binary: 0.348
Elastic Net Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Standard Scaler + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + One Hot Encoder + Standard Scaler + Oversampler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.375
Starting holdout set scoring
Finished holdout set scoring - Log Loss Binary: 0.400
CatBoost Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + Oversampler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.569
Starting holdout set scoring
Finished holdout set scoring - Log Loss Binary: 0.557
XGBoost Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + One Hot Encoder + Oversampler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.257
Starting holdout set scoring
Finished holdout set scoring - Log Loss Binary: 0.142
Logistic Regression Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Standard Scaler + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + One Hot Encoder + Standard Scaler + Oversampler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.374
Starting holdout set scoring
Finished holdout set scoring - Log Loss Binary: 0.402
Search finished after 41.09 seconds
Best pipeline: XGBoost Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + One Hot Encoder + Oversampler
Best pipeline Log Loss Binary: 0.142417
[5]:
{1: {'Random Forest Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + One Hot Encoder + Oversampler': 4.39600133895874,
'Total time of batch': 4.520259141921997},
2: {'Random Forest Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + One Hot Encoder + Oversampler + RF Classifier Select From Model': 5.180526494979858,
'Total time of batch': 5.310783624649048},
3: {'Decision Tree Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + One Hot Encoder + Oversampler': 3.33969783782959,
'LightGBM Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + One Hot Encoder + Oversampler': 3.600811719894409,
'Extra Trees Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + One Hot Encoder + Oversampler': 4.964298248291016,
'Elastic Net Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Standard Scaler + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + One Hot Encoder + Standard Scaler + Oversampler': 4.664109945297241,
'CatBoost Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + Oversampler': 2.491077184677124,
'XGBoost Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + One Hot Encoder + Oversampler': 4.163986444473267,
'Logistic Regression Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Standard Scaler + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + One Hot Encoder + Standard Scaler + Oversampler': 6.2491455078125,
'Total time of batch': 30.510870456695557}}
With the verbose
argument set to True, the AutoML search will log its progress, reporting each pipeline and parameter set evaluated during the search. The search iteration plot shown during AutoML search tracks each pipeline’s validation score (shown as gray points) against the best validation score found so far (shown as the blue line).
There are a number of mechanisms to control the AutoML search time. One way is to set the max_batches
parameter, which controls the maximum number of rounds of AutoML to evaluate, where each round may train and score a variable number of pipelines. Another way is to set the max_iterations
parameter, which controls the maximum number of candidate models to be evaluated during AutoML. If neither is set, AutoML will search up to a default number of batches (the search above used the default limit of max_batches=3). The first pipeline to be evaluated will always be a
baseline model representing a trivial solution.
The AutoML interface supports a variety of other parameters. For a comprehensive list, please refer to the API reference.
We also provide a standalone search method which does all of the above in a single line, and returns the AutoMLSearch
instance and data check results. If there were data check errors, AutoML will not be run and no AutoMLSearch
instance will be returned.
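As a sketch (assuming the standalone function is evalml.automl.search and that it returns the search instance together with the data check results), that one-liner looks like:
# Sketch: run the default data checks and the AutoML search in a single call.
# If the data checks return errors, no AutoMLSearch instance is returned.
automl_from_search, data_check_results = evalml.automl.search(
    X_train=X_train,
    y_train=y_train,
    problem_type="binary",
)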
Detecting Problem Type#
EvalML includes a simple method, detect_problem_type
, to help determine the problem type given the target data.
This function returns the detected problem type as a ProblemTypes enum value, one of ProblemTypes.BINARY, ProblemTypes.MULTICLASS, or ProblemTypes.REGRESSION. If the target data is invalid (for instance, when there is only one unique label), the function will raise an error instead.
[6]:
import pandas as pd
from evalml.problem_types import detect_problem_type
y_binary = pd.Series([0, 1, 1, 0, 1, 1])
detect_problem_type(y_binary)
[6]:
<ProblemTypes.BINARY: 'binary'>
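The same helper distinguishes multiclass and regression targets as well. The snippet below is illustrative; the regression case relies on the target having many unique numeric values, per EvalML's detection heuristics:
# Three discrete classes are detected as a multiclass target
y_multiclass = pd.Series([0, 1, 2, 2, 1, 0])
detect_problem_type(y_multiclass)  # ProblemTypes.MULTICLASS

# A target with many unique numeric values is detected as regression
y_regression = pd.Series(range(100)) / 10.0
detect_problem_type(y_regression)  # ProblemTypes.REGRESSION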
Objective parameter#
AutoMLSearch takes in an objective
parameter to determine which objective
to optimize for. By default, this parameter is set to auto
, which allows AutoML to choose LogLossBinary
for binary classification problems, LogLossMulticlass
for multiclass classification problems, and R2
for regression problems.
It should be noted that the objective
parameter is only used in ranking and helping to choose the pipelines to iterate over, but is not used to optimize each individual pipeline at fit time.
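If you want to rank pipelines by a different metric, you can pass the objective by name. A minimal sketch, using the built-in F1 objective for this binary problem:
# Sketch: rank binary classification pipelines by F1 instead of the default Log Loss Binary.
automl_f1 = evalml.automl.AutoMLSearch(
    X_train=X_train,
    y_train=y_train,
    problem_type="binary",
    objective="F1",
)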
To get the default objective for each problem type, you can use the get_default_primary_search_objective
function.
[7]:
from evalml.automl import get_default_primary_search_objective
binary_objective = get_default_primary_search_objective("binary")
multiclass_objective = get_default_primary_search_objective("multiclass")
regression_objective = get_default_primary_search_objective("regression")
print(binary_objective.name)
print(multiclass_objective.name)
print(regression_objective.name)
Log Loss Binary
Log Loss Multiclass
R2
Using custom pipelines#
EvalML’s AutoML algorithm generates a set of pipelines to search with. To provide a custom set instead, set allowed_component_graphs to a dictionary of custom component graphs. AutoMLSearch
will use these to generate Pipeline
instances. Note: this will prevent AutoML from generating other pipelines to search over.
[8]:
from evalml.pipelines import MulticlassClassificationPipeline
automl_custom = evalml.automl.AutoMLSearch(
    X_train=X_train,
    y_train=y_train,
    problem_type="multiclass",
    verbose=True,
    allowed_component_graphs={
        "My_pipeline": ["Simple Imputer", "Random Forest Classifier"],
        "My_other_pipeline": ["One Hot Encoder", "Random Forest Classifier"],
    },
)
AutoMLSearch will use mean CV score to rank pipelines.
Removing columns ['currency'] because they are of 'Unknown' type
Using default limit of max_batches=3.
Stopping the search early#
To stop the search early, hit Ctrl-C
. This will bring up a prompt asking for confirmation. Responding with y
will immediately stop the search. Responding with n
will continue the search.
Callback functions#
AutoMLSearch
supports several callback functions, which can be specified as parameters when initializing an AutoMLSearch
object. They are:
start_iteration_callback
add_result_callback
error_callback
Start Iteration Callback#
Users can set start_iteration_callback
to set what function is called before each pipeline training iteration. This callback function must take three positional parameters: the pipeline class, the pipeline parameters, and the AutoMLSearch
object.
[9]:
## start_iteration_callback example function
def start_iteration_callback_example(pipeline_class, pipeline_params, automl_obj):
    print("Training pipeline with the following parameters:", pipeline_params)
Add Result Callback#
Users can set add_result_callback
to set what function is called after each pipeline training iteration. This callback function must take three positional parameters: a dictionary containing the training results for the new pipeline, an untrained_pipeline containing the parameters used during training, and the AutoMLSearch
object.
[10]:
## add_result_callback example function
def add_result_callback_example(pipeline_results_dict, untrained_pipeline, automl_obj):
    print(
        "Results for trained pipeline with the following parameters:",
        pipeline_results_dict,
    )
Error Callback#
Users can set the error_callback
to set what function is called when search()
errors and raises an Exception
. This callback function takes three positional parameters: the Exception raised
, the traceback, and the AutoMLSearch object
. This callback function must also accept kwargs
, so AutoMLSearch
is able to pass along other parameters used by default.
Evalml defines several error callback functions, which can be found under evalml.automl.callbacks
. They are:
silent_error_callback
raise_error_callback
log_and_save_error_callback
raise_and_save_error_callback
log_error_callback
(default, used when error_callback
is None)
[11]:
# error_callback example; this is implemented in the evalml library
import logging

logger = logging.getLogger(__name__)  # evalml's own implementation uses its internal logger


def raise_error_callback(exception, traceback, automl, **kwargs):
    """Raises the exception thrown by the AutoMLSearch object. Also logs the exception as an error."""
    logger.error(f"AutoMLSearch raised a fatal exception: {str(exception)}")
    logger.error("\n".join(traceback))
    raise exception
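Any of these can be passed directly as the error_callback parameter. A minimal sketch using the built-in raise_error_callback:
from evalml.automl.callbacks import raise_error_callback

# Sketch: re-raise (and log) any exception encountered during search instead of only logging it.
automl_strict = evalml.automl.AutoMLSearch(
    X_train=X_train,
    y_train=y_train,
    problem_type="binary",
    error_callback=raise_error_callback,
)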
View Rankings#
A summary of all the pipelines built can be returned as a pandas DataFrame which is sorted by the validation score.
For AutoML searches completed with a holdout set, the validation score is the holdout score of the pipeline fitted using the entire training dataset.
For AutoML searches completed without a holdout set, the validation score is the average score across all cross-validation folds.
[12]:
automl.rankings
[12]:
id | pipeline_name | search_order | ranking_score | holdout_score | mean_cv_score | standard_deviation_cv_score | percent_better_than_baseline | high_variance_cv | parameters | |
---|---|---|---|---|---|---|---|---|---|---|
0 | 8 | XGBoost Classifier w/ Label Encoder + Select C... | 8 | 0.142417 | 0.142417 | 0.256950 | 0.137180 | 94.778757 | False | {'Label Encoder': {'positive_label': None}, 'N... |
1 | 4 | LightGBM Classifier w/ Label Encoder + Select ... | 4 | 0.160955 | 0.160955 | 0.299971 | 0.206176 | 93.904575 | False | {'Label Encoder': {'positive_label': None}, 'N... |
2 | 2 | Random Forest Classifier w/ Label Encoder + Dr... | 2 | 0.219145 | 0.219145 | 0.254382 | 0.045124 | 94.830946 | False | {'Label Encoder': {'positive_label': None}, 'D... |
3 | 1 | Random Forest Classifier w/ Label Encoder + Dr... | 1 | 0.224440 | 0.224440 | 0.259030 | 0.039520 | 94.736507 | False | {'Label Encoder': {'positive_label': None}, 'D... |
4 | 5 | Extra Trees Classifier w/ Label Encoder + Sele... | 5 | 0.348408 | 0.348408 | 0.361341 | 0.021758 | 92.657543 | False | {'Label Encoder': {'positive_label': None}, 'N... |
5 | 6 | Elastic Net Classifier w/ Label Encoder + Sele... | 6 | 0.400375 | 0.400375 | 0.374725 | 0.050027 | 92.385573 | False | {'Label Encoder': {'positive_label': None}, 'N... |
6 | 9 | Logistic Regression Classifier w/ Label Encode... | 9 | 0.401581 | 0.401581 | 0.374364 | 0.049925 | 92.392914 | False | {'Label Encoder': {'positive_label': None}, 'N... |
7 | 7 | CatBoost Classifier w/ Label Encoder + Select ... | 7 | 0.556655 | 0.556655 | 0.569010 | 0.013825 | 88.437683 | False | {'Label Encoder': {'positive_label': None}, 'N... |
8 | 3 | Decision Tree Classifier w/ Label Encoder + Se... | 3 | 1.002879 | 1.002879 | 1.448532 | 0.696497 | 70.565771 | True | {'Label Encoder': {'positive_label': None}, 'N... |
9 | 0 | Mode Baseline Binary Classification Pipeline | 0 | 4.990660 | 4.990660 | 4.921248 | 0.112910 | 0.000000 | False | {'Label Encoder': {'positive_label': None}, 'B... |
Recommendation Score#
If you would like a more robust evaluation of the performance of your models, EvalML additionally provides a recommendation score alongside the selected objective. The recommendation score is a weighted average of a number of default objectives for your problem type, normalized and scaled so that the final score can be interpreted as a percentage from 0 to 100. This weighted score provides a more holistic understanding of model performance, and prioritizes model generalizability rather than one single objective which may not completely serve your use case.
[13]:
automl.get_recommendation_scores(use_pipeline_names=True)
[13]:
{'Baseline Classifier': 25.0,
'Random Forest Classifier': 89.2028059447534,
'Decision Tree Classifier': 74.72820314710809,
'LightGBM Classifier': 91.29441485901573,
'Extra Trees Classifier': 76.48915094483688,
'Elastic Net Classifier': 64.98618569828929,
'CatBoost Classifier': 90.40009827909492,
'XGBoost Classifier': 93.1572081558569,
'Logistic Regression Classifier': 64.88094236798518}
[14]:
automl.get_recommendation_scores(priority="F1", use_pipeline_names=True)
[14]:
{'Baseline Classifier': 16.666666666666664,
'Random Forest Classifier': 87.42552654381409,
'Decision Tree Classifier': 70.77118305045302,
'LightGBM Classifier': 90.0296099060105,
'Extra Trees Classifier': 68.38407164438401,
'Elastic Net Classifier': 53.893229489916436,
'CatBoost Classifier': 90.56976248909359,
'XGBoost Classifier': 92.40783574026824,
'Logistic Regression Classifier': 53.8230672697137}
To see what objectives are included in the recommendation score, you can use:
[15]:
evalml.objectives.get_default_recommendation_objectives("binary")
[15]:
{'AUC', 'Balanced Accuracy Binary', 'F1', 'Log Loss Binary'}
If you would like to automatically rank your pipelines by this recommendation score, you can set use_recommendation=True
when initializing AutoMLSearch
.
[16]:
automl_recommendation = evalml.automl.AutoMLSearch(
    X_train=X_train,
    y_train=y_train,
    X_holdout=X_holdout,
    y_holdout=y_holdout,
    problem_type="binary",
    use_recommendation=True,
)
automl_recommendation.search(interactive_plot=False)
automl_recommendation.rankings[
    [
        "id",
        "pipeline_name",
        "search_order",
        "recommendation_score",
        "holdout_score",
        "mean_cv_score",
    ]
]
High coefficient of variation (cv >= 0.5) within cross validation scores.
Decision Tree Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + One Hot Encoder + Oversampler may not perform as estimated on unseen data.
[16]:
id | pipeline_name | search_order | recommendation_score | holdout_score | mean_cv_score | |
---|---|---|---|---|---|---|
0 | 8 | XGBoost Classifier w/ Label Encoder + Select C... | 8 | 93.157208 | 0.142417 | 0.256950 |
1 | 4 | LightGBM Classifier w/ Label Encoder + Select ... | 4 | 91.294415 | 0.160955 | 0.299971 |
2 | 7 | CatBoost Classifier w/ Label Encoder + Select ... | 7 | 90.400098 | 0.556655 | 0.569010 |
3 | 1 | Random Forest Classifier w/ Label Encoder + Dr... | 1 | 89.535903 | 0.224440 | 0.259030 |
4 | 2 | Random Forest Classifier w/ Label Encoder + Dr... | 2 | 89.202806 | 0.219145 | 0.254382 |
5 | 5 | Extra Trees Classifier w/ Label Encoder + Sele... | 5 | 76.489151 | 0.348408 | 0.361341 |
6 | 3 | Decision Tree Classifier w/ Label Encoder + Se... | 3 | 74.728203 | 1.002879 | 1.448532 |
7 | 6 | Elastic Net Classifier w/ Label Encoder + Sele... | 6 | 64.986186 | 0.400375 | 0.374725 |
8 | 9 | Logistic Regression Classifier w/ Label Encode... | 9 | 64.880942 | 0.401581 | 0.374364 |
9 | 0 | Mode Baseline Binary Classification Pipeline | 0 | 25.000000 | 4.990660 | 4.921248 |
There is a helper function on the AutoMLSearch
object to help you understand how the recommendation score was calculated. It displays the raw scores of the objectives included within the score calculation. Here, we take a look at the pipeline with id=3
, the Decision Tree pipeline:
[17]:
automl_recommendation.get_recommendation_score_breakdown(3)
[17]:
{'F1': 0.6285714285714287,
'AUC': 0.7827380952380953,
'Balanced Accuracy Binary': 0.7787698412698413,
'Log Loss Binary': 1.0028792511221052}
Describe Pipeline#
Each pipeline is given an id
. We can get more information about any particular pipeline using that id
. Here, we will get more information about the pipeline with id = 1
.
[18]:
automl.describe_pipeline(1)
****************************************************************************************************************************************
* Random Forest Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + One Hot Encoder + Oversampler *
****************************************************************************************************************************************
Problem Type: binary
Model Family: Random Forest
Pipeline Steps
==============
1. Label Encoder
* positive_label : None
2. Drop Columns Transformer
* columns : ['currency']
3. DateTime Featurizer
* features_to_extract : ['year', 'month', 'day_of_week', 'hour']
* encode_as_categories : False
* time_index : None
4. Imputer
* categorical_impute_strategy : most_frequent
* numeric_impute_strategy : mean
* boolean_impute_strategy : most_frequent
* categorical_fill_value : None
* numeric_fill_value : None
* boolean_fill_value : None
5. One Hot Encoder
* top_n : 10
* features_to_encode : None
* categories : None
* drop : if_binary
* handle_unknown : ignore
* handle_missing : error
6. Oversampler
* sampling_ratio : 0.25
* k_neighbors_default : 5
* n_jobs : -1
* sampling_ratio_dict : None
* categorical_features : [3, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49]
* k_neighbors : 5
7. Random Forest Classifier
* n_estimators : 100
* max_depth : 6
* n_jobs : -1
Training
========
Training for binary problems.
Total training time (including CV): 4.4 seconds
Cross Validation
----------------
Log Loss Binary MCC Binary Gini AUC Precision F1 Balanced Accuracy Binary Accuracy Binary # Training # Validation
0 0.239 0.823 0.841 0.920 1.000 0.829 0.854 0.960 346 174
1 0.305 0.481 0.528 0.764 0.875 0.452 0.649 0.902 347 173
2 0.234 0.875 0.848 0.924 1.000 0.884 0.896 0.971 347 173
mean 0.259 0.726 0.739 0.869 0.958 0.722 0.800 0.944 - -
std 0.040 0.214 0.183 0.091 0.072 0.235 0.132 0.037 - -
coef of var 0.153 0.294 0.247 0.105 0.075 0.326 0.165 0.039 - -
Get Pipeline#
We can also get the object of any pipeline via its id:
[19]:
pipeline = automl.get_pipeline(1)
print(pipeline.name)
print(pipeline.parameters)
Random Forest Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + One Hot Encoder + Oversampler
{'Label Encoder': {'positive_label': None}, 'Drop Columns Transformer': {'columns': ['currency']}, 'DateTime Featurizer': {'features_to_extract': ['year', 'month', 'day_of_week', 'hour'], 'encode_as_categories': False, 'time_index': None}, 'Imputer': {'categorical_impute_strategy': 'most_frequent', 'numeric_impute_strategy': 'mean', 'boolean_impute_strategy': 'most_frequent', 'categorical_fill_value': None, 'numeric_fill_value': None, 'boolean_fill_value': None}, 'One Hot Encoder': {'top_n': 10, 'features_to_encode': None, 'categories': None, 'drop': 'if_binary', 'handle_unknown': 'ignore', 'handle_missing': 'error'}, 'Oversampler': {'sampling_ratio': 0.25, 'k_neighbors_default': 5, 'n_jobs': -1, 'sampling_ratio_dict': None, 'categorical_features': [3, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49], 'k_neighbors': 5}, 'Random Forest Classifier': {'n_estimators': 100, 'max_depth': 6, 'n_jobs': -1}}
Get best pipeline#
If you specifically want to get the best pipeline, there is a convenient accessor for that. The pipeline returned is already fitted on the input X, y data that we passed to AutoMLSearch. To turn off this default behavior, set train_best_pipeline=False
when initializing AutoMLSearch.
[20]:
best_pipeline = automl.best_pipeline
print(best_pipeline.name)
print(best_pipeline.parameters)
best_pipeline.predict(X_train)
XGBoost Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + One Hot Encoder + Oversampler
{'Label Encoder': {'positive_label': None}, 'Numeric Pipeline - Select Columns By Type Transformer': {'column_types': ['category', 'EmailAddress', 'URL'], 'exclude': True}, 'Numeric Pipeline - Label Encoder': {'positive_label': None}, 'Numeric Pipeline - Drop Columns Transformer': {'columns': ['currency']}, 'Numeric Pipeline - DateTime Featurizer': {'features_to_extract': ['year', 'month', 'day_of_week', 'hour'], 'encode_as_categories': False, 'time_index': None}, 'Numeric Pipeline - Imputer': {'categorical_impute_strategy': 'most_frequent', 'numeric_impute_strategy': 'mean', 'boolean_impute_strategy': 'most_frequent', 'categorical_fill_value': None, 'numeric_fill_value': None, 'boolean_fill_value': None}, 'Numeric Pipeline - Select Columns Transformer': {'columns': ['card_id', 'store_id', 'amount', 'customer_present', 'lat', 'lng', 'datetime_month', 'datetime_day_of_week', 'datetime_hour']}, 'Categorical Pipeline - Select Columns Transformer': {'columns': ['expiration_date', 'provider', 'region', 'country']}, 'Categorical Pipeline - Label Encoder': {'positive_label': None}, 'Categorical Pipeline - Imputer': {'categorical_impute_strategy': 'most_frequent', 'numeric_impute_strategy': 'mean', 'boolean_impute_strategy': 'most_frequent', 'categorical_fill_value': None, 'numeric_fill_value': None, 'boolean_fill_value': None}, 'Categorical Pipeline - One Hot Encoder': {'top_n': 10, 'features_to_encode': None, 'categories': None, 'drop': 'if_binary', 'handle_unknown': 'ignore', 'handle_missing': 'error'}, 'Oversampler': {'sampling_ratio': 0.25, 'k_neighbors_default': 5, 'n_jobs': -1, 'sampling_ratio_dict': None, 'categorical_features': [3, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48], 'k_neighbors': 5}, 'XGBoost Classifier': {'eta': 0.1, 'max_depth': 6, 'min_child_weight': 1, 'n_estimators': 100, 'n_jobs': -1, 'eval_metric': 'logloss'}}
[20]:
id
144 False
253 True
221 False
432 False
384 False
...
128 False
98 False
472 False
642 False
494 False
Name: fraud, Length: 520, dtype: bool
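Since the best pipeline is already fitted, you can also score it directly on the holdout data with the pipeline's score method. A minimal sketch, using objective names that appear later in this guide:
# Sketch: evaluate the already-fitted best pipeline on the holdout set.
best_pipeline.score(X_holdout, y_holdout, objectives=["Accuracy Binary", "F1", "AUC"])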
Training and Scoring Multiple Pipelines using AutoMLSearch#
AutoMLSearch will automatically fit the best pipeline on the entire training data. It also provides an easy API for training and scoring other pipelines.
If you’d like to train one or more pipelines on the entire training data, you can use the train_pipelines
method.
Similarly, if you’d like to score one or more pipelines on a particular dataset, you can use the score_pipelines
method.
[21]:
trained_pipelines = automl.train_pipelines([automl.get_pipeline(i) for i in [0, 1, 2]])
trained_pipelines
[21]:
{'Mode Baseline Binary Classification Pipeline': pipeline = BinaryClassificationPipeline(component_graph={'Label Encoder': ['Label Encoder', 'X', 'y'], 'Baseline Classifier': ['Baseline Classifier', 'Label Encoder.x', 'Label Encoder.y']}, parameters={'Label Encoder':{'positive_label': None}, 'Baseline Classifier':{'strategy': 'mode'}}, custom_name='Mode Baseline Binary Classification Pipeline', random_seed=0),
'Random Forest Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + One Hot Encoder + Oversampler': pipeline = BinaryClassificationPipeline(component_graph={'Label Encoder': ['Label Encoder', 'X', 'y'], 'Drop Columns Transformer': ['Drop Columns Transformer', 'X', 'Label Encoder.y'], 'DateTime Featurizer': ['DateTime Featurizer', 'Drop Columns Transformer.x', 'Label Encoder.y'], 'Imputer': ['Imputer', 'DateTime Featurizer.x', 'Label Encoder.y'], 'One Hot Encoder': ['One Hot Encoder', 'Imputer.x', 'Label Encoder.y'], 'Oversampler': ['Oversampler', 'One Hot Encoder.x', 'Label Encoder.y'], 'Random Forest Classifier': ['Random Forest Classifier', 'Oversampler.x', 'Oversampler.y']}, parameters={'Label Encoder':{'positive_label': None}, 'Drop Columns Transformer':{'columns': ['currency']}, 'DateTime Featurizer':{'features_to_extract': ['year', 'month', 'day_of_week', 'hour'], 'encode_as_categories': False, 'time_index': None}, 'Imputer':{'categorical_impute_strategy': 'most_frequent', 'numeric_impute_strategy': 'mean', 'boolean_impute_strategy': 'most_frequent', 'categorical_fill_value': None, 'numeric_fill_value': None, 'boolean_fill_value': None}, 'One Hot Encoder':{'top_n': 10, 'features_to_encode': None, 'categories': None, 'drop': 'if_binary', 'handle_unknown': 'ignore', 'handle_missing': 'error'}, 'Oversampler':{'sampling_ratio': 0.25, 'k_neighbors_default': 5, 'n_jobs': -1, 'sampling_ratio_dict': None, 'categorical_features': [3, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49], 'k_neighbors': 5}, 'Random Forest Classifier':{'n_estimators': 100, 'max_depth': 6, 'n_jobs': -1}}, random_seed=0),
'Random Forest Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + One Hot Encoder + Oversampler + RF Classifier Select From Model': pipeline = BinaryClassificationPipeline(component_graph={'Label Encoder': ['Label Encoder', 'X', 'y'], 'Drop Columns Transformer': ['Drop Columns Transformer', 'X', 'Label Encoder.y'], 'DateTime Featurizer': ['DateTime Featurizer', 'Drop Columns Transformer.x', 'Label Encoder.y'], 'Imputer': ['Imputer', 'DateTime Featurizer.x', 'Label Encoder.y'], 'One Hot Encoder': ['One Hot Encoder', 'Imputer.x', 'Label Encoder.y'], 'Oversampler': ['Oversampler', 'One Hot Encoder.x', 'Label Encoder.y'], 'RF Classifier Select From Model': ['RF Classifier Select From Model', 'Oversampler.x', 'Oversampler.y'], 'Random Forest Classifier': ['Random Forest Classifier', 'RF Classifier Select From Model.x', 'Oversampler.y']}, parameters={'Label Encoder':{'positive_label': None}, 'Drop Columns Transformer':{'columns': ['currency']}, 'DateTime Featurizer':{'features_to_extract': ['year', 'month', 'day_of_week', 'hour'], 'encode_as_categories': False, 'time_index': None}, 'Imputer':{'categorical_impute_strategy': 'most_frequent', 'numeric_impute_strategy': 'mean', 'boolean_impute_strategy': 'most_frequent', 'categorical_fill_value': None, 'numeric_fill_value': None, 'boolean_fill_value': None}, 'One Hot Encoder':{'top_n': 10, 'features_to_encode': None, 'categories': None, 'drop': 'if_binary', 'handle_unknown': 'ignore', 'handle_missing': 'error'}, 'Oversampler':{'sampling_ratio': 0.25, 'k_neighbors_default': 5, 'n_jobs': -1, 'sampling_ratio_dict': None, 'categorical_features': [3, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49], 'k_neighbors': 5}, 'RF Classifier Select From Model':{'number_features': None, 'n_estimators': 10, 'max_depth': None, 'percent_features': 0.5, 'threshold': 'median', 'n_jobs': -1}, 'Random Forest Classifier':{'n_estimators': 100, 'max_depth': 6, 'n_jobs': -1}}, random_seed=0)}
[22]:
pipeline_holdout_scores = automl.score_pipelines(
    [trained_pipelines[name] for name in trained_pipelines.keys()],
    X_holdout,
    y_holdout,
    ["Accuracy Binary", "F1", "AUC"],
)
pipeline_holdout_scores
[22]:
{'Mode Baseline Binary Classification Pipeline': OrderedDict([('Accuracy Binary',
0.8615384615384616),
('F1', 0.0),
('AUC', 0.5)]),
'Random Forest Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + One Hot Encoder + Oversampler': OrderedDict([('Accuracy Binary',
0.9615384615384616),
('F1', 0.8387096774193548),
('AUC', 0.9265873015873015)]),
'Random Forest Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + One Hot Encoder + Oversampler + RF Classifier Select From Model': OrderedDict([('Accuracy Binary',
0.9615384615384616),
('F1', 0.8387096774193548),
('AUC', 0.9122023809523809)])}
Saving AutoMLSearch and pipelines from AutoMLSearch#
There are two ways to save results from AutoMLSearch.
1. You can save the AutoMLSearch object itself by calling .save(<filepath>). This will allow you to save the AutoMLSearch state and reload all pipelines from it.
2. If you want to save a pipeline from AutoMLSearch for future use, pipeline classes themselves have a .save(<filepath>) method.
[23]:
# saving the entire automl search
automl.save("automl.cloudpickle")
automl2 = evalml.automl.AutoMLSearch.load("automl.cloudpickle")
# saving the best pipeline using .save()
best_pipeline.save("pipeline.cloudpickle")
best_pipeline_copy = evalml.pipelines.PipelineBase.load("pipeline.cloudpickle")
Limiting the AutoML Search Space#
The AutoML search algorithm first trains each component in the pipeline with its default values. After the first iteration, it then tweaks the parameters of these components using the pre-defined hyperparameter ranges that these components have. To limit the search over certain hyperparameter ranges, you can specify a search_parameters
argument with your AutoMLSearch
parameters. These parameters will limit the hyperparameter search space or pipeline parameter space.
Hyperparameter ranges can be found through the API reference for each component. Parameters must be specified as a dictionary; to set hyperparameter ranges, the associated values must be skopt.space
Real, Integer, or Categorical objects.
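For example, numeric hyperparameters can be constrained with Integer (or Real) ranges in the same way. The sketch below restricts the Random Forest Classifier parameters shown in the pipeline description above, with illustrative bounds:
from skopt.space import Integer

# Sketch: limit the Random Forest Classifier to smaller forests and shallower trees.
search_parameters = {
    "Random Forest Classifier": {
        "n_estimators": Integer(50, 200),
        "max_depth": Integer(4, 10),
    }
}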
If, however, you’d like to specify certain values for the initial batch of the AutoML search algorithm, you can use the search_parameters
argument with non-skopt.space
objects. This will set the initial batch’s component parameters to the values passed by this argument.
[24]:
from evalml import AutoMLSearch
from evalml.demos import load_fraud
from skopt.space import Categorical
from evalml.model_family import ModelFamily
import woodwork as ww
X, y = load_fraud(n_rows=1000)
# example of setting parameter to just one value
search_parameters = {"Imputer": {"numeric_impute_strategy": "mean"}}
# limit the numeric impute strategy to include only `median` and `most_frequent`
# `mean` is the default value for this argument, but it doesn't need to be included in the specified hyperparameter range for this to work
search_parameters = {
    "Imputer": {"numeric_impute_strategy": Categorical(["median", "most_frequent"])}
}
# using this custom hyperparameter means that our Imputer components in these pipelines will only search through
# 'median' and 'most_frequent' strategies for 'numeric_impute_strategy'
automl_constrained = AutoMLSearch(
    X_train=X,
    y_train=y,
    problem_type="binary",
    search_parameters=search_parameters,
    verbose=True,
)
Number of Features
Boolean 1
Categorical 6
Numeric 5
Number of training examples: 1000
Targets
False 85.90%
True 14.10%
Name: fraud, dtype: object
AutoMLSearch will use mean CV score to rank pipelines.
Using default limit of max_batches=3.
A skopt.space
Integer, Real, or Categorical will set the hyperparameter space explored during search. All other values will set the pipeline parameters directly. Setting pipeline parameters directly defines the initialization parameters that a pipeline starts with during the first batch of AutoMLSearch. The hyperparameter range then defines the space of possible new parameter values, from which the tuner chooses.
Let’s walk through some examples to explain this. For instance, if we specify
search_parameters = {'Imputer': {
    'numeric_impute_strategy': 'mean'
}}
then in the initial search, the algorithm would use mean
as the impute strategy in batch 1. However, since Imputer.numeric_impute_strategy
has a valid hyperparameter range, if the algorithm suggests a different strategy, it can and will change this value. To limit this to using mean
only for the duration of the search, it is necessary to use a skopt.space Categorical:
search_parameters = {'Imputer': {
    'numeric_impute_strategy': Categorical(['mean'])
}}
However, if a value has no hyperparameter range associated, then the algorithm will use this value as the only parameter. For instance,
search_parameters = {'Label Encoder': {
    'positive_label': True
}}
Since Label Encoder.positive_label
has no associated hyperparameter range, the algorithm will use this parameter for the entire duration of the search.
Imbalanced Data#
The AutoML search algorithm now has functionality to handle imbalanced data during classification! AutoMLSearch now provides two additional parameters, sampler_method
and sampler_balanced_ratio
, that allow you to let AutoMLSearch know whether to sample imbalanced data, and how to do so. sampler_method
takes in either Undersampler
, Oversampler
, auto
, or None as the sampler to use, and sampler_balanced_ratio
specifies the minority/majority
ratio that you want to
sample to. Details on the Undersampler and Oversampler components can be found in the documentation.
This can be used for imbalanced datasets, like the fraud dataset, which has a ‘minority:majority’ ratio of < 0.2.
[25]:
automl_auto = AutoMLSearch(
    X_train=X, y_train=y, problem_type="binary", automl_algorithm="iterative"
)
automl_auto.allowed_pipelines[-1]
[25]:
pipeline = BinaryClassificationPipeline(component_graph={'Label Encoder': ['Label Encoder', 'X', 'y'], 'DateTime Featurizer': ['DateTime Featurizer', 'X', 'Label Encoder.y'], 'Imputer': ['Imputer', 'DateTime Featurizer.x', 'Label Encoder.y'], 'One Hot Encoder': ['One Hot Encoder', 'Imputer.x', 'Label Encoder.y'], 'Oversampler': ['Oversampler', 'One Hot Encoder.x', 'Label Encoder.y'], 'Extra Trees Classifier': ['Extra Trees Classifier', 'Oversampler.x', 'Oversampler.y']}, parameters={'Label Encoder':{'positive_label': None}, 'DateTime Featurizer':{'features_to_extract': ['year', 'month', 'day_of_week', 'hour'], 'encode_as_categories': False, 'time_index': None}, 'Imputer':{'categorical_impute_strategy': 'most_frequent', 'numeric_impute_strategy': 'mean', 'boolean_impute_strategy': 'most_frequent', 'categorical_fill_value': None, 'numeric_fill_value': None, 'boolean_fill_value': None}, 'One Hot Encoder':{'top_n': 10, 'features_to_encode': None, 'categories': None, 'drop': 'if_binary', 'handle_unknown': 'ignore', 'handle_missing': 'error'}, 'Oversampler':{'sampling_ratio': 0.25, 'k_neighbors_default': 5, 'n_jobs': -1, 'sampling_ratio_dict': None}, 'Extra Trees Classifier':{'n_estimators': 100, 'max_features': 'auto', 'max_depth': 6, 'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'n_jobs': -1}}, random_seed=0)
The Oversampler is chosen as the default sampling component here, since the sampler_balanced_ratio = 0.25
. If you specified a lower ratio, for instance sampler_balanced_ratio = 0.1
, then no sampling component would be added here. This is because if a ratio of 0.1 is considered balanced, then the dataset's ratio of just under 0.2 is also considered balanced, so no sampling is needed.
The Oversampler uses SMOTE under the hood, and automatically selects whether to use SMOTE, SMOTEN, or SMOTENC based on the data it receives.
[26]:
automl_auto_ratio = AutoMLSearch(
X_train=X,
y_train=y,
problem_type="binary",
sampler_balanced_ratio=0.1,
automl_algorithm="iterative",
)
automl_auto_ratio.allowed_pipelines[-1]
[26]:
pipeline = BinaryClassificationPipeline(component_graph={'Label Encoder': ['Label Encoder', 'X', 'y'], 'DateTime Featurizer': ['DateTime Featurizer', 'X', 'Label Encoder.y'], 'Imputer': ['Imputer', 'DateTime Featurizer.x', 'Label Encoder.y'], 'One Hot Encoder': ['One Hot Encoder', 'Imputer.x', 'Label Encoder.y'], 'Extra Trees Classifier': ['Extra Trees Classifier', 'One Hot Encoder.x', 'Label Encoder.y']}, parameters={'Label Encoder':{'positive_label': None}, 'DateTime Featurizer':{'features_to_extract': ['year', 'month', 'day_of_week', 'hour'], 'encode_as_categories': False, 'time_index': None}, 'Imputer':{'categorical_impute_strategy': 'most_frequent', 'numeric_impute_strategy': 'mean', 'boolean_impute_strategy': 'most_frequent', 'categorical_fill_value': None, 'numeric_fill_value': None, 'boolean_fill_value': None}, 'One Hot Encoder':{'top_n': 10, 'features_to_encode': None, 'categories': None, 'drop': 'if_binary', 'handle_unknown': 'ignore', 'handle_missing': 'error'}, 'Extra Trees Classifier':{'n_estimators': 100, 'max_features': 'auto', 'max_depth': 6, 'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'n_jobs': -1}}, random_seed=0)
Additionally, you can add more fine-grained sampling ratios by passing in a sampling_ratio_dict in pipeline parameters. For this dictionary, AutoMLSearch expects the keys to be int values from 0 to n-1 for the classes, and the values are the sampler_balanced_ratio associated with each class. This dictionary would override the AutoML argument sampler_balanced_ratio. Below, you can see the scenario for the Oversampler component on this dataset. Note that the logic for Undersamplers is included in the commented section.
[27]:
# In this case, the majority class is the negative class
# for the oversampler, we don't want to oversample this class, so class 0 (majority) will have a ratio of 1 to itself
# for the minority class 1, we want to oversample it to have a minority/majority ratio of 0.5, which means we want the minority to have 1/2 the samples of the majority
sampler_ratio_dict = {0: 1, 1: 0.5}
search_parameters = {"Oversampler": {"sampler_balanced_ratio": sampler_ratio_dict}}
automl_auto_ratio_dict = AutoMLSearch(
X_train=X,
y_train=y,
problem_type="binary",
search_parameters=search_parameters,
automl_algorithm="iterative",
)
automl_auto_ratio_dict.allowed_pipelines[-1]
# Undersampler case
# we don't want to undersample this class, so class 1 (minority) will have a ratio of 1 to itself
# for the majority class 0, we want to undersample it to have a minority/majority ratio of 0.5, which means we want majority to have 2x the samples as the minority
# sampler_ratio_dict = {0: 0.5, 1: 1}
# search_parameters = {"Oversampler": {"sampler_balanced_ratio": sampler_ratio_dict}}
# automl_auto_ratio_dict = AutoMLSearch(X_train=X, y_train=y, problem_type='binary', search_parameters=search_parameters)
[27]:
pipeline = BinaryClassificationPipeline(component_graph={'Label Encoder': ['Label Encoder', 'X', 'y'], 'DateTime Featurizer': ['DateTime Featurizer', 'X', 'Label Encoder.y'], 'Imputer': ['Imputer', 'DateTime Featurizer.x', 'Label Encoder.y'], 'One Hot Encoder': ['One Hot Encoder', 'Imputer.x', 'Label Encoder.y'], 'Oversampler': ['Oversampler', 'One Hot Encoder.x', 'Label Encoder.y'], 'Extra Trees Classifier': ['Extra Trees Classifier', 'Oversampler.x', 'Oversampler.y']}, parameters={'Label Encoder':{'positive_label': None}, 'DateTime Featurizer':{'features_to_extract': ['year', 'month', 'day_of_week', 'hour'], 'encode_as_categories': False, 'time_index': None}, 'Imputer':{'categorical_impute_strategy': 'most_frequent', 'numeric_impute_strategy': 'mean', 'boolean_impute_strategy': 'most_frequent', 'categorical_fill_value': None, 'numeric_fill_value': None, 'boolean_fill_value': None}, 'One Hot Encoder':{'top_n': 10, 'features_to_encode': None, 'categories': None, 'drop': 'if_binary', 'handle_unknown': 'ignore', 'handle_missing': 'error'}, 'Oversampler':{'sampling_ratio': 0.25, 'k_neighbors_default': 5, 'n_jobs': -1, 'sampling_ratio_dict': None, 'sampler_balanced_ratio': {0: 1, 1: 0.5}}, 'Extra Trees Classifier':{'n_estimators': 100, 'max_features': 'auto', 'max_depth': 6, 'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'n_jobs': -1}}, random_seed=0)
Adding ensemble methods to AutoML#
Stacking#
Stacking is an ensemble machine learning algorithm that involves training a model to best combine the predictions of several base learning algorithms. First, each base learning algorithm is trained using the given data. Then, the combining algorithm or meta-learner is trained on the predictions made by those base learning algorithms to make a final prediction.
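To illustrate the general idea outside of EvalML, here is a minimal conceptual sketch using scikit-learn's StackingClassifier on synthetic data; this is only an illustration of stacking and is not how EvalML's internal ensembler is implemented:
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression

X_demo, y_demo = make_classification(n_samples=200, random_state=0)

# Base learners are trained on the data; the meta-learner (final_estimator)
# is trained on their cross-validated predictions to produce the final output.
base_learners = [
    ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
    ("lr", LogisticRegression(max_iter=1000)),
]
stack = StackingClassifier(estimators=base_learners, final_estimator=LogisticRegression())
stack.fit(X_demo, y_demo)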
AutoML enables stacking using the ensembling
flag during initialization; this is set to False
by default. How ensembling runs is defined by the AutoML algorithm you choose. In the IterativeAlgorithm
, the stacking ensemble pipeline runs in its own batch after a whole cycle of training has occurred (each allowed pipeline trains for one batch). Note that this means a large number of iterations may need to run before the stacking ensemble runs. It is also important to note that only
the first CV fold is calculated for stacking ensembles because the model internally uses CV folds. See the AutoML Algorithms section below for how ensembling is run for DefaultAlgorithm
. Please do note that ensembling is currently unavailable for time series problems.
[28]:
X, y = evalml.demos.load_breast_cancer()
automl_with_ensembling = AutoMLSearch(
X_train=X,
y_train=y,
problem_type="binary",
allowed_model_families=[ModelFamily.LINEAR_MODEL],
max_batches=4,
ensembling=True,
automl_algorithm="iterative",
verbose=True,
)
automl_with_ensembling.search(interactive_plot=False)
Number of Features
Numeric 30
Number of training examples: 569
Targets
benign 62.74%
malignant 37.26%
Name: target, dtype: object
AutoMLSearch will use mean CV score to rank pipelines.
Generating pipelines to search over...
Ensembling will run every 3 batches.
2 pipelines ready for search.
*****************************
* Beginning pipeline search *
*****************************
Optimizing for Log Loss Binary.
Lower score is better.
Using SequentialEngine to train and score pipelines.
Searching up to 4 batches for a total of 14 pipelines.
Allowed model families: linear_model, linear_model
Evaluating Baseline Pipeline: Mode Baseline Binary Classification Pipeline
Mode Baseline Binary Classification Pipeline:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 13.429
*****************************
* Evaluating Batch Number 1 *
*****************************
Elastic Net Classifier w/ Label Encoder + Imputer + Standard Scaler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.077
Logistic Regression Classifier w/ Label Encoder + Imputer + Standard Scaler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.077
*****************************
* Evaluating Batch Number 2 *
*****************************
Logistic Regression Classifier w/ Label Encoder + Imputer + Standard Scaler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.090
Logistic Regression Classifier w/ Label Encoder + Imputer + Standard Scaler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.085
Logistic Regression Classifier w/ Label Encoder + Imputer + Standard Scaler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.081
Logistic Regression Classifier w/ Label Encoder + Imputer + Standard Scaler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.097
Logistic Regression Classifier w/ Label Encoder + Imputer + Standard Scaler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.093
*****************************
* Evaluating Batch Number 3 *
*****************************
Elastic Net Classifier w/ Label Encoder + Imputer + Standard Scaler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.075
Elastic Net Classifier w/ Label Encoder + Imputer + Standard Scaler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.076
Elastic Net Classifier w/ Label Encoder + Imputer + Standard Scaler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.075
Elastic Net Classifier w/ Label Encoder + Imputer + Standard Scaler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.079
Elastic Net Classifier w/ Label Encoder + Imputer + Standard Scaler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.075
*****************************
* Evaluating Batch Number 4 *
*****************************
Stacked Ensemble Classification Pipeline:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.103
Search finished after 21.27 seconds
Best pipeline: Elastic Net Classifier w/ Label Encoder + Imputer + Standard Scaler
Best pipeline Log Loss Binary: 0.075391
[28]:
{1: {'Elastic Net Classifier w/ Label Encoder + Imputer + Standard Scaler': 1.6308603286743164,
'Logistic Regression Classifier w/ Label Encoder + Imputer + Standard Scaler': 1.4973294734954834,
'Total time of batch': 3.338399887084961},
2: {'Logistic Regression Classifier w/ Label Encoder + Imputer + Standard Scaler': 1.499079704284668,
'Total time of batch': 8.187111854553223},
3: {'Elastic Net Classifier w/ Label Encoder + Imputer + Standard Scaler': 1.415229082107544,
'Total time of batch': 7.84323263168335},
4: {'Stacked Ensemble Classification Pipeline': 1.2307462692260742,
'Total time of batch': 1.346553087234497}}
We can view more information about the best performing pipeline by calling .describe().
[29]:
automl_with_ensembling.best_pipeline.describe()
***********************************************************************
* Elastic Net Classifier w/ Label Encoder + Imputer + Standard Scaler *
***********************************************************************
Problem Type: binary
Model Family: Linear
Number of features: 30
Pipeline Steps
==============
1. Label Encoder
* positive_label : None
2. Imputer
* categorical_impute_strategy : most_frequent
* numeric_impute_strategy : knn
* boolean_impute_strategy : most_frequent
* categorical_fill_value : None
* numeric_fill_value : None
* boolean_fill_value : None
3. Standard Scaler
4. Elastic Net Classifier
* penalty : elasticnet
* C : 8.474044870453413
* l1_ratio : 0.6235636967859725
* n_jobs : -1
* multi_class : auto
* solver : saga
AutoML Algorithms#
EvalML currently has two algorithms available for users to choose from. Below, we will run through how each algorithm works and how to access them through AutoMLSearch
as well as the top level search methods.
IterativeAlgorithm#
IterativeAlgorithm
is the first AutoML algorithm created in EvalML and can be accessed with the search_iterative method or by specifying AutoMLSearch(automl_algorithm='iterative'). The algorithm works as follows:
Every batch (after the initial baseline model) contains pipelines of all available estimators for the specified problem type
Pipelines contain preprocessing (imputing, encoding, etc.) needed for machine learning but no feature selection is applied
Ensembling can be turned on by passing in the ensembling=True parameter and will be run after a whole cycle of training has occurred (each allowed pipeline trains for one batch), as the sketch below shows
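A minimal sketch (assuming a feature matrix X and target y are already loaded; automl_iterative_ensembling is an illustrative name) of enabling ensembling with the iterative algorithm directly on AutoMLSearch:
from evalml import AutoMLSearch

automl_iterative_ensembling = AutoMLSearch(
    X_train=X,
    y_train=y,
    problem_type="binary",
    automl_algorithm="iterative",
    ensembling=True,
    # enough batches for a full cycle of allowed pipelines plus the ensembling batch
    max_batches=4,
)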
[30]:
import evalml
X, y = evalml.demos.load_fraud(n_rows=250)
Number of Features
Boolean 1
Categorical 6
Numeric 5
Number of training examples: 250
Targets
False 88.40%
True 11.60%
Name: fraud, dtype: object
[31]:
from evalml.automl import search_iterative
# top level search method will run `AutoMLSearch` with `IterativeAlgorithm` as well as apply our default data checks
auto_iterative, messages_iterative = search_iterative(X, y, problem_type="binary")
[32]:
from evalml import AutoMLSearch
auto_iterative = AutoMLSearch(
X_train=X,
y_train=y,
problem_type="binary",
automl_algorithm="iterative",
verbose=True,
)
auto_iterative.search(interactive_plot=False)
AutoMLSearch will use mean CV score to rank pipelines.
Removing columns ['currency', 'expiration_date'] because they are of 'Unknown' type
Generating pipelines to search over...
8 pipelines ready for search.
Using default limit of max_batches=1.
*****************************
* Beginning pipeline search *
*****************************
Optimizing for Log Loss Binary.
Lower score is better.
Using SequentialEngine to train and score pipelines.
Searching up to 1 batches for a total of None pipelines.
Allowed model families: linear_model, linear_model, xgboost, lightgbm, catboost, random_forest, decision_tree, extra_trees
Evaluating Baseline Pipeline: Mode Baseline Binary Classification Pipeline
Mode Baseline Binary Classification Pipeline:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 4.181
*****************************
* Evaluating Batch Number 1 *
*****************************
Elastic Net Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + One Hot Encoder + Oversampler + Standard Scaler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.429
Logistic Regression Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + One Hot Encoder + Oversampler + Standard Scaler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.429
XGBoost Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + One Hot Encoder + Oversampler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.266
LightGBM Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + One Hot Encoder + Oversampler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.325
CatBoost Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Oversampler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.596
Random Forest Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + One Hot Encoder + Oversampler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.287
Decision Tree Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + One Hot Encoder + Oversampler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 4.640
High coefficient of variation (cv >= 0.5) within cross validation scores.
Decision Tree Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + One Hot Encoder + Oversampler may not perform as estimated on unseen data.
Extra Trees Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + One Hot Encoder + Oversampler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.347
Search finished after 21.14 seconds
Best pipeline: XGBoost Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + One Hot Encoder + Oversampler
Best pipeline Log Loss Binary: 0.266464
[32]:
{1: {'Elastic Net Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + One Hot Encoder + Oversampler + Standard Scaler': 2.6987531185150146,
'Logistic Regression Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + One Hot Encoder + Oversampler + Standard Scaler': 2.5843775272369385,
'XGBoost Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + One Hot Encoder + Oversampler': 2.5880749225616455,
'LightGBM Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + One Hot Encoder + Oversampler': 2.1669623851776123,
'CatBoost Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Oversampler': 1.8540968894958496,
'Random Forest Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + One Hot Encoder + Oversampler': 2.7945504188537598,
'Decision Tree Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + One Hot Encoder + Oversampler': 1.9360074996948242,
'Extra Trees Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + One Hot Encoder + Oversampler': 3.238663911819458,
'Total time of batch': 20.673182487487793}}
DefaultAlgorithm#
DefaultAlgorithm
was designed to do three main things:
Abstract out more parameters and decisions from the user.
Perform deeper tuning for high performing pipelines.
Create a platform to introduce feature selection as well as other potential techniques/heuristics for AutoML.
DefaultAlgorithm
does this by creating the concept of two modes: fast
and long
, where fast
is a subset of long. The algorithm runs as follows:
Run naive pipelines:
a linear model with the default preprocessing pipeline
a random forest pipeline with the default preprocessing pipeline
Run the same pipelines, this time with feature selection. Subsequent pipelines will use the selected features with a Select Columns Transformer.
Run all pipelines with preprocessing components:
scan the rest of the estimators (IterativeAlgorithm batch 1).
First ensembling run
Fast mode ends here. Begin long mode.
Run top 3 estimators:
Generate 50 random parameter sets. Run all 150 in one batch
Second ensembling run
Repeat the following two steps indefinitely until the specified time in AutoMLSearch is met:
For each of the previous top 3 estimators, sample 10 parameters from the tuner. Run all 30 in one batch
Run ensembling
To this end, it is recommended to use the top level search() method to run DefaultAlgorithm. This allows users to specify running search with just the mode parameter, where fast is recommended for users who want a quick scan of how EvalML pipelines will perform on their problem and long is reserved for a deeper dive into high performing pipelines. If finer control over AutoML parameters is needed, automl_algorithm='default' can also be specified using AutoMLSearch, which will default to fast mode. However, in this case ensembling will be defined by the ensembling flag (if ensembling=False, the above-mentioned ensembling batches will be skipped). Users are welcome to select max_batches according to the algorithm above (or other stopping criteria), but should be aware that results may not be optimal if the algorithm does not run for the full length of fast mode.
[33]:
from evalml.automl import search
# top level search method will run `AutoMLSearch` with `DefaultAlgorithm` as well as apply our default data checks
auto_default, messages_default = search(X, y, problem_type="binary", mode="fast")
High coefficient of variation (cv >= 0.5) within cross validation scores.
Decision Tree Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + One Hot Encoder + Oversampler may not perform as estimated on unseen data.
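For a deeper dive into high performing pipelines, the same top-level search can be run in long mode; a minimal sketch (assuming the same X and y, with illustrative variable names):
# long mode continues into the deeper-tuning and repeated ensembling batches described above
auto_default_long, messages_default_long = search(X, y, problem_type="binary", mode="long")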
[34]:
from evalml import AutoMLSearch
auto_default = AutoMLSearch(
X_train=X,
y_train=y,
problem_type="binary",
automl_algorithm="default",
ensembling=True,
verbose=True,
)
auto_default.search(interactive_plot=False)
AutoMLSearch will use mean CV score to rank pipelines.
Removing columns ['currency', 'expiration_date'] because they are of 'Unknown' type
Using default limit of max_batches=4.
*****************************
* Beginning pipeline search *
*****************************
Optimizing for Log Loss Binary.
Lower score is better.
Using SequentialEngine to train and score pipelines.
Searching up to 4 batches for a total of None pipelines.
Allowed model families:
Evaluating Baseline Pipeline: Mode Baseline Binary Classification Pipeline
Mode Baseline Binary Classification Pipeline:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 4.181
*****************************
* Evaluating Batch Number 1 *
*****************************
Random Forest Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + One Hot Encoder + Oversampler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.287
*****************************
* Evaluating Batch Number 2 *
*****************************
Random Forest Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + One Hot Encoder + Oversampler + RF Classifier Select From Model:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.282
*****************************
* Evaluating Batch Number 3 *
*****************************
Decision Tree Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + One Hot Encoder + Oversampler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 2.363
High coefficient of variation (cv >= 0.5) within cross validation scores.
Decision Tree Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + One Hot Encoder + Oversampler may not perform as estimated on unseen data.
LightGBM Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + One Hot Encoder + Oversampler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.325
Extra Trees Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + One Hot Encoder + Oversampler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.348
Elastic Net Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Standard Scaler + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + One Hot Encoder + Standard Scaler + Oversampler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.422
CatBoost Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + Oversampler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.596
XGBoost Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + One Hot Encoder + Oversampler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.266
Logistic Regression Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Standard Scaler + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + One Hot Encoder + Standard Scaler + Oversampler:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.422
*****************************
* Evaluating Batch Number 4 *
*****************************
Stacked Ensemble Classification Pipeline:
Starting cross validation
Finished cross validation - mean Log Loss Binary: 0.237
Search finished after 38.06 seconds
Best pipeline: Stacked Ensemble Classification Pipeline
Best pipeline Log Loss Binary: 0.236593
[34]:
{1: {'Random Forest Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + One Hot Encoder + Oversampler': 2.7942471504211426,
'Total time of batch': 2.918593406677246},
2: {'Random Forest Classifier w/ Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + One Hot Encoder + Oversampler + RF Classifier Select From Model': 3.302170991897583,
'Total time of batch': 3.4316160678863525},
3: {'Decision Tree Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + One Hot Encoder + Oversampler': 1.9890117645263672,
'LightGBM Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + One Hot Encoder + Oversampler': 2.5335586071014404,
'Extra Trees Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + One Hot Encoder + Oversampler': 3.403079032897949,
'Elastic Net Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Standard Scaler + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + One Hot Encoder + Standard Scaler + Oversampler': 2.9113550186157227,
'CatBoost Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + Oversampler': 2.090397834777832,
'XGBoost Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + One Hot Encoder + Oversampler': 2.6184298992156982,
'Logistic Regression Classifier w/ Label Encoder + Select Columns By Type Transformer + Label Encoder + Drop Columns Transformer + DateTime Featurizer + Imputer + Standard Scaler + Select Columns Transformer + Select Columns Transformer + Label Encoder + Imputer + One Hot Encoder + Standard Scaler + Oversampler': 3.1419928073883057,
'Total time of batch': 19.722355127334595},
4: {'Stacked Ensemble Classification Pipeline': 11.37096619606018,
'Total time of batch': 11.526757955551147}}
Pipeline differences#
Through the search output above, we can see how pipelines differ between IterativeAlgorithm and DefaultAlgorithm. This is because DefaultAlgorithm utilizes new components such as RFClassifierSelectFromModel (or RFRegressorSelectFromModel for regression problems) and other column selectors for feature selection, as well as a new pipeline structure to handle feature selection for categorical and non-categorical features.
[35]:
auto_iterative.get_pipeline(4).graph()
[35]:
[36]:
auto_default.get_pipeline(6).graph()
[36]:
Access raw results#
The AutoMLSearch
class records detailed results information under the results
field, including information about the cross-validation scoring and parameters.
[37]:
import pprint
pp = pprint.PrettyPrinter(indent=0, width=100, depth=3, compact=True, sort_dicts=False)
pp.pprint(automl.results)
{'pipeline_results': {0: {'id': 0,
'pipeline_name': 'Mode Baseline Binary Classification Pipeline',
'pipeline_class': <class 'evalml.pipelines.binary_classification_pipeline.BinaryClassificationPipeline'>,
'pipeline_summary': 'Baseline Classifier w/ Label Encoder',
'parameters': {...},
'mean_cv_score': 4.921248270190403,
'standard_deviation_cv_score': 0.11291020093698304,
'high_variance_cv': False,
'training_time': 0.5915412902832031,
'cv_data': [...],
'percent_better_than_baseline_all_objectives': {...},
'percent_better_than_baseline': 0,
'ranking_score': 4.990659700031606,
'ranking_additional_objectives': {...},
'holdout_score': 4.990659700031606},
1: {'id': 1,
'pipeline_name': 'Random Forest Classifier w/ Label Encoder + Drop Columns '
'Transformer + DateTime Featurizer + Imputer + One Hot '
'Encoder + Oversampler',
'pipeline_class': <class 'evalml.pipelines.binary_classification_pipeline.BinaryClassificationPipeline'>,
'pipeline_summary': 'Random Forest Classifier w/ Label Encoder + Drop '
'Columns Transformer + DateTime Featurizer + Imputer + '
'One Hot Encoder + Oversampler',
'parameters': {...},
'mean_cv_score': 0.2590295567892626,
'standard_deviation_cv_score': 0.03951977772267373,
'high_variance_cv': False,
'training_time': 4.384795665740967,
'cv_data': [...],
'percent_better_than_baseline_all_objectives': {...},
'percent_better_than_baseline': 94.73650702895262,
'ranking_score': 0.22443984692595487,
'ranking_additional_objectives': {...},
'holdout_score': 0.22443984692595487},
2: {'id': 2,
'pipeline_name': 'Random Forest Classifier w/ Label Encoder + Drop Columns '
'Transformer + DateTime Featurizer + Imputer + One Hot '
'Encoder + Oversampler + RF Classifier Select From Model',
'pipeline_class': <class 'evalml.pipelines.binary_classification_pipeline.BinaryClassificationPipeline'>,
'pipeline_summary': 'Random Forest Classifier w/ Label Encoder + Drop '
'Columns Transformer + DateTime Featurizer + Imputer + '
'One Hot Encoder + Oversampler + RF Classifier Select '
'From Model',
'parameters': {...},
'mean_cv_score': 0.25438195931603735,
'standard_deviation_cv_score': 0.04512409395105498,
'high_variance_cv': False,
'training_time': 5.16019344329834,
'cv_data': [...],
'percent_better_than_baseline_all_objectives': {...},
'percent_better_than_baseline': 94.83094643168255,
'ranking_score': 0.21914451718965428,
'ranking_additional_objectives': {...},
'holdout_score': 0.21914451718965428},
3: {'id': 3,
'pipeline_name': 'Decision Tree Classifier w/ Label Encoder + Select '
'Columns By Type Transformer + Label Encoder + Drop '
'Columns Transformer + DateTime Featurizer + Imputer + '
'Select Columns Transformer + Select Columns Transformer + '
'Label Encoder + Imputer + One Hot Encoder + Oversampler',
'pipeline_class': <class 'evalml.pipelines.binary_classification_pipeline.BinaryClassificationPipeline'>,
'pipeline_summary': 'Decision Tree Classifier w/ Label Encoder + Select '
'Columns By Type Transformer + Label Encoder + Drop '
'Columns Transformer + DateTime Featurizer + Imputer + '
'Select Columns Transformer + Select Columns '
'Transformer + Label Encoder + Imputer + One Hot '
'Encoder + Oversampler',
'parameters': {...},
'mean_cv_score': 1.4485315095019227,
'standard_deviation_cv_score': 0.6964965411823029,
'high_variance_cv': True,
'training_time': 3.3195202350616455,
'cv_data': [...],
'percent_better_than_baseline_all_objectives': {...},
'percent_better_than_baseline': 70.56577051240947,
'ranking_score': 1.0028792511221052,
'ranking_additional_objectives': {...},
'holdout_score': 1.0028792511221052},
4: {'id': 4,
'pipeline_name': 'LightGBM Classifier w/ Label Encoder + Select Columns By '
'Type Transformer + Label Encoder + Drop Columns '
'Transformer + DateTime Featurizer + Imputer + Select '
'Columns Transformer + Select Columns Transformer + Label '
'Encoder + Imputer + One Hot Encoder + Oversampler',
'pipeline_class': <class 'evalml.pipelines.binary_classification_pipeline.BinaryClassificationPipeline'>,
'pipeline_summary': 'LightGBM Classifier w/ Label Encoder + Select Columns '
'By Type Transformer + Label Encoder + Drop Columns '
'Transformer + DateTime Featurizer + Imputer + Select '
'Columns Transformer + Select Columns Transformer + '
'Label Encoder + Imputer + One Hot Encoder + '
'Oversampler',
'parameters': {...},
'mean_cv_score': 0.2999710030621828,
'standard_deviation_cv_score': 0.2061756997312182,
'high_variance_cv': False,
'training_time': 3.58290433883667,
'cv_data': [...],
'percent_better_than_baseline_all_objectives': {...},
'percent_better_than_baseline': 93.90457488440069,
'ranking_score': 0.1609546813582899,
'ranking_additional_objectives': {...},
'holdout_score': 0.1609546813582899},
5: {'id': 5,
'pipeline_name': 'Extra Trees Classifier w/ Label Encoder + Select Columns '
'By Type Transformer + Label Encoder + Drop Columns '
'Transformer + DateTime Featurizer + Imputer + Select '
'Columns Transformer + Select Columns Transformer + Label '
'Encoder + Imputer + One Hot Encoder + Oversampler',
'pipeline_class': <class 'evalml.pipelines.binary_classification_pipeline.BinaryClassificationPipeline'>,
'pipeline_summary': 'Extra Trees Classifier w/ Label Encoder + Select '
'Columns By Type Transformer + Label Encoder + Drop '
'Columns Transformer + DateTime Featurizer + Imputer + '
'Select Columns Transformer + Select Columns '
'Transformer + Label Encoder + Imputer + One Hot '
'Encoder + Oversampler',
'parameters': {...},
'mean_cv_score': 0.3613405427437813,
'standard_deviation_cv_score': 0.021758185101253748,
'high_variance_cv': False,
'training_time': 4.945972681045532,
'cv_data': [...],
'percent_better_than_baseline_all_objectives': {...},
'percent_better_than_baseline': 92.65754290567824,
'ranking_score': 0.3484078428021002,
'ranking_additional_objectives': {...},
'holdout_score': 0.3484078428021002},
6: {'id': 6,
'pipeline_name': 'Elastic Net Classifier w/ Label Encoder + Select Columns '
'By Type Transformer + Label Encoder + Drop Columns '
'Transformer + DateTime Featurizer + Imputer + Standard '
'Scaler + Select Columns Transformer + Select Columns '
'Transformer + Label Encoder + Imputer + One Hot Encoder + '
'Standard Scaler + Oversampler',
'pipeline_class': <class 'evalml.pipelines.binary_classification_pipeline.BinaryClassificationPipeline'>,
'pipeline_summary': 'Elastic Net Classifier w/ Label Encoder + Select '
'Columns By Type Transformer + Label Encoder + Drop '
'Columns Transformer + DateTime Featurizer + Imputer + '
'Standard Scaler + Select Columns Transformer + Select '
'Columns Transformer + Label Encoder + Imputer + One '
'Hot Encoder + Standard Scaler + Oversampler',
'parameters': {...},
'mean_cv_score': 0.37472485974788244,
'standard_deviation_cv_score': 0.050026569255638,
'high_variance_cv': False,
'training_time': 4.645443439483643,
'cv_data': [...],
'percent_better_than_baseline_all_objectives': {...},
'percent_better_than_baseline': 92.38557294461829,
'ranking_score': 0.4003754206567058,
'ranking_additional_objectives': {...},
'holdout_score': 0.4003754206567058},
7: {'id': 7,
'pipeline_name': 'CatBoost Classifier w/ Label Encoder + Select Columns By '
'Type Transformer + Label Encoder + Drop Columns '
'Transformer + DateTime Featurizer + Imputer + Select '
'Columns Transformer + Select Columns Transformer + Label '
'Encoder + Imputer + Oversampler',
'pipeline_class': <class 'evalml.pipelines.binary_classification_pipeline.BinaryClassificationPipeline'>,
'pipeline_summary': 'CatBoost Classifier w/ Label Encoder + Select Columns '
'By Type Transformer + Label Encoder + Drop Columns '
'Transformer + DateTime Featurizer + Imputer + Select '
'Columns Transformer + Select Columns Transformer + '
'Label Encoder + Imputer + Oversampler',
'parameters': {...},
'mean_cv_score': 0.569010310977581,
'standard_deviation_cv_score': 0.01382528112459369,
'high_variance_cv': False,
'training_time': 2.47408127784729,
'cv_data': [...],
'percent_better_than_baseline_all_objectives': {...},
'percent_better_than_baseline': 88.43768329217892,
'ranking_score': 0.5566549833272478,
'ranking_additional_objectives': {...},
'holdout_score': 0.5566549833272478},
8: {'id': 8,
'pipeline_name': 'XGBoost Classifier w/ Label Encoder + Select Columns By '
'Type Transformer + Label Encoder + Drop Columns '
'Transformer + DateTime Featurizer + Imputer + Select '
'Columns Transformer + Select Columns Transformer + Label '
'Encoder + Imputer + One Hot Encoder + Oversampler',
'pipeline_class': <class 'evalml.pipelines.binary_classification_pipeline.BinaryClassificationPipeline'>,
'pipeline_summary': 'XGBoost Classifier w/ Label Encoder + Select Columns '
'By Type Transformer + Label Encoder + Drop Columns '
'Transformer + DateTime Featurizer + Imputer + Select '
'Columns Transformer + Select Columns Transformer + '
'Label Encoder + Imputer + One Hot Encoder + '
'Oversampler',
'parameters': {...},
'mean_cv_score': 0.2569503163235051,
'standard_deviation_cv_score': 0.13717967037488366,
'high_variance_cv': False,
'training_time': 4.1454174518585205,
'cv_data': [...],
'percent_better_than_baseline_all_objectives': {...},
'percent_better_than_baseline': 94.77875729456821,
'ranking_score': 0.14241700777377544,
'ranking_additional_objectives': {...},
'holdout_score': 0.14241700777377544},
9: {'id': 9,
'pipeline_name': 'Logistic Regression Classifier w/ Label Encoder + Select '
'Columns By Type Transformer + Label Encoder + Drop '
'Columns Transformer + DateTime Featurizer + Imputer + '
'Standard Scaler + Select Columns Transformer + Select '
'Columns Transformer + Label Encoder + Imputer + One Hot '
'Encoder + Standard Scaler + Oversampler',
'pipeline_class': <class 'evalml.pipelines.binary_classification_pipeline.BinaryClassificationPipeline'>,
'pipeline_summary': 'Logistic Regression Classifier w/ Label Encoder + '
'Select Columns By Type Transformer + Label Encoder + '
'Drop Columns Transformer + DateTime Featurizer + '
'Imputer + Standard Scaler + Select Columns Transformer '
'+ Select Columns Transformer + Label Encoder + Imputer '
'+ One Hot Encoder + Standard Scaler + Oversampler',
'parameters': {...},
'mean_cv_score': 0.3743635904964204,
'standard_deviation_cv_score': 0.04992524856036527,
'high_variance_cv': False,
'training_time': 6.231616497039795,
'cv_data': [...],
'percent_better_than_baseline_all_objectives': {...},
'percent_better_than_baseline': 92.39291395307035,
'ranking_score': 0.40158056138768455,
'ranking_additional_objectives': {...},
'holdout_score': 0.40158056138768455}},
'search_order': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]}
If there are errors during the search, we can examine them more closely by accessing the errors field. There is one dictionary entry per pipeline fold that failed, and each entry contains the pipeline parameters with the error that was thrown and its full traceback.
[38]:
auto_iterative.errors
[38]:
{}
Parallel AutoML#
By default, all pipelines in an AutoML batch are evaluated in series. Pipelines can be evaluated in parallel to improve performance during AutoML search. This is accomplished by a futures-style submission and evaluation of pipelines in a batch. As of this writing, the pipelines use a threaded model for concurrent evaluation. This is similar to the currently implemented n_jobs parameter in the estimators, which uses an increased number of threads to train and evaluate estimators.
Quick Start#
To quickly use some parallelism to enhance the pipeline searching, a string can be passed to AutoMLSearch during initialization to set up the parallel engine and client within the AutoMLSearch object. The current options are "cf_threaded", "cf_process", "dask_threaded" and "dask_process", and they indicate the futures backend to use and whether to use thread- or process-level parallelism.
[39]:
automl_cf_threaded = AutoMLSearch(
X_train=X,
y_train=y,
problem_type="binary",
allowed_model_families=[ModelFamily.LINEAR_MODEL],
engine="cf_threaded",
)
automl_cf_threaded.search(interactive_plot=False)
automl_cf_threaded.close_engine()
Parallelism with Concurrent Futures#
The EngineBase
class is robust and extensible enough to support futures-like implementations from a variety of libraries. The CFEngine
extends the EngineBase
to use the native Python concurrent.futures library. The CFEngine
supports both thread- and process-level parallelism. The type of parallelism can be chosen using either the ThreadPoolExecutor
or the ProcessPoolExecutor
. If either executor is passed a max_workers parameter, it will set the number of processes or threads spawned. If not, the executors fall back to their library defaults: ProcessPoolExecutor uses the number of processors available, and ThreadPoolExecutor (on Python 3.8+) uses min(32, number of processors + 4).
Here, the CFEngine is invoked with thread-level parallelism, using a ThreadPoolExecutor with an explicitly set number of workers.
[40]:
from concurrent.futures import ThreadPoolExecutor
from evalml.automl.engine.cf_engine import CFEngine, CFClient
cf_engine = CFEngine(CFClient(ThreadPoolExecutor(max_workers=4)))
automl_cf_threaded = AutoMLSearch(
X_train=X,
y_train=y,
problem_type="binary",
allowed_model_families=[ModelFamily.LINEAR_MODEL],
engine=cf_engine,
)
automl_cf_threaded.search(interactive_plot=False)
automl_cf_threaded.close_engine()
Note: the cell demonstrating process-level parallelism is shown as markdown due to an incompatibility with our ReadTheDocs build. It can be run successfully locally.
from concurrent.futures import ProcessPoolExecutor
# Repeat the process but using process-level parallelism
cf_engine = CFEngine(CFClient(ProcessPoolExecutor(max_workers=2)))
automl_cf_process = AutoMLSearch(X_train=X, y_train=y,
                                 problem_type="binary",
                                 engine=cf_engine)
automl_cf_process.search(interactive_plot=False)
automl_cf_process.close_engine()
Parallelism with Dask#
Thread or process level parallelism can be explicitly invoked for the DaskEngine (as well as the CFEngine). The processes parameter can be set to True and the number of processes set using n_workers. If processes is set to False, then the resulting parallelism will be threaded and n_workers will represent the threads used. Examples of both follow.
[41]:
from dask.distributed import LocalCluster
from evalml.automl.engine import DaskEngine
dask_engine_p2 = DaskEngine(cluster=LocalCluster(processes=True, n_workers=2))
automl_dask_p2 = AutoMLSearch(
X_train=X,
y_train=y,
problem_type="binary",
allowed_model_families=[ModelFamily.LINEAR_MODEL],
engine=dask_engine_p2,
)
automl_dask_p2.search(interactive_plot=False)
# Explicitly shutdown the automl object's LocalCluster
automl_dask_p2.close_engine()
The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
Inside a Dask worker with daemon=True, setting n_jobs=1.
Possible work-arounds:
- dask.config.set({'distributed.worker.daemon': False})
- set the environment variable DASK_DISTRIBUTED__WORKER__DAEMON=False
before creating your Dask cluster.
Inside a Dask worker with daemon=True, setting n_jobs=1.
Possible work-arounds:
- dask.config.set({'distributed.worker.daemon': False})
- set the environment variable DASK_DISTRIBUTED__WORKER__DAEMON=False
before creating your Dask cluster.
Inside a Dask worker with daemon=True, setting n_jobs=1.
Possible work-arounds:
- dask.config.set({'distributed.worker.daemon': False})
- set the environment variable DASK_DISTRIBUTED__WORKER__DAEMON=False
before creating your Dask cluster.
Inside a Dask worker with daemon=True, setting n_jobs=1.
Possible work-arounds:
- dask.config.set({'distributed.worker.daemon': False})
- set the environment variable DASK_DISTRIBUTED__WORKER__DAEMON=False
before creating your Dask cluster.
Inside a Dask worker with daemon=True, setting n_jobs=1.
Possible work-arounds:
- dask.config.set({'distributed.worker.daemon': False})
- set the environment variable DASK_DISTRIBUTED__WORKER__DAEMON=False
before creating your Dask cluster.
Inside a Dask worker with daemon=True, setting n_jobs=1.
Possible work-arounds:
- dask.config.set({'distributed.worker.daemon': False})
- set the environment variable DASK_DISTRIBUTED__WORKER__DAEMON=False
before creating your Dask cluster.
Inside a Dask worker with daemon=True, setting n_jobs=1.
Possible work-arounds:
- dask.config.set({'distributed.worker.daemon': False})
- set the environment variable DASK_DISTRIBUTED__WORKER__DAEMON=False
before creating your Dask cluster.
/home/docs/checkouts/readthedocs.org/user_builds/feature-labs-inc-evalml/envs/v0.77.0/lib/python3.8/site-packages/shap/maskers/_tabular.py:197: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
def _delta_masking(masks, x, curr_delta_inds, varying_rows_out,
/home/docs/checkouts/readthedocs.org/user_builds/feature-labs-inc-evalml/envs/v0.77.0/lib/python3.8/site-packages/shap/maskers/_image.py:175: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
def _jit_build_partition_tree(xmin, xmax, ymin, ymax, zmin, zmax, total_ywidth, total_zwidth, M, clustering, q):
/home/docs/checkouts/readthedocs.org/user_builds/feature-labs-inc-evalml/envs/v0.77.0/lib/python3.8/site-packages/shap/explainers/_partition.py:676: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
def lower_credit(i, value, M, values, clustering):
The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
Inside a Dask worker with daemon=True, setting n_jobs=1.
Possible work-arounds:
- dask.config.set({'distributed.worker.daemon': False})
- set the environment variable DASK_DISTRIBUTED__WORKER__DAEMON=False
before creating your Dask cluster.
Inside a Dask worker with daemon=True, setting n_jobs=1.
Possible work-arounds:
- dask.config.set({'distributed.worker.daemon': False})
- set the environment variable DASK_DISTRIBUTED__WORKER__DAEMON=False
before creating your Dask cluster.
Inside a Dask worker with daemon=True, setting n_jobs=1.
Possible work-arounds:
- dask.config.set({'distributed.worker.daemon': False})
- set the environment variable DASK_DISTRIBUTED__WORKER__DAEMON=False
before creating your Dask cluster.
Inside a Dask worker with daemon=True, setting n_jobs=1.
Possible work-arounds:
- dask.config.set({'distributed.worker.daemon': False})
- set the environment variable DASK_DISTRIBUTED__WORKER__DAEMON=False
before creating your Dask cluster.
Inside a Dask worker with daemon=True, setting n_jobs=1.
Possible work-arounds:
- dask.config.set({'distributed.worker.daemon': False})
- set the environment variable DASK_DISTRIBUTED__WORKER__DAEMON=False
before creating your Dask cluster.
Inside a Dask worker with daemon=True, setting n_jobs=1.
Possible work-arounds:
- dask.config.set({'distributed.worker.daemon': False})
- set the environment variable DASK_DISTRIBUTED__WORKER__DAEMON=False
before creating your Dask cluster.
Inside a Dask worker with daemon=True, setting n_jobs=1.
Possible work-arounds:
- dask.config.set({'distributed.worker.daemon': False})
- set the environment variable DASK_DISTRIBUTED__WORKER__DAEMON=False
before creating your Dask cluster.
Inside a Dask worker with daemon=True, setting n_jobs=1.
Possible work-arounds:
- dask.config.set({'distributed.worker.daemon': False})
- set the environment variable DASK_DISTRIBUTED__WORKER__DAEMON=False
before creating your Dask cluster.
Inside a Dask worker with daemon=True, setting n_jobs=1.
Possible work-arounds:
- dask.config.set({'distributed.worker.daemon': False})
- set the environment variable DASK_DISTRIBUTED__WORKER__DAEMON=False
before creating your Dask cluster.
Inside a Dask worker with daemon=True, setting n_jobs=1.
Possible work-arounds:
- dask.config.set({'distributed.worker.daemon': False})
- set the environment variable DASK_DISTRIBUTED__WORKER__DAEMON=False
before creating your Dask cluster.
Inside a Dask worker with daemon=True, setting n_jobs=1.
Possible work-arounds:
- dask.config.set({'distributed.worker.daemon': False})
- set the environment variable DASK_DISTRIBUTED__WORKER__DAEMON=False
before creating your Dask cluster.
Inside a Dask worker with daemon=True, setting n_jobs=1.
Possible work-arounds:
- dask.config.set({'distributed.worker.daemon': False})
- set the environment variable DASK_DISTRIBUTED__WORKER__DAEMON=False
before creating your Dask cluster.
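The warning above appears because estimators that parallelize with n_jobs cannot spawn their own subprocesses inside daemonized Dask workers, so work inside each worker falls back to n_jobs=1. If you want to avoid this, a minimal sketch of the first workaround the warning itself suggests is shown below; it simply applies the configuration before the cluster is created, and whether it is appropriate depends on your Dask deployment:

import dask
from dask.distributed import LocalCluster

# Workaround quoted in the warning text: run non-daemonic worker processes
# so that libraries using n_jobs > 1 can start their own subprocesses.
dask.config.set({"distributed.worker.daemon": False})

# Create the cluster only after the configuration has been applied.
cluster = LocalCluster(processes=True, n_workers=2)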
[42]:
dask_engine_t4 = DaskEngine(cluster=LocalCluster(processes=False, n_workers=4))
automl_dask_t4 = AutoMLSearch(
X_train=X,
y_train=y,
problem_type="binary",
allowed_model_families=[ModelFamily.LINEAR_MODEL],
engine=dask_engine_t4,
)
automl_dask_t4.search(interactive_plot=False)
automl_dask_t4.close_engine()
As we can see, a significant performance gain can result from simply using something other than the default SequentialEngine: in this run, the search completed roughly twice as fast with multiple Dask processes and between two and three times as fast with the threaded engines.
[43]:
print("Sequential search duration: %s" % str(automl.search_duration))
print(
"Concurrent futures (threaded) search duration: %s"
% str(automl_cf_threaded.search_duration)
)
print("Dask (two processes) search duration: %s" % str(automl_dask_p2.search_duration))
print("Dask (four threads)search duration: %s" % str(automl_dask_t4.search_duration))
Sequential search duration: 41.09007525444031
Concurrent futures (threaded) search duration: 14.432372808456421
Dask (two processes) search duration: 21.91235613822937
Dask (four threads) search duration: 16.849042654037476
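To put these durations in relative terms, a small sketch like the one below (reusing the search objects created earlier in this notebook) divides the sequential duration by each concurrent duration. The exact ratios will vary from run to run and machine to machine:

# Compare each engine's wall-clock time against the default SequentialEngine.
baseline = automl.search_duration
for name, result in [
    ("Concurrent futures (threaded)", automl_cf_threaded),
    ("Dask (two processes)", automl_dask_p2),
    ("Dask (four threads)", automl_dask_t4),
]:
    print(f"{name}: {baseline / result.search_duration:.2f}x faster than sequential")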