Setting up pipeline search¶
Designing the right machine learning pipeline and picking the best parameters is a time-consuming process that relies on a mix of data science intuition and trial and error. EvalML streamlines the process of selecting the best modeling algorithms and parameters so that data scientists can focus their energy where it is most needed.
How it works¶
EvalML selects and tunes machine learning pipelines made up of numerous steps, including categorical encoding, missing value imputation, feature selection, feature scaling, and finally the machine learning model itself. As EvalML tunes pipelines, it uses the objective function selected and configured by the user to guide its search.
At each iteration, EvalML uses cross-validation to generate an estimate of the pipeline's performance. If a pipeline has high variance across cross-validation folds, EvalML will raise a warning, since such a pipeline may not perform reliably in the future.
EvalML is designed to work well out of the box. However, it provides numerous options for controlling the search, described in the sections below.
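For orientation, here is a minimal sketch of that loop, using the diabetes demo dataset that appears later in this guide; the "MSE" objective and the pipeline limit are arbitrary example choices, not defaults.
from evalml import AutoRegressionSearch
from evalml.demos import load_diabetes

X, y = load_diabetes()

# Configure the search; the objective guides which pipelines score best.
automl = AutoRegressionSearch(objective="MSE", max_pipelines=5)

# Run the cross-validated search, then inspect the leaderboard.
automl.search(X, y)
automl.rankings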
Selecting problem type¶
EvalML supports both classification and regression problems. You select your problem type by importing the appropriate class.
[1]:
import evalml
from evalml import AutoClassificationSearch, AutoRegressionSearch
[2]:
AutoClassificationSearch()
[2]:
<evalml.automl.auto_classification_search.AutoClassificationSearch at 0x7f9915de7610>
[3]:
AutoRegressionSearch()
[3]:
<evalml.automl.auto_regression_search.AutoRegressionSearch at 0x7f98efe49710>
Setting the Objective Function¶
The only required parameter to start searching for pipelines is the objective function. Most domain-specific objective functions require you to specify parameters based on your business assumptions. You can set these before you initialize your pipeline search. For example:
[4]:
from evalml.objectives import FraudCost
fraud_objective = FraudCost(
    retry_percentage=.5,
    interchange_fee=.02,
    fraud_payout_percentage=.75,
    amount_col='amount'
)
AutoClassificationSearch(objective=fraud_objective)
[4]:
<evalml.automl.auto_classification_search.AutoClassificationSearch at 0x7f98efdf0290>
Evaluate on Additional Objectives¶
Additional objectives can be scored during the evaluation process. To add another objective, use the additional_objectives parameter in AutoClassificationSearch or AutoRegressionSearch. The results of these additional objectives will then appear in the results of describe_pipeline.
[5]:
from evalml.objectives import FraudCost
fraud_objective = FraudCost(
    retry_percentage=.5,
    interchange_fee=.02,
    fraud_payout_percentage=.75,
    amount_col='amount'
)
AutoClassificationSearch(objective='AUC', additional_objectives=[fraud_objective])
[5]:
<evalml.automl.auto_classification_search.AutoClassificationSearch at 0x7f98efdfca90>
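After a search has been run, the scores for these additional objectives appear alongside the primary objective in each pipeline's report. A minimal sketch, assuming a completed search stored in automl:
# Assumes automl.search(X, y) has already been run.
best_id = automl.rankings.iloc[0]['id']   # 'id' column of the rankings table
automl.describe_pipeline(best_id)         # report includes the additional objectives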
Selecting Model Types¶
By default, all model types are considered. You can control which model families are searched with the allowed_model_families parameter:
[6]:
automl = AutoClassificationSearch(objective="f1",
                                  allowed_model_families=["random_forest"])
You can see the possible pipelines that will be searched after initialization:
[7]:
automl.possible_pipelines
[7]:
[evalml.pipelines.classification.random_forest.RFClassificationPipeline]
You can see a list of all supported model families like this:
[8]:
evalml.list_model_families("binary") # `binary` for binary classification and `multiclass` for multiclass classification
[8]:
[<ModelFamily.XGBOOST: 'xgboost'>,
<ModelFamily.CATBOOST: 'catboost'>,
<ModelFamily.LINEAR_MODEL: 'linear_model'>,
<ModelFamily.RANDOM_FOREST: 'random_forest'>]
[9]:
evalml.list_model_families("regression")
[9]:
[<ModelFamily.CATBOOST: 'catboost'>,
<ModelFamily.LINEAR_MODEL: 'linear_model'>,
<ModelFamily.RANDOM_FOREST: 'random_forest'>]
Limiting Search Time¶
You can limit the search time by specifying a maximum number of pipelines and/or a maximum amount of time. EvalML won’t build new pipelines after the maximum time has passed or the maximum number of pipelines have been built. If a limit is not set, then a maximum of 5 pipelines will be built.
The maximum search time can be specified as an integer in seconds or as a string in seconds, minutes, or hours.
[10]:
AutoClassificationSearch(objective="f1",
                         max_pipelines=5,
                         max_time=60)

AutoClassificationSearch(objective="f1",
                         max_time="1 minute")
[10]:
<evalml.automl.auto_classification_search.AutoClassificationSearch at 0x7f98efe11e90>
To start, EvalML samples 10 sets of randomly chosen hyperparameters for each possible pipeline. Therefore, we recommend setting max_pipelines to at least 10 times the number of possible pipelines.
[11]:
n_possible_pipelines = len(AutoClassificationSearch(objective="f1").possible_pipelines)
[12]:
AutoClassificationSearch(objective="f1",
                         max_pipelines=n_possible_pipelines * 10)
[12]:
<evalml.automl.auto_classification_search.AutoClassificationSearch at 0x7f98efda0810>
Early Stopping¶
You can also limit search time by providing a patience value for early stopping. With a patience value, EvalML will stop searching when the best objective score has not improved for that many consecutive iterations. The patience value must be a positive integer. You can also provide a tolerance value: EvalML will only count a score as an improvement over the best score if the difference is greater than the tolerance percentage.
[13]:
from evalml.demos import load_diabetes
X, y = load_diabetes()
automl = AutoRegressionSearch(objective="MSE", patience=2, tolerance=0.01, max_pipelines=10)
automl.search(X, y)
*****************************
* Beginning pipeline search *
*****************************
Optimizing for MSE. Lower score is better.
Searching up to 10 pipelines.
✔ Linear Regression Pipeline: 10%|█ | Elapsed:00:00
✔ Random Forest Regression Pipeline: 20%|██ | Elapsed:00:05
✔ Random Forest Regression Pipeline: 30%|███ | Elapsed:00:15
✔ Random Forest Regression Pipeline: 40%|████ | Elapsed:00:24
✔ Linear Regression Pipeline: 50%|█████ | Elapsed:00:24
2 iterations without improvement. Stopping search early...
✔ Optimization finished 50%|█████ | Elapsed:00:24
[14]:
automl.rankings
[14]:
| id | pipeline_name | score | high_variance_cv | parameters |
---|---|---|---|---|---|
0 | 2 | Random Forest Regression Pipeline | 3614.391928 | False | {'impute_strategy': 'most_frequent', 'percent_... |
1 | 1 | Random Forest Regression Pipeline | 3760.685056 | False | {'impute_strategy': 'median', 'percent_feature... |
2 | 3 | Random Forest Regression Pipeline | 3769.284869 | False | {'impute_strategy': 'median', 'percent_feature... |
3 | 0 | Linear Regression Pipeline | 26225.781788 | False | {'impute_strategy': 'median', 'fit_intercept':... |
4 | 4 | Linear Regression Pipeline | 26225.781788 | False | {'impute_strategy': 'most_frequent', 'fit_inte... |
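The high_variance_cv column corresponds to the warning described in "How it works": it flags pipelines whose scores varied widely across cross-validation folds. Since rankings is a pandas DataFrame, you can filter on that flag directly; a small sketch:
# Select only the pipelines flagged as having high variance across CV folds.
automl.rankings[automl.rankings['high_variance_cv']]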
Control Cross Validation¶
EvalML cross-validates each model it tests during its search. By default it uses 3-fold cross-validation. You can optionally provide your own cross-validation method.
[15]:
from sklearn.model_selection import StratifiedKFold
automl = AutoClassificationSearch(objective="f1",
                                  cv=StratifiedKFold(5))
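Note that StratifiedKFold preserves class balance, so it is specific to classification. For a regression search, a plain splitter such as scikit-learn's KFold can be passed the same way; this sketch assumes AutoRegressionSearch accepts the same cv parameter, which is not shown above.
from sklearn.model_selection import KFold

# Assumption: AutoRegressionSearch takes a cv splitter just like AutoClassificationSearch.
automl = AutoRegressionSearch(objective="MSE",
                              cv=KFold(n_splits=5, shuffle=True, random_state=0))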