Setting up pipeline search¶
Designing the right machine learning pipeline and picking the best parameters is a time-consuming process that relies on a mix of data science intuition as well as trial and error. EvalML streamlines the process of selecting the best modeling algorithms and parameters, so data scientists can focus their energy where it is most needed.
How it works¶
EvalML selects and tunes machine learning pipelines made up of numerous steps, including categorical encoding, missing value imputation, feature selection, feature scaling, and finally the machine learning model itself. As EvalML tunes pipelines, it uses the objective function selected and configured by the user to guide its search.
At each iteration, EvalML uses cross-validation to generate an estimate of the pipeline's performance. If a pipeline's scores vary widely across cross-validation folds, EvalML will provide a warning, since such a pipeline may not perform reliably in the future.
EvalML is designed to work well out of the box. However, it provides numerous ways for you to control the search, described below.
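As a quick illustration, here is a minimal sketch of the typical end-to-end flow: configure the search, fit it on training data, and inspect the evaluated pipelines. The scikit-learn demo dataset and the rankings attribute are assumptions used purely for illustration.
[ ]:
import evalml
from sklearn.datasets import load_breast_cancer

# A small binary classification dataset, used purely for illustration;
# any feature matrix X and target y would do.
X, y = load_breast_cancer(return_X_y=True)

# The objective guides tuning; max_pipelines caps how many pipelines are built.
clf = evalml.AutoClassifier(objective="f1", max_pipelines=5)

# Run the search. Each candidate pipeline is scored with cross-validation.
clf.fit(X, y)

# Leaderboard of evaluated pipelines (attribute name assumed for this version).
clf.rankings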
Selecting problem type¶
EvalML supports both classification and regression problems. You select your problem type by importing the appropriate class.
[1]:
import evalml
[2]:
evalml.AutoClassifier()
[2]:
<evalml.models.auto_classifier.AutoClassifier at 0x11d782750>
[3]:
evalml.AutoRegressor()
[3]:
<evalml.models.auto_regressor.AutoRegressor at 0x11d792950>
Setting the Objective Function¶
The only required parameter to start searching for pipelines is the objective function. Most domain-specific objective functions require you to specify parameters based on your business assumptions. You can do this before you initialize your pipeline search. For example:
[4]:
from evalml.objectives import FraudCost
fraud_objective = FraudCost(
    retry_percentage=.5,
    interchange_fee=.02,
    fraud_payout_percentage=.75,
    amount_col='amount'
)
evalml.AutoClassifier(objective=fraud_objective)
[4]:
<evalml.models.auto_classifier.AutoClassifier at 0x11d7abf50>
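Because FraudCost was configured with amount_col='amount', the data passed to the search must contain a column with that name. The toy DataFrame below is a hypothetical sketch of what such data could look like; a real search would need a realistically sized dataset.
[ ]:
import pandas as pd

# Hypothetical transaction data: the 'amount' column matches amount_col above.
X = pd.DataFrame({
    "amount": [50.0, 2000.0, 15.0, 700.0],
    "card_present": [1, 0, 1, 0],
})
y = pd.Series([0, 1, 0, 0])  # 1 marks a fraudulent transaction

clf = evalml.AutoClassifier(objective=fraud_objective)
# With enough data, clf.fit(X, y) would run the search, scoring each
# pipeline with the configured FraudCost objective.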
Evaluate on Additional Objectives¶
Additional objectives can be scored during the evaluation process. To add another objective, use the additional_objectives parameter in AutoClassifier or AutoRegressor. The results of these additional objectives will then appear in the results of describe_pipeline.
[5]:
from evalml.objectives import FraudCost
fraud_objective = FraudCost(
    retry_percentage=.5,
    interchange_fee=.02,
    fraud_payout_percentage=.75,
    amount_col='amount'
)
evalml.AutoClassifier(objective='AUC', additional_objectives=[fraud_objective])
[5]:
<evalml.models.auto_classifier.AutoClassifier at 0x11d7b3e50>
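Once the search has been run, each pipeline's report includes the additional objectives alongside the primary one. The sketch below assumes the id passed to describe_pipeline is taken from the search results; 0 is only a placeholder.
[ ]:
clf = evalml.AutoClassifier(objective='AUC', additional_objectives=[fraud_objective])
# clf.fit(X, y)             # run the search on your data
# clf.describe_pipeline(0)  # per-pipeline report, including AUC and FraudCost scores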
Selecting Model Types¶
By default, all model types are considered. You can control which model types to search with the model_types parameter.
[6]:
clf = evalml.AutoClassifier(objective="f1",
                            model_types=["random_forest"])
You can see the possible pipelines that will be searched after initialization:
[7]:
clf.possible_pipelines
[7]:
[evalml.pipelines.classification.random_forest.RFClassificationPipeline]
You can see a list of all supported model types like this:
[8]:
evalml.list_model_types("binary") # `binary` for binary classification and `multiclass` for multiclass classification
[8]:
[<ModelTypes.LINEAR_MODEL: 'linear_model'>,
<ModelTypes.XGBOOST: 'xgboost'>,
<ModelTypes.RANDOM_FOREST: 'random_forest'>]
[9]:
evalml.list_model_types("regression")
[9]:
[<ModelTypes.LINEAR_MODEL: 'linear_model'>,
<ModelTypes.RANDOM_FOREST: 'random_forest'>]
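You can also pass more than one of these model type strings to restrict the search to a subset of the supported types, for example:
[ ]:
# Limit the search to random forest and XGBoost pipelines.
clf = evalml.AutoClassifier(objective="f1",
                            model_types=["random_forest", "xgboost"])
clf.possible_pipelines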
Limiting Search Time¶
You can limit the search time by specifying a maximum number of pipelines and/or a maximum amount of time. EvalML won't build new pipelines after the maximum time has passed or the maximum number of pipelines has been reached. If a limit is not set, then a maximum of 5 pipelines will be built.
The maximum search time can be specified as an integer in seconds or as a string in seconds, minutes, or hours.
[10]:
evalml.AutoClassifier(objective="f1",
                      max_time=60)

evalml.AutoClassifier(objective="f1",
                      max_time="1 minute")
[10]:
<evalml.models.auto_classifier.AutoClassifier at 0x11d74c250>
To start, EvalML samples 10 sets of hyperparameters chosen randomly for each possible pipeline. Therefore, we recommend setting max_pipelines to at least 10 times the number of possible pipelines.
[11]:
n_possible_pipelines = len(evalml.AutoClassifier(objective="f1").possible_pipelines)
[12]:
evalml.AutoClassifier(objective="f1",
                      max_time=60,
                      max_pipelines=n_possible_pipelines*10)
[12]:
<evalml.models.auto_classifier.AutoClassifier at 0x11d7ce750>
Control Cross Validation¶
EvalML cross-validates each model it tests during its search. By default, it uses 3-fold cross-validation. You can optionally provide your own cross-validation method.
[13]:
from sklearn.model_selection import StratifiedKFold
clf = evalml.AutoClassifier(objective="f1",
                            cv=StratifiedKFold(5))
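A cross-validator with a scikit-learn style split interface can be configured however you like. For instance, shuffling with a fixed random_state (an illustrative choice, not a library default) makes the fold assignment reproducible between runs:
[ ]:
from sklearn.model_selection import StratifiedKFold

# Shuffle before splitting and fix the seed so folds are reproducible.
clf = evalml.AutoClassifier(objective="f1",
                            cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))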