Pipelines#
EvalML pipelines represent a sequence of operations to be applied to data, where each operation is either a data transformation or an ML modeling algorithm.
A pipeline holds a combination of one or more components, which will be applied to new input data in sequence.
Each component and pipeline supports a set of parameters which configure its behavior. The AutoML search process seeks to find the combination of pipeline structure and pipeline parameters that performs best on the data.
Defining a Pipeline Instance#
Pipeline instances can be instantiated using any of the following classes:
RegressionPipeline
BinaryClassificationPipeline
MulticlassClassificationPipeline
TimeSeriesRegressionPipeline
TimeSeriesBinaryClassificationPipeline
TimeSeriesMulticlassClassificationPipeline
The class you want to use depends on your problem type. The only required parameter for instantiating a pipeline instance is component_graph, which can be a ComponentGraph instance, a list, or a dictionary containing a sequence of components to be fit and evaluated.
A component_graph list is the default representation and describes a linear sequence of transforming components with an estimator as the final component. A component_graph dictionary is used to represent a non-linear graph of components, where each key is a unique name for a component and the value is a list with the component’s class as the first element and any parents of the component as the following element(s). For both component_graph formats, each component can be provided as a reference to the component class for custom components, and as either a string name or a reference to the component class for components defined in EvalML.
If you choose to provide a ComponentGraph instance and want to set custom parameters for your pipeline, set them through the pipeline initialization rather than ComponentGraph.instantiate().
[1]:
from evalml.pipelines import MulticlassClassificationPipeline, ComponentGraph
component_graph_as_list = ["Imputer", "Random Forest Classifier"]
MulticlassClassificationPipeline(component_graph=component_graph_as_list)
[1]:
pipeline = MulticlassClassificationPipeline(component_graph={'Imputer': ['Imputer', 'X', 'y'], 'Random Forest Classifier': ['Random Forest Classifier', 'Imputer.x', 'y']}, parameters={'Imputer':{'categorical_impute_strategy': 'most_frequent', 'numeric_impute_strategy': 'mean', 'boolean_impute_strategy': 'most_frequent', 'categorical_fill_value': None, 'numeric_fill_value': None, 'boolean_fill_value': None}, 'Random Forest Classifier':{'n_estimators': 100, 'max_depth': 6, 'n_jobs': -1}}, random_seed=0)
[2]:
component_graph_as_dict = {
"Imputer": ["Imputer", "X", "y"],
"Encoder": ["One Hot Encoder", "Imputer.x", "y"],
"Random Forest Clf": ["Random Forest Classifier", "Encoder.x", "y"],
"Elastic Net Clf": ["Elastic Net Classifier", "Encoder.x", "y"],
"Final Estimator": [
"Logistic Regression Classifier",
"Random Forest Clf.x",
"Elastic Net Clf.x",
"y",
],
}
MulticlassClassificationPipeline(component_graph=component_graph_as_dict)
[2]:
pipeline = MulticlassClassificationPipeline(component_graph={'Imputer': ['Imputer', 'X', 'y'], 'Encoder': ['One Hot Encoder', 'Imputer.x', 'y'], 'Random Forest Clf': ['Random Forest Classifier', 'Encoder.x', 'y'], 'Elastic Net Clf': ['Elastic Net Classifier', 'Encoder.x', 'y'], 'Final Estimator': ['Logistic Regression Classifier', 'Random Forest Clf.x', 'Elastic Net Clf.x', 'y']}, parameters={'Imputer':{'categorical_impute_strategy': 'most_frequent', 'numeric_impute_strategy': 'mean', 'boolean_impute_strategy': 'most_frequent', 'categorical_fill_value': None, 'numeric_fill_value': None, 'boolean_fill_value': None}, 'Encoder':{'top_n': 10, 'features_to_encode': None, 'categories': None, 'drop': 'if_binary', 'handle_unknown': 'ignore', 'handle_missing': 'error'}, 'Random Forest Clf':{'n_estimators': 100, 'max_depth': 6, 'n_jobs': -1}, 'Elastic Net Clf':{'penalty': 'elasticnet', 'C': 1.0, 'l1_ratio': 0.15, 'n_jobs': -1, 'multi_class': 'auto', 'solver': 'saga'}, 'Final Estimator':{'penalty': 'l2', 'C': 1.0, 'n_jobs': -1, 'multi_class': 'auto', 'solver': 'lbfgs'}}, random_seed=0)
[3]:
cg = ComponentGraph(component_graph_as_dict)
# set parameters in the pipeline rather than through cg.instantiate()
MulticlassClassificationPipeline(component_graph=cg, parameters={})
[3]:
pipeline = MulticlassClassificationPipeline(component_graph={'Imputer': ['Imputer', 'X', 'y'], 'Encoder': ['One Hot Encoder', 'Imputer.x', 'y'], 'Random Forest Clf': ['Random Forest Classifier', 'Encoder.x', 'y'], 'Elastic Net Clf': ['Elastic Net Classifier', 'Encoder.x', 'y'], 'Final Estimator': ['Logistic Regression Classifier', 'Random Forest Clf.x', 'Elastic Net Clf.x', 'y']}, parameters={'Imputer':{'categorical_impute_strategy': 'most_frequent', 'numeric_impute_strategy': 'mean', 'boolean_impute_strategy': 'most_frequent', 'categorical_fill_value': None, 'numeric_fill_value': None, 'boolean_fill_value': None}, 'Encoder':{'top_n': 10, 'features_to_encode': None, 'categories': None, 'drop': 'if_binary', 'handle_unknown': 'ignore', 'handle_missing': 'error'}, 'Random Forest Clf':{'n_estimators': 100, 'max_depth': 6, 'n_jobs': -1}, 'Elastic Net Clf':{'penalty': 'elasticnet', 'C': 1.0, 'l1_ratio': 0.15, 'n_jobs': -1, 'multi_class': 'auto', 'solver': 'saga'}, 'Final Estimator':{'penalty': 'l2', 'C': 1.0, 'n_jobs': -1, 'multi_class': 'auto', 'solver': 'lbfgs'}}, random_seed=0)
If you’re using your own custom components, you can refer to them like so:
[4]:
from evalml.pipelines.components import Transformer
class NewTransformer(Transformer):
name = "New Transformer"
hyperparameter_ranges = {"parameter_1": ["a", "b", "c"]}
def __init__(self, parameter_1=1, random_seed=0):
parameters = {"parameter_1": parameter_1}
super().__init__(parameters=parameters, random_seed=random_seed)
def transform(self, X, y=None):
# Your code here!
return X
MulticlassClassificationPipeline([NewTransformer, "Random Forest Classifier"])
[4]:
pipeline = MulticlassClassificationPipeline(component_graph={'New Transformer': [NewTransformer, 'X', 'y'], 'Random Forest Classifier': ['Random Forest Classifier', 'New Transformer.x', 'y']}, parameters={'New Transformer':{'parameter_1': 1}, 'Random Forest Classifier':{'n_estimators': 100, 'max_depth': 6, 'n_jobs': -1}}, random_seed=0)
Pipeline Usage#
All pipelines define the following methods:
fit fits each component on the provided training data, in order.
predict computes the predictions of the component graph on the provided data.
score computes the value of an objective on the provided data.
[5]:
from evalml.demos import load_wine
X, y = load_wine()
pipeline = MulticlassClassificationPipeline(
component_graph={
"Label Encoder": ["Label Encoder", "X", "y"],
"Imputer": ["Imputer", "X", "Label Encoder.y"],
"Random Forest Classifier": [
"Random Forest Classifier",
"Imputer.x",
"Label Encoder.y",
],
}
)
pipeline.fit(X, y)
print(pipeline.predict(X))
print(pipeline.score(X, y, objectives=["log loss multiclass"]))
Number of Features
Numeric 13
Number of training examples: 178
Targets
class_1 39.89%
class_0 33.15%
class_2 26.97%
Name: target, dtype: object
0 class_0
1 class_0
2 class_0
3 class_0
4 class_0
...
173 class_2
174 class_2
175 class_2
176 class_2
177 class_2
Length: 178, dtype: category
Categories (3, object): ['class_0', 'class_1', 'class_2']
OrderedDict([('Log Loss Multiclass', 0.04132737017536072)])
Custom Name#
By default, a pipeline’s name is created from the component graph that makes up the pipeline. For example, a pipeline with an imputer, one-hot encoder, and logistic regression classifier will have the name ‘Logistic Regression Classifier w/ Imputer + One Hot Encoder’.
If you’d like to override the pipeline’s name attribute, you can set the custom_name parameter when initializing a pipeline, like so:
[6]:
component_graph = ["Imputer", "One Hot Encoder", "Logistic Regression Classifier"]
pipeline = MulticlassClassificationPipeline(component_graph)
print("Pipeline with default name:", pipeline.name)
pipeline_with_name = MulticlassClassificationPipeline(
component_graph, custom_name="My cool custom pipeline"
)
print("Pipeline with custom name:", pipeline_with_name.name)
Pipeline with default name: Logistic Regression Classifier w/ Imputer + One Hot Encoder
Pipeline with custom name: My cool custom pipeline
Pipeline Parameters#
You can also pass in custom parameters by using the parameters parameter, which will then be used when instantiating each component in component_graph. The parameters dictionary needs to be a two-layered dictionary where the keys are the component names and the values are the corresponding component parameter dictionaries. Each component parameter dictionary consists of (parameter name, parameter value) key-value pairs.
An example is shown below. The API reference for component parameters can also be found here.
[7]:
parameters = {
"Imputer": {
"categorical_impute_strategy": "most_frequent",
"numeric_impute_strategy": "median",
},
"Logistic Regression Classifier": {
"penalty": "l2",
"C": 1.0,
},
}
component_graph = [
"Imputer",
"One Hot Encoder",
"Standard Scaler",
"Logistic Regression Classifier",
]
MulticlassClassificationPipeline(component_graph=component_graph, parameters=parameters)
[7]:
pipeline = MulticlassClassificationPipeline(component_graph={'Imputer': ['Imputer', 'X', 'y'], 'One Hot Encoder': ['One Hot Encoder', 'Imputer.x', 'y'], 'Standard Scaler': ['Standard Scaler', 'One Hot Encoder.x', 'y'], 'Logistic Regression Classifier': ['Logistic Regression Classifier', 'Standard Scaler.x', 'y']}, parameters={'Imputer':{'categorical_impute_strategy': 'most_frequent', 'numeric_impute_strategy': 'median', 'boolean_impute_strategy': 'most_frequent', 'categorical_fill_value': None, 'numeric_fill_value': None, 'boolean_fill_value': None}, 'One Hot Encoder':{'top_n': 10, 'features_to_encode': None, 'categories': None, 'drop': 'if_binary', 'handle_unknown': 'ignore', 'handle_missing': 'error'}, 'Logistic Regression Classifier':{'penalty': 'l2', 'C': 1.0, 'n_jobs': -1, 'multi_class': 'auto', 'solver': 'lbfgs'}}, random_seed=0)
Pipeline Description#
You can call .graph() to see each component and its parameters. Each component takes in data and feeds it to the next.
[8]:
component_graph = [
"Imputer",
"One Hot Encoder",
"Standard Scaler",
"Logistic Regression Classifier",
]
pipeline = MulticlassClassificationPipeline(
component_graph=component_graph, parameters=parameters
)
pipeline.graph()
[8]:
[9]:
component_graph_as_dict = {
"Imputer": ["Imputer", "X", "y"],
"Encoder": ["One Hot Encoder", "Imputer.x", "y"],
"Random Forest Clf": ["Random Forest Classifier", "Encoder.x", "y"],
"Elastic Net Clf": ["Elastic Net Classifier", "Encoder.x", "y"],
"Final Estimator": [
"Logistic Regression Classifier",
"Random Forest Clf.x",
"Elastic Net Clf.x",
"y",
],
}
nonlinear_pipeline = MulticlassClassificationPipeline(
component_graph=component_graph_as_dict
)
nonlinear_pipeline.graph()
[9]:
You can see a textual representation of the pipeline by calling .describe():
[10]:
pipeline.describe()
*********************************************************************************
* Logistic Regression Classifier w/ Imputer + One Hot Encoder + Standard Scaler *
*********************************************************************************
Problem Type: multiclass
Model Family: Linear
Pipeline Steps
==============
1. Imputer
* categorical_impute_strategy : most_frequent
* numeric_impute_strategy : median
* boolean_impute_strategy : most_frequent
* categorical_fill_value : None
* numeric_fill_value : None
* boolean_fill_value : None
2. One Hot Encoder
* top_n : 10
* features_to_encode : None
* categories : None
* drop : if_binary
* handle_unknown : ignore
* handle_missing : error
3. Standard Scaler
4. Logistic Regression Classifier
* penalty : l2
* C : 1.0
* n_jobs : -1
* multi_class : auto
* solver : lbfgs
[11]:
nonlinear_pipeline.describe()
*******************************************************************************************************************
* Logistic Regression Classifier w/ Imputer + One Hot Encoder + Random Forest Classifier + Elastic Net Classifier *
*******************************************************************************************************************
Problem Type: multiclass
Model Family: Linear
Pipeline Steps
==============
1. Imputer
* categorical_impute_strategy : most_frequent
* numeric_impute_strategy : mean
* boolean_impute_strategy : most_frequent
* categorical_fill_value : None
* numeric_fill_value : None
* boolean_fill_value : None
2. One Hot Encoder
* top_n : 10
* features_to_encode : None
* categories : None
* drop : if_binary
* handle_unknown : ignore
* handle_missing : error
3. Random Forest Classifier
* n_estimators : 100
* max_depth : 6
* n_jobs : -1
4. Elastic Net Classifier
* penalty : elasticnet
* C : 1.0
* l1_ratio : 0.15
* n_jobs : -1
* multi_class : auto
* solver : saga
5. Logistic Regression Classifier
* penalty : l2
* C : 1.0
* n_jobs : -1
* multi_class : auto
* solver : lbfgs
Component Graph#
You can use pipeline.get_component(name), providing the component name, to access any component (API reference here):
[12]:
pipeline.get_component("Imputer")
[12]:
Imputer(categorical_impute_strategy='most_frequent', numeric_impute_strategy='median', boolean_impute_strategy='most_frequent', categorical_fill_value=None, numeric_fill_value=None, boolean_fill_value=None)
[13]:
nonlinear_pipeline.get_component("Elastic Net Clf")
[13]:
ElasticNetClassifier(penalty='elasticnet', C=1.0, l1_ratio=0.15, n_jobs=-1, multi_class='auto', solver='saga')
Alternatively, you can index directly into the pipeline to get a component:
[14]:
first_component = pipeline[0]
print(first_component.name)
Imputer
[15]:
nonlinear_pipeline["Final Estimator"]
[15]:
LogisticRegressionClassifier(penalty='l2', C=1.0, n_jobs=-1, multi_class='auto', solver='lbfgs')
Pipeline Estimator#
EvalML enforces that the last component of a linear pipeline is an estimator. You can access this estimator directly by using pipeline.estimator.
[16]:
pipeline.estimator
[16]:
LogisticRegressionClassifier(penalty='l2', C=1.0, n_jobs=-1, multi_class='auto', solver='lbfgs')
Input Feature Names#
After a pipeline is fitted, you can access a pipeline’s input_feature_names attribute to obtain a dictionary containing a list of feature names passed to each component of the pipeline. This can be especially useful for debugging where a feature might have been dropped, or for detecting unexpected behavior.
[17]:
pipeline = MulticlassClassificationPipeline(["Imputer", "Random Forest Classifier"])
pipeline.fit(X, y)
pipeline.input_feature_names
[17]:
{'Imputer': ['alcohol',
'malic_acid',
'ash',
'alcalinity_of_ash',
'magnesium',
'total_phenols',
'flavanoids',
'nonflavanoid_phenols',
'proanthocyanins',
'color_intensity',
'hue',
'od280/od315_of_diluted_wines',
'proline'],
'Random Forest Classifier': ['alcohol',
'malic_acid',
'ash',
'alcalinity_of_ash',
'magnesium',
'total_phenols',
'flavanoids',
'nonflavanoid_phenols',
'proanthocyanins',
'color_intensity',
'hue',
'od280/od315_of_diluted_wines',
'proline']}
Binary Classification Pipeline Thresholds#
For binary classification pipelines, you can choose to tune the decision boundary threshold, which determines how predicted probabilities are mapped to the positive and negative classes. The default threshold, if none is set, is 0.5: predictions with a positive-class probability of >= 0.5 are classified as the positive class, while all others are negative.
You can use the binary classification pipeline’s optimize_threshold method to choose the best threshold for an objective, or you can set the threshold manually. EvalML’s AutoMLSearch uses optimize_thresholds by default for binary problems, and it uses F1 as the default objective to optimize. This can be turned off by passing in optimize_thresholds=False, or you can change the objective used via the objective or alternate_thresholding_objective arguments.
[18]:
from evalml.demos import load_breast_cancer
from evalml.pipelines import BinaryClassificationPipeline
X, y = load_breast_cancer()
X_to_predict = X.tail(10)
bcp = BinaryClassificationPipeline(
{
"Imputer": ["Imputer", "X", "y"],
"Label Encoder": ["Label Encoder", "Imputer.x", "y"],
"RFC": ["Random Forest Classifier", "Imputer.x", "Label Encoder.y"],
}
)
bcp.fit(X, y)
predict_proba = bcp.predict_proba(X_to_predict)
predict_proba
Number of Features
Numeric 30
Number of training examples: 569
Targets
benign 62.74%
malignant 37.26%
Name: target, dtype: object
[18]:
|     | benign   | malignant |
|-----|----------|-----------|
| 559 | 0.925711 | 0.074289  |
| 560 | 0.939512 | 0.060488  |
| 561 | 0.991177 | 0.008823  |
| 562 | 0.010155 | 0.989845  |
| 563 | 0.000155 | 0.999845  |
| 564 | 0.000100 | 0.999900  |
| 565 | 0.000155 | 0.999845  |
| 566 | 0.011528 | 0.988472  |
| 567 | 0.000155 | 0.999845  |
| 568 | 0.994452 | 0.005548  |
[19]:
# view the current threshold
print("The threshold is {}".format(bcp.threshold))
# view the first few predictions
print(bcp.predict(X_to_predict))
The threshold is None
559 benign
560 benign
561 benign
562 malignant
563 malignant
564 malignant
565 malignant
566 malignant
567 malignant
568 benign
dtype: category
Categories (2, object): ['benign', 'malignant']
Note that the default threshold above is None, which means that the pipeline defaults to using 0.5 as the threshold.
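For illustration, here is a minimal sketch of what that default amounts to. The comparison below is an illustrative reconstruction rather than EvalML internals, and it reuses bcp and X_to_predict from above:

# Sketch: with no threshold set, the positive class ("malignant") is
# predicted when its probability is at least 0.5.
proba = bcp.predict_proba(X_to_predict)
manual_labels = proba.iloc[:, -1].ge(0.5).map({True: "malignant", False: "benign"})
print(manual_labels)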
You can manually set the threshold as well:
[20]:
# you can manually set the threshold
bcp.threshold = 0.99
# view the threshold
print("The threshold is {}".format(bcp.threshold))
# view the first few predictions
print(bcp.predict(X_to_predict))
The threshold is 0.99
559 benign
560 benign
561 benign
562 benign
563 malignant
564 malignant
565 malignant
566 benign
567 malignant
568 benign
Name: malignant, dtype: category
Categories (2, object): ['benign', 'malignant']
However, the best way to set the threshold is by using the pipeline’s optimize_threshold method. This takes in the predicted values, as well as the true values and the objective to optimize with, and it finds the best threshold to maximize this objective value.
This method is best used with validation data, since optimizing on training data could lead to overfitting, and optimizing on test data would introduce large biases.
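As a hedged sketch (separate from the walkthrough below), thresholding on a held-out validation split could look like the following; it assumes EvalML’s split_data helper and the pipeline’s clone method:

from evalml.preprocessing import split_data
from evalml.objectives import F1

# Hold out a validation set so the threshold is not tuned on training data.
X_train, X_valid, y_train, y_valid = split_data(
    X, y, problem_type="binary", test_size=0.2
)

# clone() keeps the fitted `bcp` above untouched; the clone starts unfitted.
bcp_val = bcp.clone()
bcp_val.fit(X_train, y_train)

# Positive-class probabilities on the validation split.
valid_proba = bcp_val.predict_proba(X_valid).iloc[:, -1]
bcp_val.optimize_threshold(X_valid, y_valid, valid_proba, F1())
print(bcp_val.threshold)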
Below is a walkthrough of threshold tuning using the F1 objective.
[21]:
from evalml.objectives import F1
# get predictions for positive class only
predict_proba = predict_proba.iloc[:, -1]
bcp.optimize_threshold(X_to_predict, y.tail(10), predict_proba, F1())
print("The new threshold is {}".format(bcp.threshold))
# view the first few predictions
print(bcp.predict(X_to_predict))
The new threshold is 0.13521817340545206
559 benign
560 benign
561 benign
562 malignant
563 malignant
564 malignant
565 malignant
566 malignant
567 malignant
568 benign
Name: malignant, dtype: category
Categories (2, object): ['benign', 'malignant']
Grabbing rows near the decision boundary#
For binary classification problems, you can also look at the rows closest to the decision boundary by using rows_of_interest. This method returns the indices of interest, which can then be used to obtain the subset of the data that falls closest to the decision boundary. This can help with further analysis of the model and give you a better understanding of which rows the model could be having trouble with.
rows_of_interest takes in an epsilon parameter (which defaults to 0.1) that determines which rows to return: the rows returned are those whose positive-class probability falls within the threshold +- epsilon range. Increase the epsilon value to get more rows, and decrease it to get fewer rows.
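Conceptually, the selection rule amounts to the following sketch (an illustrative approximation, not the library’s implementation):

import numpy as np

epsilon = 0.1
# Positive-class probabilities for every row.
pos_proba = bcp.predict_proba(X).iloc[:, -1]
# Keep rows whose probability lies within threshold +/- epsilon.
near_boundary = np.where((pos_proba - bcp.threshold).abs() <= epsilon)[0]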
Below is a walkthrough of using rows_of_interest, building off the previous pipeline, which has already been thresholded.
[22]:
from evalml.pipelines.utils import rows_of_interest
indices = rows_of_interest(bcp, X, y, types="all")
X.iloc[indices].head()
[22]:
|     | mean radius | mean texture | mean perimeter | mean area | mean smoothness | mean compactness | mean concavity | mean concave points | mean symmetry | mean fractal dimension | ... | worst radius | worst texture | worst perimeter | worst area | worst smoothness | worst compactness | worst concavity | worst concave points | worst symmetry | worst fractal dimension |
|-----|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 375 | 16.17 | 16.07 | 106.30 | 788.5 | 0.09880 | 0.14380 | 0.06651 | 0.05397 | 0.1990 | 0.06572 | ... | 16.97 | 19.14 | 113.10 | 861.5 | 0.12350 | 0.25500 | 0.21140 | 0.12510 | 0.3153 | 0.08960 |
| 472 | 14.92 | 14.93 | 96.45 | 686.9 | 0.08098 | 0.08549 | 0.05539 | 0.03221 | 0.1687 | 0.05669 | ... | 17.18 | 18.22 | 112.00 | 906.6 | 0.10650 | 0.27910 | 0.31510 | 0.11470 | 0.2688 | 0.08273 |
| 191 | 12.77 | 21.41 | 82.02 | 507.4 | 0.08749 | 0.06601 | 0.03112 | 0.02864 | 0.1694 | 0.06287 | ... | 13.75 | 23.50 | 89.04 | 579.5 | 0.09388 | 0.08978 | 0.05186 | 0.04773 | 0.2179 | 0.06871 |
| 290 | 14.41 | 19.73 | 96.03 | 651.0 | 0.08757 | 0.16760 | 0.13620 | 0.06602 | 0.1714 | 0.07192 | ... | 15.77 | 22.13 | 101.70 | 767.3 | 0.09983 | 0.24720 | 0.22200 | 0.10210 | 0.2272 | 0.08799 |
| 413 | 14.99 | 22.11 | 97.53 | 693.7 | 0.08515 | 0.10250 | 0.06859 | 0.03876 | 0.1944 | 0.05913 | ... | 16.76 | 31.55 | 110.20 | 867.1 | 0.10770 | 0.33450 | 0.31140 | 0.13080 | 0.3163 | 0.09251 |
5 rows × 30 columns
You can look at the probabilities for these rows to determine how close they are to the new pipeline threshold. The full X is used here for brevity.
[23]:
pred_proba = bcp.predict_proba(X)
pos_value_proba = pred_proba.iloc[:, -1]
pos_value_proba.iloc[indices].head()
[23]:
375 0.133328
472 0.130808
191 0.128998
290 0.127939
413 0.149718
Name: malignant, dtype: float64
Saving and Loading Pipelines#
You can save and load trained or untrained pipeline instances using the Python pickle format, like so:
[24]:
import pickle
pipeline_to_pickle = BinaryClassificationPipeline(
["Imputer", "Random Forest Classifier"]
)
with open("pipeline.pkl", "wb") as f:
pickle.dump(pipeline_to_pickle, f)
pickled_pipeline = None
with open("pipeline.pkl", "rb") as f:
pickled_pipeline = pickle.load(f)
assert pickled_pipeline == pipeline_to_pickle
pickled_pipeline.fit(X, y)
[24]:
pipeline = BinaryClassificationPipeline(component_graph={'Imputer': ['Imputer', 'X', 'y'], 'Random Forest Classifier': ['Random Forest Classifier', 'Imputer.x', 'y']}, parameters={'Imputer':{'categorical_impute_strategy': 'most_frequent', 'numeric_impute_strategy': 'mean', 'boolean_impute_strategy': 'most_frequent', 'categorical_fill_value': None, 'numeric_fill_value': None, 'boolean_fill_value': None}, 'Random Forest Classifier':{'n_estimators': 100, 'max_depth': 6, 'n_jobs': -1}}, random_seed=0)
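Pipelines also expose save and load convenience methods (see the API reference); a minimal sketch, reusing the pipeline instance from above:

# Hedged sketch: save/load round-trip via the pipeline's own helpers.
pipeline_to_pickle.save("pipeline.pkl")
loaded_pipeline = BinaryClassificationPipeline.load("pipeline.pkl")
assert loaded_pipeline == pipeline_to_pickle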
Generate Code#
Once you have instantiated a pipeline, you can generate string Python code to recreate this pipeline, which can then be saved and run elsewhere with EvalML. generate_pipeline_code requires a pipeline instance as input. It can also handle custom components, but it won’t return the code required to define those components. Note that any external libraries used in creating the pipeline instance will also need to be imported to execute the returned code.
Code generation is not yet supported for nonlinear pipelines.
[25]:
from evalml.pipelines.utils import generate_pipeline_code
from evalml.pipelines import BinaryClassificationPipeline
import pandas as pd
from evalml.utils import infer_feature_types
from skopt.space import Integer
class MyDropNullColumns(Transformer):
"""Transformer to drop features whose percentage of NaN values exceeds a specified threshold"""
name = "My Drop Null Columns Transformer"
hyperparameter_ranges = {}
def __init__(self, pct_null_threshold=1.0, random_seed=0, **kwargs):
"""Initalizes an transformer to drop features whose percentage of NaN values exceeds a specified threshold.
Args:
pct_null_threshold(float): The percentage of NaN values in an input feature to drop.
Must be a value between [0, 1] inclusive. If equal to 0.0, will drop columns with any null values.
If equal to 1.0, will drop columns with all null values. Defaults to 1.0.
"""
if pct_null_threshold < 0 or pct_null_threshold > 1:
raise ValueError(
"pct_null_threshold must be a float between 0 and 1, inclusive."
)
parameters = {"pct_null_threshold": pct_null_threshold}
parameters.update(kwargs)
self._cols_to_drop = None
super().__init__(
parameters=parameters, component_obj=None, random_seed=random_seed
)
def fit(self, X, y=None):
pct_null_threshold = self.parameters["pct_null_threshold"]
X = infer_feature_types(X)
percent_null = X.isnull().mean()
if pct_null_threshold == 0.0:
null_cols = percent_null[percent_null > 0]
else:
null_cols = percent_null[percent_null >= pct_null_threshold]
self._cols_to_drop = list(null_cols.index)
return self
def transform(self, X, y=None):
"""Transforms data X by dropping columns that exceed the threshold of null values.
Args:
X (pd.DataFrame): Data to transform
y (pd.Series, optional): Targets
Returns:
pd.DataFrame: Transformed X
"""
X = infer_feature_types(X)
return X.drop(columns=self._cols_to_drop)
pipeline_instance = BinaryClassificationPipeline(
[
"Imputer",
MyDropNullColumns,
"DateTime Featurizer",
"Natural Language Featurizer",
"One Hot Encoder",
"Random Forest Classifier",
],
custom_name="Pipeline with Custom Component",
random_seed=20,
)
code = generate_pipeline_code(pipeline_instance)
print(code)
# This string can then be pasted into a separate window and run, although since the pipeline has custom component `MyDropNullColumns`,
# the code for that component must also be included
from evalml.demos import load_fraud
X, y = load_fraud(1000)
exec(code)
pipeline.fit(X, y)
from evalml.pipelines.binary_classification_pipeline import BinaryClassificationPipeline
pipeline = BinaryClassificationPipeline(
component_graph={
"Imputer": ["Imputer", "X", "y"],
"My Drop Null Columns Transformer": [MyDropNullColumns, "Imputer.x", "y"],
"DateTime Featurizer": [
"DateTime Featurizer",
"My Drop Null Columns Transformer.x",
"y",
],
"Natural Language Featurizer": [
"Natural Language Featurizer",
"DateTime Featurizer.x",
"y",
],
"One Hot Encoder": ["One Hot Encoder", "Natural Language Featurizer.x", "y"],
"Random Forest Classifier": [
"Random Forest Classifier",
"One Hot Encoder.x",
"y",
],
},
parameters={
"Imputer": {
"categorical_impute_strategy": "most_frequent",
"numeric_impute_strategy": "mean",
"boolean_impute_strategy": "most_frequent",
"categorical_fill_value": None,
"numeric_fill_value": None,
"boolean_fill_value": None,
},
"My Drop Null Columns Transformer": {"pct_null_threshold": 1.0},
"DateTime Featurizer": {
"features_to_extract": ["year", "month", "day_of_week", "hour"],
"encode_as_categories": False,
"time_index": None,
},
"One Hot Encoder": {
"top_n": 10,
"features_to_encode": None,
"categories": None,
"drop": "if_binary",
"handle_unknown": "ignore",
"handle_missing": "error",
},
"Random Forest Classifier": {"n_estimators": 100, "max_depth": 6, "n_jobs": -1},
},
custom_name="Pipeline with Custom Component",
random_seed=20,
)
Number of Features
Boolean 1
Categorical 6
Numeric 5
Number of training examples: 1000
Targets
False 85.90%
True 14.10%
Name: fraud, dtype: object
[25]:
pipeline = BinaryClassificationPipeline(component_graph={'Imputer': ['Imputer', 'X', 'y'], 'My Drop Null Columns Transformer': [MyDropNullColumns, 'Imputer.x', 'y'], 'DateTime Featurizer': ['DateTime Featurizer', 'My Drop Null Columns Transformer.x', 'y'], 'Natural Language Featurizer': ['Natural Language Featurizer', 'DateTime Featurizer.x', 'y'], 'One Hot Encoder': ['One Hot Encoder', 'Natural Language Featurizer.x', 'y'], 'Random Forest Classifier': ['Random Forest Classifier', 'One Hot Encoder.x', 'y']}, parameters={'Imputer':{'categorical_impute_strategy': 'most_frequent', 'numeric_impute_strategy': 'mean', 'boolean_impute_strategy': 'most_frequent', 'categorical_fill_value': None, 'numeric_fill_value': None, 'boolean_fill_value': None}, 'My Drop Null Columns Transformer':{'pct_null_threshold': 1.0}, 'DateTime Featurizer':{'features_to_extract': ['year', 'month', 'day_of_week', 'hour'], 'encode_as_categories': False, 'time_index': None}, 'One Hot Encoder':{'top_n': 10, 'features_to_encode': None, 'categories': None, 'drop': 'if_binary', 'handle_unknown': 'ignore', 'handle_missing': 'error'}, 'Random Forest Classifier':{'n_estimators': 100, 'max_depth': 6, 'n_jobs': -1}}, custom_name='Pipeline with Custom Component', random_seed=20)