Using Text Data with EvalML

In this demo, we will show you how to use EvalML to build models that use text data.

[1]:
import evalml
from evalml import AutoMLSearch

Dataset

We will be utilizing a dataset of SMS text messages, some of which are categorized as spam and the rest of which are legitimate (“ham”). This dataset is originally from Kaggle, but has been modified to produce a slightly more even distribution of spam to ham.

[2]:
from urllib.request import urlopen
import pandas as pd

input_data = urlopen('https://featurelabs-static.s3.amazonaws.com/spam_text_messages_modified.csv')
data = pd.read_csv(input_data)[:750]

X = data.drop(['Category'], axis=1)
y = data['Category']

display(X.head())
Message
0 Free entry in 2 a wkly comp to win FA Cup fina...
1 FreeMsg Hey there darling it's been 3 week's n...
2 WINNER!! As a valued network customer you have...
3 Had your mobile 11 months or more? U R entitle...
4 SIX chances to win CASH! From 100 to 20,000 po...

The spam to ham distribution of the data is roughly 3:2, so any machine learning model must achieve greater than 59% accuracy in order to perform better than a trivial baseline model which simply classifies every message as spam.

[3]:
y.value_counts(normalize=True)
[3]:
spam    0.593333
ham     0.406667
Name: Category, dtype: float64
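
As a quick sanity check, the accuracy of that trivial baseline is just the majority class share of the labels. A minimal sketch (not part of the original demo):

[ ]:
# Always predicting the majority class ("spam") yields an accuracy equal
# to its share of the labels -- roughly 0.593 for this dataset.
baseline_accuracy = y.value_counts(normalize=True).max()
print(f'Trivial baseline accuracy: {baseline_accuracy:.3f}')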

Search for best pipeline

In order to validate the results of the pipeline creation and optimization process, we will save some of our data as a holdout set.

[4]:
X_train, X_holdout, y_train, y_holdout = evalml.preprocessing.split_data(X, y, problem_type='binary', test_size=0.2, random_seed=0)

EvalML uses Woodwork to automatically detect which columns are text columns, so you can run search normally, just as you would if there were no text data. We can inspect the logical type of the Message column to confirm that it is indeed inferred as a natural language column.

[5]:
X_train.ww
[5]:
        Physical Type     Logical Type Semantic Tag(s)
Column
Message        string  NaturalLanguage              []
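
If Woodwork's inference ever guesses wrong on your own data, you can set the logical type explicitly before running search. A minimal sketch using the same infer_feature_types utility that appears later in this demo:

[ ]:
from evalml.utils import infer_feature_types

# Explicitly mark the column as natural language text rather than relying
# on Woodwork's automatic inference (a no-op here, since it was inferred).
X_train = infer_feature_types(X_train, {'Message': 'NaturalLanguage'})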

Because the spam/ham labels are binary, we will use AutoMLSearch(X_train=X_train, y_train=y_train, problem_type='binary'). When we call .search(), the search for the best pipeline will begin.

[6]:
automl = AutoMLSearch(X_train=X_train, y_train=y_train,
                      problem_type='binary',
                      max_batches=1,
                      optimize_thresholds=True)

automl.search()
Generating pipelines to search over...
8 pipelines ready for search.

*****************************
* Beginning pipeline search *
*****************************

Optimizing for Log Loss Binary.
Lower score is better.

Using SequentialEngine to train and score pipelines.
Searching up to 1 batches for a total of 9 pipelines.
Allowed model families: catboost, lightgbm, random_forest, xgboost, decision_tree, extra_trees, linear_model

Evaluating Baseline Pipeline: Mode Baseline Binary Classification Pipeline
Mode Baseline Binary Classification Pipeline:
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 14.046

*****************************
* Evaluating Batch Number 1 *
*****************************

Elastic Net Classifier w/ Text Featurization Component + Standard Scaler:
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.350
        High coefficient of variation (cv >= 0.2) within cross validation scores.
        Elastic Net Classifier w/ Text Featurization Component + Standard Scaler may not perform as estimated on unseen data.
Decision Tree Classifier w/ Text Featurization Component:
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 3.386
Random Forest Classifier w/ Text Featurization Component:
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.221
LightGBM Classifier w/ Text Featurization Component:
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.292
        High coefficient of variation (cv >= 0.2) within cross validation scores.
        LightGBM Classifier w/ Text Featurization Component may not perform as estimated on unseen data.
Logistic Regression Classifier w/ Text Featurization Component + Standard Scaler:
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.350
        High coefficient of variation (cv >= 0.2) within cross validation scores.
        Logistic Regression Classifier w/ Text Featurization Component + Standard Scaler may not perform as estimated on unseen data.
XGBoost Classifier w/ Text Featurization Component:
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.266
        High coefficient of variation (cv >= 0.2) within cross validation scores.
        XGBoost Classifier w/ Text Featurization Component may not perform as estimated on unseen data.
Extra Trees Classifier w/ Text Featurization Component:
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.292
CatBoost Classifier w/ Text Featurization Component:
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.589

Search finished after 00:23
Best pipeline: Random Forest Classifier w/ Text Featurization Component
Best pipeline Log Loss Binary: 0.221422

View rankings and select pipeline

Once the fitting process is done, we can see all of the pipelines that were searched.

[7]:
automl.rankings
[7]:
id pipeline_name search_order mean_cv_score standard_deviation_cv_score validation_score percent_better_than_baseline high_variance_cv parameters
0 3 Random Forest Classifier w/ Text Featurization... 3 0.221422 0.040958 0.221587 98.423568 False {'Random Forest Classifier': {'n_estimators': ...
1 6 XGBoost Classifier w/ Text Featurization Compo... 6 0.266164 0.106501 0.242896 98.105025 True {'XGBoost Classifier': {'eta': 0.1, 'max_depth...
2 4 LightGBM Classifier w/ Text Featurization Comp... 4 0.291768 0.114862 0.291521 97.922737 True {'LightGBM Classifier': {'boosting_type': 'gbd...
3 7 Extra Trees Classifier w/ Text Featurization C... 7 0.292373 0.029893 0.325764 97.918427 False {'Extra Trees Classifier': {'n_estimators': 10...
4 5 Logistic Regression Classifier w/ Text Featuri... 5 0.350340 0.074833 0.349271 97.505728 True {'Logistic Regression Classifier': {'penalty':...
5 1 Elastic Net Classifier w/ Text Featurization C... 1 0.350471 0.074886 0.349437 97.504795 True {'Elastic Net Classifier': {'penalty': 'elasti...
6 8 CatBoost Classifier w/ Text Featurization Comp... 8 0.588944 0.004016 0.592259 95.806967 False {'CatBoost Classifier': {'n_estimators': 10, '...
7 2 Decision Tree Classifier w/ Text Featurization... 2 3.385551 0.672118 3.708759 75.896294 False {'Decision Tree Classifier': {'criterion': 'gi...
8 0 Mode Baseline Binary Classification Pipeline 0 14.045769 0.099705 13.988204 0.000000 False {'Baseline Classifier': {'strategy': 'mode'}}

To select the best pipeline, we can access automl.best_pipeline.

[8]:
best_pipeline = automl.best_pipeline
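
If you would rather work with a different pipeline from the search, you can retrieve an untrained copy of any of them by id. A small sketch, grabbing the runner-up from the rankings table:

[ ]:
# Fetch an untrained copy of the second-ranked pipeline by its id.
runner_up = automl.get_pipeline(automl.rankings.iloc[1]['id'])
print(runner_up.name)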

Describe pipeline

You can get more details about any pipeline, including how it performed on other objective functions.

[9]:
automl.describe_pipeline(automl.rankings.iloc[0]["id"])

************************************************************
* Random Forest Classifier w/ Text Featurization Component *
************************************************************

Problem Type: binary
Model Family: Random Forest

Pipeline Steps
==============
1. Text Featurization Component
2. Random Forest Classifier
         * n_estimators : 100
         * max_depth : 6
         * n_jobs : -1

Training
========
Training for binary problems.
Total training time (including CV): 2.9 seconds

Cross Validation
----------------
             Log Loss Binary  MCC Binary   AUC  Precision    F1  Balanced Accuracy Binary  Accuracy Binary # Training # Validation
0                      0.222       0.813 0.975      0.889 0.889                     0.907            0.910        400          200
1                      0.180       0.875 0.985      0.937 0.925                     0.936            0.940        400          200
2                      0.262       0.772 0.963      0.875 0.864                     0.884            0.890        400          200
mean                   0.221       0.820 0.974      0.900 0.893                     0.909            0.913          -            -
std                    0.041       0.052 0.011      0.032 0.031                     0.026            0.025          -            -
coef of var            0.185       0.063 0.012      0.036 0.034                     0.028            0.028          -            -
[10]:
best_pipeline.graph()
[10]:
[Pipeline graph: Text Featurization Component -> Random Forest Classifier]

Notice above that the first step in the pipeline is a Text Featurization Component. AutoMLSearch uses the Woodwork accessor to recognize that 'Message' is a text column, and this component converts the raw text into numerical features that can be handled by the estimator.
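
You can also run this featurization step on its own to inspect the numeric features it generates. Below is a sketch; note that the component's class name depends on your EvalML version (TextFeaturizer here, renamed NaturalLanguageFeaturizer in later releases):

[ ]:
# Class name varies by EvalML version: newer releases use
# NaturalLanguageFeaturizer in place of TextFeaturizer.
from evalml.pipelines.components import TextFeaturizer

text_featurizer = TextFeaturizer()
features = text_featurizer.fit_transform(X_train, y_train)
print(features.columns)  # numeric features derived from 'Message'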

Evaluate on holdout

Now, we can score the pipeline on the holdout data using the core objectives for binary classification problems.

[11]:
scores = best_pipeline.score(X_holdout, y_holdout, objectives=evalml.objectives.get_core_objectives('binary'))
print(f'Accuracy Binary: {scores["Accuracy Binary"]}')
Accuracy Binary: 0.9466666666666667

As you can see, this model performs relatively well on this dataset, even on unseen data.
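
Beyond aggregate objective scores, the fitted pipeline can also produce predictions directly. A minimal sketch:

[ ]:
# Class predictions and per-class probabilities for the holdout messages.
predictions = best_pipeline.predict(X_holdout)
probabilities = best_pipeline.predict_proba(X_holdout)
print(predictions.head())
print(probabilities.head())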

Why encode text this way?

To demonstrate the importance of text-specific modeling, let’s train a model with the same dataset, without letting AutoMLSearch detect the text column. We can change this by explicitly setting the data type of the 'Message' column in Woodwork to Categorical using the utility method infer_feature_types.

[12]:
from evalml.utils import infer_feature_types
X = infer_feature_types(X, {'Message': 'Categorical'})
X_train, X_holdout, y_train, y_holdout = evalml.preprocessing.split_data(X, y, problem_type='binary', test_size=0.2, random_seed=0)
[13]:
automl_no_text = AutoMLSearch(X_train=X_train, y_train=y_train,
                              problem_type='binary',
                              max_batches=1,
                              optimize_thresholds=True)

automl_no_text.search()
Generating pipelines to search over...
8 pipelines ready for search.

*****************************
* Beginning pipeline search *
*****************************

Optimizing for Log Loss Binary.
Lower score is better.

Using SequentialEngine to train and score pipelines.
Searching up to 1 batches for a total of 9 pipelines.
Allowed model families: catboost, lightgbm, random_forest, xgboost, decision_tree, extra_trees, linear_model

Evaluating Baseline Pipeline: Mode Baseline Binary Classification Pipeline
Mode Baseline Binary Classification Pipeline:
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 14.046

*****************************
* Evaluating Batch Number 1 *
*****************************

Elastic Net Classifier w/ Imputer + One Hot Encoder + Standard Scaler:
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.674
Decision Tree Classifier w/ Imputer + One Hot Encoder:
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.673
Random Forest Classifier w/ Imputer + One Hot Encoder:
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.675
LightGBM Classifier w/ Imputer + One Hot Encoder:
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.676
Logistic Regression Classifier w/ Imputer + One Hot Encoder + Standard Scaler:
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.674
XGBoost Classifier w/ Imputer + One Hot Encoder:
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.676
Extra Trees Classifier w/ Imputer + One Hot Encoder:
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.674
CatBoost Classifier w/ Imputer:
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.684

Search finished after 00:05
Best pipeline: Decision Tree Classifier w/ Imputer + One Hot Encoder
Best pipeline Log Loss Binary: 0.673265

As before, we can look at the rankings and pick the best pipeline.

[14]:
automl_no_text.rankings
[14]:
id pipeline_name search_order mean_cv_score standard_deviation_cv_score validation_score percent_better_than_baseline high_variance_cv parameters
0 2 Decision Tree Classifier w/ Imputer + One Hot ... 2 0.673265 0.001010 0.672532 95.206635 False {'Imputer': {'categorical_impute_strategy': 'm...
1 5 Logistic Regression Classifier w/ Imputer + On... 5 0.673990 0.001041 0.673079 95.201474 False {'Imputer': {'categorical_impute_strategy': 'm...
2 1 Elastic Net Classifier w/ Imputer + One Hot En... 1 0.673990 0.001038 0.673086 95.201472 False {'Imputer': {'categorical_impute_strategy': 'm...
3 7 Extra Trees Classifier w/ Imputer + One Hot En... 7 0.674350 0.000880 0.673595 95.198910 False {'Imputer': {'categorical_impute_strategy': 'm...
4 3 Random Forest Classifier w/ Imputer + One Hot ... 3 0.674897 0.000910 0.674175 95.195014 False {'Imputer': {'categorical_impute_strategy': 'm...
5 6 XGBoost Classifier w/ Imputer + One Hot Encoder 6 0.675623 0.001095 0.674990 95.189849 False {'Imputer': {'categorical_impute_strategy': 'm...
6 4 LightGBM Classifier w/ Imputer + One Hot Encoder 4 0.675623 0.001095 0.674990 95.189849 False {'Imputer': {'categorical_impute_strategy': 'm...
7 8 CatBoost Classifier w/ Imputer 8 0.684270 0.001001 0.683586 95.128284 False {'Imputer': {'categorical_impute_strategy': 'm...
8 0 Mode Baseline Binary Classification Pipeline 0 14.045769 0.099705 13.988204 0.000000 False {'Baseline Classifier': {'strategy': 'mode'}}
[15]:
best_pipeline_no_text = automl_no_text.best_pipeline

Here, changing the data type of the text column removed the Text Featurization Component from the pipeline.

[16]:
best_pipeline_no_text.graph()
[16]:
[Pipeline graph: Imputer -> One Hot Encoder -> Decision Tree Classifier]
[17]:
automl_no_text.describe_pipeline(automl_no_text.rankings.iloc[0]["id"])

*********************************************************
* Decision Tree Classifier w/ Imputer + One Hot Encoder *
*********************************************************

Problem Type: binary
Model Family: Decision Tree

Pipeline Steps
==============
1. Imputer
         * categorical_impute_strategy : most_frequent
         * numeric_impute_strategy : mean
         * categorical_fill_value : None
         * numeric_fill_value : None
2. One Hot Encoder
         * top_n : 10
         * features_to_encode : None
         * categories : None
         * drop : if_binary
         * handle_unknown : ignore
         * handle_missing : error
3. Decision Tree Classifier
         * criterion : gini
         * max_features : auto
         * max_depth : 6
         * min_samples_split : 2
         * min_weight_fraction_leaf : 0.0

Training
========
Training for binary problems.
Total training time (including CV): 0.4 seconds

Cross Validation
----------------
             Log Loss Binary  MCC Binary   AUC  Precision    F1  Balanced Accuracy Binary  Accuracy Binary # Training # Validation
0                      0.673       0.058 0.504      0.407 0.579                     0.504            0.410        400          200
1                      0.673       0.058 0.504      0.407 0.579                     0.504            0.410        400          200
2                      0.674       0.059 0.504      0.412 0.584                     0.504            0.415        400          200
mean                   0.673       0.059 0.504      0.409 0.580                     0.504            0.412          -            -
std                    0.001       0.000 0.000      0.003 0.003                     0.000            0.003          -            -
coef of var            0.001       0.006 0.000      0.007 0.005                     0.000            0.007          -            -
[18]:
# get standard performance metrics on holdout data
scores = best_pipeline_no_text.score(X_holdout, y_holdout, objectives=evalml.objectives.get_core_objectives('binary'))
print(f'Accuracy Binary: {scores["Accuracy Binary"]}')
Accuracy Binary: 0.5933333333333334

Without the Text Featurization Component, the 'Message' column was treated as a categorical column, and therefore the conversion of this text to numerical features happened in the One Hot Encoder. The best pipeline one-hot encoded the top 10 most frequent “categories” of these texts, meaning only 10 specific text messages were given their own feature and all the other messages were dropped. Clearly, this removed almost all of the information from the dataset: best_pipeline_no_text scores about 59% holdout accuracy, which is exactly what a trivial model would get by classifying every message as spam, the majority class.
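
To see just how lossy this encoding is, compare the number of unique messages to the encoder's top_n of 10. A quick sketch:

[ ]:
# Nearly every SMS message is unique, so keeping only the 10 most frequent
# "categories" discards almost all of the signal in the column.
n_unique = X['Message'].nunique()
print(f'{n_unique} unique messages out of {len(X)} rows; '
      'the One Hot Encoder keeps features for only the top 10.')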