Using the Cost-Benefit Matrix Objective

The Cost-Benefit Matrix (CostBenefitMatrix) objective assigns a cost or benefit to each quadrant of a confusion matrix, quantifying the impact of each correct or incorrect prediction.

Confusion Matrix

Confusion matrices are tables that summarize the number of correctly and incorrectly classified predictions, broken down by each class. They allow us to quickly understand a classification model's performance and where the model gets “confused” when making predictions. For binary classification, there are four possible combinations of predicted and actual target values:

  • true positives (correct positive assignments)

  • true negatives (correct negative assignments)

  • false positives (incorrect positive assignments)

  • false negatives (incorrect negative assignments)

An example of how to calculate a confusion matrix can be found here.
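As a quick illustrative sketch (using scikit-learn with made-up labels, not the linked example), the four counts can be tallied like this:

from sklearn.metrics import confusion_matrix

# Hypothetical labels: 1 = positive class, 0 = negative class
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]

# Rows are actual classes, columns are predicted classes:
# [[true negatives, false positives],
#  [false negatives, true positives]]
print(confusion_matrix(y_true, y_pred))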

Cost-Benefit Matrix

Although the confusion matrix is an incredibly useful visual for understanding our model, it treats each correct or incorrect prediction equally. For example, in detecting breast cancer, the confusion matrix does not take into account that incorrectly classifying a malignant tumor as benign could be much more costly than incorrectly classifying a benign tumor as malignant. This is where the cost-benefit matrix shines: it assigns a cost to each of the four possible outcomes, weighing each outcome differently. By scoring with the cost-benefit matrix, we can measure the model's performance in a concrete unit that is more closely tied to the model's goal. In the example below, we will show how the cost-benefit matrix objective can be used and how it can deliver better real-world impact than standard machine learning objectives.
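
To make the weighting concrete, here is a minimal sketch (with made-up counts and dollar values, not EvalML's internal implementation) of how a cost-benefit score can be computed from confusion-matrix counts:

import numpy as np

# Hypothetical confusion-matrix counts: [tn, fp, fn, tp]
counts = np.array([50, 10, 5, 35])
# Corresponding dollar values: costs are negative, benefits are positive
values = np.array([0, -100, -200, 400])

# Average profit per prediction: weighted sum of outcomes over total count
score = (counts * values).sum() / counts.sum()
print(score)  # (0 - 1000 - 1000 + 14000) / 100 = 120.0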

Customer Churn Example

Data

In this example, we will be using a customer churn dataset taken from Kaggle.

This dataset contains records of over 7000 customers, including customer account information, demographic information, the services each customer signed up for, and whether or not the customer “churned” (left) within the last month.

The target we want to predict is whether the customer churned (“Yes”) or did not churn (“No”). In the dataset, approximately 73.5% of customers did not churn, and 26.5% did. We will refer to the customers who churned as the “positive” class and the customers who did not churn as the “negative” class.

[1]:
from evalml.demos.churn import load_churn
from evalml.preprocessing import split_data

X, y = load_churn()
X.ww.set_types({'PaymentMethod':'Categorical', 'Contract': 'Categorical'}) # Update data types that Woodwork did not correctly infer
X_train, X_holdout, y_train, y_holdout = split_data(X, y, problem_type='binary', test_size=0.3, random_seed=0)
             Number of Features
Categorical                  16
Numeric                       3

Number of training examples: 7043
Targets
No     73.46%
Yes    26.54%
Name: Churn, dtype: object

In this example, let’s say that correctly identifying customers who will churn (true positive case) will give us a net profit of $400, because it allows us to intervene, incentivize the customer to stay, and sign a new contract. Incorrectly classifying customers who were not going to churn as customers who will churn (false positive case) will cost $100, representing the marketing and effort spent trying to retain the customer. Not identifying customers who will churn (false negative case) will cost us $200, representing the lost revenue from losing a customer. Finally, correctly identifying customers who will not churn (true negative case) will not cost us anything ($0), as nothing needs to be done for that customer.

We can represent these values in our CostBenefitMatrix objective, where a negative value represents a cost and a positive value represents a profit. Note that this means that the greater the score, the more profit we will make.

[2]:
from evalml.objectives import CostBenefitMatrix
cost_benefit_matrix = CostBenefitMatrix(true_positive=400,
                                        true_negative=0,
                                        false_positive=-100,
                                        false_negative=-200)
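
The objective can also score labels directly via its score method. As a small illustration with hypothetical labels (assuming “Yes” is treated as the positive class):

import pandas as pd

# One example of each outcome: TP, TN, FP, FN
y_true = pd.Series(['Yes', 'No', 'No', 'Yes'])
y_pred = pd.Series(['Yes', 'No', 'Yes', 'No'])

# (400 + 0 - 100 - 200) / 4 = 25.0 profit per customer
print(cost_benefit_matrix.score(y_true, y_pred))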

AutoML Search with Log Loss

First, let us run AutoML search to train pipelines using the default objective for binary classification (log loss).

[3]:
from evalml import AutoMLSearch
automl = AutoMLSearch(X_train=X_train, y_train=y_train, problem_type='binary', objective='log loss binary')
automl.search()

ll_pipeline = automl.best_pipeline
ll_pipeline.score(X_holdout, y_holdout, ['log loss binary'])
Using default limit of max_batches=1.

Generating pipelines to search over...
8 pipelines ready for search.

*****************************
* Beginning pipeline search *
*****************************

Optimizing for Log Loss Binary.
Lower score is better.

Using SequentialEngine to train and score pipelines.
Searching up to 1 batches for a total of 9 pipelines.
Allowed model families: decision_tree, random_forest, linear_model, extra_trees, xgboost, lightgbm, catboost

Evaluating Baseline Pipeline: Mode Baseline Binary Classification Pipeline
Mode Baseline Binary Classification Pipeline:
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 9.164

*****************************
* Evaluating Batch Number 1 *
*****************************

Elastic Net Classifier w/ Imputer + One Hot Encoder + Standard Scaler:
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.470
Decision Tree Classifier w/ Imputer + One Hot Encoder:
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.648
Random Forest Classifier w/ Imputer + One Hot Encoder:
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.425
LightGBM Classifier w/ Imputer + One Hot Encoder:
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.458
Logistic Regression Classifier w/ Imputer + One Hot Encoder + Standard Scaler:
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.423
XGBoost Classifier w/ Imputer + One Hot Encoder:
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.445
Extra Trees Classifier w/ Imputer + One Hot Encoder:
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.432
CatBoost Classifier w/ Imputer:
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.601

Search finished after 00:23
Best pipeline: Logistic Regression Classifier w/ Imputer + One Hot Encoder + Standard Scaler
Best pipeline Log Loss Binary: 0.423324
[3]:
OrderedDict([('Log Loss Binary', 0.41624237859129104)])

When we train our pipelines using log loss as our primary objective, we try to find pipelines that minimize log loss. However, our ultimate goal in training models is to find the model that earns us the most profit, so let’s score our pipeline on the cost-benefit matrix (using the costs outlined above) to determine the profit we would earn from its predictions:

[4]:
ll_pipeline_score = ll_pipeline.score(X_holdout, y_holdout, [cost_benefit_matrix])
print(ll_pipeline_score)
OrderedDict([('Cost Benefit Matrix', 24.798864174159966)])
[5]:
# Calculate total profit across all customers using pipeline optimized for Log Loss
total_profit_ll = ll_pipeline_score['Cost Benefit Matrix'] * len(X)
print(total_profit_ll)
174658.40037860864

AutoML Search with Cost-Benefit Matrix

Let’s try rerunning our AutoML search, but this time using the cost-benefit matrix as our primary objective to optimize.

[6]:
automl = AutoMLSearch(X_train=X_train, y_train=y_train, problem_type='binary', objective=cost_benefit_matrix)
automl.search()

cbm_pipeline = automl.best_pipeline
Using default limit of max_batches=1.

Generating pipelines to search over...
8 pipelines ready for search.

*****************************
* Beginning pipeline search *
*****************************

Optimizing for Cost Benefit Matrix.
Greater score is better.

Using SequentialEngine to train and score pipelines.
Searching up to 1 batches for a total of 9 pipelines.
Allowed model families: decision_tree, random_forest, linear_model, extra_trees, xgboost, lightgbm, catboost

Evaluating Baseline Pipeline: Mode Baseline Binary Classification Pipeline
Mode Baseline Binary Classification Pipeline:
        Starting cross validation
        Finished cross validation - mean Cost Benefit Matrix: -53.063

*****************************
* Evaluating Batch Number 1 *
*****************************

Elastic Net Classifier w/ Imputer + One Hot Encoder + Standard Scaler:
        Starting cross validation
        Finished cross validation - mean Cost Benefit Matrix: 56.511
Decision Tree Classifier w/ Imputer + One Hot Encoder:
        Starting cross validation
        Finished cross validation - mean Cost Benefit Matrix: 52.982
Random Forest Classifier w/ Imputer + One Hot Encoder:
        Starting cross validation
        Finished cross validation - mean Cost Benefit Matrix: 59.048
LightGBM Classifier w/ Imputer + One Hot Encoder:
        Starting cross validation
        Finished cross validation - mean Cost Benefit Matrix: 50.692
        High coefficient of variation (cv >= 0.2) within cross validation scores.
        LightGBM Classifier w/ Imputer + One Hot Encoder may not perform as estimated on unseen data.
Logistic Regression Classifier w/ Imputer + One Hot Encoder + Standard Scaler:
        Starting cross validation
        Finished cross validation - mean Cost Benefit Matrix: 59.311
XGBoost Classifier w/ Imputer + One Hot Encoder:
        Starting cross validation
        Finished cross validation - mean Cost Benefit Matrix: 54.444
Extra Trees Classifier w/ Imputer + One Hot Encoder:
        Starting cross validation
        Finished cross validation - mean Cost Benefit Matrix: 58.459
CatBoost Classifier w/ Imputer:
        Starting cross validation
        Finished cross validation - mean Cost Benefit Matrix: 57.688

Search finished after 00:26
Best pipeline: Logistic Regression Classifier w/ Imputer + One Hot Encoder + Standard Scaler
Best pipeline Cost Benefit Matrix: 59.310679

Now, if we calculate the cost-benefit matrix score of our best pipeline, we see that the pipeline optimized for the cost-benefit matrix objective generates more profit per customer. Across our 7043 customers, this best pipeline generates much more profit! Custom objectives like CostBenefitMatrix are just one example of how EvalML can help find pipelines that perform better on real-world problems, rather than on arbitrary standard statistical metrics.

[7]:
cbm_pipeline_score = cbm_pipeline.score(X_holdout, y_holdout, [cost_benefit_matrix])
print(cbm_pipeline_score)
OrderedDict([('Cost Benefit Matrix', 61.523899668717476)])
[8]:
# Calculate total profit across all customers using pipeline optimized for CostBenefitMatrix
total_profit_cbm = cbm_pipeline_score['Cost Benefit Matrix'] * len(X)
print(total_profit_cbm)
433312.8253667772
[9]:
# Calculate difference in profit made using both pipelines
profit_diff = total_profit_cbm - total_profit_ll
print(profit_diff)
258654.42498816855

Finally, we can graph the confusion matrices for both pipelines to better understand why the pipeline trained using the cost-benefit matrix outperforms the pipeline trained with log loss: it correctly predicts more of the cases where the customer would have churned (true positives), allowing us to intervene and prevent those customers from leaving.

[10]:
from evalml.model_understanding.graphs import graph_confusion_matrix

# pipeline trained with log loss
y_pred = ll_pipeline.predict(X_holdout)
graph_confusion_matrix(y_holdout, y_pred)
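
Following the same pattern for the pipeline trained with the cost-benefit matrix:

# pipeline trained with cost-benefit matrix
y_pred = cbm_pipeline.predict(X_holdout)
graph_confusion_matrix(y_holdout, y_pred)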