Building a Fraud Prediction Model with EvalML¶
In this demo, we will build an optimized fraud prediction model using EvalML. To optimize the pipeline, we will set up an objective function to minimize the percentage of total transaction value lost to fraud. At the end of this demo, we also show how introducing the right objective during training results in a much better model than using a generic machine learning metric like AUC.
[1]:
import evalml
from evalml import AutoMLSearch
from evalml.objectives import FraudCost
Configure “Cost of Fraud”¶
To optimize the pipelines toward the specific business needs of this model, we can set our own assumptions for the cost of fraud. These parameters are:
- retry_percentage - what percentage of customers will retry a transaction if it is declined?
- interchange_fee - how much of each successful transaction do you collect?
- fraud_payout_percentage - the percentage of fraud you will be unable to collect
- amount_col - the column in the data that represents the transaction amount
Using these parameters, EvalML will attempt to build a pipeline that minimizes the financial loss due to fraud.
[2]:
fraud_objective = FraudCost(retry_percentage=.5,
                            interchange_fee=.02,
                            fraud_payout_percentage=.75,
                            amount_col='amount')
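To build intuition for what this objective measures, below is a minimal sketch of how such a cost could be computed over a batch of transactions. This is an illustrative approximation under the parameter assumptions above, not EvalML's exact FraudCost implementation; the helper name approximate_fraud_cost is hypothetical.

import pandas as pd

def approximate_fraud_cost(y_true, y_pred, amounts,
                           retry_percentage=0.5,
                           interchange_fee=0.02,
                           fraud_payout_percentage=0.75):
    # Illustrative only: rough fraction of total transaction value lost to fraud.
    y_true, y_pred, amounts = map(pd.Series, (y_true, y_pred, amounts))
    declined = y_pred.astype(bool)  # transactions flagged as fraud are declined
    fraud = y_true.astype(bool)
    # Declining a transaction forfeits the interchange fee for the share of
    # customers who do not retry.
    lost_fees = (amounts[declined] * interchange_fee * (1 - retry_percentage)).sum()
    # Fraudulent transactions we fail to decline pay out a fraction of the amount.
    fraud_losses = (amounts[~declined & fraud] * fraud_payout_percentage).sum()
    return (lost_fees + fraud_losses) / amounts.sum()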
Search for best pipeline¶
In order to validate the results of the pipeline creation and optimization process, we will save some of our data as the holdout set.
[3]:
X, y = evalml.demos.load_fraud(n_rows=5000)
Number of Features
Boolean 1
Categorical 6
Numeric 5
Number of training examples: 5000
Targets
False 86.20%
True 13.80%
Name: fraud, dtype: object
EvalML natively supports one-hot encoding. Here we keep 1 out of the 6 categorical columns to decrease computation time.
[4]:
cols_to_drop = ['datetime', 'expiration_date', 'country', 'region', 'provider']
for col in cols_to_drop:
    X.ww.pop(col)
X_train, X_holdout, y_train, y_holdout = evalml.preprocessing.split_data(X, y, problem_type='binary', test_size=0.2, random_seed=0)
X.ww
[4]:
| Column | Physical Type | Logical Type | Semantic Tag(s) |
|---|---|---|---|
| card_id | int64 | Integer | ['numeric'] |
| store_id | int64 | Integer | ['numeric'] |
| amount | int64 | Integer | ['numeric'] |
| currency | category | Categorical | ['category'] |
| customer_present | bool | Boolean | [] |
| lat | float64 | Double | ['numeric'] |
| lng | float64 | Double | ['numeric'] |
Because the fraud labels are binary, we will use AutoMLSearch(X_train=X_train, y_train=y_train, problem_type='binary'). When we call .search(), the search for the best pipeline will begin.
[5]:
automl = AutoMLSearch(X_train=X_train, y_train=y_train,
                      problem_type='binary',
                      objective=fraud_objective,
                      additional_objectives=['auc', 'f1', 'precision'],
                      allowed_model_families=["random_forest", "linear_model"],
                      max_batches=1,
                      optimize_thresholds=True,
                      verbose=True)
automl.search()
Generating pipelines to search over...
3 pipelines ready for search.
*****************************
* Beginning pipeline search *
*****************************
Optimizing for Fraud Cost.
Lower score is better.
Using SequentialEngine to train and score pipelines.
Searching up to 1 batches for a total of 4 pipelines.
Allowed model families: linear_model, random_forest
Evaluating Baseline Pipeline: Mode Baseline Binary Classification Pipeline
Mode Baseline Binary Classification Pipeline:
Starting cross validation
Finished cross validation - mean Fraud Cost: 0.790
*****************************
* Evaluating Batch Number 1 *
*****************************
Elastic Net Classifier w/ Imputer + One Hot Encoder + Oversampler + Standard Scaler:
Starting cross validation
Finished cross validation - mean Fraud Cost: 0.539
Random Forest Classifier w/ Imputer + One Hot Encoder + Oversampler:
Starting cross validation
Finished cross validation - mean Fraud Cost: 0.289
Logistic Regression Classifier w/ Imputer + One Hot Encoder + Oversampler + Standard Scaler:
Starting cross validation
Finished cross validation - mean Fraud Cost: 0.539
Search finished after 00:11
Best pipeline: Random Forest Classifier w/ Imputer + One Hot Encoder + Oversampler
Best pipeline Fraud Cost: 0.289305
View rankings and select pipelines¶
Once the fitting process is done, we can see all of the pipelines that were searched, ranked by their score on the fraud detection objective we defined.
[6]:
automl.rankings
[6]:
| | id | pipeline_name | search_order | mean_cv_score | standard_deviation_cv_score | validation_score | percent_better_than_baseline | high_variance_cv | parameters |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 2 | Random Forest Classifier w/ Imputer + One Hot ... | 2 | 0.289305 | 0.434440 | 0.790953 | 50.034387 | False | {'Imputer': {'categorical_impute_strategy': 'm... |
| 1 | 1 | Elastic Net Classifier w/ Imputer + One Hot En... | 1 | 0.538598 | 0.435507 | 0.790953 | 25.105090 | False | {'Imputer': {'categorical_impute_strategy': 'm... |
| 2 | 3 | Logistic Regression Classifier w/ Imputer + On... | 3 | 0.538598 | 0.435507 | 0.790953 | 25.105090 | False | {'Imputer': {'categorical_impute_strategy': 'm... |
| 3 | 0 | Mode Baseline Binary Classification Pipeline | 0 | 0.789648 | 0.001136 | 0.790953 | 0.000000 | False | {'Baseline Classifier': {'strategy': 'mode'}} |
To select the best pipeline, we can call automl.best_pipeline.
[7]:
best_pipeline = automl.best_pipeline
Describe pipelines¶
We can get more details about any pipeline created during the search process, including how it performed on other objective functions, by calling the describe_pipeline method and passing the id of the pipeline of interest.
[8]:
automl.describe_pipeline(automl.rankings.iloc[1]["id"])
***************************************************************************************
* Elastic Net Classifier w/ Imputer + One Hot Encoder + Oversampler + Standard Scaler *
***************************************************************************************
Problem Type: binary
Model Family: Linear
Pipeline Steps
==============
1. Imputer
* categorical_impute_strategy : most_frequent
* numeric_impute_strategy : mean
* categorical_fill_value : None
* numeric_fill_value : None
2. One Hot Encoder
* top_n : 10
* features_to_encode : None
* categories : None
* drop : if_binary
* handle_unknown : ignore
* handle_missing : error
3. Oversampler
* sampling_ratio : 0.25
* k_neighbors_default : 5
* n_jobs : -1
* sampling_ratio_dict : None
* k_neighbors : 5
4. Standard Scaler
5. Elastic Net Classifier
* penalty : elasticnet
* C : 1.0
* l1_ratio : 0.15
* n_jobs : -1
* multi_class : auto
* solver : saga
Training
========
Training for binary problems.
Objective to optimize binary classification pipeline thresholds for: <evalml.objectives.fraud_cost.FraudCost object at 0x7f344e5a6d30>
Total training time (including CV): 1.9 seconds
Cross Validation
----------------
Fraud Cost AUC F1 Precision # Training # Validation
0 0.791 0.856 0.000 0.000 2,666 1,334
1 0.789 0.796 0.000 0.000 2,667 1,333
2 0.036 0.828 0.635 0.592 2,667 1,333
mean 0.539 0.827 0.212 0.197 - -
std 0.436 0.030 0.366 0.342 - -
coef of var 0.809 0.036 1.732 1.732 - -
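Note that because we passed optimize_thresholds=True, the decision threshold of each binary pipeline was tuned for the fraud cost objective (the objective referenced in the training summary above). As a quick check, you can inspect the tuned threshold on the best pipeline; EvalML binary classification pipelines expose it as a threshold attribute.

# Inspect the decision threshold that was tuned for the fraud cost objective.
print(best_pipeline.threshold)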
Evaluate on holdout data¶
Finally, since the best pipeline is already trained, we can evaluate it on the holdout data, scoring it with both our fraud cost objective and the AUC (Area under the ROC Curve) objective. For fraud cost, a lower score means a smaller fraction of the total transaction value is lost to fraud.
[9]:
best_pipeline.score(X_holdout, y_holdout, objectives=["auc", fraud_objective])
[9]:
OrderedDict([('AUC', 0.8654628602172233),
('Fraud Cost', 0.026054721586011263)])
Why optimize for a problem-specific objective?¶
To demonstrate the importance of optimizing for the right objective, let’s search for another pipeline using AUC, a common machine learning metric. After that, we will score the holdout data using the fraud cost objective to see how the best pipelines compare.
[10]:
automl_auc = AutoMLSearch(X_train=X_train, y_train=y_train,
                          problem_type='binary',
                          objective='auc',
                          additional_objectives=['f1', 'precision'],
                          max_batches=1,
                          allowed_model_families=["random_forest", "linear_model"],
                          optimize_thresholds=True,
                          verbose=True)
automl_auc.search()
Generating pipelines to search over...
3 pipelines ready for search.
*****************************
* Beginning pipeline search *
*****************************
Optimizing for AUC.
Greater score is better.
Using SequentialEngine to train and score pipelines.
Searching up to 1 batches for a total of 4 pipelines.
Allowed model families: linear_model, random_forest
Evaluating Baseline Pipeline: Mode Baseline Binary Classification Pipeline
Mode Baseline Binary Classification Pipeline:
Starting cross validation
Finished cross validation - mean AUC: 0.500
*****************************
* Evaluating Batch Number 1 *
*****************************
Elastic Net Classifier w/ Imputer + One Hot Encoder + Oversampler + Standard Scaler:
Starting cross validation
Finished cross validation - mean AUC: 0.827
Random Forest Classifier w/ Imputer + One Hot Encoder + Oversampler:
Starting cross validation
Finished cross validation - mean AUC: 0.847
Logistic Regression Classifier w/ Imputer + One Hot Encoder + Oversampler + Standard Scaler:
Starting cross validation
Finished cross validation - mean AUC: 0.826
Search finished after 00:05
Best pipeline: Random Forest Classifier w/ Imputer + One Hot Encoder + Oversampler
Best pipeline AUC: 0.847400
Like before, we can look at the rankings of all of the pipelines searched and pick the best pipeline.
[11]:
automl_auc.rankings
[11]:
| | id | pipeline_name | search_order | mean_cv_score | standard_deviation_cv_score | validation_score | percent_better_than_baseline | high_variance_cv | parameters |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 2 | Random Forest Classifier w/ Imputer + One Hot ... | 2 | 0.847400 | 0.002923 | 0.845246 | 34.740035 | False | {'Imputer': {'categorical_impute_strategy': 'm... |
| 1 | 1 | Elastic Net Classifier w/ Imputer + One Hot En... | 1 | 0.826778 | 0.030029 | 0.856262 | 32.677824 | False | {'Imputer': {'categorical_impute_strategy': 'm... |
| 2 | 3 | Logistic Regression Classifier w/ Imputer + On... | 3 | 0.826327 | 0.030496 | 0.856172 | 32.632734 | False | {'Imputer': {'categorical_impute_strategy': 'm... |
| 3 | 0 | Mode Baseline Binary Classification Pipeline | 0 | 0.500000 | 0.000000 | 0.500000 | 0.000000 | False | {'Baseline Classifier': {'strategy': 'mode'}} |
[12]:
best_pipeline_auc = automl_auc.best_pipeline
[13]:
# get the fraud score on holdout data
best_pipeline_auc.score(X_holdout, y_holdout, objectives=["auc", fraud_objective])
[13]:
OrderedDict([('AUC', 0.8654628602172233),
('Fraud Cost', 0.026054721586011263)])
[14]:
# fraud score on the fraud-optimized pipeline again
best_pipeline.score(X_holdout, y_holdout, objectives=["auc", fraud_objective])
[14]:
OrderedDict([('AUC', 0.8654628602172233),
('Fraud Cost', 0.026054721586011263)])
When we optimize for AUC, the AUC score from this pipeline is better than the AUC score from the pipeline optimized for fraud cost. However, the losses due to fraud are a much larger percentage of the total transaction amount when we optimize for AUC, and much smaller when we optimize for fraud cost. As a result, we lose a noticeable percentage of the total transaction amount by not optimizing for fraud cost specifically.
Optimizing for AUC does not take into account the user-specified retry_percentage, interchange_fee, and fraud_payout_percentage values, which could explain the decrease in fraud performance. Thus, the best pipelines may produce the highest AUC but may not actually reduce the amount lost to your specific type of fraud.
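To see the trade-off at a glance, the two holdout scores can be placed side by side. This is a small illustrative snippet we add here; the DataFrame layout is our own addition, not part of the original demo, and it assumes the variables from the cells above are still in scope.

import pandas as pd

# Score both pipelines on the holdout set with both objectives and compare.
comparison = pd.DataFrame({
    'Optimized for Fraud Cost': best_pipeline.score(
        X_holdout, y_holdout, objectives=["auc", fraud_objective]),
    'Optimized for AUC': best_pipeline_auc.score(
        X_holdout, y_holdout, objectives=["auc", fraud_objective]),
})
print(comparison)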
This example highlights how performance in the real world can diverge greatly from machine learning metrics.