Building a Fraud Prediction Model with EvalML#

In this demo, we will build an optimized fraud prediction model using EvalML. To optimize the pipeline, we will set up an objective function to minimize the percentage of total transaction value lost to fraud. At the end of this demo, we also show how introducing the right objective during training produces a much better model than using a generic machine learning metric like AUC.

[1]:
import evalml
from evalml import AutoMLSearch
from evalml.objectives import FraudCost

Configure “Cost of Fraud”#

To optimize the pipelines toward the specific business needs of this model, we can set our own assumptions for the cost of fraud. These parameters are:

  • retry_percentage - what percentage of customers will retry a transaction if it is declined?

  • interchange_fee - how much of each successful transaction do you collect?

  • fraud_payout_percentage - the percentage of each fraudulent transaction you will be unable to collect

  • amount_col - the column in the data that represents the transaction amount

Using these parameters, EvalML will attempt to build a pipeline that minimizes the financial loss due to fraud.
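To build intuition for what these four parameters mean, here is a plain-Python sketch of one plausible way they combine into a per-transaction dollar loss. This is an illustration of the objective's logic under stated assumptions, not EvalML's exact FraudCost implementation.

```python
# Illustrative sketch (not EvalML's exact implementation) of how the four
# cost-of-fraud parameters can translate a single prediction into a loss.

def transaction_loss(amount, is_fraud, flagged_as_fraud,
                     retry_percentage=0.5, interchange_fee=0.02,
                     fraud_payout_percentage=0.75):
    """Estimated loss for one transaction under the stated assumptions."""
    if flagged_as_fraud:
        if not is_fraud:
            # Declining a legitimate transaction: some customers retry, so we
            # only lose the interchange fee on the share that never retries.
            return amount * (1 - retry_percentage) * interchange_fee
        return 0.0  # fraud correctly blocked: no payout lost
    if is_fraud:
        # Missed fraud: we lose the uncollectable share of the payout.
        return amount * fraud_payout_percentage
    return 0.0  # legitimate transaction approved: no loss

# A $100 fraudulent charge that slips through costs $75 under these assumptions.
print(transaction_loss(100, is_fraud=True, flagged_as_fraud=False))  # 75.0
```

Summing this quantity over all transactions, and dividing by the total transaction value, gives the kind of "fraction of value lost to fraud" score the FraudCost objective minimizes.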

[2]:
fraud_objective = FraudCost(
    retry_percentage=0.5,
    interchange_fee=0.02,
    fraud_payout_percentage=0.75,
    amount_col="amount",
)

Search for best pipeline#

In order to validate the results of the pipeline creation and optimization process, we will save some of our data as the holdout set.

[3]:
X, y = evalml.demos.load_fraud(n_rows=5000)
             Number of Features
Boolean                       1
Categorical                   6
Numeric                       5

Number of training examples: 5000
Targets
False    86.20%
True     13.80%
Name: fraud, dtype: object

EvalML natively supports one-hot encoding. Here we keep 1 out of the 6 categorical columns to decrease computation time.

[4]:
cols_to_drop = ["datetime", "expiration_date", "country", "region", "provider"]
for col in cols_to_drop:
    X.ww.pop(col)

X_train, X_holdout, y_train, y_holdout = evalml.preprocessing.split_data(
    X, y, problem_type="binary", test_size=0.2, random_seed=0
)

X.ww
[4]:
Column            Physical Type  Logical Type  Semantic Tag(s)
card_id           int64          Integer       ['numeric']
store_id          int64          Integer       ['numeric']
amount            int64          Integer       ['numeric']
currency          category       Categorical   ['category']
customer_present  bool           Boolean       []
lat               float64        Double        ['numeric']
lng               float64        Double        ['numeric']

Because the fraud labels are binary, we will use AutoMLSearch(X_train=X_train, y_train=y_train, problem_type='binary'). When we call .search(), the search for the best pipeline will begin.

[5]:
automl = AutoMLSearch(
    X_train=X_train,
    y_train=y_train,
    problem_type="binary",
    objective=fraud_objective,
    additional_objectives=["auc", "f1", "precision"],
    allowed_model_families=["random_forest", "linear_model"],
    max_batches=1,
    optimize_thresholds=True,
    verbose=True,
)

automl.search(interactive_plot=False)
AutoMLSearch will use mean CV score to rank pipelines.

*****************************
* Beginning pipeline search *
*****************************

Optimizing for Fraud Cost.
Lower score is better.

Using SequentialEngine to train and score pipelines.
Searching up to 1 batches for a total of None pipelines.
Allowed model families:

Evaluating Baseline Pipeline: Mode Baseline Binary Classification Pipeline
Mode Baseline Binary Classification Pipeline:
        Starting cross validation
        Finished cross validation - mean Fraud Cost: 0.790

*****************************
* Evaluating Batch Number 1 *
*****************************

Logistic Regression Classifier w/ Label Encoder + Replace Nullable Types Transformer + Imputer + One Hot Encoder + Oversampler + Standard Scaler:
        Starting cross validation
        Finished cross validation - mean Fraud Cost: 0.008
Random Forest Classifier w/ Label Encoder + Replace Nullable Types Transformer + Imputer + One Hot Encoder + Oversampler:
        Starting cross validation
        Finished cross validation - mean Fraud Cost: 0.009

Search finished after 00:08
Best pipeline: Logistic Regression Classifier w/ Label Encoder + Replace Nullable Types Transformer + Imputer + One Hot Encoder + Oversampler + Standard Scaler
Best pipeline Fraud Cost: 0.008149
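The best pipeline was selected (and its decision threshold tuned) against Fraud Cost rather than a generic metric. A tiny hypothetical example, with made-up probabilities and amounts, shows why the two can disagree: a cost-aware threshold weights a single large fraudulent transaction far more heavily than several small ones, whereas a generic threshold treats every row equally.

```python
# Hypothetical mini-example: cost-aware vs. generic decision thresholds.
# (probability of fraud, amount, actually fraud?) -- illustrative numbers only.
transactions = [
    (0.30, 5000, True),   # large fraud with a modest score
    (0.60, 20, True),     # small fraud with a high score
    (0.55, 50, False),    # legitimate transaction with a high score
    (0.10, 200, False),   # legitimate transaction with a low score
]

def total_fraud_cost(threshold, fraud_payout=0.75, interchange=0.02, retry=0.5):
    """Total dollar loss if transactions scoring >= threshold are declined."""
    cost = 0.0
    for prob, amount, is_fraud in transactions:
        if prob >= threshold:
            if not is_fraud:
                # Declined legitimate customer: lose the fee on non-retries.
                cost += amount * (1 - retry) * interchange
        elif is_fraud:
            # Missed fraud: lose the uncollectable payout.
            cost += amount * fraud_payout
    return cost

# A generic 0.5 threshold lets the $5000 fraud through...
print(total_fraud_cost(0.5))   # 3750.5
# ...while a lower, cost-aware threshold blocks it for almost no loss.
print(total_fraud_cost(0.25))  # 0.5
```

This is the intuition behind passing `optimize_thresholds=True` together with a cost-based objective: the threshold is chosen to minimize dollars lost, not to balance raw counts of errors.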