Simply examining a model’s performance metrics is not enough to select a model and promote it for use in a production setting. While developing an ML algorithm, it is important to understand how the model behaves on the data, to examine the key factors influencing its predictions, and to consider where it may be deficient. What “success” means for an ML project depends first and foremost on the user’s domain expertise.
EvalML includes a variety of tools for understanding models, from graphing utilities to methods for explaining predictions.
** Graphing methods on Jupyter Notebook and Jupyter Lab require ipywidgets to be installed.
** If graphing on Jupyter Lab, jupyterlab-plotly is also required. To install it, make sure you have npm installed.
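For example, assuming a pip-based environment and a JupyterLab version whose extensions are built with npm (an assumption about your setup; adjust as needed), the dependencies could be installed from a terminal:

pip install ipywidgets
# requires npm; builds the plotly renderer extension for JupyterLab
jupyter labextension install jupyterlab-plotly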
First, let’s train a pipeline on some data.
[1]:
import evalml

class RFBinaryClassificationPipeline(evalml.pipelines.BinaryClassificationPipeline):
    component_graph = ['Simple Imputer', 'Random Forest Classifier']

X, y = evalml.demos.load_breast_cancer()

pipeline = RFBinaryClassificationPipeline({})
pipeline.fit(X, y)
print(pipeline.score(X, y, objectives=['log loss binary']))
OrderedDict([('Log Loss Binary', 0.038403828027876195)])
We can get the importance associated with each feature of the resulting pipeline.
[2]:
pipeline.feature_importance
We can also create a bar plot of the feature importances.
[3]:
pipeline.graph_feature_importance()
We can also compute and plot the permutation importance of the pipeline.
[4]:
from evalml.model_understanding.graphs import calculate_permutation_importance

calculate_permutation_importance(pipeline, X, y, 'log loss binary')
[5]:
from evalml.model_understanding.graphs import graph_permutation_importance

graph_permutation_importance(pipeline, X, y, 'log loss binary')
We can also compute and plot the partial dependence of a feature.
[6]:
from evalml.model_understanding.graphs import partial_dependence

partial_dependence(pipeline, X, feature='mean radius')
(Output: DataFrame with 100 rows × 2 columns)
[7]:
from evalml.model_understanding.graphs import graph_partial_dependence

graph_partial_dependence(pipeline, X, feature='mean radius')
For binary or multiclass classification, we can view a confusion matrix of the classifier’s predictions. In the DataFrame output of confusion_matrix(), the column header represents the predicted labels while the row header represents the actual labels.
[8]:
from evalml.model_understanding.graphs import confusion_matrix

y_pred = pipeline.predict(X)
confusion_matrix(y, y_pred)
[9]:
from evalml.model_understanding.graphs import graph_confusion_matrix

y_pred = pipeline.predict(X)
graph_confusion_matrix(y, y_pred)
For binary classification, we can view the precision-recall curve of the pipeline.
[10]:
from evalml.model_understanding.graphs import graph_precision_recall_curve

# get the predicted probabilities associated with the "malignant" (positive) label
y_encoded = y.map({'benign': 0, 'malignant': 1})
y_pred_proba = pipeline.predict_proba(X)["malignant"]
graph_precision_recall_curve(y_encoded, y_pred_proba)
For binary and multiclass classification, we can view the Receiver Operating Characteristic (ROC) curve of the pipeline.
[11]:
from evalml.model_understanding.graphs import graph_roc_curve

# get the predicted probabilities associated with the "malignant" label
y_pred_proba = pipeline.predict_proba(X)["malignant"]
graph_roc_curve(y_encoded, y_pred_proba)
The ROC curve can also be generated for multiclass classification problems. For multiclass problems, the graph will show a one-vs-rest ROC curve for each class.
[12]:
class RFMulticlassClassificationPipeline(evalml.pipelines.MulticlassClassificationPipeline):
    component_graph = ['Simple Imputer', 'Random Forest Classifier']

X_multi, y_multi = evalml.demos.load_wine()

pipeline_multi = RFMulticlassClassificationPipeline({})
pipeline_multi.fit(X_multi, y_multi)

y_pred_proba = pipeline_multi.predict_proba(X_multi)
graph_roc_curve(y_multi, y_pred_proba)
Some binary classification objectives (objectives that have score_needs_proba set to False) are sensitive to a decision threshold. For those objectives, we can obtain and graph the scores for thresholds from zero to one, calculated at evenly-spaced intervals determined by steps.
[13]:
from evalml.model_understanding.graphs import binary_objective_vs_threshold

binary_objective_vs_threshold(pipeline, X, y, 'f1', steps=100)
(Output: DataFrame with 101 rows × 2 columns)
[14]:
from evalml.model_understanding.graphs import graph_binary_objective_vs_threshold

graph_binary_objective_vs_threshold(pipeline, X, y, 'f1', steps=100)
We can also create a scatterplot comparing predicted vs actual values for regression problems. We can specify an outlier_threshold to color values differently if the absolute difference between the actual and predicted values is outside of a given threshold.
[15]:
from evalml.model_understanding.graphs import graph_prediction_vs_actual

class LinearRegressionPipeline(evalml.pipelines.RegressionPipeline):
    component_graph = ['One Hot Encoder', 'Linear Regressor']

X_regress, y_regress = evalml.demos.load_diabetes()
X_train, X_test, y_train, y_test = evalml.preprocessing.split_data(X_regress, y_regress, regression=True)

pipeline_regress = LinearRegressionPipeline({})
pipeline_regress.fit(X_train, y_train)

y_pred = pipeline_regress.predict(X_test)
graph_prediction_vs_actual(y_test, y_pred, outlier_threshold=50)
We can explain why the model made an individual prediction with the explain_prediction function. This uses the Shapley Additive Explanations (SHAP) algorithm to identify the top features that explain the predicted value.

This function can explain both classification and regression models - all you need to do is provide the pipeline, the input features (which must correspond to one row of the input data), and the training data. The function returns a table, which you can print, summarizing the three most positive and three most negative contributing features to the predicted value.
In the example below, we explain the prediction for the data point at index 3 in the data set. We see that the worst concave points feature increased the estimated probability that the tumor is malignant by 20% while the worst radius feature decreased the probability that the tumor is malignant by 5%.
[16]:
from evalml.model_understanding.prediction_explanations import explain_prediction

table = explain_prediction(pipeline=pipeline, input_features=X.iloc[3:4], training_data=X,
                           include_shap_values=True)
print(table)
        Feature Name          Feature Value   Contribution to Prediction   SHAP Value
======================================================================================
worst concave points               0.26                   ++                  0.20
 mean concave points               0.11                   +                   0.11
      mean concavity               0.24                   +                   0.08
          worst area             567.70                   -                  -0.03
     worst perimeter              98.87                   -                  -0.05
        worst radius              14.91                   -                  -0.05
The interpretation of the table is the same for regression problems - but the SHAP value now corresponds to the change in the estimated value of the dependent variable rather than a change in probability. For multiclass classification problems, a table will be output for each possible class.
This functionality is currently not supported for XGBoost models or CatBoost multiclass classifiers.
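As a minimal sketch of the multiclass case, we can reuse the random forest wine pipeline (pipeline_multi) trained in the ROC example above, which is not affected by that limitation:

from evalml.model_understanding.prediction_explanations import explain_prediction

# Sketch: explain one prediction from the multiclass wine pipeline trained earlier.
# For a multiclass pipeline, a separate table is produced for each possible class.
table = explain_prediction(pipeline=pipeline_multi,
                           input_features=X_multi.iloc[0:1],
                           training_data=X_multi,
                           include_shap_values=True)
print(table)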
When debugging machine learning models, it is often useful to analyze the best and worst predictions the model made. The explain_predictions_best_worst function can help us with this.
This function will display the output of explain_prediction for the best 2 and worst 2 predictions. By default, the best and worst predictions are determined by the absolute error for regression problems and cross entropy for classification problems.
We can specify our own ranking function by passing in a function to the metric parameter. This function will be called on y_true and y_pred. By convention, lower scores are better.
At the top of each table, we can see the predicted probabilities, target value, and error on that prediction. For a regression problem, we would see the predicted value instead of predicted probabilities.
[17]:
from evalml.model_understanding.prediction_explanations import explain_predictions_best_worst

report = explain_predictions_best_worst(pipeline=pipeline, input_features=X, y_true=y,
                                        include_shap_values=True, num_to_explain=2)
print(report)
RFBinary Classification Pipeline

{'Simple Imputer': {'impute_strategy': 'most_frequent', 'fill_value': None}, 'Random Forest Classifier': {'n_estimators': 100, 'max_depth': 6, 'n_jobs': -1}}

Best 1 of 2

    Predicted Probabilities: [benign: 0.0, malignant: 1.0]
    Predicted Value: malignant
    Target Value: malignant
    Cross Entropy: 0.0

           Feature Name            Feature Value   Contribution to Prediction   SHAP Value
    ======================================================================================
            worst perimeter             155.30                   +                  0.10
               worst radius              23.14                   +                  0.08
       worst concave points               0.17                   +                  0.08
    worst fractal dimension               0.09                   -                 -0.00
          compactness error               0.04                   -                 -0.00
             worst symmetry               0.22                   -                 -0.00

Best 2 of 2

    Predicted Probabilities: [benign: 0.0, malignant: 1.0]
    Predicted Value: malignant
    Target Value: malignant
    Cross Entropy: 0.0

           Feature Name            Feature Value   Contribution to Prediction   SHAP Value
    ======================================================================================
            worst perimeter             166.10                   +                  0.10
               worst radius              25.45                   +                  0.08
       worst concave points               0.22                   +                  0.08
          compactness error               0.03                   -                 -0.00
          worst compactness               0.21                   -                 -0.00
             worst symmetry               0.21                   -                 -0.00

Worst 1 of 2

    Predicted Probabilities: [benign: 0.552, malignant: 0.448]
    Predicted Value: benign
    Target Value: malignant
    Cross Entropy: 0.802

           Feature Name            Feature Value   Contribution to Prediction   SHAP Value
    ======================================================================================
           smoothness error               0.00                   +                  0.04
               mean texture              21.58                   +                  0.03
              worst texture              30.25                   +                  0.02
       worst concave points               0.11                   -                 -0.02
               worst radius              15.93                   -                 -0.03
        mean concave points               0.02                   -                 -0.03

Worst 2 of 2

    Predicted Probabilities: [benign: 0.788, malignant: 0.212]
    Predicted Value: benign
    Target Value: malignant
    Cross Entropy: 1.55

           Feature Name            Feature Value   Contribution to Prediction   SHAP Value
    ======================================================================================
              worst texture              33.37                   +                  0.05
               mean texture              22.47                   +                  0.03
             symmetry error               0.02                   +                  0.01
       worst concave points               0.09                   -                 -0.04
               worst radius              14.49                   -                 -0.05
            worst perimeter              92.04                   -                 -0.06
Below, we use a custom metric (hinge loss) for selecting the best and worst predictions:
import numpy as np

def hinge_loss(y_true, y_pred_proba):
    probabilities = np.clip(y_pred_proba.iloc[:, 1], 0.001, 0.999)
    # hinge loss expects labels of -1 and 1, so encode this dataset's string labels
    y_true = y_true.map({'benign': -1, 'malignant': 1})
    return np.clip(1 - y_true * np.log(probabilities / (1 - probabilities)),
                   a_min=0, a_max=None)

report = explain_predictions_best_worst(pipeline=pipeline, input_features=X, y_true=y,
                                        include_shap_values=True, num_to_explain=5,
                                        metric=hinge_loss)
print(report)
We can also manually explain predictions on any subset of the training data with the explain_predictions function. Below, we explain the predictions on the first, fifth, and tenth rows of the data.
[18]:
from evalml.model_understanding.prediction_explanations import explain_predictions

report = explain_predictions(pipeline=pipeline, input_features=X.iloc[[0, 4, 9]],
                             include_shap_values=True)
print(report)
RFBinary Classification Pipeline

{'Simple Imputer': {'impute_strategy': 'most_frequent', 'fill_value': None}, 'Random Forest Classifier': {'n_estimators': 100, 'max_depth': 6, 'n_jobs': -1}}

1 of 3

           Feature Name            Feature Value   Contribution to Prediction   SHAP Value
    ======================================================================================
       worst concave points               0.27                   +                  0.09
            worst perimeter             184.60                   +                  0.09
               worst radius              25.38                   +                  0.08
          compactness error               0.05                   -                 -0.00
              worst texture              17.33                   -                 -0.03
               mean texture              10.38                   -                 -0.05

2 of 3

           Feature Name            Feature Value   Contribution to Prediction   SHAP Value
    ======================================================================================
            worst perimeter             152.20                   +                  0.11
               worst radius              22.54                   +                  0.09
       worst concave points               0.16                   +                  0.08
             worst symmetry               0.24                   -                 -0.00
               mean texture              14.34                   -                 -0.03
              worst texture              16.67                   -                 -0.03

3 of 3

           Feature Name            Feature Value   Contribution to Prediction   SHAP Value
    ======================================================================================
       worst concave points               0.22                   ++                 0.20
        mean concave points               0.09                   +                  0.11
             mean concavity               0.23                   +                  0.08
                  mean area             475.90                   -                 -0.01
               worst radius              15.09                   -                 -0.03
            worst perimeter              97.65                   -                 -0.05
Instead of getting the prediction explanations as text, you can get the report as a Python dictionary. All you have to do is pass output_format="dict" to explain_prediction, explain_predictions, or explain_predictions_best_worst.
[19]:
import json

report = explain_predictions_best_worst(pipeline=pipeline, input_features=X, y_true=y,
                                        num_to_explain=1, include_shap_values=True,
                                        output_format="dict")
print(json.dumps(report, indent=2))
{ "explanations": [ { "rank": { "prefix": "best", "index": 1 }, "predicted_values": { "probabilities": { "benign": 0.0, "malignant": 1.0 }, "predicted_value": "malignant", "target_value": "malignant", "error_name": "Cross Entropy", "error_value": 9.95074382629983e-05 }, "explanations": [ { "feature_names": [ "worst perimeter", "worst radius", "worst concave points", "worst fractal dimension", "compactness error", "worst symmetry" ], "feature_values": [ 155.3, 23.14, 0.1721, 0.093, 0.03634, 0.216 ], "qualitative_explanation": [ "+", "+", "+", "-", "-", "-" ], "quantitative_explanation": [ 0.09988982304983156, 0.08240174808629956, 0.07868368954615064, -0.001381925705526203, -0.0022100542079298295, -0.00455357441134733 ], "class_name": "malignant" } ] }, { "rank": { "prefix": "worst", "index": 1 }, "predicted_values": { "probabilities": { "benign": 0.788, "malignant": 0.212 }, "predicted_value": "benign", "target_value": "malignant", "error_name": "Cross Entropy", "error_value": 1.5499050281608746 }, "explanations": [ { "feature_names": [ "worst texture", "mean texture", "symmetry error", "worst concave points", "worst radius", "worst perimeter" ], "feature_values": [ 33.37, 22.47, 0.01647, 0.09331, 14.49, 92.04 ], "qualitative_explanation": [ "+", "+", "+", "-", "-", "-" ], "quantitative_explanation": [ 0.05245422607466413, 0.03035933540832274, 0.013117759717201452, -0.04174884967530769, -0.0491285663898271, -0.05666940833106337 ], "class_name": "malignant" } ] } ] }