Model Understanding

Simply examining a model’s performance metrics is not enough to select a model and promote it to production. While developing an ML algorithm, it is important to understand how the model behaves on the data, to examine the key factors influencing its predictions, and to consider where it may be deficient. Determining what “success” means for an ML project depends first and foremost on the user’s domain expertise.

EvalML includes a variety of tools for understanding models, from graphing utilities to methods for explaining predictions.

** Graphing methods on Jupyter Notebook and Jupyter Lab require ipywidgets to be installed.

** If graphing on Jupyter Lab, jupyterlab-plotly is also required. To install it, make sure you have npm installed.

Graphing Utilities

First, let’s train a pipeline on some data.

[1]:
import evalml
from evalml.pipelines import BinaryClassificationPipeline
X, y = evalml.demos.load_breast_cancer()

X_train, X_holdout, y_train, y_holdout = evalml.preprocessing.split_data(X, y, problem_type='binary',
                                                                         test_size=0.2, random_seed=0)


pipeline_binary = BinaryClassificationPipeline(['Simple Imputer', 'Random Forest Classifier'])
pipeline_binary.fit(X_train, y_train)
print(pipeline_binary.score(X_holdout, y_holdout, objectives=['log loss binary']))
         Number of Features
Numeric                  30

Number of training examples: 569
Targets
benign       62.74%
malignant    37.26%
Name: target, dtype: object
OrderedDict([('Log Loss Binary', 0.1686746297113362)])

Feature Importance

We can get the importance associated with each feature of the resulting pipeline.

[2]:
pipeline_binary.feature_importance
[2]:
                    feature  importance
0       mean concave points    0.138857
1           worst perimeter    0.137780
2      worst concave points    0.117782
3              worst radius    0.100584
4            mean concavity    0.086402
5                worst area    0.072027
6            mean perimeter    0.046500
7           worst concavity    0.043408
8               mean radius    0.037664
9                 mean area    0.033683
10             radius error    0.025036
11               area error    0.019324
12            worst texture    0.014754
13        worst compactness    0.014462
14             mean texture    0.013856
15         worst smoothness    0.013710
16           worst symmetry    0.011395
17          perimeter error    0.010284
18         mean compactness    0.008162
19          mean smoothness    0.008154
20  worst fractal dimension    0.007034
21  fractal dimension error    0.005502
22        compactness error    0.004953
23         smoothness error    0.004728
24            texture error    0.004384
25           symmetry error    0.004250
26   mean fractal dimension    0.004164
27          concavity error    0.004089
28            mean symmetry    0.003997
29     concave points error    0.003076
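
Because feature_importance returns a pandas DataFrame with "feature" and "importance" columns (as shown above), standard DataFrame operations apply. As a minimal sketch, we can keep only the features above a chosen importance cutoff (the 0.01 threshold here is an arbitrary assumption, not an evalml default):

fi = pipeline_binary.feature_importance
# Keep only features whose importance exceeds an arbitrary cutoff of 0.01.
important_features = fi[fi["importance"] > 0.01]["feature"].tolist()
print(important_features)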

We can also create a bar plot of the feature importances.

[3]:
pipeline_binary.graph_feature_importance()

Permutation Importance

We can also compute and plot the permutation importance of the pipeline.

[4]:
from evalml.model_understanding import calculate_permutation_importance
calculate_permutation_importance(pipeline_binary, X_holdout, y_holdout, 'log loss binary')
[4]:
                    feature  importance
0           worst perimeter    0.063657
1                worst area    0.045759
2              worst radius    0.041926
3       mean concave points    0.029325
4      worst concave points    0.021045
5           worst concavity    0.010105
6             worst texture    0.010044
7              mean texture    0.006178
8             mean symmetry    0.005857
9                 mean area    0.004745
10         worst smoothness    0.003190
11               area error    0.003113
12           mean perimeter    0.002478
13   mean fractal dimension    0.001981
14        compactness error    0.001968
15          concavity error    0.001947
16            texture error    0.000291
17         smoothness error   -0.000206
18          mean smoothness   -0.000745
19  fractal dimension error   -0.000835
20        worst compactness   -0.002392
21           mean concavity   -0.003188
22         mean compactness   -0.005377
23             radius error   -0.006229
24              mean radius   -0.006870
25  worst fractal dimension   -0.007415
26           symmetry error   -0.008175
27          perimeter error   -0.008980
28     concave points error   -0.010415
29           worst symmetry   -0.018645
[5]:
from evalml.model_understanding import graph_permutation_importance
graph_permutation_importance(pipeline_binary, X_holdout, y_holdout, 'log loss binary')
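
Permutation importance is stochastic, since it relies on randomly shuffling each column. For a more stable estimate you can average over more shuffles; a sketch, assuming the scikit-learn-style n_repeats and random_seed arguments that calculate_permutation_importance accepts (check the API reference for your version):

# Average the importance over more random shuffles per column.
calculate_permutation_importance(pipeline_binary, X_holdout, y_holdout,
                                 'log loss binary', n_repeats=10, random_seed=0)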

Partial Dependence Plots

We can calculate the one-way partial dependence for a feature.

[6]:
from evalml.model_understanding.graphs import partial_dependence
partial_dependence(pipeline_binary, X_holdout, features='mean radius', grid_resolution=5)
[6]:
   feature_values  partial_dependence class_label
0         9.69092            0.392453   malignant
1        12.40459            0.395962   malignant
2        15.11826            0.417396   malignant
3        17.83193            0.429542   malignant
4        20.54560            0.429717   malignant
[7]:
from evalml.model_understanding.graphs import graph_partial_dependence
graph_partial_dependence(pipeline_binary, X_holdout, features='mean radius', grid_resolution=5)

You can also compute the partial dependence for a categorical feature. We will demonstrate this on the fraud dataset.

[8]:
X_fraud, y_fraud = evalml.demos.load_fraud(100, verbose=False)
X_fraud.ww.init(logical_types={"provider": "Categorical", 'region': "Categorical",
                               "currency": "Categorical", "expiration_date": "Categorical"})

fraud_pipeline = BinaryClassificationPipeline(["DateTime Featurization Component", "One Hot Encoder", "Random Forest Classifier"])
fraud_pipeline.fit(X_fraud, y_fraud)

graph_partial_dependence(fraud_pipeline, X_fraud, features='provider')

Two-way partial dependence plots are also possible and invoke the same API. In the table below, the row index holds the grid values for worst perimeter and the columns hold the grid values for worst radius.

[9]:
partial_dependence(pipeline_binary, X_holdout, features=('worst perimeter', 'worst radius'), grid_resolution=5)
[9]:
             10.687600  14.404925  18.122250  21.839575  25.556900 class_label
69.140700     0.279038   0.282898   0.435179   0.435355   0.435355   malignant
94.334275     0.304335   0.308194   0.458283   0.458458   0.458458   malignant
119.527850    0.464455   0.468314   0.612137   0.616932   0.616932   malignant
144.721425    0.483437   0.487297   0.631120   0.635915   0.635915   malignant
169.915000    0.483437   0.487297   0.631120   0.635915   0.635915   malignant
[10]:
graph_partial_dependence(pipeline_binary, X_holdout, features=('worst perimeter', 'worst radius'), grid_resolution=5)

Confusion Matrix

For binary or multiclass classification, we can view a confusion matrix of the classifier’s predictions. In the DataFrame output of confusion_matrix(), the column header represents the predicted labels, while the row header represents the actual labels.

[11]:
from evalml.model_understanding.graphs import confusion_matrix
y_pred = pipeline_binary.predict(X_holdout)
confusion_matrix(y_holdout, y_pred)
[11]:
             benign  malignant
benign     0.930556   0.069444
malignant  0.023810   0.976190
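
The matrix above is normalized by row, so each actual class sums to 1. If raw counts are more useful, the normalization can be turned off; a sketch, assuming the normalize_method parameter that confusion_matrix exposes ('true', i.e. normalize over actual labels, by default):

# Show raw counts rather than rates normalized over the actual labels.
confusion_matrix(y_holdout, y_pred, normalize_method=None)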
[12]:
from evalml.model_understanding.graphs import graph_confusion_matrix
y_pred = pipeline_binary.predict(X_holdout)
graph_confusion_matrix(y_holdout, y_pred)

Precision-Recall Curve

For binary classification, we can view the precision-recall curve of the pipeline.

[13]:
from evalml.model_understanding.graphs import graph_precision_recall_curve
# get the predicted probabilities associated with the positive ("malignant") label
import woodwork as ww
y_encoded = y_holdout.ww.map({'benign': 0, 'malignant': 1})
y_pred_proba = pipeline_binary.predict_proba(X_holdout)["malignant"]
graph_precision_recall_curve(y_encoded, y_pred_proba)
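
If you need the underlying curve data rather than the plot, the graphs module also exposes a precision_recall_curve function; a minimal sketch, assuming it returns a dictionary with 'precision', 'recall', 'thresholds', and 'auc_score' entries:

from evalml.model_understanding.graphs import precision_recall_curve
pr_data = precision_recall_curve(y_encoded, y_pred_proba)
print(pr_data['auc_score'])  # area under the precision-recall curve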

ROC Curve

For binary and multiclass classification, we can view the Receiver Operating Characteristic (ROC) curve of the pipeline.

[14]:
from evalml.model_understanding.graphs import graph_roc_curve
# get the predicted probabilities associated with the "malignant" label
y_pred_proba = pipeline_binary.predict_proba(X_holdout)["malignant"]
graph_roc_curve(y_encoded, y_pred_proba)

The ROC curve can also be generated for multiclass classification problems. For multiclass problems, the graph will show a one-vs-rest ROC curve for each class.

[15]:
from evalml.pipelines import MulticlassClassificationPipeline
X_multi, y_multi = evalml.demos.load_wine()

pipeline_multi = MulticlassClassificationPipeline(['Simple Imputer', 'Random Forest Classifier'])
pipeline_multi.fit(X_multi, y_multi)

y_pred_proba = pipeline_multi.predict_proba(X_multi)
graph_roc_curve(y_multi, y_pred_proba)
         Number of Features
Numeric                  13

Number of training examples: 178
Targets
class_1    39.89%
class_0    33.15%
class_2    26.97%
Name: target, dtype: object

Binary Objective Score vs. Threshold Graph

Some binary classification objectives (objectives that have score_needs_proba set to False) are sensitive to the decision threshold. For those objectives, we can obtain and graph the scores for thresholds from zero to one, calculated at evenly spaced intervals determined by steps.

[16]:
from evalml.model_understanding.graphs import binary_objective_vs_threshold
binary_objective_vs_threshold(pipeline_binary, X_holdout, y_holdout, 'f1', steps=10)
[16]:
    threshold     score
0         0.0  0.538462
1         0.1  0.811881
2         0.2  0.891304
3         0.3  0.901099
4         0.4  0.931818
5         0.5  0.931818
6         0.6  0.941176
7         0.7  0.951220
8         0.8  0.936709
9         0.9  0.923077
10        1.0  0.000000
[17]:
from evalml.model_understanding.graphs import graph_binary_objective_vs_threshold
graph_binary_objective_vs_threshold(pipeline_binary, X_holdout, y_holdout, 'f1', steps=100)
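
Since binary_objective_vs_threshold returns a DataFrame with threshold and score columns (as shown above), picking the threshold that maximizes the objective takes only a couple of lines, using nothing beyond the output already computed:

scores = binary_objective_vs_threshold(pipeline_binary, X_holdout, y_holdout, 'f1', steps=100)
# Select the row with the highest F1 score.
best_row = scores.loc[scores['score'].idxmax()]
print(best_row['threshold'], best_row['score'])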

Predicted Vs Actual Values Graph for Regression Problems

We can also create a scatterplot comparing predicted vs actual values for regression problems. We can specify an outlier_threshold to color values differently if the absolute difference between the actual and predicted values is outside of the given threshold.

[18]:
from evalml.model_understanding.graphs import graph_prediction_vs_actual
from evalml.pipelines import RegressionPipeline

X_regress, y_regress = evalml.demos.load_diabetes()
# Use regression-specific names so the breast cancer X_train/y_train split above is not overwritten.
X_train_reg, X_test_reg, y_train_reg, y_test_reg = evalml.preprocessing.split_data(X_regress, y_regress, problem_type='regression')

pipeline_regress = RegressionPipeline(['One Hot Encoder', 'Linear Regressor'])
pipeline_regress.fit(X_train_reg, y_train_reg)

y_pred = pipeline_regress.predict(X_test_reg)
graph_prediction_vs_actual(y_test_reg, y_pred, outlier_threshold=50)

         Number of Features
Numeric                  10

Number of training examples: 442
Targets
200.0    1.36%
72.0     1.36%
90.0     1.13%
178.0    1.13%
71.0     1.13%
         ...
73.0     0.23%
222.0    0.23%
86.0     0.23%
79.0     0.23%
57.0     0.23%
Name: target, Length: 214, dtype: object

Now let’s train a decision tree on the breast cancer training data from earlier.

[19]:
pipeline_dt = BinaryClassificationPipeline(['Simple Imputer', 'Decision Tree Classifier'])
pipeline_dt.fit(X_train, y_train)
[19]:
pipeline = BinaryClassificationPipeline(component_graph={'Simple Imputer': ['Simple Imputer', 'X', 'y'], 'Decision Tree Classifier': ['Decision Tree Classifier', 'Simple Imputer.x', 'y']}, parameters={'Simple Imputer':{'impute_strategy': 'most_frequent', 'fill_value': None}, 'Decision Tree Classifier':{'criterion': 'gini', 'max_features': 'auto', 'max_depth': 6, 'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0}}, random_seed=0)

Tree Visualization

We can visualize the structure of the Decision Tree that was fit to that data, and save it if necessary.

[20]:
from evalml.model_understanding.graphs import visualize_decision_tree

visualize_decision_tree(pipeline_dt.estimator, max_depth=2, rotate=False, filled=True, filepath=None)
[20]:
[Rendered decision tree visualization (SVG image)]
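
To save the visualization instead of only displaying it, pass a file path; a sketch, assuming the output format is inferred from the filepath extension:

# Hypothetical output path; the extension (e.g. .png, .svg) selects the format.
visualize_decision_tree(pipeline_dt.estimator, max_depth=2, rotate=False,
                        filled=True, filepath='decision_tree.png')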

Explaining Predictions

We can explain why the model made certain predictions with the explain_predictions function. This will use the Shapley Additive Explanations (SHAP) algorithm to identify the top features that explain the predicted value.

This function can explain both classification and regression models. All you need to do is provide the pipeline, the input features, and a list of rows corresponding to the indices of the input features you want to explain. The function will return a table that you can print, summarizing the top 3 most positive and most negative contributing features to the predicted value.

In the example below, we explain the prediction for the data point at index 3. We see that the worst perimeter feature decreased the estimated probability that the tumor is malignant by about 6%, while the worst radius feature decreased it by about 5%.

[21]:
from evalml.model_understanding.prediction_explanations import explain_predictions

table = explain_predictions(pipeline=pipeline_binary, input_features=X_holdout, y=None, indices_to_explain=[3],
                           top_k_features=6, include_shap_values=True)
print(table)
Random Forest Classifier w/ Simple Imputer

{'Simple Imputer': {'impute_strategy': 'most_frequent', 'fill_value': None}, 'Random Forest Classifier': {'n_estimators': 100, 'max_depth': 6, 'n_jobs': -1}}

        1 of 1

                   Feature Name       Feature Value   Contribution to Prediction   SHAP Value
                =============================================================================
                  worst concavity         0.18                    -                  -0.02
                  mean concavity          0.04                    -                  -0.03
                    worst area           599.50                   -                  -0.03
                   worst radius           14.04                   -                  -0.05
                mean concave points       0.03                    -                  -0.05
                  worst perimeter         92.80                   -                  -0.06



The interpretation of the table is the same for regression problems, but the SHAP value now corresponds to the change in the estimated value of the dependent variable rather than a change in probability. For multiclass classification problems, a table will be output for each possible class.
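
As a minimal sketch of the multiclass case, we can reuse the wine pipeline trained earlier; expect one table per class in the printed report:

# Explain one row of the wine dataset; the report contains a table per class.
table_multi = explain_predictions(pipeline=pipeline_multi, input_features=X_multi, y=None,
                                  indices_to_explain=[0], top_k_features=3)
print(table_multi)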

Below is an example of how you would explain three predictions with explain_predictions.

[22]:
from evalml.model_understanding.prediction_explanations import explain_predictions

report = explain_predictions(pipeline=pipeline_binary,
                             input_features=X_holdout, y=y_holdout, indices_to_explain=[0, 4, 9], include_shap_values=True,
                             output_format='text')
print(report)
Random Forest Classifier w/ Simple Imputer

{'Simple Imputer': {'impute_strategy': 'most_frequent', 'fill_value': None}, 'Random Forest Classifier': {'n_estimators': 100, 'max_depth': 6, 'n_jobs': -1}}

        1 of 3

                    Feature Name       Feature Value   Contribution to Prediction   SHAP Value
                ==============================================================================
                  worst perimeter         101.20                   -                  -0.04
                worst concave points       0.06                    -                  -0.05
                mean concave points        0.01                    -                  -0.05


        2 of 3

                   Feature Name       Feature Value   Contribution to Prediction   SHAP Value
                =============================================================================
                   worst radius           11.94                   -                  -0.05
                  worst perimeter         80.78                   -                  -0.06
                mean concave points       0.02                    -                  -0.06


        3 of 3

                    Feature Name       Feature Value   Contribution to Prediction   SHAP Value
                ==============================================================================
                worst concave points       0.10                    -                  -0.05
                  worst perimeter          99.21                   -                  -0.06
                mean concave points        0.03                    -                  -0.08



Explaining Best and Worst Predictions

When debugging machine learning models, it is often useful to analyze the best and worst predictions the model made. The explain_predictions_best_worst function can help us with this.

This function will display the output of explain_predictions for the best 2 and worst 2 predictions. By default, the best and worst predictions are determined by the absolute error for regression problems and cross entropy for classification problems.

We can specify our own ranking function by passing a function to the metric parameter. This function will be called on y_true and y_pred (for classification pipelines, y_pred holds the predicted probabilities). By convention, lower scores are better.

At the top of each table, we can see the predicted probabilities, target value, error, and row index for that prediction. For a regression problem, we would see the predicted value instead of predicted probabilities.

[23]:
from evalml.model_understanding.prediction_explanations import explain_predictions_best_worst

report = explain_predictions_best_worst(pipeline=pipeline_binary, input_features=X_holdout, y_true=y_holdout,
                                        include_shap_values=True, top_k_features=6, num_to_explain=2)

print(report)
Random Forest Classifier w/ Simple Imputer

{'Simple Imputer': {'impute_strategy': 'most_frequent', 'fill_value': None}, 'Random Forest Classifier': {'n_estimators': 100, 'max_depth': 6, 'n_jobs': -1}}

        Best 1 of 2

                Predicted Probabilities: [benign: 1.0, malignant: 0.0]
                Predicted Value: benign
                Target Value: benign
                Cross Entropy: 0.0
                Index ID: 502

                    Feature Name       Feature Value   Contribution to Prediction   SHAP Value
                ==============================================================================
                   mean concavity          0.06                    -                  -0.03
                     worst area           552.00                   -                  -0.03
                worst concave points       0.08                    -                  -0.05
                    worst radius           13.57                   -                  -0.05
                mean concave points        0.03                    -                  -0.05
                  worst perimeter          86.67                   -                  -0.06


        Best 2 of 2

                Predicted Probabilities: [benign: 1.0, malignant: 0.0]
                Predicted Value: benign
                Target Value: benign
                Cross Entropy: 0.0
                Index ID: 52

                    Feature Name       Feature Value   Contribution to Prediction   SHAP Value
                ==============================================================================
                   mean concavity          0.02                    -                  -0.02
                     worst area           527.20                   -                  -0.03
                    worst radius           13.10                   -                  -0.04
                worst concave points       0.06                    -                  -0.04
                mean concave points        0.01                    -                  -0.05
                  worst perimeter          83.67                   -                  -0.06


        Worst 1 of 2

                Predicted Probabilities: [benign: 0.266, malignant: 0.734]
                Predicted Value: malignant
                Target Value: benign
                Cross Entropy: 1.325
                Index ID: 363

                 Feature Name     Feature Value   Contribution to Prediction   SHAP Value
                =========================================================================
                worst perimeter      117.20                   +                   0.13
                 worst radius         18.13                   +                   0.12
                  worst area         1009.00                  +                   0.11
                   mean area         838.10                   +                   0.06
                  mean radius         16.50                   +                   0.05
                worst concavity       0.17                    -                  -0.05


        Worst 2 of 2

                Predicted Probabilities: [benign: 1.0, malignant: 0.0]
                Predicted Value: benign
                Target Value: malignant
                Cross Entropy: 7.987
                Index ID: 135

                    Feature Name       Feature Value   Contribution to Prediction   SHAP Value
                ==============================================================================
                   mean concavity          0.05                    -                  -0.03
                     worst area           653.60                   -                  -0.04
                worst concave points       0.09                    -                  -0.05
                    worst radius           14.49                   -                  -0.05
                  worst perimeter          92.04                   -                  -0.06
                mean concave points        0.03                    -                  -0.06



We can also use a custom metric for ranking the best and worst predictions; here is an example using hinge loss:

import numpy as np

def hinge_loss(y_true, y_pred_proba):
    # Clip probabilities away from 0 and 1 so the log-odds stay finite.
    probabilities = np.clip(y_pred_proba.iloc[:, 1], 0.001, 0.999)
    # Hinge loss expects labels in {-1, 1}; this assumes y_true is encoded as 0/1.
    # np.where avoids mutating the caller's data in place.
    y_true = np.where(y_true == 0, -1, 1)
    # The loss is zero when the log-odds confidently agree with the label.
    return np.clip(1 - y_true * np.log(probabilities / (1 - probabilities)), a_min=0, a_max=None)

report = explain_predictions_best_worst(pipeline=pipeline, input_features=X, y_true=y,
                                        include_shap_values=True, num_to_explain=5, metric=hinge_loss)

print(report)

Changing Output Formats

Instead of getting the prediction explanations as text, you can get the report as a Python dictionary or pandas DataFrame. All you have to do is pass output_format="dict" or output_format="dataframe" to either explain_prediction, explain_predictions, or explain_predictions_best_worst.

Single prediction as a dictionary

[24]:
import json
single_prediction_report = explain_predictions(pipeline=pipeline_binary, input_features=X_holdout, indices_to_explain=[3],
                                               y=y_holdout, top_k_features=6, include_shap_values=True,
                                               output_format="dict")
print(json.dumps(single_prediction_report, indent=2))
{
  "explanations": [
    {
      "explanations": [
        {
          "feature_names": [
            "worst concavity",
            "mean concavity",
            "worst area",
            "worst radius",
            "mean concave points",
            "worst perimeter"
          ],
          "feature_values": [
            0.1791,
            0.038,
            599.5,
            14.04,
            0.034,
            92.8
          ],
          "qualitative_explanation": [
            "-",
            "-",
            "-",
            "-",
            "-",
            "-"
          ],
          "quantitative_explanation": [
            -0.023008481104309524,
            -0.02621982146725469,
            -0.033821592020020774,
            -0.04666659740586632,
            -0.0541511910494414,
            -0.05523688273171911
          ],
          "drill_down": {},
          "class_name": "malignant",
          "expected_value": 0.3711208791208791
        }
      ]
    }
  ]
}
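
Because the report is a plain dictionary, its fields can be consumed programmatically; a minimal sketch that walks the nested structure shown above:

# Pull the feature names and SHAP values out of the nested report structure.
explanation = single_prediction_report["explanations"][0]["explanations"][0]
for name, shap_value in zip(explanation["feature_names"],
                            explanation["quantitative_explanation"]):
    print(f"{name}: {shap_value:.4f}")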

Single prediction as a dataframe

[25]:
single_prediction_report = explain_predictions(pipeline=pipeline_binary, input_features=X_holdout,
                                               indices_to_explain=[3],
                                               y=y_holdout, top_k_features=6, include_shap_values=True,
                                               output_format="dataframe")
single_prediction_report
[25]:
          feature_names  feature_values qualitative_explanation  quantitative_explanation class_name  prediction_number
0       worst concavity          0.1791                       -                 -0.023008  malignant                  0
1        mean concavity          0.0380                       -                 -0.026220  malignant                  0
2            worst area        599.5000                       -                 -0.033822  malignant                  0
3          worst radius         14.0400                       -                 -0.046667  malignant                  0
4   mean concave points          0.0340                       -                 -0.054151  malignant                  0
5       worst perimeter         92.8000                       -                 -0.055237  malignant                  0
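
The DataFrame form is convenient for filtering and sorting; for example, keeping only the strongest contributions (the 0.04 magnitude cutoff is arbitrary):

# Keep rows whose SHAP magnitude exceeds an arbitrary cutoff.
strong = single_prediction_report[single_prediction_report["quantitative_explanation"].abs() > 0.04]
strong[["feature_names", "quantitative_explanation"]]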

Best and worst predictions as a dictionary

[26]:
report = explain_predictions_best_worst(pipeline=pipeline_binary, input_features=X, y_true=y,
                                        num_to_explain=1, top_k_features=6,
                                        include_shap_values=True, output_format="dict")
print(json.dumps(report, indent=2))
{
  "explanations": [
    {
      "rank": {
        "prefix": "best",
        "index": 1
      },
      "predicted_values": {
        "probabilities": {
          "benign": 1.0,
          "malignant": 0.0
        },
        "predicted_value": "benign",
        "target_value": "benign",
        "error_name": "Cross Entropy",
        "error_value": 0.0001970443507070075,
        "index_id": 475
      },
      "explanations": [
        {
          "feature_names": [
            "mean concavity",
            "worst area",
            "worst radius",
            "worst concave points",
            "worst perimeter",
            "mean concave points"
          ],
          "feature_values": [
            0.05835,
            605.8,
            14.09,
            0.09783,
            93.22,
            0.03078
          ],
          "qualitative_explanation": [
            "-",
            "-",
            "-",
            "-",
            "-",
            "-"
          ],
          "quantitative_explanation": [
            -0.028481050954786636,
            -0.03050522196002462,
            -0.042922079201003216,
            -0.04429366151003684,
            -0.05486784013962313,
            -0.05639460900233733
          ],
          "drill_down": {},
          "class_name": "malignant",
          "expected_value": 0.3711208791208791
        }
      ]
    },
    {
      "rank": {
        "prefix": "worst",
        "index": 1
      },
      "predicted_values": {
        "probabilities": {
          "benign": 1.0,
          "malignant": 0.0
        },
        "predicted_value": "benign",
        "target_value": "malignant",
        "error_name": "Cross Entropy",
        "error_value": 7.986911819330411,
        "index_id": 135
      },
      "explanations": [
        {
          "feature_names": [
            "mean concavity",
            "worst area",
            "worst concave points",
            "worst radius",
            "worst perimeter",
            "mean concave points"
          ],
          "feature_values": [
            0.04711,
            653.6,
            0.09331,
            14.49,
            92.04,
            0.02704
          ],
          "qualitative_explanation": [
            "-",
            "-",
            "-",
            "-",
            "-",
            "-"
          ],
          "quantitative_explanation": [
            -0.029936744551331215,
            -0.03748357654576422,
            -0.04553126236476177,
            -0.0483274199182721,
            -0.06039220265366764,
            -0.060441902449258976
          ],
          "drill_down": {},
          "class_name": "malignant",
          "expected_value": 0.3711208791208791
        }
      ]
    }
  ]
}

Best and worst predictions as a dataframe

[27]:
report = explain_predictions_best_worst(pipeline=pipeline_binary, input_features=X_holdout, y_true=y_holdout,
                                        num_to_explain=1, top_k_features=6,
                                        include_shap_values=True, output_format="dataframe")
report
[27]:
           feature_names  feature_values qualitative_explanation  quantitative_explanation class_name  \
0         mean concavity         0.05928                       -                 -0.029022  malignant
1             worst area       552.00000                       -                 -0.034112  malignant
2   worst concave points         0.08411                       -                 -0.046896  malignant
3           worst radius        13.57000                       -                 -0.046928  malignant
4    mean concave points         0.03279                       -                 -0.052902  malignant
5        worst perimeter        86.67000                       -                 -0.064320  malignant
6         mean concavity         0.04711                       -                 -0.029937  malignant
7             worst area       653.60000                       -                 -0.037484  malignant
8   worst concave points         0.09331                       -                 -0.045531  malignant
9           worst radius        14.49000                       -                 -0.048327  malignant
10       worst perimeter        92.04000                       -                 -0.060392  malignant
11   mean concave points         0.02704                       -                 -0.060442  malignant

    label_benign_probability  label_malignant_probability predicted_value target_value     error_name  error_value  index_id  rank prefix
0                        1.0                          0.0          benign       benign  Cross Entropy     0.000197       502     1   best
1                        1.0                          0.0          benign       benign  Cross Entropy     0.000197       502     1   best
2                        1.0                          0.0          benign       benign  Cross Entropy     0.000197       502     1   best
3                        1.0                          0.0          benign       benign  Cross Entropy     0.000197       502     1   best
4                        1.0                          0.0          benign       benign  Cross Entropy     0.000197       502     1   best
5                        1.0                          0.0          benign       benign  Cross Entropy     0.000197       502     1   best
6                        1.0                          0.0          benign    malignant  Cross Entropy     7.986912       135     1  worst
7                        1.0                          0.0          benign    malignant  Cross Entropy     7.986912       135     1  worst
8                        1.0                          0.0          benign    malignant  Cross Entropy     7.986912       135     1  worst
9                        1.0                          0.0          benign    malignant  Cross Entropy     7.986912       135     1  worst
10                       1.0                          0.0          benign    malignant  Cross Entropy     7.986912       135     1  worst
11                       1.0                          0.0          benign    malignant  Cross Entropy     7.986912       135     1  worst
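
Since the best and worst explanations share one DataFrame, the prefix column shown above separates them; for example:

# Select only the rows explaining the worst prediction.
worst_rows = report[report["prefix"] == "worst"]
worst_rows[["feature_names", "quantitative_explanation", "error_value"]]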

Force Plots

Force plots can be generated to explain the predictions for single or multiple rows of binary, multiclass, and regression problems. Here’s an example of explaining a single row of a binary classification dataset. The force plots show the contribution of each feature toward the negative (“Class: 0”) prediction and the positive (“Class: 1”) prediction.

[28]:
import shap

from evalml.model_understanding.force_plots import graph_force_plot

rows_to_explain = [0] # Should be a list of integer indices of the rows to explain.

results = graph_force_plot(pipeline_binary, rows_to_explain=rows_to_explain,
                           training_data=X_holdout, y=y_holdout)

for result in results:
    for cls in result:
        print("Class:", cls)
        display(result[cls]["plot"])
Class: malignant
[Interactive SHAP force plot output; requires the SHAP Javascript library to render in a notebook]
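
If the plots render as a “Javascript library not loaded” message in a notebook, loading SHAP’s Javascript bundle first usually resolves it; shap.initjs() is the standard SHAP call for this:

import shap
shap.initjs()  # load the SHAP JS bundle so force plots render inline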

Here’s an example of a force plot explaining multiple predictions on a multiclass problem. The force plot for each row is arranged as a consecutive column, and the columns can be reordered with the dropdown above the plot. Clicking a column shows the explanation for that row underneath.

[29]:
rows_to_explain = [0,1,2,3,4] # Should be a list of integer indices of the rows to explain.

results = graph_force_plot(pipeline_multi,
                           rows_to_explain=rows_to_explain,
                           training_data=X_multi, y=y_multi)

for idx, result in enumerate(results):
    print("Row:", idx)
    for cls in result:
        print("Class:", cls)
        display(result[cls]["plot"])
Row: 0
Class: class_0
Class: class_1
Class: class_2
Row: 1
Class: class_0
Class: class_1
Class: class_2
Row: 2
Class: class_0
Class: class_1
Class: class_2
Row: 3
Class: class_0
Class: class_1
Class: class_2
Row: 4
Class: class_0
Class: class_1
Class: class_2
[Each class above displays an interactive SHAP force plot; the plots require the SHAP Javascript library to render in a notebook]