explainers

Module Contents

Classes Summary

ExplainPredictionsStage

Stages of the prediction-explanation computation, reported to the callback as progress_stage.

Functions

abs_error

Computes the absolute error per data point for regression problems.

cross_entropy

Computes Cross Entropy Loss per data point for classification problems.

explain_predictions

Creates a report summarizing the top contributing features for each data point in the input features.

explain_predictions_best_worst

Creates a report summarizing the top contributing features for the best and worst points in the dataset as measured by error to true labels.

Attributes Summary

DEFAULT_METRICS

Contents

evalml.model_understanding.prediction_explanations.explainers.abs_error(y_true, y_pred)

Computes the absolute error per data point for regression problems.

Parameters
  • y_true (pd.Series) – True labels.

  • y_pred (pd.Series) – Predicted values.

Returns

np.ndarray of absolute error values, one per data point.
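
A minimal usage sketch based only on the signature above; the data values are illustrative:

import pandas as pd
from evalml.model_understanding.prediction_explanations.explainers import abs_error

y_true = pd.Series([3.0, -0.5, 2.0])
y_pred = pd.Series([2.5, 0.0, 2.0])

# One absolute error per data point: |3.0 - 2.5|, |-0.5 - 0.0|, |2.0 - 2.0|
errors = abs_error(y_true, y_pred)  # expected values: 0.5, 0.5, 0.0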

evalml.model_understanding.prediction_explanations.explainers.cross_entropy(y_true, y_pred_proba)

Computes Cross Entropy Loss per data point for classification problems.

Parameters
  • y_true (pd.Series) – True labels encoded as ints.

  • y_pred_proba (pd.DataFrame) – Predicted probabilities. One column per class.

Returns

np.ndarray of cross entropy loss values, one per data point.
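
A minimal sketch, assuming each row's loss is the negative log of the probability assigned to the true class (the usual per-row cross entropy); the data values are illustrative:

import pandas as pd
from evalml.model_understanding.prediction_explanations.explainers import cross_entropy

# True labels encoded as ints, one predicted-probability column per class.
y_true = pd.Series([0, 1, 1])
y_pred_proba = pd.DataFrame({0: [0.9, 0.2, 0.4], 1: [0.1, 0.8, 0.6]})

losses = cross_entropy(y_true, y_pred_proba)
# Under the assumption above, roughly [-log(0.9), -log(0.8), -log(0.6)].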

evalml.model_understanding.prediction_explanations.explainers.DEFAULT_METRICS
evalml.model_understanding.prediction_explanations.explainers.explain_predictions(pipeline, input_features, y, indices_to_explain, top_k_features=3, include_shap_values=False, include_expected_value=False, output_format='text')

Creates a report summarizing the top contributing features for each data point in the input features.

XGBoost and Stacked Ensemble models, as well as CatBoost multiclass classifiers, are not currently supported.

Parameters
  • pipeline (PipelineBase) – Fitted pipeline whose predictions we want to explain with SHAP.

  • input_features (pd.DataFrame) – Dataframe of input data to evaluate the pipeline on.

  • y (pd.Series) – Labels for the input data.

  • indices_to_explain (list(int)) – List of integer indices to explain.

  • top_k_features (int) – How many of the highest/lowest contributing features to include in the table for each data point. Default is 3.

  • include_shap_values (bool) – Whether SHAP values should be included in the table. Default is False.

  • include_expected_value (bool) – Whether the expected value should be included in the table. Default is False.

  • output_format (str) – Either “text”, “dict”, or “dataframe”. Default is “text”.

Returns

str, dict, or pd.DataFrame - A report explaining the top contributing features to each prediction for each row of input_features.

The report will include the feature names, prediction contribution, and SHAP Value (optional).

Raises
  • ValueError – if input_features is empty.

  • ValueError – if an output_format other than “text”, “dict”, or “dataframe” is provided.

  • ValueError – if a requested index falls outside the bounds of input_features.
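
A usage sketch; pipeline, X, and y are placeholders for a fitted evalml pipeline and the data to explain, not names defined in this module:

from evalml.model_understanding.prediction_explanations.explainers import explain_predictions

report = explain_predictions(
    pipeline=pipeline,             # fitted PipelineBase (placeholder)
    input_features=X,              # pd.DataFrame (placeholder)
    y=y,                           # pd.Series of labels (placeholder)
    indices_to_explain=[0, 4, 9],  # rows of X to explain
    top_k_features=3,
    include_shap_values=True,
    output_format="dict",          # or "text" / "dataframe"
)

With output_format="dict" the report can be inspected programmatically; with the default "text" it is meant to be printed.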

evalml.model_understanding.prediction_explanations.explainers.explain_predictions_best_worst(pipeline, input_features, y_true, num_to_explain=5, top_k_features=3, include_shap_values=False, metric=None, output_format='text', callback=None)

Creates a report summarizing the top contributing features for the best and worst points in the dataset as measured by error to true labels.

XGBoost and Stacked Ensemble models, as well as CatBoost multiclass classifiers, are not currently supported.

Parameters
  • pipeline (PipelineBase) – Fitted pipeline whose predictions we want to explain with SHAP.

  • input_features (pd.DataFrame) – Input data to evaluate the pipeline on.

  • y_true (pd.Series) – True labels for the input data.

  • num_to_explain (int) – How many of the best and worst data points to explain. Default is 5.

  • top_k_features (int) – How many of the highest/lowest contributing features to include in the table for each data point. Default is 3.

  • include_shap_values (bool) – Whether SHAP values should be included in the table. Default is False.

  • metric (callable) – The metric used to identify the best and worst points in the dataset. Function must accept the true labels and predicted value or probabilities as the only arguments and lower values must be better. By default, this will be the absolute error for regression problems and cross entropy loss for classification problems.

  • output_format (str) – Either “text”, “dict”, or “dataframe”. Default is “text”.

  • callback (callable) – Function to be called with incremental updates. It receives two arguments: progress_stage (the current stage of computation) and time_elapsed (total time in seconds elapsed since the start of the call). A sketch of such a callback follows the ExplainPredictionsStage class below.

Returns

str, dict, or pd.DataFrame - A report explaining the top contributing features for the best/worst predictions in the input_features.

For each of the best/worst rows of input_features, the predicted values, true labels, metric value, feature names, prediction contribution, and SHAP Value (optional) will be listed.

Raises
  • ValueError – if input_features does not contain more than twice the requested number of data points to explain (2 * num_to_explain rows).

  • ValueError – if y_true and input_features have mismatched lengths.

  • ValueError – if an output_format other than “text”, “dict”, or “dataframe” is provided.
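
A sketch with a custom per-row metric; pipeline, X, and y_true are placeholders for a fitted regression pipeline and its evaluation data:

import numpy as np
from evalml.model_understanding.prediction_explanations.explainers import explain_predictions_best_worst

def squared_error(y_true, y_pred):
    # Lower is better and one value is returned per data point, as the metric argument requires.
    return (np.asarray(y_true) - np.asarray(y_pred)) ** 2

report = explain_predictions_best_worst(
    pipeline=pipeline,        # fitted PipelineBase (placeholder)
    input_features=X,         # pd.DataFrame (placeholder)
    y_true=y_true,            # pd.Series (placeholder)
    num_to_explain=2,
    metric=squared_error,     # omit to use the defaults described above
    output_format="text",
)
print(report)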

class evalml.model_understanding.prediction_explanations.explainers.ExplainPredictionsStage

Enumeration of the stages of the prediction-explanation computation. Members of this enum are passed as progress_stage to the callback of explain_predictions_best_worst.

Attributes

COMPUTE_FEATURE_STAGE: compute_feature_stage
COMPUTE_SHAP_VALUES_STAGE: compute_shap_value_stage
DONE: done
PREDICT_STAGE: predict_stage
PREPROCESSING_STAGE: preprocessing_stage

Methods

name(self)

The name of the Enum member.

value(self)

The value of the Enum member.
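
A sketch of a progress callback that uses these stages; per the callback description above it receives progress_stage and time_elapsed, and pipeline, X, and y_true are placeholders for a fitted pipeline and its data:

from evalml.model_understanding.prediction_explanations.explainers import (
    ExplainPredictionsStage,
    explain_predictions_best_worst,
)

def progress_callback(progress_stage, time_elapsed):
    # progress_stage is an ExplainPredictionsStage member; time_elapsed is seconds since the call began.
    if progress_stage == ExplainPredictionsStage.DONE:
        print(f"finished in {time_elapsed:.1f}s")
    else:
        print(f"{progress_stage} after {time_elapsed:.1f}s")

report = explain_predictions_best_worst(
    pipeline, X, y_true, callback=progress_callback
)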