evalml.model_understanding.prediction_explanations.explain_predictions

evalml.model_understanding.prediction_explanations.explain_predictions(pipeline, input_features, y, indices_to_explain, top_k_features=3, include_shap_values=False, output_format='text')

Creates a report summarizing the top contributing features for each data point in the input features.
XGBoost and Stacked Ensemble models, as well as CatBoost multiclass classifiers, are not currently supported.
- Parameters
pipeline (PipelineBase) – Fitted pipeline whose predictions we want to explain with SHAP.
input_features (ww.DataTable, pd.DataFrame) – Dataframe of input data to evaluate the pipeline on.
y (ww.DataColumn, pd.Series) – Labels for the input data.
indices_to_explain (list(int)) – List of integer indices to explain.
top_k_features (int) – How many of the highest/lowest contributing features to include in the table for each data point. Default is 3.
include_shap_values (bool) – Whether SHAP values should be included in the table. Default is False.
output_format (str) – Either “text”, “dict”, or “dataframe”. Default is “text”.
- Returns
- str, dict, or pd.DataFrame - A report explaining the top contributing features to each prediction for each row of input_features.
The report will include the feature names, prediction contributions, and SHAP values (if include_shap_values is True).
- Raises
ValueError – if input_features is empty.
ValueError – if an output_format outside of “text”, “dict”, or “dataframe” is provided.
ValueError – if a requested index falls outside the boundaries of input_features.
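Example. A minimal sketch of calling explain_predictions. The toy data and pipeline below are hypothetical, and the pipeline construction assumes an evalml release where a linear pipeline can be instantiated directly from a list of component names; any fitted, SHAP-compatible pipeline (here, a random forest, which is supported) works the same way.

    import pandas as pd

    from evalml.pipelines import BinaryClassificationPipeline
    from evalml.model_understanding.prediction_explanations import explain_predictions

    # Hypothetical toy data; any feature DataFrame with binary labels works.
    X = pd.DataFrame({
        "age": [25, 32, 47, 51, 38, 29, 60, 44],
        "income": [40000, 52000, 88000, 95000, 61000, 45000, 120000, 70000],
    })
    y = pd.Series([0, 0, 1, 1, 0, 0, 1, 1])

    # Build and fit a simple pipeline from component names (assumed API;
    # older evalml versions define pipelines via subclassing instead).
    pipeline = BinaryClassificationPipeline(["Imputer", "Random Forest Classifier"])
    pipeline.fit(X, y)

    # Explain rows 0 and 3 of X, reporting the two largest-magnitude
    # feature contributions per row along with the raw SHAP values.
    report = explain_predictions(
        pipeline=pipeline,
        input_features=X,
        y=y,
        indices_to_explain=[0, 3],
        top_k_features=2,
        include_shap_values=True,
        output_format="dataframe",  # "text", "dict", or "dataframe"
    )
    print(report)

With output_format="dataframe", the report comes back as a single pd.DataFrame, which makes it easy to filter the explanations by row index or feature name.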