prediction_explanations
============================================================
.. py:module:: evalml.model_understanding.prediction_explanations
.. autoapi-nested-parse::
Prediction explanation tools.
Submodules
----------
.. toctree::
:titlesonly:
:maxdepth: 1
explainers/index.rst
Package Contents
----------------
Functions
~~~~~~~~~
.. autoapisummary::
:nosignatures:
evalml.model_understanding.prediction_explanations.explain_predictions
evalml.model_understanding.prediction_explanations.explain_predictions_best_worst
Contents
~~~~~~~~~~~~~~~~~~~
.. py:function:: explain_predictions(pipeline, input_features, y, indices_to_explain, top_k_features=3, include_explainer_values=False, include_expected_value=False, output_format='text', training_data=None, training_target=None, algorithm='shap')
Creates a report summarizing the top contributing features for each data point in the input features.
XGBoost models and CatBoost multiclass classifiers are not currently supported with the SHAP algorithm.
To explain XGBoost model predictions, use the LIME algorithm. The LIME algorithm does not currently support
any CatBoost models. For Stacked Ensemble models, SHAP values are computed on each input pipeline's
predictions as they are fed into the metalearner.
:param pipeline: Fitted pipeline whose predictions we want to explain with SHAP or LIME.
:type pipeline: PipelineBase
:param input_features: Dataframe of input data to evaluate the pipeline on.
:type input_features: pd.DataFrame
:param y: Labels for the input data.
:type y: pd.Series
:param indices_to_explain: List of integer indices to explain.
:type indices_to_explain: list[int]
:param top_k_features: How many of the highest/lowest contributing features to include in the table for each
    data point. Default is 3.
:type top_k_features: int
:param include_explainer_values: Whether explainer (SHAP or LIME) values should be included in the table. Default is False.
:type include_explainer_values: bool
:param include_expected_value: Whether the expected value should be included in the table. Default is False.
:type include_expected_value: bool
:param output_format: Either "text", "dict", or "dataframe". Default is "text".
:type output_format: str
:param training_data: Data the pipeline was trained on. Required and only used for time series pipelines.
:type training_data: pd.DataFrame, np.ndarray
:param training_target: Targets used to train the pipeline. Required and only used for time series pipelines.
:type training_target: pd.Series, np.ndarray
:param algorithm: Algorithm to use while generating top contributing features, one of "shap" or "lime". Defaults to "shap".
:type algorithm: str
:returns:
A report explaining the top contributing features to each prediction for each row of input_features.
The report will include the feature names, prediction contribution, and explainer value (optional).
:rtype: str, dict, or pd.DataFrame
:raises ValueError: If input_features is empty.
:raises ValueError: If an output_format outside of "text", "dict", or "dataframe" is provided.
:raises ValueError: If a requested index falls outside the boundaries of input_features.
.. py:function:: explain_predictions_best_worst(pipeline, input_features, y_true, num_to_explain=5, top_k_features=3, include_explainer_values=False, metric=None, output_format='text', callback=None, training_data=None, training_target=None, algorithm='shap')
Creates a report summarizing the top contributing features for the best and worst points in the dataset as measured by error to true labels.
XGBoost models and CatBoost multiclass classifiers are not currently supported with the SHAP algorithm.
To explain XGBoost model predictions, use the LIME algorithm. The LIME algorithm does not currently support
any CatBoost models. For Stacked Ensemble models, SHAP values are computed on each input pipeline's
predictions as they are fed into the metalearner.
:param pipeline: Fitted pipeline whose predictions we want to explain with SHAP or LIME.
:type pipeline: PipelineBase
:param input_features: Input data to evaluate the pipeline on.
:type input_features: pd.DataFrame
:param y_true: True labels for the input data.
:type y_true: pd.Series
:param num_to_explain: How many of the best, worst, random data points to explain.
:type num_to_explain: int
:param top_k_features: How many of the highest/lowest contributing features to include in the table for each
    data point. Default is 3.
:type top_k_features: int
:param include_explainer_values: Whether explainer (SHAP or LIME) values should be included in the table. Default is False.
:type include_explainer_values: bool
:param metric: The metric used to identify the best and worst points in the dataset. Function must accept
the true labels and predicted value or probabilities as the only arguments and lower values
must be better. By default, this will be the absolute error for regression problems and cross entropy loss
for classification problems.
:type metric: callable
:param output_format: Either "text", "dict", or "dataframe". Default is "text".
:type output_format: str
:param callback: Function to be called with incremental updates. Has the following parameters:
- progress_stage: stage of computation
- time_elapsed: total time in seconds that has elapsed since start of call
:type callback: callable
:param training_data: Data the pipeline was trained on. Required and only used for time series pipelines.
:type training_data: pd.DataFrame, np.ndarray
:param training_target: Targets used to train the pipeline. Required and only used for time series pipelines.
:type training_target: pd.Series, np.ndarray
:param algorithm: Algorithm to use while generating top contributing features, one of "shap" or "lime". Defaults to "shap".
:type algorithm: str
:returns:
A report explaining the top contributing features for the best/worst predictions in the input_features.
For each of the best/worst rows of input_features, the predicted values, true labels, metric value,
feature names, prediction contribution, and explainer value (optional) will be listed.
:rtype: str, dict, or pd.DataFrame
:raises ValueError: If input_features does not contain more than twice num_to_explain rows.
:raises ValueError: If y_true and input_features have mismatched lengths.
:raises ValueError: If an output_format outside of "text", "dict", or "dataframe" is provided.
:raises PipelineScoreError: If the pipeline errors out while scoring.