Model Understanding ==================================== .. py:module:: evalml.model_understanding .. autoapi-nested-parse:: Model understanding tools. Subpackages ----------- .. toctree:: :titlesonly: :maxdepth: 3 prediction_explanations/index.rst Submodules ---------- .. toctree:: :titlesonly: :maxdepth: 1 decision_boundary/index.rst feature_explanations/index.rst force_plots/index.rst metrics/index.rst partial_dependence_functions/index.rst permutation_importance/index.rst visualizations/index.rst Package Contents ---------------- Functions ~~~~~~~~~ .. autoapisummary:: :nosignatures: evalml.model_understanding.binary_objective_vs_threshold evalml.model_understanding.calculate_permutation_importance evalml.model_understanding.calculate_permutation_importance_one_column evalml.model_understanding.confusion_matrix evalml.model_understanding.explain_predictions evalml.model_understanding.explain_predictions_best_worst evalml.model_understanding.find_confusion_matrix_per_thresholds evalml.model_understanding.get_linear_coefficients evalml.model_understanding.get_prediction_vs_actual_data evalml.model_understanding.get_prediction_vs_actual_over_time_data evalml.model_understanding.graph_binary_objective_vs_threshold evalml.model_understanding.graph_confusion_matrix evalml.model_understanding.graph_partial_dependence evalml.model_understanding.graph_permutation_importance evalml.model_understanding.graph_precision_recall_curve evalml.model_understanding.graph_prediction_vs_actual evalml.model_understanding.graph_prediction_vs_actual_over_time evalml.model_understanding.graph_roc_curve evalml.model_understanding.graph_t_sne evalml.model_understanding.normalize_confusion_matrix evalml.model_understanding.partial_dependence evalml.model_understanding.precision_recall_curve evalml.model_understanding.roc_curve evalml.model_understanding.t_sne Contents ~~~~~~~~~~~~~~~~~~~ .. py:function:: binary_objective_vs_threshold(pipeline, X, y, objective, steps=100) Computes objective score as a function of potential binary classification decision thresholds for a fitted binary classification pipeline. :param pipeline: Fitted binary classification pipeline. :type pipeline: BinaryClassificationPipeline obj :param X: The input data used to compute objective score. :type X: pd.DataFrame :param y: The target labels. :type y: pd.Series :param objective: Objective used to score. :type objective: ObjectiveBase obj, str :param steps: Number of intervals to divide and calculate objective score at. :type steps: int :returns: DataFrame with thresholds and the corresponding objective score calculated at each threshold. :rtype: pd.DataFrame :raises ValueError: If objective is not a binary classification objective. :raises ValueError: If objective's `score_needs_proba` is not False.
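A minimal sketch of calling `binary_objective_vs_threshold`. The breast cancer demo data and the component graph are illustrative assumptions; any fitted binary classification pipeline works:

.. code-block:: python

    from evalml.demos import load_breast_cancer
    from evalml.pipelines import BinaryClassificationPipeline
    from evalml.model_understanding import binary_objective_vs_threshold

    X, y = load_breast_cancer()
    pipeline = BinaryClassificationPipeline(
        ["Label Encoder", "Imputer", "Random Forest Classifier"]
    )
    pipeline.fit(X, y)

    # Score F1 at 100 evenly spaced decision thresholds; the result is a
    # DataFrame pairing each candidate threshold with its objective score.
    scores = binary_objective_vs_threshold(pipeline, X, y, "f1", steps=100)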
.. py:function:: calculate_permutation_importance(pipeline, X, y, objective, n_repeats=5, n_jobs=None, random_seed=0) Calculates permutation importance for features. :param pipeline: Fitted pipeline. :type pipeline: PipelineBase or subclass :param X: The input data used to score and compute permutation importance. :type X: pd.DataFrame :param y: The target data. :type y: pd.Series :param objective: Objective to score on. :type objective: str, ObjectiveBase :param n_repeats: Number of times to permute a feature. Defaults to 5. :type n_repeats: int :param n_jobs: Non-negative integer describing level of parallelism used for pipelines. None and 1 are equivalent. If set to -1, all CPUs are used. For n_jobs below -1, (n_cpus + 1 + n_jobs) are used. Defaults to None. :type n_jobs: int or None :param random_seed: Seed for the random number generator. Defaults to 0. :type random_seed: int :returns: Mean feature importance scores over a number of shuffles. :rtype: pd.DataFrame :raises ValueError: If objective cannot be used with the given pipeline. .. py:function:: calculate_permutation_importance_one_column(pipeline, X, y, col_name, objective, n_repeats=5, fast=True, precomputed_features=None, random_seed=0) Calculates permutation importance for one column in the original dataframe. :param pipeline: Fitted pipeline. :type pipeline: PipelineBase or subclass :param X: The input data used to score and compute permutation importance. :type X: pd.DataFrame :param y: The target data. :type y: pd.Series :param col_name: The column in X to calculate permutation importance for. :type col_name: str, int :param objective: Objective to score on. :type objective: str, ObjectiveBase :param n_repeats: Number of times to permute a feature. Defaults to 5. :type n_repeats: int :param fast: Whether to use the fast method of calculating the permutation importance or not. Defaults to True. :type fast: bool :param precomputed_features: Precomputed features necessary to calculate permutation importance using the fast method. Defaults to None. :type precomputed_features: pd.DataFrame :param random_seed: Seed for the random number generator. Defaults to 0. :type random_seed: int :returns: Mean feature importance scores over a number of shuffles. :rtype: float :raises ValueError: If pipeline does not support fast permutation importance calculation. :raises ValueError: If precomputed_features is None. .. py:function:: confusion_matrix(y_true, y_predicted, normalize_method='true') Confusion matrix for binary and multiclass classification. :param y_true: True labels. :type y_true: pd.Series or np.ndarray :param y_predicted: Predictions from a classifier. :type y_predicted: pd.Series or np.ndarray :param normalize_method: Normalization method to use, if not None. Supported options are: 'true' to normalize by row, 'pred' to normalize by column, or 'all' to normalize by all values. Defaults to 'true'. :type normalize_method: {'true', 'pred', 'all', None} :returns: Confusion matrix. The column header represents the predicted labels while row header represents the actual labels. :rtype: pd.DataFrame
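A hedged sketch of `calculate_permutation_importance`; the demo data and component graph are illustrative assumptions:

.. code-block:: python

    from evalml.demos import load_breast_cancer
    from evalml.pipelines import BinaryClassificationPipeline
    from evalml.model_understanding import calculate_permutation_importance

    X, y = load_breast_cancer()
    pipeline = BinaryClassificationPipeline(
        ["Label Encoder", "Imputer", "Random Forest Classifier"]
    )
    pipeline.fit(X, y)

    # Permute each feature 5 times and measure the mean change in the
    # objective; returns a DataFrame of features and mean importance scores.
    importance = calculate_permutation_importance(
        pipeline, X, y, objective="log loss binary", n_repeats=5, random_seed=0
    )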
.. py:function:: explain_predictions(pipeline, input_features, y, indices_to_explain, top_k_features=3, include_explainer_values=False, include_expected_value=False, output_format='text', training_data=None, training_target=None, algorithm='shap') Creates a report summarizing the top contributing features for each data point in the input features. XGBoost models and CatBoost multiclass classifiers are not currently supported with the SHAP algorithm. To explain XGBoost model predictions, use the LIME algorithm. The LIME algorithm does not currently support any CatBoost models. For Stacked Ensemble models, the SHAP value for each input pipeline's predict function into the metalearner is used. :param pipeline: Fitted pipeline whose predictions we want to explain with SHAP or LIME. :type pipeline: PipelineBase :param input_features: Dataframe of input data to evaluate the pipeline on. :type input_features: pd.DataFrame :param y: Labels for the input data. :type y: pd.Series :param indices_to_explain: List of integer indices to explain. :type indices_to_explain: list[int] :param top_k_features: How many of the highest/lowest contributing features to include in the table for each data point. Default is 3. :type top_k_features: int :param include_explainer_values: Whether explainer (SHAP or LIME) values should be included in the table. Default is False. :type include_explainer_values: bool :param include_expected_value: Whether the expected value should be included in the table. Default is False. :type include_expected_value: bool :param output_format: Either "text", "dict", or "dataframe". Default is "text". :type output_format: str :param training_data: Data the pipeline was trained on. Required and only used for time series pipelines. :type training_data: pd.DataFrame, np.ndarray :param training_target: Targets used to train the pipeline. Required and only used for time series pipelines. :type training_target: pd.Series, np.ndarray :param algorithm: Algorithm to use while generating top contributing features, one of "shap" or "lime". Defaults to "shap". :type algorithm: str :returns: A report explaining the top contributing features to each prediction for each row of input_features. The report will include the feature names, prediction contribution, and explainer value (optional). :rtype: str, dict, or pd.DataFrame :raises ValueError: if input_features is empty. :raises ValueError: if an output_format outside of "text", "dict", or "dataframe" is provided. :raises ValueError: if the requested index falls outside the input_features' boundaries. .. py:function:: explain_predictions_best_worst(pipeline, input_features, y_true, num_to_explain=5, top_k_features=3, include_explainer_values=False, metric=None, output_format='text', callback=None, training_data=None, training_target=None, algorithm='shap') Creates a report summarizing the top contributing features for the best and worst points in the dataset as measured by error to true labels. XGBoost models and CatBoost multiclass classifiers are not currently supported with the SHAP algorithm. To explain XGBoost model predictions, use the LIME algorithm. The LIME algorithm does not currently support any CatBoost models. For Stacked Ensemble models, the SHAP value for each input pipeline's predict function into the metalearner is used. :param pipeline: Fitted pipeline whose predictions we want to explain with SHAP or LIME. :type pipeline: PipelineBase :param input_features: Input data to evaluate the pipeline on. :type input_features: pd.DataFrame :param y_true: True labels for the input data. :type y_true: pd.Series :param num_to_explain: How many of the best, worst, random data points to explain. :type num_to_explain: int :param top_k_features: How many of the highest/lowest contributing features to include in the table for each data point. :type top_k_features: int :param include_explainer_values: Whether explainer (SHAP or LIME) values should be included in the table. Default is False. :type include_explainer_values: bool :param metric: The metric used to identify the best and worst points in the dataset. Function must accept the true labels and predicted value or probabilities as the only arguments and lower values must be better. By default, this will be the absolute error for regression problems and cross entropy loss for classification problems. :type metric: callable :param output_format: Either "text", "dict", or "dataframe". Default is "text". :type output_format: str :param callback: Function to be called with incremental updates. Has the following parameters: - progress_stage: stage of computation - time_elapsed: total time in seconds that has elapsed since start of call :type callback: callable :param training_data: Data the pipeline was trained on. Required and only used for time series pipelines. :type training_data: pd.DataFrame, np.ndarray :param training_target: Targets used to train the pipeline. Required and only used for time series pipelines. :type training_target: pd.Series, np.ndarray :param algorithm: Algorithm to use while generating top contributing features, one of "shap" or "lime". Defaults to "shap". :type algorithm: str :returns: A report explaining the top contributing features for the best/worst predictions in the input_features. For each of the best/worst rows of input_features, the predicted values, true labels, metric value, feature names, prediction contribution, and explainer value (optional) will be listed. :rtype: str, dict, or pd.DataFrame :raises ValueError: If input_features does not have more than twice the requested data points to explain. :raises ValueError: If y_true and input_features have mismatched lengths. :raises ValueError: If an output_format outside of "text", "dict", or "dataframe" is provided. :raises PipelineScoreError: If the pipeline errors out while scoring.
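A sketch of `explain_predictions_best_worst` under the same illustrative setup (demo data and component graph are assumptions, not requirements):

.. code-block:: python

    from evalml.demos import load_breast_cancer
    from evalml.pipelines import BinaryClassificationPipeline
    from evalml.model_understanding import explain_predictions_best_worst

    X, y = load_breast_cancer()
    pipeline = BinaryClassificationPipeline(
        ["Label Encoder", "Imputer", "Random Forest Classifier"]
    )
    pipeline.fit(X, y)

    # Explain the 2 best and 2 worst predictions, listing the top 3
    # contributing features and their SHAP values for each row.
    report = explain_predictions_best_worst(
        pipeline, X, y, num_to_explain=2, top_k_features=3,
        include_explainer_values=True,
    )
    print(report)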
.. py:function:: find_confusion_matrix_per_thresholds(pipeline, X, y, n_bins=None, top_k=5, to_json=False) Gets the confusion matrix and histogram bins for each threshold as well as the best threshold per objective. Only works with Binary Classification Pipelines. :param pipeline: A fitted Binary Classification Pipeline to get the confusion matrix with. :type pipeline: PipelineBase :param X: The input features. :type X: pd.DataFrame :param y: The input target. :type y: pd.Series :param n_bins: The number of bins to use to calculate the threshold values. Defaults to None, which will use the Freedman-Diaconis rule. :type n_bins: int :param top_k: The maximum number of row indices per bin to include as samples. -1 includes all row indices that fall between the bins. Defaults to 5. :type top_k: int :param to_json: Whether or not to return a json output. If False, returns the (DataFrame, dict) tuple, otherwise returns a json. :type to_json: bool :returns: The dataframe has the actual positive histogram, actual negative histogram, the confusion matrix, and a sample of rows that fall in the bin, all for each threshold value. The threshold value, represented through the dataframe index, represents the cutoff threshold at that value. The dictionary contains the ideal threshold and score per objective, keyed by objective name. If json, returns the info for both the dataframe and dictionary as a json output. :rtype: tuple(pd.DataFrame, dict) or json :raises ValueError: If the pipeline isn't a binary classification pipeline or isn't yet fitted on data. .. py:function:: get_linear_coefficients(estimator, features=None) Returns a dataframe showing the features with the greatest predictive power for a linear model. :param estimator: Fitted linear model family estimator. :type estimator: Estimator :param features: List of feature names associated with the underlying data. :type features: list[str] :returns: Dataframe displaying the features by importance. :rtype: pd.DataFrame :raises ValueError: If the model is not a linear model. :raises NotFittedError: If the model is not yet fitted.
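A sketch of `get_linear_coefficients` on a linear pipeline; the diabetes demo, the component graph, and pulling the final estimator via `pipeline.estimator` are illustrative assumptions:

.. code-block:: python

    from evalml.demos import load_diabetes
    from evalml.pipelines import RegressionPipeline
    from evalml.model_understanding import get_linear_coefficients

    X, y = load_diabetes()
    pipeline = RegressionPipeline(["Imputer", "Linear Regressor"])
    pipeline.fit(X, y)

    # Pass the fitted final estimator; returns a dataframe of
    # coefficients ordered by importance.
    coefficients = get_linear_coefficients(
        pipeline.estimator, features=list(X.columns)
    )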
.. py:function:: get_prediction_vs_actual_data(y_true, y_pred, outlier_threshold=None) Combines y_true and y_pred into a single dataframe and adds a column for outliers. Used in `graph_prediction_vs_actual()`. :param y_true: The real target values of the data. :type y_true: pd.Series, or np.ndarray :param y_pred: The predicted values outputted by the regression model. :type y_pred: pd.Series, or np.ndarray :param outlier_threshold: A positive threshold for what is considered an outlier value. This value is compared to the absolute difference between each value of y_true and y_pred. Values within this threshold will be blue, otherwise they will be yellow. Defaults to None. :type outlier_threshold: int, float :returns: * `prediction`: Predicted values from regression model. * `actual`: Real target values. * `outlier`: Colors indicating which values are in the threshold for what is considered an outlier value. :rtype: pd.DataFrame with the following columns :raises ValueError: If threshold is not positive. .. py:function:: get_prediction_vs_actual_over_time_data(pipeline, X, y, X_train, y_train, dates) Get the data needed for the prediction_vs_actual_over_time plot. :param pipeline: Fitted time series regression pipeline. :type pipeline: TimeSeriesRegressionPipeline :param X: Features used to generate new predictions. :type X: pd.DataFrame :param y: Target values to compare predictions against. :type y: pd.Series :param X_train: Data the pipeline was trained on. :type X_train: pd.DataFrame :param y_train: Target values for training data. :type y_train: pd.Series :param dates: Dates corresponding to target values and predictions. :type dates: pd.Series :returns: Predictions vs. time. :rtype: pd.DataFrame .. py:function:: graph_binary_objective_vs_threshold(pipeline, X, y, objective, steps=100) Generates a plot graphing objective score vs. decision thresholds for a fitted binary classification pipeline. :param pipeline: Fitted pipeline. :type pipeline: PipelineBase or subclass :param X: The input data used to score and compute scores. :type X: pd.DataFrame :param y: The target labels. :type y: pd.Series :param objective: Objective used to score, shown on the y-axis of the graph. :type objective: ObjectiveBase obj, str :param steps: Number of intervals to divide and calculate objective score at. :type steps: int :returns: plotly.Figure representing the objective score vs. threshold graph generated .. py:function:: graph_confusion_matrix(y_true, y_pred, normalize_method='true', title_addition=None) Generate and display a confusion matrix plot. If `normalize_method` is set, hover text will show raw count, otherwise hover text will show count normalized with method 'true'. :param y_true: True binary labels. :type y_true: pd.Series or np.ndarray :param y_pred: Predictions from a binary classifier. :type y_pred: pd.Series or np.ndarray :param normalize_method: Normalization method to use, if not None. Supported options are: 'true' to normalize by row, 'pred' to normalize by column, or 'all' to normalize by all values. Defaults to 'true'. :type normalize_method: {'true', 'pred', 'all', None} :param title_addition: If not None, append to plot title. Defaults to None. :type title_addition: str :returns: plotly.Figure representing the confusion matrix plot generated.
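A sketch of `graph_confusion_matrix`; the demo data and component graph mirror the earlier sketches and are illustrative assumptions:

.. code-block:: python

    from evalml.demos import load_breast_cancer
    from evalml.pipelines import BinaryClassificationPipeline
    from evalml.model_understanding import graph_confusion_matrix

    X, y = load_breast_cancer()
    pipeline = BinaryClassificationPipeline(
        ["Label Encoder", "Imputer", "Random Forest Classifier"]
    )
    pipeline.fit(X, y)

    # Plot the row-normalized confusion matrix for in-sample predictions.
    y_pred = pipeline.predict(X)
    fig = graph_confusion_matrix(y, y_pred, normalize_method="true")
    fig.show()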
.. py:function:: graph_partial_dependence(pipeline, X, features, class_label=None, grid_resolution=100, kind='average') Create a one-way or two-way partial dependence plot. Passing a single integer or string as features will create a one-way partial dependence plot with the feature values plotted against the partial dependence. Passing a tuple of ints/strings as features will create a two-way partial dependence plot with a contour of feature[0] on the y-axis, feature[1] on the x-axis, and the partial dependence on the z-axis. :param pipeline: Fitted pipeline. :type pipeline: PipelineBase or subclass :param X: The input data used to generate a grid of values for the feature(s) at which partial dependence will be calculated. :type X: pd.DataFrame, np.ndarray :param features: The target feature for which to create the partial dependence plot. If features is an int, it must be the index of the feature to use. If features is a string, it must be a valid column name in X. If features is a tuple of ints/strings, it must contain valid column indices/names in X. :type features: int, string, tuple[int or string] :param class_label: Name of class to plot for multiclass problems. If None, will plot the partial dependence for each class. This argument does not change behavior for regression or binary classification pipelines. For binary classification, the partial dependence for the positive label will always be displayed. Defaults to None. :type class_label: string, optional :param grid_resolution: Number of samples of feature(s) for partial dependence plot. :type grid_resolution: int :param kind: Type of partial dependence to plot. 'average' creates a regular partial dependence (PD) graph, 'individual' creates an individual conditional expectation (ICE) plot, and 'both' creates a single-figure PD and ICE plot. ICE plots can only be shown for one-way partial dependence plots. :type kind: {'average', 'individual', 'both'} :returns: figure object containing the partial dependence data for plotting :rtype: plotly.graph_objects.Figure :raises PartialDependenceError: if a graph is requested for a class name that isn't present in the pipeline. :raises PartialDependenceError: if an ICE plot is requested for a two-way partial dependence. .. py:function:: graph_permutation_importance(pipeline, X, y, objective, importance_threshold=0) Generate a bar graph of the pipeline's permutation importance. :param pipeline: Fitted pipeline. :type pipeline: PipelineBase or subclass :param X: The input data used to score and compute permutation importance. :type X: pd.DataFrame :param y: The target data. :type y: pd.Series :param objective: Objective to score on. :type objective: str, ObjectiveBase :param importance_threshold: If provided, graph features with a permutation importance whose absolute value is larger than importance_threshold. Defaults to 0. :type importance_threshold: float, optional :returns: plotly.Figure, a bar graph showing features and their respective permutation importance. :raises ValueError: If importance_threshold is not greater than or equal to 0. .. py:function:: graph_precision_recall_curve(y_true, y_pred_proba, title_addition=None) Generate and display a precision-recall plot. :param y_true: True binary labels. :type y_true: pd.Series or np.ndarray :param y_pred_proba: Predictions from a binary classifier, before thresholding has been applied. Note this should be the predicted probability for the "true" label. :type y_pred_proba: pd.Series or np.ndarray :param title_addition: If not None, append to plot title. Defaults to None. :type title_addition: str or None :returns: plotly.Figure representing the precision-recall plot generated
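A sketch of one-way `graph_partial_dependence`; the demo dataset, component graph, and the "mean radius" column are illustrative assumptions:

.. code-block:: python

    from evalml.demos import load_breast_cancer
    from evalml.pipelines import BinaryClassificationPipeline
    from evalml.model_understanding import graph_partial_dependence

    X, y = load_breast_cancer()
    pipeline = BinaryClassificationPipeline(
        ["Label Encoder", "Imputer", "Random Forest Classifier"]
    )
    pipeline.fit(X, y)

    # One-way partial dependence of the pipeline on a single column,
    # evaluated on a coarse 20-point grid.
    fig = graph_partial_dependence(
        pipeline, X, features="mean radius", grid_resolution=20
    )
    fig.show()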
.. py:function:: graph_prediction_vs_actual(y_true, y_pred, outlier_threshold=None) Generate a scatter plot comparing the true and predicted values. Used for regression plotting. :param y_true: The real target values of the data. :type y_true: pd.Series :param y_pred: The predicted values outputted by the regression model. :type y_pred: pd.Series :param outlier_threshold: A positive threshold for what is considered an outlier value. This value is compared to the absolute difference between each value of y_true and y_pred. Values within this threshold will be blue, otherwise they will be yellow. Defaults to None. :type outlier_threshold: int, float :returns: plotly.Figure representing the predicted vs. actual values graph :raises ValueError: If threshold is not positive. .. py:function:: graph_prediction_vs_actual_over_time(pipeline, X, y, X_train, y_train, dates, single_series=None) Plot the target values and predictions against time on the x-axis. :param pipeline: Fitted time series regression pipeline. :type pipeline: TimeSeriesRegressionPipeline :param X: Features used to generate new predictions. If problem is multiseries, X should be stacked. :type X: pd.DataFrame :param y: Target values to compare predictions against. If problem is multiseries, y should be stacked. :type y: pd.Series :param X_train: Data the pipeline was trained on. :type X_train: pd.DataFrame :param y_train: Target values for training data. :type y_train: pd.Series :param dates: Dates corresponding to target values and predictions. :type dates: pd.Series :param single_series: A single series id value to plot just one series in a multiseries dataset. Defaults to None. :type single_series: str :returns: Figure showing the prediction vs. actual values over time. :rtype: plotly.Figure :raises ValueError: If the pipeline is not a time-series regression pipeline. .. py:function:: graph_roc_curve(y_true, y_pred_proba, custom_class_names=None, title_addition=None) Generate and display a Receiver Operating Characteristic (ROC) plot for binary and multiclass classification problems. :param y_true: True labels. :type y_true: pd.Series or np.ndarray :param y_pred_proba: Predictions from a classifier, before thresholding has been applied. Note this should be a one-dimensional array with the predicted probability for the "true" label in the binary case. :type y_pred_proba: pd.Series or np.ndarray :param custom_class_names: If not None, custom labels for classes. Defaults to None. :type custom_class_names: list or None :param title_addition: If not None, append to plot title. Defaults to None. :type title_addition: str or None :returns: plotly.Figure representing the ROC plot generated :raises ValueError: If the number of custom class names does not match number of classes in the input data. .. py:function:: graph_t_sne(X, n_components=2, perplexity=30.0, learning_rate=200.0, metric='euclidean', marker_line_width=2, marker_size=7, **kwargs) Plot high dimensional data into lower dimensional space using t-SNE. :param X: Data to be transformed. Must be numeric. :type X: np.ndarray, pd.DataFrame :param n_components: Dimension of the embedded space. Defaults to 2. :type n_components: int :param perplexity: Related to the number of nearest neighbors that is used in other manifold learning algorithms. Larger datasets usually require a larger perplexity. Consider selecting a value between 5 and 50. Defaults to 30. :type perplexity: float :param learning_rate: Usually in the range [10.0, 1000.0]. If the cost function gets stuck in a bad local minimum, increasing the learning rate may help. Must be positive. Defaults to 200. :type learning_rate: float :param metric: The metric to use when calculating distance between instances in a feature array. The default is "euclidean", which is interpreted as the squared euclidean distance. :type metric: str :param marker_line_width: Determines the line width of the marker boundary. Defaults to 2. :type marker_line_width: int :param marker_size: Determines the size of the marker. Defaults to 7. :type marker_size: int :param kwargs: Arbitrary keyword arguments. :returns: Figure representing the transformed data. :rtype: plotly.Figure :raises ValueError: If marker_line_width or marker_size are not valid values.
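A sketch of `graph_roc_curve` for the binary case; the setup is an illustrative assumption, and the positive-class column is pulled from the `predict_proba` output:

.. code-block:: python

    from evalml.demos import load_breast_cancer
    from evalml.pipelines import BinaryClassificationPipeline
    from evalml.model_understanding import graph_roc_curve

    X, y = load_breast_cancer()
    pipeline = BinaryClassificationPipeline(
        ["Label Encoder", "Imputer", "Random Forest Classifier"]
    )
    pipeline.fit(X, y)

    # For binary problems, pass the one-dimensional probabilities of the
    # positive class (assumed here to be the last predict_proba column).
    y_pred_proba = pipeline.predict_proba(X)
    fig = graph_roc_curve(y, y_pred_proba.iloc[:, -1])
    fig.show()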
.. py:function:: normalize_confusion_matrix(conf_mat, normalize_method='true') Normalizes a confusion matrix. :param conf_mat: Confusion matrix to normalize. :type conf_mat: pd.DataFrame or np.ndarray :param normalize_method: Normalization method. Supported options are: 'true' to normalize by row, 'pred' to normalize by column, or 'all' to normalize by all values. Defaults to 'true'. :type normalize_method: {'true', 'pred', 'all'} :returns: Normalized version of the input confusion matrix. The column header represents the predicted labels while row header represents the actual labels. :rtype: pd.DataFrame :raises ValueError: If configuration is invalid, or if the sum of a given axis is zero and normalization by axis is specified.
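A sketch pairing `confusion_matrix` with `normalize_confusion_matrix`; the labels below are illustrative:

.. code-block:: python

    import pandas as pd
    from evalml.model_understanding import confusion_matrix, normalize_confusion_matrix

    y_true = pd.Series([0, 1, 1, 0, 1, 1])
    y_pred = pd.Series([0, 1, 0, 0, 1, 1])

    # Build a raw-count matrix first, then normalize by column ('pred').
    raw = confusion_matrix(y_true, y_pred, normalize_method=None)
    by_column = normalize_confusion_matrix(raw, normalize_method="pred")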
.. py:function:: partial_dependence(pipeline, X, features, percentiles=(0.05, 0.95), grid_resolution=100, kind='average', fast_mode=False, X_train=None, y_train=None) Calculates one or two-way partial dependence. If a single integer or string is given for features, one-way partial dependence is calculated. If a tuple of two integers or strings is given, two-way partial dependence is calculated with the first feature in the y-axis and second feature in the x-axis. :param pipeline: Fitted pipeline. :type pipeline: PipelineBase or subclass :param X: The input data used to generate a grid of values for the feature(s) at which partial dependence will be calculated. :type X: pd.DataFrame, np.ndarray :param features: The target feature for which to create the partial dependence plot. If features is an int, it must be the index of the feature to use. If features is a string, it must be a valid column name in X. If features is a tuple of ints/strings, it must contain valid column indices/names in X. :type features: int, string, tuple[int or string] :param percentiles: The lower and upper percentile used to create the extreme values for the grid. Must be in [0, 1]. Defaults to (0.05, 0.95). :type percentiles: tuple[float] :param grid_resolution: Number of samples of feature(s) for partial dependence plot. If this value is less than the maximum number of categories present in categorical data within X, it will be set to the max number of categories + 1. Defaults to 100. :type grid_resolution: int :param kind: The type of predictions to return. 'individual' will return the predictions for all of the points in the grid for each sample in X. 'average' will return the predictions for all of the points in the grid but averaged over all of the samples in X. 'both' will return both the averaged and individual predictions. :type kind: {'average', 'individual', 'both'} :param fast_mode: Whether or not performance optimizations should be used for partial dependence calculations. Defaults to False. Note that user-specified components may not produce correct partial dependence results, so fast mode should only be used with EvalML-native components. Additionally, some components are not compatible with fast mode; in those cases, an error will be raised indicating that fast mode should not be used. :type fast_mode: bool, optional :param X_train: The data that was used to train the original pipeline. Will be used in fast mode to train the cloned pipelines. Defaults to None. :type X_train: pd.DataFrame, np.ndarray :param y_train: The target data that was used to train the original pipeline. Will be used in fast mode to train the cloned pipelines. Defaults to None. :type y_train: pd.Series, np.ndarray :returns: When `kind='average'`: DataFrame with averaged predictions for all points in the grid averaged over all samples of X and the values used to calculate those predictions. When `kind='individual'`: DataFrame with individual predictions for all points in the grid for each sample of X and the values used to calculate those predictions. If a two-way partial dependence is calculated, then the result is a list of DataFrames with each DataFrame representing one sample's predictions. When `kind='both'`: A tuple consisting of the averaged predictions (in a DataFrame) over all samples of X and the individual predictions (in a list of DataFrames) for each sample of X. In the one-way case: The dataframe will contain two columns, "feature_values" (grid points at which the partial dependence was calculated) and "partial_dependence" (the partial dependence at that feature value). For classification problems, there will be a third column called "class_label" (the class label for which the partial dependence was calculated). For binary classification, the partial dependence is only calculated for the "positive" class. In the two-way case: The data frame will contain grid_resolution number of columns and rows where the index and column headers are the sampled values of the first and second features, respectively, used to make the partial dependence contour. The values of the data frame contain the partial dependence data for each feature value pair. :rtype: pd.DataFrame, list(pd.DataFrame), or tuple(pd.DataFrame, list(pd.DataFrame)) :raises ValueError: Error during call to scikit-learn's partial dependence method. :raises Exception: All other errors during calculation. :raises PartialDependenceError: if the user provides a tuple of not exactly two features. :raises PartialDependenceError: if the provided pipeline isn't fitted. :raises PartialDependenceError: if the provided pipeline is a Baseline pipeline. :raises PartialDependenceError: if any of the features passed in are completely NaN. :raises PartialDependenceError: if any of the features are low-variance. Defined as having one value occurring in more than the upper percentile (95% by default) of rows.
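A sketch of one-way and two-way `partial_dependence`; the demo data, component graph, and column names are illustrative assumptions:

.. code-block:: python

    from evalml.demos import load_breast_cancer
    from evalml.pipelines import BinaryClassificationPipeline
    from evalml.model_understanding import partial_dependence

    X, y = load_breast_cancer()
    pipeline = BinaryClassificationPipeline(
        ["Label Encoder", "Imputer", "Random Forest Classifier"]
    )
    pipeline.fit(X, y)

    # One-way: DataFrame with "feature_values" and "partial_dependence"
    # columns (plus "class_label" for classification problems).
    one_way = partial_dependence(
        pipeline, X, features="mean radius", grid_resolution=20
    )

    # Two-way: grid-shaped DataFrame indexed by the first feature's sampled
    # values, with columns for the second feature's sampled values.
    two_way = partial_dependence(
        pipeline, X, features=("mean radius", "mean texture"), grid_resolution=20
    )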
.. py:function:: precision_recall_curve(y_true, y_pred_proba, pos_label_idx=-1) Given labels and binary classifier predicted probabilities, compute and return the data representing a precision-recall curve. :param y_true: True binary labels. :type y_true: pd.Series or np.ndarray :param y_pred_proba: Predictions from a binary classifier, before thresholding has been applied. Note this should be the predicted probability for the "true" label. :type y_pred_proba: pd.Series or np.ndarray :param pos_label_idx: The column index corresponding to the positive class. If predicted probabilities are two-dimensional, this will be used to access the probabilities for the positive class. :type pos_label_idx: int :returns: Dictionary containing metrics used to generate a precision-recall plot, with the following keys: * `precision`: Precision values. * `recall`: Recall values. * `thresholds`: Threshold values used to produce the precision and recall. * `auc_score`: The area under the precision-recall curve. :rtype: dict :raises NoPositiveLabelException: If predicted probabilities do not contain a column at the specified label. .. py:function:: roc_curve(y_true, y_pred_proba) Given labels and classifier predicted probabilities, compute and return the data representing a Receiver Operating Characteristic (ROC) curve. Works with binary or multiclass problems. :param y_true: True labels. :type y_true: pd.Series or np.ndarray :param y_pred_proba: Predictions from a classifier, before thresholding has been applied. :type y_pred_proba: pd.Series or pd.DataFrame or np.ndarray :returns: A list of dictionaries (with one for each class) is returned. Binary classification problems return a list with one dictionary. Each dictionary contains metrics used to generate an ROC plot with the following keys: * `fpr_rate`: False positive rate. * `tpr_rate`: True positive rate. * `threshold`: Threshold values used to produce each pair of true/false positive rates. * `auc_score`: The area under the ROC curve. :rtype: list(dict) .. py:function:: t_sne(X, n_components=2, perplexity=30.0, learning_rate=200.0, metric='euclidean', **kwargs) Get the transformed output after fitting X to the embedded space using t-SNE. :param X: Data to be transformed. Must be numeric. :type X: np.ndarray, pd.DataFrame :param n_components: Dimension of the embedded space. :type n_components: int, optional :param perplexity: Related to the number of nearest neighbors that is used in other manifold learning algorithms. Larger datasets usually require a larger perplexity. Consider selecting a value between 5 and 50. :type perplexity: float, optional :param learning_rate: Usually in the range [10.0, 1000.0]. If the cost function gets stuck in a bad local minimum, increasing the learning rate may help. :type learning_rate: float, optional :param metric: The metric to use when calculating distance between instances in a feature array. :type metric: str, optional :param kwargs: Arbitrary keyword arguments. :returns: TSNE output. :rtype: np.ndarray (n_samples, n_components) :raises ValueError: If specified parameters are not valid values.
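A sketch of `t_sne` on numeric features; the demo data is an illustrative assumption (t-SNE on a full dataset can be slow):

.. code-block:: python

    from evalml.demos import load_breast_cancer
    from evalml.model_understanding import t_sne

    X, y = load_breast_cancer()

    # Embed the numeric feature matrix into 2 dimensions; returns an
    # np.ndarray of shape (n_samples, 2).
    embedding = t_sne(X, n_components=2, perplexity=30.0, learning_rate=200.0)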