cost_benefit_matrix¶
Module Contents¶
Classes Summary¶
CostBenefitMatrix: Score using a cost-benefit matrix. Scores quantify the benefits of a given value, so greater numeric scores represent a better score.
Contents¶
class evalml.objectives.cost_benefit_matrix.CostBenefitMatrix(true_positive, true_negative, false_positive, false_negative)[source]¶
Score using a cost-benefit matrix. Scores quantify the benefits of a given value, so greater numeric scores represent a better score. Costs and scores can be negative, indicating that a value is not beneficial. For example, in the case of monetary profit, a negative cost and/or score represents a loss of cash flow. A brief usage sketch follows the method summary below.
- Parameters
true_positive (float) – Cost associated with true positive predictions
true_negative (float) – Cost associated with true negative predictions
false_positive (float) – Cost associated with false positive predictions
false_negative (float) – Cost associated with false negative predictions
Attributes
greater_is_better: True
is_bounded_like_percentage: False
name: Cost Benefit Matrix
perfect_score: None
problem_types: [ProblemTypes.BINARY, ProblemTypes.TIME_SERIES_BINARY]
score_needs_proba: False
Methods
calculate_percent_difference: Calculate the percent difference between scores.
can_optimize_threshold: Returns a boolean determining if we can optimize the binary classification objective threshold.
decision_function: Apply a learned threshold to predicted probabilities to get predicted classes.
objective_function: Calculates the cost-benefit score using the predicted and true values.
optimize_threshold: Learn a binary classification threshold which optimizes the current objective.
positive_only: If True, this objective is only valid for positive data. Default False.
score: Returns a numerical score indicating performance based on the differences between the predicted and actual values.
validate_inputs: Validates the input based on a few simple checks.
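A minimal usage sketch of the constructor documented above. The dollar amounts are made-up placeholders, not values from the documentation; only the four documented parameters and the attributes listed above are relied on.

```python
from evalml.objectives import CostBenefitMatrix

# Hypothetical dollar values: benefits are positive, costs are negative.
cost_benefit = CostBenefitMatrix(
    true_positive=100,    # e.g. profit from acting on a correctly flagged case
    true_negative=0,      # no action taken, no cost
    false_positive=-10,   # e.g. cost of an unnecessary follow-up
    false_negative=-200,  # e.g. loss from a missed case
)

print(cost_benefit.name)               # "Cost Benefit Matrix"
print(cost_benefit.greater_is_better)  # True
```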
classmethod calculate_percent_difference(cls, score, baseline_score)¶
Calculate the percent difference between scores.
- Parameters
score (float) – A score. Output of the score method of this objective.
baseline_score (float) – A score. Output of the score method of this objective. In practice, this is the score achieved on this objective with a baseline estimator.
- Returns
The percent difference between the scores. Note that for objectives that can be interpreted as percentages, this will be the difference between the reference score and score. For all other objectives, the difference will be normalized by the reference score.
- Return type
float
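A small sketch of calling this classmethod; both score values are invented for illustration, and the printed result is only what the description above suggests, not an asserted output.

```python
from evalml.objectives import CostBenefitMatrix

# Hypothetical scores achieved on this objective by a pipeline and a baseline.
baseline_score = 20.0
score = 25.0

# Cost Benefit Matrix is not bounded like a percentage, so per the note above
# the difference is normalized by the baseline (reference) score.
pct_diff = CostBenefitMatrix.calculate_percent_difference(
    score=score, baseline_score=baseline_score
)
print(pct_diff)  # expected to reflect roughly a 25% improvement over the baseline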
property can_optimize_threshold(cls)¶
Returns a boolean determining if we can optimize the binary classification objective threshold. This will be false for any objective that works directly with predicted probabilities, like log loss and AUC. Otherwise, it will be true.
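A sketch checking the property on an instance with made-up costs. Since score_needs_proba is False for this objective, the property is expected to report that the threshold can be optimized.

```python
from evalml.objectives import CostBenefitMatrix

objective = CostBenefitMatrix(true_positive=100, true_negative=0,
                              false_positive=-10, false_negative=-200)

# This objective works with predicted classes rather than probabilities,
# so optimizing a decision threshold should be supported.
print(objective.can_optimize_threshold)  # expected: True
```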
decision_function(self, ypred_proba, threshold=0.5, X=None)¶
Apply a learned threshold to predicted probabilities to get predicted classes.
- Parameters
ypred_proba (pd.Series, np.ndarray) – The classifier’s predicted probabilities
threshold (float, optional) – Threshold used to make a prediction. Defaults to 0.5.
X (pd.DataFrame, optional) – Any extra columns that are needed from training data.
- Returns
predictions
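A minimal sketch of thresholding predicted probabilities; the probabilities, the 0.7 threshold, and the cost values are arbitrary illustrations.

```python
import pandas as pd
from evalml.objectives import CostBenefitMatrix

objective = CostBenefitMatrix(true_positive=100, true_negative=0,
                              false_positive=-10, false_negative=-200)

# Hypothetical positive-class probabilities for four samples.
ypred_proba = pd.Series([0.2, 0.6, 0.9, 0.4])

# Predict the positive class only when the probability exceeds 0.7.
predictions = objective.decision_function(ypred_proba, threshold=0.7)
print(predictions.tolist())  # expected: only the 0.9 sample is predicted positive
```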
classmethod is_defined_for_problem_type(cls, problem_type)¶
Returns whether or not an objective is defined for a problem type.
objective_function(self, y_true, y_predicted, X=None, sample_weight=None)[source]¶
Calculates the cost-benefit score using the predicted and true values.
- Parameters
y_predicted (pd.Series) – Predicted labels
y_true (pd.Series) – True labels
X (pd.DataFrame) – Ignored.
sample_weight (pd.DataFrame) – Ignored.
- Returns
Cost-benefit matrix score
- Return type
float
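A sketch with toy labels to show the call; the labels and costs are invented, and since the exact aggregation (per-sample versus total) is not spelled out here, no output value is asserted.

```python
import pandas as pd
from evalml.objectives import CostBenefitMatrix

objective = CostBenefitMatrix(true_positive=100, true_negative=0,
                              false_positive=-10, false_negative=-200)

y_true = pd.Series([0, 1, 1, 0, 1])
y_predicted = pd.Series([0, 1, 0, 1, 1])

# Each prediction lands in one cell of the cost-benefit matrix
# (TP, TN, FP, or FN) and contributes that cell's cost or benefit.
print(objective.objective_function(y_true, y_predicted))
```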
optimize_threshold(self, ypred_proba, y_true, X=None)¶
Learn a binary classification threshold which optimizes the current objective.
- Parameters
ypred_proba (pd.Series) – The classifier’s predicted probabilities
y_true (pd.Series) – The ground truth for the predictions.
X (pd.DataFrame, optional) – Any extra columns that are needed from training data.
- Returns
Optimal threshold for this objective
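A sketch of threshold optimization on toy data; the labels, probabilities, and costs are all invented placeholders.

```python
import pandas as pd
from evalml.objectives import CostBenefitMatrix

objective = CostBenefitMatrix(true_positive=100, true_negative=0,
                              false_positive=-10, false_negative=-200)

y_true = pd.Series([0, 1, 1, 0, 1, 0])
ypred_proba = pd.Series([0.1, 0.8, 0.4, 0.3, 0.9, 0.6])

# Search for the probability threshold that maximizes the cost-benefit score.
best_threshold = objective.optimize_threshold(ypred_proba, y_true)
print(best_threshold)
```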
positive_only(cls)¶
If True, this objective is only valid for positive data. Default False.
score(self, y_true, y_predicted, X=None, sample_weight=None)¶
Returns a numerical score indicating performance based on the differences between the predicted and actual values.
- Parameters
y_predicted (pd.Series) – Predicted values of length [n_samples]
y_true (pd.Series) – Actual class labels of length [n_samples]
X (pd.DataFrame or np.ndarray) – Extra data of shape [n_samples, n_features] necessary to calculate score
sample_weight (pd.DataFrame or np.ndarray) – Sample weights used in computing objective value result
- Returns
score
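A sketch of scoring two hypothetical sets of hard predictions; labels and costs are placeholders. Because greater_is_better is True for this objective, the perfect predictions should receive the higher score.

```python
import pandas as pd
from evalml.objectives import CostBenefitMatrix

objective = CostBenefitMatrix(true_positive=100, true_negative=0,
                              false_positive=-10, false_negative=-200)

y_true = pd.Series([0, 1, 1, 0, 1])
better = pd.Series([0, 1, 1, 0, 1])  # perfect predictions
worse = pd.Series([0, 0, 0, 0, 0])   # misses every positive case

# The perfect predictions are expected to score higher than the all-negative ones.
print(objective.score(y_true, better))
print(objective.score(y_true, worse))
```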
validate_inputs(self, y_true, y_predicted)¶
Validates the input based on a few simple checks.
- Parameters
y_predicted (pd.Series or pd.DataFrame) – Predicted values of length [n_samples]
y_true (pd.Series) – Actual class labels of length [n_samples]
- Returns
None