Precision is the quotient of the true positives (TP) over all positive predictions (TP + FP):

Precision = TP / (TP + FP)

For a multi-class model, precision is computed from each class's confusion matrix and then aggregated across classes (see the macro and micro averages below).
The higher your precision is, the fewer false positives (FP) your model generates. This makes it a great metric for evaluating spam filters, for example: you want to minimize FPs, because every false positive is an important email landing in the spam folder.
Precision usually correlates negatively with recall: tuning a model to produce fewer false positives typically makes it miss more true positives, as the sketch below illustrates.
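A minimal sketch of this trade-off, using scikit-learn's precision_recall_curve on small illustrative labels and scores (the values are made up for demonstration):

from sklearn.metrics import precision_recall_curve

# Illustrative ground truth and predicted scores (made-up values)
y_true = [0, 0, 1, 0, 1, 1, 0, 1]
y_scores = [0.1, 0.3, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9]

# precision_recall_curve evaluates precision and recall at every
# candidate decision threshold.
precision, recall, thresholds = precision_recall_curve(y_true, y_scores)
for p, r, t in zip(precision, recall, thresholds):
    print(f"threshold={t:.2f}  precision={p:.2f}  recall={r:.2f}")

# As the threshold rises, recall can only fall, while precision
# tends (though not strictly) to rise.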
The torchmetrics package implements precision for PyTorch tensors. Note that recent torchmetrics versions (0.11+) require the task argument:

!pip install torchmetrics

import torch
from torchmetrics import Precision

preds = torch.tensor([2, 0, 2, 1])
target = torch.tensor([1, 1, 2, 0])

# 'macro': calculate the metric for each class separately, then average
# the per-class results (with equal weight for each class).
precision_macro = Precision(task='multiclass', num_classes=3, average='macro')
print(precision_macro(preds, target))  # tensor(0.1667)

# 'micro': calculate the metric globally, across all samples and classes.
precision_micro = Precision(task='multiclass', num_classes=3, average='micro')
print(precision_micro(preds, target))  # tensor(0.2500)
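Only one of the four predictions is correct (the third sample, class 2), so the micro average is 1/4 = 0.25. The per-class precisions are 0.0, 0.0, and 0.5, and their mean yields the macro average of roughly 0.17.

Recall is the complementary metric: the quotient of the true positives over all actual positives (TP + FN). With scikit-learn, it can be computed directly, here on a heavily imbalanced dataset: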
from sklearn.metrics import recall_score

# define actual: 100 positives, 10,000 negatives
act_pos = [1 for _ in range(100)]
act_neg = [0 for _ in range(10000)]
y_true = act_pos + act_neg

# define predictions: 90 of the 100 positives are found
pred_pos = [0 for _ in range(10)] + [1 for _ in range(90)]
pred_neg = [0 for _ in range(10000)]
y_pred = pred_pos + pred_neg

# calculate recall
recall = recall_score(y_true, y_pred, average='binary')
print('Recall: %.3f' % recall)
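Of the 100 actual positives, 90 are predicted correctly and 10 are missed, so the script prints Recall: 0.900.

Keras ships precision as a built-in metric with the following signature: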
tf.keras.metrics.Precision(thresholds=None, top_k=None, class_id=None, name=None, dtype=None)
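A minimal usage sketch (the label and prediction values here are illustrative); the metric object can also be passed to model.compile(metrics=[...]) to track precision during training:

import tensorflow as tf

precision = tf.keras.metrics.Precision()
# y_true first, then y_pred: two of the three positive predictions are correct
precision.update_state([0, 1, 1, 1], [1, 0, 1, 1])
print(precision.result().numpy())  # ~0.6667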