Precision

How good are your model's predictions?

Precision estimates how many of a classifier's positive predictions are actually correct. It is derived from the confusion matrix and is the counterpart of recall.

Sometimes, it's also called positive predictive value.

Interpretation / calculation

Precision is the number of true positives (TP) divided by the total number of positive predictions (TP + FP):

Precision = \frac{TP}{TP+FP}
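
As a quick sanity check with toy counts (assumed purely for illustration):

tp, fp = 90, 30  # assumed counts of true and false positives
print(tp / (tp + fp))  # 0.75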

For a multi-class model, the TP and FP counts are taken from the confusion matrices of all classes.
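
These per-class values can be read off the confusion matrix directly; a minimal scikit-learn sketch with toy labels:

import numpy as np
from sklearn.metrics import confusion_matrix
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 1]
cm = confusion_matrix(y_true, y_pred)  # rows: ground truth, columns: predictions
# per-class precision: diagonal (TP) over column sums (all predictions for that class)
print(np.diag(cm) / cm.sum(axis=0))  # [1.0, 0.5, 0.667]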

For instance and semantic segmentation models, as well as for object detectors, the confusion matrix is built by first checking whether the predicted class matches the ground truth and then whether the IoU (Intersection over Union) between prediction and ground truth is above a certain threshold. Often, 0.5 is used.
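
To make the IoU check concrete, here is a minimal sketch for axis-aligned boxes; the [x1, y1, x2, y2] box format and the example boxes are assumptions for illustration:

def iou(box_a, box_b):
    # intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    # union = sum of the two areas minus the intersection
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# a prediction counts as a TP only if the class matches and IoU >= 0.5
print(iou([0, 0, 10, 10], [5, 5, 15, 15]) >= 0.5)  # False, IoU is about 0.14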

The higher your precision, the fewer false positives (FP) your model generates. This makes it a great metric for evaluating spam filters, for example: there, you want to minimize FPs because a false positive means an important email lands in the spam folder.

Usually, precision correlates negatively with recall.
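
This trade-off becomes visible when you sweep the decision threshold, for example with scikit-learn's precision_recall_curve; the labels and scores below are toy values, assumed for illustration:

from sklearn.metrics import precision_recall_curve
y_true = [0, 0, 1, 1, 0, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.7, 0.6]  # assumed model confidence scores
precision, recall, thresholds = precision_recall_curve(y_true, scores)
# each pair shows how precision and recall move against each other as the threshold changes
for p, r in zip(precision, recall):
    print('precision=%.2f recall=%.2f' % (p, r))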

Code implementation

PyTorch
!pip install torchmetrics
import torch
from torchmetrics import Precision
preds = torch.tensor([2, 0, 2, 1])
target = torch.tensor([1, 1, 2, 0])
# 'macro': calculate the metric for each class separately, then average the per-class values (equal weight for each class)
precision_macro = Precision(average='macro', num_classes=3)
print(precision_macro(preds, target))
# 'micro': calculate the metric globally, across all samples and classes
precision_micro = Precision(average='micro')
print(precision_micro(preds, target))
# Note: torchmetrics >= 0.11 also requires a task argument, e.g. Precision(task='multiclass', num_classes=3, average='macro')
Sklearn
from sklearn.metrics import precision_score
# define actual values: 100 positives, 10,000 negatives
act_pos = [1 for _ in range(100)]
act_neg = [0 for _ in range(10000)]
y_true = act_pos + act_neg
# define predictions: 90 of the 100 positives are found...
pred_pos = [0 for _ in range(10)] + [1 for _ in range(90)]
# ...and 30 negatives are wrongly flagged as positive (false positives)
pred_neg = [1 for _ in range(30)] + [0 for _ in range(9970)]
y_pred = pred_pos + pred_neg
# calculate precision: 90 TP / (90 TP + 30 FP) = 0.75
precision = precision_score(y_true, y_pred, average='binary')
print('Precision: %.3f' % precision)
TensorFlow
tf.keras.metrics.Precision(
    thresholds=None, top_k=None, class_id=None, name=None, dtype=None
)
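
A minimal standalone usage example with toy labels (mirroring the signature above):

import tensorflow as tf
precision = tf.keras.metrics.Precision()
precision.update_state([0, 1, 1, 1], [1, 0, 1, 1])  # (y_true, y_pred)
print(precision.result().numpy())  # 2 TP / (2 TP + 1 FP) = 0.667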

Further resources