Precision
How good are your model's predictions?
Precision estimates how many of a classifier's positive predictions are actually correct. It is derived from the confusion matrix and is the counterpart of recall.
Sometimes, it's also called positive predictive value.

Interpretation / calculation

Precision is the quotient of the true positives (TP) over all positive predictions (TP + FP):
Precision = \frac{TP}{TP + FP}
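For example, a model that makes 120 positive predictions of which 90 are correct (TP = 90, FP = 30) has a precision of 90 / (90 + 30) = 0.75.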
For multi-class models, the values needed to compute precision are taken from the confusion matrices of all classes.
For instance and semantic segmentation models as well as object detectors, the confusion matrix is built by first checking whether the predicted class matches the ground truth and then whether the IoU between prediction and ground truth is above a certain threshold; 0.5 is a common choice.
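As a minimal sketch of that matching step, assuming axis-aligned boxes in (x1, y1, x2, y2) format (the helper names here are ours, not from any particular library):

def iou(box_a, box_b):
    # Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def is_true_positive(pred_class, pred_box, gt_class, gt_box, threshold=0.5):
    # A detection counts as a TP only if the class matches AND the overlap is sufficient.
    return pred_class == gt_class and iou(pred_box, gt_box) >= threshold

print(is_true_positive(1, (0, 0, 10, 10), 1, (2, 0, 12, 10)))  # IoU = 80/120 ≈ 0.67 -> True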
The higher your precision, the fewer false positives (FP) your model generates. This makes precision a great metric for evaluating spam filters, for example: there you want to minimize FPs, because each false positive is an important email landing in the spam folder.
Precision usually correlates negatively with recall: tuning a model to produce fewer false positives typically makes it miss more true positives, as the sketch below illustrates.
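To see this trade-off in numbers, here is a small scikit-learn sketch using precision_recall_curve; the labels and scores are toy values chosen purely for illustration. As the decision threshold shifts, the two metrics generally move in opposite directions.

from sklearn.metrics import precision_recall_curve

# Toy ground truth and classifier scores, purely for illustration.
y_true = [0, 0, 1, 1]
y_scores = [0.1, 0.4, 0.35, 0.8]

# Precision and recall evaluated at every decision threshold.
precision, recall, thresholds = precision_recall_curve(y_true, y_scores)
print(precision)  # [0.66666667 0.5        1.         1.        ]
print(recall)     # [1.  0.5 0.5 0. ]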

Code implementation

PyTorch
!pip install torchmetrics

import torch
from torchmetrics import Precision

preds = torch.tensor([2, 0, 2, 1])
target = torch.tensor([1, 1, 2, 0])

# 'macro': calculate the metric for each class separately, then average
# the per-class metrics with equal weights for each class.
# Recent torchmetrics versions require the task argument.
precision = Precision(task='multiclass', num_classes=3, average='macro')
print(precision(preds, target))  # tensor(0.1667)

# 'micro': calculate the metric globally, across all samples and classes.
precision_micro = Precision(task='multiclass', num_classes=3, average='micro')
print(precision_micro(preds, target))  # tensor(0.2500)
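For these tensors, only the class-2 prediction at index 2 matches the target, so the per-class precisions are 0, 0, and 0.5, giving a macro score of (0 + 0 + 0.5) / 3 ≈ 0.167; the micro score counts 1 correct prediction out of 4, i.e. 0.25.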
Sklearn

from sklearn.metrics import precision_score

# define actual values: 100 positives, 10,000 negatives
act_pos = [1 for _ in range(100)]
act_neg = [0 for _ in range(10000)]
y_true = act_pos + act_neg

# define predictions: 90 true positives and 10 false negatives ...
pred_pos = [0 for _ in range(10)] + [1 for _ in range(90)]
# ... plus 30 false positives among the negatives
pred_neg = [1 for _ in range(30)] + [0 for _ in range(9970)]
y_pred = pred_pos + pred_neg

# calculate precision: 90 / (90 + 30)
precision = precision_score(y_true, y_pred, average='binary')
print('Precision: %.3f' % precision)
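With the false positives included above, the script prints Precision: 0.750, i.e. 90 / (90 + 30). For multi-class targets, replace average='binary' with 'micro' or 'macro', mirroring the torchmetrics options above.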
TensorFlow

tf.keras.metrics.Precision(
    thresholds=None, top_k=None, class_id=None, name=None, dtype=None
)
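A minimal usage sketch, with input values invented for illustration: the Keras metric is stateful, so it accumulates counts across batches via update_state and reports the running value via result.

import tensorflow as tf

# Stateful Keras metric: feed (y_true, y_pred) batches, then read the result.
precision = tf.keras.metrics.Precision()
precision.update_state([0, 1, 1, 1], [1, 0, 1, 1])
print(precision.result().numpy())  # 0.6666667: 2 TPs out of 3 positive predictions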
