Recall
How good is your model at picking things up?
Recall indicates how many of the ground truth labels your classifier is able to pick up. It is derived from the confusion matrix and is the counterpart of precision.
Recall also goes by the names true positive rate and sensitivity.

Calculation / interpretation

Recall is the fraction of correctly classified positive predictions among all positive ground truth labels:
Recall = \frac{TP}{TP + FN}
The values used to compute recall for a model are taken from the confusion matrices of all classes.
For instance segmentation and semantic segmentation models as well as object detectors, the confusion matrix is built by first checking whether the predicted class matches the ground truth class and then whether the IoU is above a certain threshold; 0.5 is often used (see the sketch below).
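
A minimal sketch of this matching step, assuming axis-aligned boxes in (x1, y1, x2, y2) format; the helper names iou and is_true_positive are illustrative, not part of any library:

def iou(box_a, box_b):
    # Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def is_true_positive(pred, gt, iou_threshold=0.5):
    # A prediction counts as a true positive only if the class matches
    # and the overlap with the ground truth box clears the threshold.
    return pred["class"] == gt["class"] and iou(pred["box"], gt["box"]) >= iou_threshold

pred = {"class": "car", "box": (10, 10, 50, 50)}
gt = {"class": "car", "box": (12, 12, 52, 52)}
print(is_true_positive(pred, gt))  # True: same class, IoU ~ 0.82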
The higher your recall is, the fewer false negatives (FN) your model produces. This makes it a great metric for evaluating quality assurance models, for example: there you want to minimize FNs, because a missed defect means a bad sample is shipped to customers.
Usually, recall correlates negatively with precision: tuning a model to catch more positives tends to generate more false positives as well.
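
To make the derivation from the confusion matrix concrete, here is a minimal sketch with hypothetical per-class TP/FN counts that computes per-class recall and its macro average:

# Hypothetical counts taken from the confusion matrices of three classes
counts = {
    "cat": {"tp": 40, "fn": 10},   # recall 0.80
    "dog": {"tp": 30, "fn": 30},   # recall 0.50
    "bird": {"tp": 25, "fn": 0},   # recall 1.00
}

per_class_recall = {c: v["tp"] / (v["tp"] + v["fn"]) for c, v in counts.items()}
macro_recall = sum(per_class_recall.values()) / len(per_class_recall)

print(per_class_recall)  # {'cat': 0.8, 'dog': 0.5, 'bird': 1.0}
print(macro_recall)      # 0.766...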

Code implementation

PyTorch
!pip install torchmetrics

import torch
from torchmetrics import Recall

preds = torch.tensor([2, 0, 2, 1])
target = torch.tensor([1, 1, 2, 0])

# 'macro': calculate the metric for each class separately, then average the
# per-class values with equal weights for each class.
# Note: torchmetrics >= 0.11 requires the task argument.
recall_macro = Recall(task='multiclass', num_classes=3, average='macro')
print(recall_macro(preds, target))

# 'micro': calculate the metric globally, across all samples and classes.
recall_micro = Recall(task='multiclass', num_classes=3, average='micro')
print(recall_micro(preds, target))
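With these tensors, only the single class-2 sample is recovered: the per-class recalls are 0, 0, and 1, so the macro recall is (0 + 0 + 1) / 3 ≈ 0.333, while the micro recall is 1 correct positive out of 4 samples, i.e. 0.25.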
Sklearn
from sklearn.metrics import recall_score

# define actual
act_pos = [1 for _ in range(100)]
act_neg = [0 for _ in range(10000)]
y_true = act_pos + act_neg

# define predictions
pred_pos = [0 for _ in range(10)] + [1 for _ in range(90)]
pred_neg = [0 for _ in range(10000)]
y_pred = pred_pos + pred_neg

# calculate recall
recall = recall_score(y_true, y_pred, average='binary')
print('Recall: %.3f' % recall)
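Of the 100 positive ground truth samples, 90 are predicted as positive and 10 are missed, so the script should print Recall: 0.900.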
TensorFlow
tf.keras.metrics.Recall(
    thresholds=None, top_k=None, class_id=None, name=None, dtype=None
)
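
The block above only shows the constructor signature. A minimal usage sketch, assuming binary labels and illustrative example values:

import tensorflow as tf

# Stateful Keras metric: accumulate counts over batches, then read the result
recall_metric = tf.keras.metrics.Recall()
recall_metric.update_state([0, 1, 1, 1], [1, 0, 1, 1])  # (y_true, y_pred)
print(recall_metric.result().numpy())  # 2 TP / (2 TP + 1 FN) = 0.666...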

Further resources
