Recall

How good is your model at picking things up?

Recall indicates how many of the ground truth labels your classifier is able to pick up. It is derived from the confusion matrix and is the counterpart of precision.

Recall also goes by the names true positive rate or sensitivity.

Calculation / interpretation

Recall is the fraction of correctly classified positive samples among all positive ground truth labels:

Recall = \frac{TP}{TP + FN}
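
For example, a model that recovers 90 of 100 positive ground truth labels (TP = 90, FN = 10) reaches

Recall = \frac{90}{90 + 10} = 0.9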

The values used to compute the recall of a model are taken from the confusion matrices of all classes.

For instance and semantic segmentation models, as well as object detectors, the confusion matrix is calculated by first checking whether the predicted class is the same as in the ground truth and then whether the IoU is above a certain threshold. Often, 0.5 is used.
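
As a rough sketch of this matching step for object detection (the box format, the iou helper, and the is_true_positive function below are illustrative assumptions, not the API of any particular library):

def iou(box_a, box_b):
    # boxes are (x1, y1, x2, y2); intersection area over union of the two areas
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def is_true_positive(pred, gt, iou_threshold=0.5):
    # a prediction only counts as TP if the class matches and the boxes overlap enough
    return pred["class"] == gt["class"] and iou(pred["box"], gt["box"]) >= iou_threshold

pred = {"class": "car", "box": (10, 10, 50, 50)}
gt = {"class": "car", "box": (12, 8, 48, 52)}
print(is_true_positive(pred, gt))  # True (IoU is roughly 0.83)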

The higher your recall, the fewer false negatives (FN) your model produces. This makes it a great metric for evaluating quality assurance models, for example: there, you want to minimize FNs, because a missed defect means a bad sample is shipped to customers.

Usually, recall correlates negatively with precision: making a model more sensitive so that it catches more positives tends to produce more false positives, which lowers precision.
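
A minimal sketch of this trade-off with scikit-learn (the labels and scores below are made up for illustration): sweeping the decision threshold from high to low increases recall while precision tends to drop.

from sklearn.metrics import precision_recall_curve

# toy ground truth labels and predicted scores
y_true = [0, 0, 1, 1, 0, 1, 1, 0, 1, 0]
y_scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.6, 0.9, 0.3, 0.55, 0.7]

# precision and recall at every candidate decision threshold
precision, recall, thresholds = precision_recall_curve(y_true, y_scores)
for p, r, t in zip(precision, recall, thresholds):
    print(f'threshold={t:.2f}  precision={p:.2f}  recall={r:.2f}')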

Code implementation

PyTorch
!pip install torchmetrics
import torch
from torchmetrics import Recall

preds = torch.tensor([2, 0, 2, 1])
target = torch.tensor([1, 1, 2, 0])

# macro-averaged recall over the 3 classes
# (torchmetrics >= 0.11 additionally expects a task argument, e.g. task="multiclass")
recall = Recall(average='macro', num_classes=3)
print(recall(preds, target))

# micro-averaged recall
recall1 = Recall(average='micro')
print(recall1(preds, target))

# 'micro': calculate the metric globally, across all samples and classes.
# 'macro': calculate the metric for each class separately, then average the metrics across classes (with equal weights for each class).
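
To see where these two numbers come from, the per-class recalls can also be computed by hand as a sanity check of the averaging modes described above:

# target = [1, 1, 2, 0], preds = [2, 0, 2, 1]
# class 0: 0 of 1 ground truth labels recovered -> 0.0
# class 1: 0 of 2 ground truth labels recovered -> 0.0
# class 2: 1 of 1 ground truth labels recovered -> 1.0
macro_recall = (0.0 + 0.0 + 1.0) / 3  # equal weight per class -> 0.3333
micro_recall = 1 / 4                  # 1 correct prediction out of 4 samples -> 0.25
print(macro_recall, micro_recall)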
Sklearn
from sklearn.metrics import recall_score
# define actual
act_pos = [1 for _ in range(100)]
act_neg = [0 for _ in range(10000)]
y_true = act_pos + act_neg
# define predictions
pred_pos = [0 for _ in range(10)] + [1 for _ in range(90)]
pred_neg = [0 for _ in range(10000)]
y_pred = pred_pos + pred_neg
# calculate recall
recall = recall_score(y_true, y_pred, average='binary')
print('Recall: %.3f' % recall)
TensorFlow
import tensorflow as tf

# with default arguments, predictions above a 0.5 threshold count as positive
recall = tf.keras.metrics.Recall(thresholds=None, top_k=None, class_id=None, name=None, dtype=None)
recall.update_state([0, 1, 1, 1], [1, 0, 1, 1])  # y_true, y_pred
print(recall.result().numpy())  # 2 TP / (2 TP + 1 FN) = 0.6666667

Further resources