Recall is the fraction of all positive ground-truth labels that the model correctly classifies as positive:

Recall = TP / (TP + FN)
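As a quick sanity check, here is a minimal sketch that computes recall from raw counts; the tp and fn values are hypothetical:

```python
# Hypothetical counts from a binary classifier's confusion matrix.
tp = 90  # positives the model correctly identified
fn = 10  # positives the model missed

recall = tp / (tp + fn)
print(recall)  # 0.9
```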
The values used to compute recall for a model are taken from the confusion matrices of all classes.
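To illustrate, here is a minimal sketch (assuming scikit-learn, with made-up labels) that reads per-class recall directly off a multiclass confusion matrix, where each row holds the samples of one true class:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Made-up multiclass labels and predictions for illustration.
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 1]

cm = confusion_matrix(y_true, y_pred, labels=[0, 1, 2])
# Row i contains all samples whose true class is i;
# the diagonal entry of that row is the true positives for class i.
per_class_recall = np.diag(cm) / cm.sum(axis=1)
print(per_class_recall)  # [0.5 0.5 1. ]
```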
The higher your recall, the fewer false negatives (FN) your model produces. This makes recall a great metric for evaluating quality assurance models, for example: there you want to minimize FNs, because every false negative means a defective sample being shipped to a customer.
Usually, recall correlates negatively with precision: a lower decision threshold catches more of the true positives (raising recall) but also admits more false positives (lowering precision).
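To make this trade-off concrete, here is a minimal sketch that sweeps the decision threshold; the scores, labels, and thresholds are all made-up values for illustration:

```python
from sklearn.metrics import precision_score, recall_score

# Made-up ground truth and model scores, sorted by score.
y_true = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.65, 0.6, 0.55, 0.4, 0.3, 0.2, 0.1]

# Lowering the threshold raises recall and lowers precision.
for threshold in (0.75, 0.5, 0.15):
    y_pred = [1 if s >= threshold else 0 for s in scores]
    p = precision_score(y_true, y_pred)
    r = recall_score(y_true, y_pred)
    print(f"threshold={threshold:.2f}  precision={p:.2f}  recall={r:.2f}")
```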
With TorchMetrics, recall can be computed directly on PyTorch tensors:

```python
!pip install torchmetrics

import torch
from torchmetrics import Recall

preds = torch.tensor([2, 0, 2, 1])
target = torch.tensor([1, 1, 2, 0])

# 'macro': calculate the metric for each class separately, then average
# the per-class values (equal weight for every class).
recall = Recall(average='macro', num_classes=3)
print(recall(preds, target))

# 'micro': calculate the metric globally, across all samples and classes.
recall1 = Recall(average='micro')
print(recall1(preds, target))

# Note: this is the pre-0.11 torchmetrics API; newer releases expect the
# task to be specified, e.g. Recall(task='multiclass', num_classes=3, ...).
```
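For these tensors, only one of the four predictions is correct (the sample of class 2), so micro recall is 1/4 = 0.25, while the per-class recalls are 0, 0, and 1, giving a macro recall of (0 + 0 + 1)/3 ≈ 0.33.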
With scikit-learn, recall for a binary problem is available via recall_score:

```python
from sklearn.metrics import recall_score

# define actual
act_pos = [1 for _ in range(100)]
act_neg = [0 for _ in range(10000)]
y_true = act_pos + act_neg

# define predictions
pred_pos = [0 for _ in range(10)] + [1 for _ in range(90)]
pred_neg = [0 for _ in range(10000)]
y_pred = pred_pos + pred_neg

# calculate recall
recall = recall_score(y_true, y_pred, average='binary')
print('Recall: %.3f' % recall)
```
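The predictions recover 90 of the 100 actual positives and miss 10, so the script prints Recall: 0.900; note that the 10,000 correctly predicted negatives have no effect on recall at all.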
TensorFlow's Keras API provides recall as a streaming metric with the following signature:

```python
tf.keras.metrics.Recall(thresholds=None, top_k=None, class_id=None, name=None, dtype=None)
```
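A minimal standalone usage sketch (the labels and predictions are made-up values):

```python
import tensorflow as tf

# Standalone use: update_state accumulates TP/FN counts across calls,
# and result() returns the recall over everything seen so far.
m = tf.keras.metrics.Recall()
m.update_state([0, 1, 1, 1], [1, 0, 1, 1])
print(m.result().numpy())  # 2 of the 3 positives recovered -> ~0.667
```

The same metric object can also be passed to model.compile() via its metrics argument, in which case Keras updates it automatically during training and evaluation.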