Confusion Matrix

Not really a metric, but fundamental to most classification metrics

A confusion matrix is not really a metric, but many metrics are calculated from it, and it gives a good first indication of a model's performance. This is why it's important to understand it before diving into the metrics themselves.

A confusion matrix is calculated by comparing the predictions of a classifier to the ground truth of the test or validation data set for a given class.

https://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/

Interpretation / calculation

Rows and columns are each divided into 'no' and 'yes', indicating whether a sample belongs to the class in question. The rows correspond to the ground truth, the columns to the predictions.

For classifiers, a confusion matrix is calculated by simply counting each prediction/ground-truth combination and entering the counts into this table.

For instance segmentation, semantic segmentation, and object detection models, a prediction is first matched to the ground truth by checking whether the predicted class is the same as in the ground truth, and then whether the IoU (intersection over union) is above a certain threshold. Often, 0.5 is used.
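This matching rule can be sketched in a few lines. A minimal illustration, assuming axis-aligned boxes given as `(x1, y1, x2, y2)` tuples; the helper names `iou` and `is_true_positive` are hypothetical, not from any particular library:

```python
def iou(box_a, box_b):
    # Intersection rectangle (may be empty).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def is_true_positive(pred_cls, gt_cls, pred_box, gt_box, threshold=0.5):
    # A detection only counts as a true positive if the class matches
    # AND the boxes overlap by at least the IoU threshold.
    return pred_cls == gt_cls and iou(pred_box, gt_box) >= threshold

print(is_true_positive(1, 1, (0, 0, 10, 10), (1, 1, 11, 11)))  # -> True
```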

In the example from the article linked above, the classifier made 165 predictions in total. In the ground truth, 60 samples are negative and 105 positive, while the model predicted only 55 samples as negative and 110 as positive. Ideally, the predicted counts would match the ground truth.
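Those totals are the row and column sums (marginals) of a 2×2 matrix. The text only gives the marginals, so the individual cell values below are one hypothetical split consistent with them, just for illustration:

```python
# Hypothetical cell values consistent with the totals quoted above:
#                  predicted 'no'  predicted 'yes'
cm = [[50, 10],  # ground truth 'no':  TN=50, FP=10
      [5, 100]]  # ground truth 'yes': FN=5,  TP=100

total = sum(sum(row) for row in cm)           # total predictions
gt_counts = [sum(row) for row in cm]          # ground-truth row totals
pred_counts = [sum(col) for col in zip(*cm)]  # predicted column totals
print(total, gt_counts, pred_counts)  # -> 165 [60, 105] [55, 110]
```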

From the confusion matrix, we can derive four types of predictions. The ones we want to see:

- **True positives (TP)**: samples which the model predicted as belonging to the class in question and which actually belong to the class according to the ground truth.
- **True negatives (TN)**: samples which the model predicted as not being part of the class and which are negative in the ground truth as well.

And the ones we don't want to see:

- **False positives (FP)**: samples which the model predicted as part of the class but which actually aren't (Type I error).
- **False negatives (FN)**: samples which the model predicted as negative but which actually belong to the class in question (Type II error).

The absolute numbers of a confusion matrix are not straightforward to interpret, but as mentioned above, they are used to calculate more interpretable metrics. Check the further resources section for more details on the specific metrics.
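To make the four categories concrete, here is a pure-Python sketch that tallies them for a binary problem, together with accuracy as one example of a metric derived from them. The data and the helper name `binary_counts` are illustrative assumptions, and labels are assumed to be 0/1:

```python
def binary_counts(y_true, y_pred):
    # Tally the four cells of the binary confusion matrix.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tn, fp, fn, tp

y_true = [0, 0, 0, 1, 1, 1, 1]
y_pred = [0, 1, 0, 1, 1, 1, 0]
tn, fp, fn, tp = binary_counts(y_true, y_pred)
print(tn, fp, fn, tp)  # -> 2 1 1 3

# One example of a derived, more interpretable metric: accuracy.
accuracy = (tp + tn) / len(y_true)
print(round(accuracy, 3))  # -> 0.714
```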

Code example

PyTorch

```python
# In a notebook: !pip install torchmetrics
import torch
from torchmetrics import ConfusionMatrix

target = torch.tensor([1, 1, 0, 0])
preds = torch.tensor([0, 1, 0, 0])

# Recent torchmetrics versions require the task argument;
# older versions used ConfusionMatrix(num_classes=2) instead.
confmat = ConfusionMatrix(task="binary")
confmat(preds, target)
```

Sklearn

```python
from sklearn.metrics import confusion_matrix

y_true = [2, 0, 2, 2, 0, 1]
y_pred = [0, 0, 2, 2, 0, 2]

confusion_matrix(y_true, y_pred)
```

TensorFlow

```python
# Importing the library
import tensorflow as tf

# Initializing the input tensors
labels = tf.constant([1, 3, 4], dtype=tf.int32)
predictions = tf.constant([1, 2, 3], dtype=tf.int32)

# Printing the input tensors
print('Labels: ', labels)
print('Predictions: ', predictions)

# Evaluating the confusion matrix
res = tf.math.confusion_matrix(labels, predictions)

# Printing the result
print('Confusion matrix: ', res)
```

Metrics based on the confusion matrix