Accuracy
The most intuitive metric for single-label classification tasks
Accuracy is the overall percentage of error-free predictions. It is derived from the confusion matrix.
When used for multi-label classifiers (taggers), accuracy is also referred to as the Hamming score.
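Under one common convention, the Hamming score is the complement of the Hamming loss, i.e. the fraction of individual label decisions that are correct. A minimal multi-label sketch of that convention with scikit-learn (the convention itself is an assumption; definitions vary in the literature):

import numpy as np
from sklearn.metrics import hamming_loss

# Two samples, three labels each; one wrong label decision per sample
y_true = np.array([[1, 0, 1], [0, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 1]])

# Assumption: Hamming score = 1 - Hamming loss
hamming_score = 1 - hamming_loss(y_true, y_pred)
print(hamming_score)  # 4 of 6 label decisions correct -> ~0.667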

Interpretation / calculation

The accuracy is computed as the sum of true positives (TP) and true negatives (TN) divided by the total number of predictions:
Accuracy = \frac{TP + TN}{TP + TN + FP + FN}
For a single-label classifier, these values are aggregated over the confusion matrices of all classes.
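As a quick worked example with hypothetical counts:

# Hypothetical counts from a binary classifier's confusion matrix
TP, TN, FP, FN = 40, 50, 5, 5

accuracy = (TP + TN) / (TP + TN + FP + FN)
print(accuracy)  # (40 + 50) / 100 = 0.9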
For instance and semantic segmentation models as well as object detectors, the confusion matrix is built by first checking whether the predicted class matches the ground truth, and then whether the IoU exceeds a certain threshold; 0.5 is commonly used.
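A minimal sketch of this matching step for a single detection, assuming axis-aligned boxes in (x1, y1, x2, y2) format; the function names and box format are illustrative, not a specific library's API:

def iou(box_a, box_b):
    # Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def is_true_positive(pred_class, pred_box, gt_class, gt_box, threshold=0.5):
    # A prediction counts as a TP only if the class matches AND the IoU
    # clears the threshold (0.5 is a common choice)
    return pred_class == gt_class and iou(pred_box, gt_box) >= threshold

print(is_true_positive("car", (0, 0, 10, 10), "car", (2, 2, 10, 10)))  # True: IoU = 64/100 = 0.64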
Whereas accuracy is very intuitive, it has one drawback: it doesn't tell you what kind of errors your model makes. At a 1% misclassification rate (99% accuracy), the errors could be caused by either false positives (FP) or false negatives (FN). This information matters when you're evaluating a model for a specific use case, though. Take COVID tests as an example: you'd rather have false positives than false negatives.
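To see why this matters, a small sketch with two hypothetical models that reach the same accuracy with opposite error profiles (1 = positive, e.g. infected):

from sklearn.metrics import accuracy_score

y_true   = [1, 1, 1, 1, 0, 0, 0, 0]
model_fn = [1, 1, 1, 0, 0, 0, 0, 0]  # one FN: misses an infected case
model_fp = [1, 1, 1, 1, 1, 0, 0, 0]  # one FP: flags a healthy case

print(accuracy_score(y_true, model_fn))  # 0.875
print(accuracy_score(y_true, model_fp))  # 0.875 - same accuracy, different risk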
More informative metrics in this regard are precision, recall, and the F1 score.

Code implementation

PyTorch

!pip install torchmetrics

import torch
from torchmetrics import Accuracy

target = torch.tensor([0, 1, 2, 3])
preds = torch.tensor([0, 2, 1, 3])

# Recent torchmetrics versions require the task type and number of classes
accuracy = Accuracy(task="multiclass", num_classes=4)
accuracy(preds, target)  # tensor(0.5000)
Sklearn

from sklearn.metrics import accuracy_score

y_pred = [0, 2, 1, 3]
y_true = [0, 1, 2, 3]

# normalize=True (the default) returns the fraction of correct predictions;
# normalize=False would return the raw count of correct predictions instead (here: 2)
accuracy_score(y_true, y_pred)  # 0.5
TensorFlow

import tensorflow as tf

# update_state takes y_true first, y_pred second
m = tf.keras.metrics.Accuracy()
m.update_state([1, 2, 3, 4], [0, 2, 3, 4])

print('Final result: ', m.result().numpy())  # 0.75 (3 of 4 correct)

Further resources
