Accuracy is the overall percentage of correct predictions. It is derived from the confusion matrix.
Accuracy is computed by dividing the sum of true positives (TP) and true negatives (TN) by the total number of predictions:

Accuracy = (TP + TN) / (TP + TN + FP + FN)
For instance and semantic segmentation models, as well as object detectors, the confusion matrix is built in two steps: first check whether the predicted class matches the ground truth, then check whether the IoU (intersection over union) between prediction and ground truth exceeds a certain threshold. A threshold of 0.5 is commonly used.
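The two-step check above can be sketched as follows; the box format (x1, y1, x2, y2) and the function names are illustrative assumptions, not taken from any particular library:

```python
# Minimal sketch of the IoU-based matching used when building a
# detection confusion matrix. Boxes are (x1, y1, x2, y2) tuples.
def box_iou(box_a, box_b):
    # Intersection rectangle (zero area if the boxes don't overlap)
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def is_true_positive(pred_class, gt_class, pred_box, gt_box, iou_threshold=0.5):
    # A detection counts as a true positive only if the class matches
    # AND the boxes overlap enough.
    return pred_class == gt_class and box_iou(pred_box, gt_box) >= iou_threshold
```

For example, a prediction with the right class but an IoU below the threshold is counted as an error, even though the classification itself was correct.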
While accuracy is very intuitive, it has one drawback: it doesn't tell you what kind of errors your model makes. At a 1% misclassification rate (99% accuracy), the errors could be caused either by false positives (FP) or by false negatives (FN). This information matters when you're evaluating a model for a specific use case, though. Take COVID tests as an example: you'd rather have FPs than FNs.
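To make this concrete, here are two hypothetical models with identical accuracy but opposite error types; all counts are made up for illustration:

```python
# Two hypothetical COVID-test models evaluated on 1000 samples.
# Both reach 99% accuracy, but their errors differ completely.
def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

# Model A: all 10 errors are false negatives (missed infections)
tp_a, tn_a, fp_a, fn_a = 40, 950, 0, 10
# Model B: all 10 errors are false positives (false alarms)
tp_b, tn_b, fp_b, fn_b = 50, 940, 10, 0

acc_a = (tp_a + tn_a) / 1000  # 0.99
acc_b = (tp_b + tn_b) / 1000  # 0.99

print(recall(tp_a, fn_a))  # 0.8 -- model A misses 20% of infections
print(recall(tp_b, fn_b))  # 1.0 -- model B catches every infection
```

Accuracy alone cannot distinguish the two models, but for a COVID test, model B is clearly preferable.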
More informative metrics in such cases are precision and recall, which distinguish between the two error types.
```python
!pip install torchmetrics

import torch
from torchmetrics import Accuracy

target = torch.tensor([0, 1, 2, 3])
preds = torch.tensor([0, 2, 1, 3])

# Recent torchmetrics versions require the task and number of classes
accuracy = Accuracy(task="multiclass", num_classes=4)
accuracy(preds, target)  # tensor(0.5000)
```
```python
from sklearn.metrics import accuracy_score

y_pred = [0, 2, 1, 3]
y_true = [0, 1, 2, 3]

# normalize=False returns the number of correct predictions
# instead of the fraction
accuracy_score(y_true, y_pred, normalize=False)  # 2
```
```python
# importing the library
import tensorflow as tf

m = tf.keras.metrics.Accuracy()
m.update_state([1, 2, 3, 4], [0, 2, 3, 4])
print('Final result: ', m.result().numpy())  # 3 of 4 match -> 0.75
```