mAP (mean Average Precision)

Balancing the precision vs. recall tradeoff

Mean average precision (mAP) is the most meaningful metric for object detection, instance segmentation, and semantic segmentation models. It incorporates the trade-off between precision and recall and therefore accounts for both types of errors: false positives (FP) and false negatives (FN). This property makes mAP applicable for most use cases.

Calculation / interpretation

mAP builds on the IoU, precision, and recall metrics, as well as the confusion matrix, so make sure to read those entries if you're not familiar with them.

Quick recap: how to calculate a confusion matrix

As the mAP is based on precision and recall, it also relies on the confusion matrix. For object detection, instance segmentation, and semantic segmentation models, the confusion matrix is computed by checking whether the predicted class matches the ground-truth class and whether the IoU between the prediction and the ground truth is above a certain threshold, most commonly 0.5.
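As a rough illustration, below is a minimal NumPy sketch of this matching step for bounding boxes. The [x1, y1, x2, y2] box format, the helper names iou and match_predictions, and the greedy one-to-one matching are assumptions made for the example, not a reference implementation.

import numpy as np

def iou(box_a, box_b):
    # Boxes are [x1, y1, x2, y2]; returns intersection over union.
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def match_predictions(pred_boxes, pred_classes, gt_boxes, gt_classes, iou_thr=0.5):
    # A prediction is a TP if its class matches a still unmatched ground-truth
    # box with IoU >= iou_thr; otherwise it is a FP. Unmatched ground truths are FNs.
    matched, tp, fp = set(), 0, 0
    for pb, pc in zip(pred_boxes, pred_classes):
        hit = next((j for j, (gb, gc) in enumerate(zip(gt_boxes, gt_classes))
                    if j not in matched and pc == gc and iou(pb, gb) >= iou_thr), None)
        if hit is None:
            fp += 1
        else:
            matched.add(hit)
            tp += 1
    fn = len(gt_boxes) - len(matched)
    return tp, fp, fn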

Average Precision (AP) is computed for one class

Don't let the term Average Precision fool you. It is not simply an average of a few precision values but the result of averaging precision over the whole Precision-Recall Curve for one class.

(Example Precision-Recall Curve; image source: https://github.com/ultralytics/yolov3/issues/898)

The Precision-Recall Curve depicts the conflict between precision and recall. Typically,

  • models with high precision and low recall produce very confident predictions but miss a portion of the instances.

  • models with low precision and high recall find most objects, but a considerable share of the predictions are false positives, and the average confidence is lower.

It is constructed by computing the precision and recall values for different confidence thresholds. For example, if the confidence threshold is 0.9, only predictions for which the model's confidence exceeds 90% are counted as positive predictions, so precision will be relatively high. Conversely, if the threshold is 0.1, many more predictions are counted as positive, and recall will be large.
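Below is a minimal NumPy sketch of this sweep for a single class. It assumes you already have a confidence score per prediction and a TP/FP label per prediction (e.g., from the matching step above); the function name, the threshold grid, and the inputs are illustrative assumptions.

import numpy as np

def precision_recall_curve(scores, is_tp, num_gt, thresholds=np.linspace(0.0, 1.0, 11)):
    # scores: confidence per prediction; is_tp: 1 if the prediction matched a
    # ground truth (TP), 0 if not (FP); num_gt: number of ground-truth objects.
    precisions, recalls = [], []
    for t in thresholds:
        keep = scores >= t
        tp = is_tp[keep].sum()
        fp = keep.sum() - tp
        precisions.append(tp / max(tp + fp, 1e-9))
        recalls.append(tp / max(num_gt, 1e-9))
    return np.array(precisions), np.array(recalls)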

The Average Precision is the area under the curve. Perfect models would have an AP of 1.0.
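A minimal sketch of that area computation, using simple trapezoidal integration over illustrative precision/recall points (benchmarks such as COCO use an interpolated variant instead):

import numpy as np

# Precision/recall points from the threshold sweep above (illustrative values).
recalls    = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
precisions = np.array([1.0, 0.95, 0.9, 0.8, 0.6, 0.4])

# AP is the area under the Precision-Recall Curve; 1.0 for a perfect model.
ap = np.trapz(precisions, recalls)
print(ap)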

Mean Average Precision (mAP) is computed for many classes

Once you know how to calculate the AP, computing the mAP is easy:

The mAP is the mean of the AP over all classes.
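In formula form: mAP = (AP_1 + AP_2 + ... + AP_N) / N for N classes. A minimal sketch, assuming the per-class AP values have already been computed (the class names and numbers below are made up):

import numpy as np

# Hypothetical per-class AP values.
ap_per_class = {"car": 0.72, "person": 0.65, "bicycle": 0.58}

m_ap = np.mean(list(ap_per_class.values()))
print(m_ap)  # 0.65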

mAP for different IoU thresholds

Often, you see the mAP cited as mAP@0.5 or mAP@0.75. This notation indicates which IoU threshold has been used to calculate the confusion matrix. This is important to note because the IoU threshold can influence the mAP substantially: a low IoU threshold will boost your mAP.

As mentioned above, more often than not, 0.5 is used as an IoU threshold.

Sometimes, for example for the COCO dataset, the mAP is benchmarked by averaging it over IoU thresholds from 0.5 to 0.95 in steps of 0.05.
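A minimal sketch of this COCO-style averaging, assuming the mAP has already been computed once per IoU threshold (the numbers below are made up):

import numpy as np

# One mAP value per IoU threshold from 0.50 to 0.95 in 0.05 steps (hypothetical values).
iou_thresholds = np.linspace(0.5, 0.95, 10)
map_per_threshold = np.array([0.61, 0.59, 0.56, 0.52, 0.47, 0.41, 0.34, 0.26, 0.17, 0.08])

# The single number usually reported as mAP@[.5:.95].
coco_map = np.mean(map_per_threshold)
print(coco_map)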

Code implementation

Numpy
import numpy as np

def apk(actual, predicted, k=10):
    """Computes the average precision at k between two lists of items."""
    if not actual:
        return 0.0
    predicted = predicted[:k]
    score, num_hits = 0.0, 0.0
    for i, p in enumerate(predicted):
        if p in actual and p not in predicted[:i]:  # count each relevant item only once
            num_hits += 1.0
            score += num_hits / (i + 1.0)
    return score / min(len(actual), k)

def mapk(actual, predicted, k=10):
    """Computes the mean average precision at k between two lists of lists of items."""
    return np.mean([apk(a, p, k) for a, p in zip(actual, predicted)])
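Note that apk/mapk compute the mean average precision at k for ranked lists (the retrieval/recommendation flavor of the metric) rather than the IoU-based detection mAP described above. A quick, hypothetical usage example (the lists below are made up): each entry of actual holds the relevant items for one sample, each entry of predicted the ranked predictions.

actual    = [[1, 2, 3], [1, 2]]
predicted = [[1, 4, 2, 3], [2, 3, 1]]
print(mapk(actual, predicted, k=3))  # ≈ 0.694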
TensorFlow
!pip install tensorflow==1.15  # tf.metrics.average_precision_at_k and tf.Session require TensorFlow 1.x

import tensorflow as tf
import numpy as np

# Ground-truth class index per sample.
y_true = np.array([[2], [1], [0], [3], [0]]).astype(np.int64)
y_true = tf.identity(y_true)

# Predicted class probabilities per sample (4 classes).
y_pred = np.array([[0.1, 0.2, 0.6, 0.1],
                   [0.8, 0.05, 0.1, 0.05],
                   [0.3, 0.4, 0.1, 0.2],
                   [0.6, 0.25, 0.1, 0.05],
                   [0.1, 0.2, 0.6, 0.1]]).astype(np.float32)
y_pred = tf.identity(y_pred)

# Streaming metric: the second return value is the update op, which also
# returns the metric value for the data it has seen so far.
_, m_ap = tf.metrics.average_precision_at_k(y_true, y_pred, 3)

sess = tf.Session()
sess.run(tf.local_variables_initializer())
stream_vars = [i for i in tf.local_variables()]

tf_map = sess.run(m_ap)
print(tf_map)

# Inspect the internal counters of the streaming metric.
print(sess.run(stream_vars))

# Ranks used by the metric: the top-3 predicted classes per sample.
tmp_rank = tf.nn.top_k(y_pred, 3)
print(sess.run(tmp_rank))

Related entries