Binary Cross-Entropy Loss
Cross-Entropy loss for a multi-label classifier (taggers)
Binary Cross-Entropy loss is a special case of Cross-Entropy loss used for multi-label classification (taggers): it is the cross-entropy loss when only two classes are involved. It relies on the sigmoid activation function, which maps each raw model output (logit) to an independent probability.
Mathematically, it is given as

$\text{BCE} = -\sum_{i=1}^{2} t_i \log(p_i)$

where $t_i$ is the true label and $p_i$ is the predicted probability of the $i^{th}$ class.
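The sum above can be evaluated directly in plain Python; a minimal sketch (the helper name `binary_cross_entropy` is chosen here for illustration):

```python
import math

def binary_cross_entropy(t, p, eps=1e-12):
    """-sum_i t_i * log(p_i) over the two classes; eps guards against log(0)."""
    return -sum(ti * math.log(pi + eps) for ti, pi in zip(t, p))

# True class is class 1 (one-hot target), predicted probability 0.8 for class 1
t = [0.0, 1.0]
p = [0.2, 0.8]
print(binary_cross_entropy(t, p))  # -log(0.8) ≈ 0.223
```

Only the term for the true class contributes, so a confident correct prediction (p close to 1) drives the loss toward 0.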

# Code implementation

PyTorch

```python
# importing the library
import torch
import torch.nn as nn

# Binary Cross-Entropy Loss
target = torch.ones([10, 64], dtype=torch.float32)  # 64 classes, batch size = 10
output = torch.full([10, 64], 1.5)                  # a prediction (logit)

pos_weight = torch.ones([64])  # all class weights are equal to 1
criterion = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight)
criterion(output, target)  # -log(sigmoid(1.5))
```
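Because every target in the PyTorch example is 1, the loss collapses to `-log(sigmoid(1.5))` for every element, and the mean over the batch equals that same value. A quick sanity check in plain Python, without PyTorch:

```python
import math

# For a logit z with target 1, binary cross-entropy with logits
# reduces to -log(sigmoid(z))
z = 1.5
loss = -math.log(1 / (1 + math.exp(-z)))
print(round(loss, 4))  # ≈ 0.2014
```

This matches the value `BCEWithLogitsLoss` returns for the example above.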
TensorFlow

```python
# importing the library
import tensorflow as tf

y_true = [[0., 1.], [0., 0.]]
y_pred = [[0.6, 0.4], [0.4, 0.6]]

# Using 'auto'/'sum_over_batch_size' reduction type.
bce = tf.keras.losses.BinaryCrossentropy()
bce(y_true, y_pred).numpy()  # ≈ 0.815

# Calling with 'sample_weight'.
bce(y_true, y_pred, sample_weight=[1, 0]).numpy()  # ≈ 0.458
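The TensorFlow numbers can be reproduced by hand: for each element, compute `-(t*log(p) + (1-t)*log(1-p))`, average over the classes of each sample, then average over the batch. A sketch in plain Python, assuming that reduction order:

```python
import math

def bce(t, p, eps=1e-7):
    # element-wise binary cross-entropy: -(t*log(p) + (1-t)*log(1-p))
    return -(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps))

y_true = [[0., 1.], [0., 0.]]
y_pred = [[0.6, 0.4], [0.4, 0.6]]

# Mean over classes per sample, then mean over the batch
per_sample = [sum(bce(t, p) for t, p in zip(ts, ps)) / len(ts)
              for ts, ps in zip(y_true, y_pred)]
print(round(sum(per_sample) / len(per_sample), 3))  # ≈ 0.815

# sample_weight=[1, 0]: scale each sample's loss, still divide by batch size
weights = [1, 0]
weighted = sum(w * s for w, s in zip(weights, per_sample)) / len(per_sample)
print(round(weighted, 3))  # ≈ 0.458
```

With `sample_weight=[1, 0]` the second sample is zeroed out, so only the first sample's loss contributes to the (batch-size-normalized) mean.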