Binary Cross-Entropy Loss
Cross-Entropy loss for multi-label classifiers (taggers)
Binary Cross-Entropy (BCE) loss is a special case of Cross-Entropy loss used for multi-label classification (taggers). It is the Cross-Entropy loss when only two classes are involved, and it relies on the Sigmoid activation function to turn each raw model output into a probability.
Mathematically, it is given as,

$$\text{Binary C.E.} = -\sum_{i=1}^{2} t_i \log(p_i)$$

where $t_i$ is the true label and $p_i$ is the predicted probability of the $i^{th}$ label.
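
Since $t_2 = 1 - t_1$ and $p_2 = 1 - p_1$, the sum collapses to the familiar per-label form $-t \log(p) - (1 - t)\log(1 - p)$, which is applied independently to every label of a multi-label example. Here is a minimal sketch in plain NumPy (the function name and the eps clipping are illustrative choices, not from any library):

import numpy as np

def binary_cross_entropy(t, p, eps=1e-12):
    # t: 0/1 targets, p: sigmoid probabilities; average over all labels
    p = np.clip(p, eps, 1 - eps)  # avoid log(0)
    return -np.mean(t * np.log(p) + (1 - t) * np.log(1 - p))

t = np.array([[0., 1.], [0., 0.]])      # true labels
p = np.array([[0.6, 0.4], [0.4, 0.6]])  # predicted probabilities
print(binary_cross_entropy(t, p))       # ~0.815, matching the TensorFlow example below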

Code implementation

PyTorch
# importing the library
import torch
import torch.nn as nn

# Binary Cross-Entropy Loss

target = torch.ones([10, 64], dtype=torch.float32)  # 64 classes, batch size = 10
output = torch.full([10, 64], 1.5)  # a prediction (logit) for every label

pos_weight = torch.ones([64])  # all positive-class weights are equal to 1
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)
criterion(output, target)  # -log(sigmoid(1.5))
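
BCEWithLogitsLoss fuses the Sigmoid activation with the BCE computation, which is more numerically stable than applying them separately. As a sanity check, this sketch (reusing the tensors above) compares it against nn.BCELoss applied to explicitly sigmoid-activated outputs:

# BCEWithLogitsLoss(output, target) == BCELoss(sigmoid(output), target)
probs = torch.sigmoid(output)  # turn logits into probabilities
loss_on_probs = nn.BCELoss()(probs, target)
loss_on_logits = nn.BCEWithLogitsLoss()(output, target)
print(torch.allclose(loss_on_probs, loss_on_logits))  # True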
TensorFlow
# importing the library
import tensorflow as tf

y_true = [[0., 1.], [0., 0.]]
y_pred = [[0.6, 0.4], [0.4, 0.6]]

# Using 'auto'/'sum_over_batch_size' reduction type.
bce = tf.keras.losses.BinaryCrossentropy()
bce(y_true, y_pred).numpy()

# Calling with 'sample_weight'.
bce(y_true, y_pred, sample_weight=[1, 0]).numpy()
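
If the model outputs raw logits rather than probabilities, pass from_logits=True so the loss applies the sigmoid internally, which is again the numerically stabler route. A short sketch (the logit values below are made up for illustration):

# y_pred holds raw logits here, not probabilities
y_logits = [[0.4, -0.4], [-0.4, 0.4]]
bce_logits = tf.keras.losses.BinaryCrossentropy(from_logits=True)
bce_logits(y_true, y_logits).numpy()  # equivalent to applying the sigmoid first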
