ExponentialLR
Decays the learning rate of each parameter group by gamma every epoch.
This scheduling technique multiplies the learning rate by a constant factor gamma at the end of every epoch (or every evaluation period, when the trainer works in iterations). When last_epoch is -1, the schedule starts from the initial base learning rate.
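Concretely, if the base learning rate is lr_0, the learning rate after t epochs is lr_0 × gamma^t. For example, with lr_0 = 0.01 and gamma = 0.9, the learning rate after 10 epochs is 0.01 × 0.9^10 ≈ 0.0035.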

Major Parameter

Gamma

The multiplicative factor applied to the learning rate every epoch.
Gamma should be less than 1 so that the learning rate actually decreases; a value of 1 leaves it unchanged.
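
To see the effect of gamma in isolation, the sketch below steps a scheduler on a throwaway optimizer and prints the learning rate each epoch. The single dummy parameter, the SGD optimizer, and gamma = 0.5 are illustrative placeholders, not values prescribed by ExponentialLR.

import torch

param = torch.nn.Parameter(torch.zeros(1))    # dummy parameter, only needed to build an optimizer
optimizer = torch.optim.SGD([param], lr=1.0)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.5)

for epoch in range(4):
    optimizer.step()                          # normally preceded by a forward/backward pass
    scheduler.step()                          # multiply the learning rate by gamma
    print(epoch, scheduler.get_last_lr())     # [0.5], [0.25], [0.125], [0.0625]

Each call to scheduler.step() halves the learning rate, so after four epochs it has shrunk to 0.5^4 = 0.0625 of its starting value.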

Code Implementation

The example below builds a toy model and trains it for 20 epochs, decaying the learning rate by gamma = 0.1 after every epoch. The Linear model, MSE loss, and random data are placeholders added so the snippet runs end to end.

import torch
import torch.nn as nn

# Toy model, loss function, and data (placeholders so the example is runnable)
model = nn.Linear(2, 2)
loss_fn = nn.MSELoss()
dataset = [(torch.randn(8, 2), torch.randn(8, 2)) for _ in range(10)]
learning_rate = 0.01

optimizer = torch.optim.AdamW(model.parameters(), lr=learning_rate, weight_decay=0.01, amsgrad=False)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.1, last_epoch=-1)

for epoch in range(20):
    for input, target in dataset:
        optimizer.zero_grad()
        output = model(input)
        loss = loss_fn(output, target)
        loss.backward()
        optimizer.step()
    scheduler.step()  # decay the learning rate once per epoch
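Note that scheduler.step() is placed outside the inner batch loop, so the decay is applied once per epoch rather than once per batch, and it is called after optimizer.step(), which is the ordering PyTorch has recommended since version 1.1.0.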