Adagrad
Solver that uses adaptive gradient
Adagrad, short for adaptive gradient, is a gradient-based optimizer that automatically tunes its learning rate during training. The learning rate is updated parameter-wise, i.e. each parameter has its own learning rate.
Parameters associated with frequently occurring features receive small updates (low learning rate), while parameters associated with rarely occurring features receive larger updates (high learning rate).
Due to this, Adagrad is a suitable solver for sparse data.
Mathematically, Adagrad can be formulated as follows:
g_{t,i} = \nabla J(\theta_{t,i})
where g_{t,i} is the gradient of the objective function J with respect to the parameter \theta_i at timestep t.
Each parameter is then updated as follows:
\theta_{t+1,i} = \theta_{t,i} - \eta \cdot \frac{g_{t,i}}{\sqrt{G_{t,ii}} + \epsilon}
Here, \theta_{t,i} is the parameter to be updated, G_{t,ii} is the sum of the squares of all gradients of parameter i up to timestep t, \eta is the base learning rate (usually initialized to 0.01), and \epsilon is a small constant added for numerical stability, commonly set to 10^{-8}. Because G_{t,ii} grows with every step, the learning rate is adjusted according to the previously encountered gradients: the more gradient a parameter has accumulated, the smaller its subsequent updates.
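To make the update rule concrete, the following is a minimal hand-written sketch of Adagrad applied to a toy quadratic objective J(\theta) = 0.5 \cdot \lVert \theta \rVert^2 (the objective, the parameter values, and the NumPy implementation are purely illustrative assumptions, not library code):

import numpy as np

eta = 0.01                           # base learning rate
eps = 1e-8                           # numerical stability term
theta = np.array([1.0, -2.0, 3.0])   # parameters to optimize (illustrative values)
G = np.zeros_like(theta)             # running sum of squared gradients, one entry per parameter

for t in range(100):
    g = theta.copy()                        # gradient of 0.5 * ||theta||^2 is theta itself
    G += g ** 2                             # accumulate squared gradients parameter-wise
    theta -= eta * g / (np.sqrt(G) + eps)   # parameter-wise adaptive update

Note how each parameter is scaled by its own \sqrt{G_{t,ii}}: components that have accumulated large gradients take smaller steps.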

Major Parameters

Learning Rate Decay

Learning rate decay is a technique where a relatively large learning rate is used at the beginning of training and is then reduced by a certain factor as training progresses (for example, after pre-defined epochs). A higher learning rate decay means the initial learning rate shrinks faster over the course of training.
Setting a learning rate decay can slow down training, since the effective learning rate keeps decreasing.
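As a rough illustration, the snippet below prints how the effective base learning rate shrinks over the first few steps. It assumes the per-step schedule lr_t = lr / (1 + (t - 1) * lr_decay), which is how PyTorch's Adagrad applies its lr_decay argument; treat the exact formula as an assumption for illustration.

base_lr = 0.01    # initial (base) learning rate
lr_decay = 0.1    # learning rate decay factor

for t in range(1, 6):
    # per-step decayed learning rate (assumed schedule)
    effective_lr = base_lr / (1 + (t - 1) * lr_decay)
    print(f"step {t}: effective base learning rate = {effective_lr:.5f}")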

Code Implementation

# Import the libraries.
import torch
import torch.nn as nn

# Random input and target tensors.
x = torch.randn(10, 3)
y = torch.randn(10, 2)

# Build a fully connected layer.
linear = nn.Linear(3, 2)

# Build the MSE loss function.
criterion = nn.MSELoss()

# Optimization method using Adagrad.
optimizer = torch.optim.Adagrad(linear.parameters(), lr=0.01, lr_decay=0, weight_decay=0, eps=1e-10)

# Forward pass.
pred = linear(x)

# Compute loss.
loss = criterion(pred, y)
print('loss:', loss.item())

# Backward pass: compute gradients of the loss w.r.t. the parameters.
loss.backward()

# Adagrad update of the parameters.
optimizer.step()
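The snippet above performs a single parameter update. A typical training loop repeats the forward pass, backward pass, and optimizer step; the sketch below reuses linear, criterion, optimizer, x, and y from above, and the epoch count is arbitrary:

# Minimal training-loop sketch reusing the objects defined above.
for epoch in range(20):
    optimizer.zero_grad()         # clear gradients from the previous step
    pred = linear(x)              # forward pass
    loss = criterion(pred, y)     # compute the MSE loss
    loss.backward()               # backward pass: populate parameter gradients
    optimizer.step()              # Adagrad parameter update

print('loss after training:', criterion(linear(x), y).item())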