AMSgrad Variant (Adam)
Extension to Adam
While the Adam optimizer, which combines momentum with RMSProp, is efficient at adapting learning rates and finding good solutions, it has known convergence issues. Research has shown that there are simple one-dimensional convex functions on which Adam fails to converge.
AMSgrad tackles this convergence issue.
Adam uses adaptive gradients and updates each parameter separately. The size of an update may grow or shrink depending on the exponential moving average of the gradients, and sometimes a parameter update becomes so large that it prevents convergence. Citing the paper "On the Convergence of Adam and Beyond":
"The key difference of AMSGRAD with ADAM is that it maintains the maximum of all $v_t$ until the present time step and uses this maximum value for normalizing the running average of the gradient instead of $v_t$ in ADAM."
Hence, the difference between AMSgrad and Adam lies in the second-moment term used to update the parameters. To put it simply, AMSgrad normalizes with the maximum second moment observed up to the $i^{th}$ iteration.
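For concreteness, the AMSgrad update can be sketched as follows (a minimal form following the paper's notation, with bias correction omitted as in the paper's presentation of AMSgrad; Adam would use $v_t$ in place of $\hat{v}_t$ in the last line):

m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t
v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2
\hat{v}_t = \max(\hat{v}_{t-1}, v_t)
\theta_{t+1} = \theta_t - \frac{\alpha}{\sqrt{\hat{v}_t} + \epsilon} \, m_t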
The performance of Adam and AMSgrad on a synthetic function is shown below to illustrate the convergence problem of Adam.

Figure: Adam vs AMSgrad on an artificial function.

AMSgrad code example

import torch

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold inputs and outputs.
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Use the nn package to define our model and loss function.
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)
loss_fn = torch.nn.MSELoss(reduction='sum')

# Use the optim package to define an Optimizer that will update the weights of
# the model for us. Here we will use Adam; the optim package contains many other
# optimization algorithms. The first argument to the Adam constructor tells the
# optimizer which Tensors it should update.
learning_rate = 1e-4
# Setting amsgrad=True makes Adam use the AMSgrad variant.
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, amsgrad=True)

for t in range(500):
    # Forward pass: compute predicted y by passing x to the model.
    y_pred = model(x)

    # Compute and print loss.
    loss = loss_fn(y_pred, y)
    print(t, loss.item())

    # Before the backward pass, use the optimizer object to zero all of the
    # gradients for the Tensors it will update (which are the learnable weights
    # of the model).
    optimizer.zero_grad()

    # Backward pass: compute gradient of the loss with respect to model parameters.
    loss.backward()

    # Calling the step function on an Optimizer makes an update to its parameters.
    optimizer.step()
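To make the role of the running maximum explicit, here is a minimal hand-rolled sketch of a single AMSgrad update on a plain tensor. The helper name amsgrad_step and the state dictionary layout are illustrative choices of ours, not part of any library, and bias correction is omitted as in the original AMSgrad formulation:

import torch

def amsgrad_step(param, grad, state, lr=1e-3, betas=(0.9, 0.999), eps=1e-8):
    # state holds the running first moment m, second moment v,
    # and the running maximum of v (v_hat) for this parameter.
    beta1, beta2 = betas
    state['m'] = beta1 * state['m'] + (1 - beta1) * grad
    state['v'] = beta2 * state['v'] + (1 - beta2) * grad * grad
    # Key AMSgrad difference: normalize with the maximum of all v seen so far.
    state['v_hat'] = torch.maximum(state['v_hat'], state['v'])
    param -= lr * state['m'] / (state['v_hat'].sqrt() + eps)
    return param

# Example usage on a single tensor parameter.
param = torch.zeros(3)
state = {'m': torch.zeros(3), 'v': torch.zeros(3), 'v_hat': torch.zeros(3)}
grad = torch.tensor([0.1, -0.2, 0.3])
param = amsgrad_step(param, grad, state)

In practice you would simply rely on torch.optim.Adam(..., amsgrad=True) as in the example above; the sketch is only meant to show where the running maximum enters the update.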
