With SGD, the learning rate should not change between epochs, but it does. Please help me understand why this happens and how to prevent the LR from changing.
import torch

params = [torch.nn.Parameter(torch.randn(1, 1))]
optimizer = torch.optim.SGD(params, lr=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1, gamma=0.9)

for epoch in range(5):
    print(scheduler.get_lr())
    scheduler.step()
Output is:
[0.9]
[0.7290000000000001]
[0.6561000000000001]
[0.5904900000000002]
[0.5314410000000002]
My torch version is 1.4.0.
To expand upon xiawi's answer about the "strange" behavior (0.81 is missing): this has been PyTorch's default behavior since the 1.1.0 release. Check the documentation, namely this part:

[...] If you use the learning rate scheduler (calling scheduler.step()) before the optimizer's update (calling optimizer.step()), this will skip the first value of the learning rate schedule.
Additionally, you should get a UserWarning thrown by this function after the first get_lr() call, as you do not call optimizer.step() at all.
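For reference, here is a minimal sketch of the ordering the documentation describes: call optimizer.step() before scheduler.step(), and read the current rate with get_last_lr() (available in torch 1.4.0) instead of get_lr(). The scalar loss on the single parameter is purely illustrative, just so that optimizer.step() actually has gradients to apply:

import torch

params = [torch.nn.Parameter(torch.randn(1, 1))]
optimizer = torch.optim.SGD(params, lr=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.9)

for epoch in range(5):
    # dummy training step so optimizer.step() is actually exercised
    optimizer.zero_grad()
    loss = params[0].pow(2).sum()  # placeholder loss, not from the original post
    loss.backward()

    optimizer.step()                 # update the parameters first ...
    print(scheduler.get_last_lr())   # ... read the LR used for this epoch ...
    scheduler.step()                 # ... then decay the LR for the next epoch

With this ordering the printed sequence should be the full geometric decay (0.9, 0.81, 0.729, ...) and no UserWarning about calling scheduler.step() before optimizer.step() is raised.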