I am testing a cGAN in Keras/TensorFlow, and after 1000 epochs I saved the model.
After a bit of time I restored it and continued training.
This is the resulting val_accuracy: [plot omitted]
You can clearly see an immense drop in val_loss right after the model was restored.
Could someone explain why this happens, or what could have caused it?
Further analysis might be required to prove this, but you might have just unintentionally discovered a technique called "warm restarting" (see, e.g., SGDR: Stochastic Gradient Descent with Warm Restarts). Simply put, you train your model with an annealing learning rate as usual, stop, reset the learning rate, and start over. Intuitively, this gives the model opportunities to jump out of local minima, which might explain the behavior you observed.
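If you want to trigger this effect on purpose instead of by reloading the model, Keras ships a cosine-annealing schedule with built-in restarts. Here is a minimal sketch, assuming TensorFlow 2.x; the toy model and random data are placeholders I made up for illustration, and only the `CosineDecayRestarts` schedule is the point:

```python
import numpy as np
import tensorflow as tf

# Cosine annealing with periodic warm restarts: the learning rate decays
# along a cosine curve for `first_decay_steps` steps, then jumps back up
# and decays again. The hyperparameter values below are arbitrary examples.
schedule = tf.keras.optimizers.schedules.CosineDecayRestarts(
    initial_learning_rate=1e-3,
    first_decay_steps=500,  # length of the first annealing cycle, in steps
    t_mul=2.0,              # each subsequent cycle lasts twice as long
    m_mul=0.8,              # each restart peaks at 80% of the previous peak
)

# Toy stand-in model; a real cGAN generator/discriminator would go here.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=schedule),
    loss="binary_crossentropy",
)

# Random dummy data, just so the sketch runs end to end.
x = np.random.rand(256, 8).astype("float32")
y = np.random.randint(0, 2, size=(256, 1)).astype("float32")
model.fit(x, y, epochs=5, verbose=0)  # the LR now rises and falls per the schedule
```

Each restart resets the learning rate to a (possibly decayed) peak, which gives the optimizer the same periodic kick you produced manually by saving the model and recompiling it with a fresh learning rate.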