I am training a neural network and it stopped due to the gradient stopping condition. From what I can see, the gradient (8.14e-06) is larger than the minimum gradient (1e-05), so why did it stop? Is it because the gradient wasn't improving, so there was little point in continuing?
I am very new to neural networks (and to MATLAB's nntool), so any help/explanation would be much appreciated.
This is not a neural network problem; it is a problem of reading floating-point (e) notation:
8.14e-06 = 8.14 × 10^-6 = 0.00000814 < 0.00001 = 1.0 × 10^-5 = 1e-05
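The digits after the `e` are an integer power of ten, so the exponent is -6, not -0.6. Your gradient really was below the threshold, and the minimum-gradient stopping condition fired exactly as designed. You can let MATLAB confirm the comparison directly:

    % e-notation: the part after 'e' is an integer power of ten,
    % so 8.14e-06 is 8.14 * 10^(-6), not 8.14 * 10^(-0.6)
    8.14e-06 < 1e-05   % ans = logical 1 (true): the gradient is below the threshold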
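If you want training to continue past that point, you can lower the threshold yourself. A minimal sketch, assuming the command-line feedforwardnet interface (nntool builds the same kind of network object; the exact default of min_grad depends on the training function) and hypothetical toy data:

    % Hypothetical toy data, just to make the sketch self-contained
    x = rand(1, 200);                 % inputs
    t = sin(2*pi*x);                  % targets
    net = feedforwardnet(10);         % same kind of network object nntool creates
    net.trainParam.min_grad = 1e-7;   % lower the minimum-gradient stopping threshold
    [net, tr] = train(net, x, t);     % tr is the training record
    tr.stop                           % reason training stopped, e.g. 'Minimum gradient reached.'

Bear in mind that a vanishing gradient usually means the optimizer has settled into a (local) minimum of the error surface, so pushing min_grad lower often buys little further improvement.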