python, tensorflow, conv-neural-network, training-data, efficientnet

Data type preference for training CNN?


I was originally using input data of uint8 type, ranging from 0-255, before learning that standardizing and normalizing should increase learning speed and accuracy. I tried both, with and without zero-centering, and neither method improved learning speed or accuracy for my model relative to the 0-255 uint8 approach. I'm just wondering whether training with, for example, float64 is any different in speed compared with uint8, or whether the number of decimal places in a value has any effect on training speed. Thank you :)
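
For reference, here is roughly what my two preprocessing attempts looked like (a sketch; `x_train` stands in for my actual uint8 image array):

    import numpy as np

    # Stand-in for my uint8 image data (the real set is loaded from disk)
    x_train = np.random.randint(0, 256, size=(100, 224, 224, 3), dtype=np.uint8)
    x = x_train.astype(np.float32)

    # Attempt 1: scale to [0, 1] without zero-centering
    x_scaled = x / 255.0

    # Attempt 2: standardize to zero mean and unit variance per channel
    mean = x.mean(axis=(0, 1, 2), keepdims=True)
    std = x.std(axis=(0, 1, 2), keepdims=True)
    x_standardized = (x - mean) / (std + 1e-7)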


Solution

  • You should always normalize/standardize your images before training. There are many posts about this topic; here are a few (a minimal preprocessing sketch follows at the end of this answer):

    normalize-the-images-before-we-put-them-into-cnn

    Data_Scaling

    Training with INT is faster than FLOAT; however, it's not recommended since there is a general loss in accuracy. Once you have a fully trained model you can quantize it from FLOAT to INT, as sketched below.
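
    For example, a minimal float32 preprocessing step in a tf.data pipeline could look like this (a sketch, with a synthetic dataset standing in for your real images):

        import numpy as np
        import tensorflow as tf

        # Synthetic stand-in for real image data: uint8 pixels in [0, 255]
        images = np.random.randint(0, 256, size=(100, 224, 224, 3), dtype=np.uint8)
        labels = np.random.randint(0, 10, size=(100,))

        def preprocess(image, label):
            # Cast to float32 and scale to [0, 1]; float32 is the usual training
            # dtype, float64 mostly adds memory/bandwidth cost without accuracy gains
            image = tf.cast(image, tf.float32) / 255.0
            return image, label

        train_ds = (tf.data.Dataset.from_tensor_slices((images, labels))
                    .map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)
                    .batch(32)
                    .prefetch(tf.data.AUTOTUNE))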
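
    And after training, post-training quantization from FLOAT to INT can be done with the TFLite converter, roughly like this (a sketch; the tiny Keras model and random calibration data are placeholders for your trained model and real images):

        import numpy as np
        import tensorflow as tf

        # Placeholder trained model (stands in for your EfficientNet)
        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(224, 224, 3)),
            tf.keras.layers.Conv2D(8, 3, activation="relu"),
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dense(10),
        ])

        def representative_data_gen():
            # A few normalized sample batches let the converter calibrate int8 ranges
            for _ in range(10):
                yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

        converter = tf.lite.TFLiteConverter.from_keras_model(model)
        converter.optimizations = [tf.lite.Optimize.DEFAULT]
        converter.representative_dataset = representative_data_gen
        tflite_quant_model = converter.convert()

        with open("model_int8.tflite", "wb") as f:
            f.write(tflite_quant_model)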