Converting ImageNet to TFRecord format is quite complicated, so I downloaded a preprocessed TFRecord version of ImageNet instead.
I fed it to a ResNet-34 in TensorFlow with parameters ported from PyTorch, but the accuracy came out at only 55%, which is far too low. I suspect the reason is that the ImageNet preprocessing used by torchvision's pretrained models differs from the preprocessing baked into this TFRecord pipeline. Someone kindly explained to me how PyTorch preprocesses the data, but I still need to know how this TensorFlow pipeline processes it.
I found that the pixel values in the TFRecord images range from -1 to 1. Can you tell me what preprocessing this TFRecord uses, so I can try to improve the accuracy?
Thanks a lot! I am just a newbie, so your kind help is very important to me.
OK, let me post my first Stack Overflow answer, to my own question.
The difference in image preprocessing does affect the accuracy; that guess was right.
PyTorch scales the images to [0, 1] and then normalizes them with a per-channel mean and std.
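For reference, this is the usual torchvision-style pipeline I mean (the mean/std values are the standard ImageNet ones used by torchvision.models):

import torchvision.transforms as T

# Standard torchvision ImageNet preprocessing:
# ToTensor() maps uint8 [0, 255] to float [0, 1],
# Normalize() then subtracts the per-channel mean and divides by the std.
preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),
])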
The TFRecord pipeline I found also scales the images to [0, 1], but then rescales them to [-1, 1] with:

image = tf.subtract(image, 0.5)   # [0, 1] -> [-0.5, 0.5]
image = tf.multiply(image, 2.0)   # [-0.5, 0.5] -> [-1, 1]

(which seemed like a strange way to do it to me).
So I commented out those two lines and got 66% accuracy. After fine-tuning, I got 72% (but I still don't understand why the accuracy drops when transferring parameters from PyTorch to TensorFlow).
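I only commented those two lines out, but to fully match the PyTorch preprocessing you could also apply the same mean/std normalization on the TensorFlow side. A rough sketch (assuming `image` is a float32 tensor of shape [H, W, 3] in [0, 1]; the mean/std are the standard ImageNet values, not something taken from the TFRecord itself):

import tensorflow as tf

# Dummy image just to make the snippet runnable; in the real pipeline
# `image` is the decoded float32 [H, W, 3] tensor scaled to [0, 1].
image = tf.random.uniform([224, 224, 3], dtype=tf.float32)

IMAGENET_MEAN = tf.constant([0.485, 0.456, 0.406], dtype=tf.float32)
IMAGENET_STD = tf.constant([0.229, 0.224, 0.225], dtype=tf.float32)

# Per-channel normalization, the same as torchvision's Normalize.
image = (image - IMAGENET_MEAN) / IMAGENET_STD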
PS: During this process I found that the parameters set on operators (kernel size, strides, padding, and so on) are saved along with the model in TensorFlow, so you don't need to worry about setting the operator parameters to match by hand.
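For example, a minimal Keras sketch of what I mean (the toy model and file name are just placeholders): saving the full model stores the layer configuration together with the weights, and load_model brings it back.

import tensorflow as tf

# Toy model; the layer hyper-parameters (kernel size, strides, padding)
# are part of the saved model config, not something you re-enter later.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, kernel_size=7, strides=2, padding="same",
                           input_shape=(224, 224, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1000),
])
model.save("ported_resnet34.h5")  # placeholder path

restored = tf.keras.models.load_model("ported_resnet34.h5")
restored.summary()  # same layer configuration comes back automatically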