deep-learning, conv-neural-network, artificial-intelligence, flops

Difficulty understanding FLOPS in this scenario


Given that FLOPS means floating-point operations per second, wouldn't that depend on the power of the machine rather than on the model and how many parameters it has? What am I missing here? The screenshot is from "EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks". Thanks.

[Screenshot: EfficientNet FLOPS comparison table]


Solution

  • Some hardware manufacturers specify FLOPS (floating-point operations per second) as a performance metric for their hardware. The value in the paper, on the other hand, is the approximate number of floating-point operations (FLOPs) the model itself requires, which you can calculate from its architecture. For a regular (not depthwise) convolutional layer it is (according to this):

    FLOPs ≈ 2 · H · W · (C_in · K² + 1) · C_out

    where H and W are the height and width of the output feature map, C_in and C_out are the numbers of input and output channels, K is the kernel size, the +1 accounts for the bias, and the factor 2 covers the two different types of instructions involved: multiplying and accumulating (see the short sketch below).

    You need to keep in mind that a low FLOPs value does not automatically guarantee high performance.
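
To make the calculation concrete, here is a minimal Python sketch of that per-layer formula. The function name conv2d_flops and the example layer dimensions are illustrative assumptions, not something taken from the paper or the answer above.

```python
def conv2d_flops(h_out, w_out, c_in, c_out, k, include_bias=True):
    """Approximate FLOPs of one regular (not depthwise) 2D convolution layer.

    Implements 2 * H * W * (C_in * K^2 + 1) * C_out, where the factor 2
    counts multiplies and accumulates separately and the +1 is the bias add.
    """
    ops_per_output_value = c_in * k * k + (1 if include_bias else 0)
    return 2 * h_out * w_out * ops_per_output_value * c_out


# Hypothetical example layer: 7x7 conv, 3 -> 64 channels, 112x112 output map
print(conv2d_flops(h_out=112, w_out=112, c_in=3, c_out=64, k=7))
# -> 237633536, i.e. roughly 0.24 GFLOPs for this single layer
```

Summing such per-layer counts over the whole network gives the kind of model-level FLOPs figure reported in tables like the one in the EfficientNet paper.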