What does FLOPS mean in the field of deep learning? Why don't we just use the term FLO?
We use the term FLOPS to measure the number of operations needed to run a frozen deep learning network.
According to Wikipedia, FLOPS = floating point operations per second. When we benchmark computing hardware, time is naturally part of the measurement. But when measuring a deep learning network, how should I understand this notion of time? Shouldn't we just use the term FLO (floating point operations)?
Why do people use the term FLOPS? Is there something I'm missing?
==== attachment ====
The frozen deep learning networks I mentioned are just software; this is not about hardware. In the field of deep learning, people use the term FLOPS to measure how many operations are needed to run a network model. In that case, in my opinion, we should use the term FLO. I suspect people are confusing these terms, and I want to know whether others think the same or whether I'm wrong.
Please look at these cases:
how to calculate a net's FLOPs in CNN
https://iq.opengenus.org/floating-point-operations-per-second-flops-of-machine-learning-models/
FLOPS = Floating point operations per second
FLOPs = Floating point operations
FLOPS is a unit of speed: more FLOPS means faster hardware. FLOPs is a unit of amount: fewer FLOPs means a cheaper model to run.
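To make the distinction concrete, here is a minimal sketch that counts the FLOPs (the amount) of one forward pass through a small fully connected network. The layer sizes are made up for illustration; the `2 * n_in * n_out` rule is the common convention of counting each multiply and each add as one operation.

```python
# Minimal sketch: counting FLOPs (amount, not rate) for one forward pass
# of a small MLP. Layer sizes below are hypothetical.

def dense_flops(n_in, n_out):
    # Each output unit does n_in multiplies and n_in - 1 adds,
    # plus 1 add for the bias: 2 * n_in FLOPs per output unit.
    return 2 * n_in * n_out

# (inputs, outputs) for each dense layer, chosen only as an example
layers = [(784, 256), (256, 128), (128, 10)]
total_flops = sum(dense_flops(i, o) for i, o in layers)
print(total_flops)  # → 469504
```

FLOPS, the rate, would then be `total_flops` divided by the measured wall-clock time of one forward pass on a given device; the count above is a property of the model alone and involves no time at all.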