I have a particle simulator that requires many random numbers, mostly in the range 0.0-1.0. It needs them for every particle generated, so you can see it adds up.
I counted no fewer than 60 of them per iteration: (double)(rand() % RAND_MAX) / (RAND_MAX)
They are used for many features: color randomness, size, velocity, forces, and so on.
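To make the pattern concrete, per particle it looks roughly like the sketch below (the struct, field, and helper names are simplified stand-ins, not my actual code):

    #include <cstdlib>

    struct Particle { double r, g, b, size, vx, vy; };

    // The conversion quoted above, wrapped in a helper
    static double rand01()
    {
        return (double)(rand() % RAND_MAX) / (RAND_MAX);
    }

    void spawnParticle(Particle &p)
    {
        p.r    = rand01();           // color randomness
        p.g    = rand01();
        p.b    = rand01();
        p.size = 1.0 + rand01();     // size
        p.vx   = rand01() - 0.5;     // velocity
        p.vy   = rand01() - 0.5;
        // ...roughly 60 such calls per particle in the real code
    }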
My question is two-fold:
Does the rand() function take a big toll on performance, or are 60 calls per particle not worth worrying about? I could easily have 1000 particles, so that would equate to 60,000 rand() calls per iteration!
I know I could compute, say, 5 random floats and re-use them across the 60 uses (a sketch of what I mean is below, after my questions), but I worry that's a bad idea and that the re-use will show up as visible correlations between supposedly independent properties. Computing just one and reusing it everywhere would presumably be even worse, right?
Is there a better way to optimize this?
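To clarify what I mean by re-use in the second question, the idea is roughly the following sketch (the pool size of 5 is arbitrary and RandomPool is just an illustrative name):

    #include <cstdlib>

    static double rand01()
    {
        return (double)(rand() % RAND_MAX) / (RAND_MAX);
    }

    // Draw a small pool of values once per particle and hand them out
    // round-robin for all ~60 uses instead of calling rand() every time.
    struct RandomPool
    {
        double values[5];
        int    next = 0;

        void refill()
        {
            for (double &v : values)
                v = rand01();
            next = 0;
        }

        double get()
        {
            double v = values[next];
            next = (next + 1) % 5;
            return v;
        }
    };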
This is less an answer to your question than a comment: even experienced programmers cannot tell where their code's bottlenecks are unless they actually profile it. The same applies here: you are wondering whether one part of the code is a performance problem, but you present no evidence that you have measured that part, and consequently you may very well be wasting your time worrying about it.
So the recommendation would be: If your code is too slow, run it with a profiler and see whether the calls to generate random numbers show up anywhere close to the top. If they do, then you can start wondering about solutions. If they don't, well, then you just gained a couple of days to do something else!
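If you want a crude first check before reaching for a full profiler, you could also time the exact expression from the question in isolation, along these lines (a rough micro-benchmark sketch, not a substitute for profiling the whole simulation):

    #include <chrono>
    #include <cstdio>
    #include <cstdlib>

    int main()
    {
        const int calls = 60 * 1000;   // 60 calls per particle x 1000 particles
        volatile double sink = 0.0;    // keeps the compiler from removing the loop

        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < calls; ++i)
            sink = sink + (double)(rand() % RAND_MAX) / (RAND_MAX);
        auto stop = std::chrono::steady_clock::now();

        std::chrono::duration<double, std::milli> elapsed = stop - start;
        std::printf("%d calls took %.3f ms\n", calls, elapsed.count());
    }

If that number is tiny compared to a frame's budget, you have your answer without touching the simulator at all.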