Action 1: Multiplication of a random number a with another random number b
Action 2: Multiplication of the same number a with 0
I did a small experiment to see which of these actions has the smaller execution time. I wrote the small program below, which performs Action 1 a large number of times while measuring its total execution time, then repeats the same for Action 2, and compares the two timings. I repeat this whole process 100 times to get more reliable results.
Run the program and you will see that, for some reason, Action 1 is faster than Action 2 most of the time (around 75% of the runs). What could be the explanation for something like that?
import time
import numpy as np

def compare_execution_times(a, b):
    # Measure the execution time of multiplication with a non-zero b
    start_time = time.time()
    for _ in range(1000000):  # Perform the multiplication a large number of times
        result = a * b
    end_time = time.time()
    first_execution_time = end_time - start_time

    # Measure the execution time of multiplication with zero
    start_time = time.time()
    for _ in range(1000000):  # Perform the multiplication a large number of times
        result = a * 0
    end_time = time.time()
    second_execution_time = end_time - start_time

    return first_execution_time < second_execution_time

count_true = 0
count_false = 0
for _ in range(100):
    a = np.random.rand()  # Generate a random a
    b = np.random.rand()  # Generate a random b
    if compare_execution_times(a, b):
        count_true += 1
    else:
        count_false += 1

print("\nNumber of times first execution was smaller:", count_true)
print("Number of times second execution was smaller:", count_false)
Edit: One mistake that I made is that in Action 2 the 0 is an int, but it should be 0.0, i.e. a float, for a fairer comparison (see the answer below).
time.time is terrible for measuring fine-grained performance. The timeit module, or the %timeit magic in IPython, handles a lot of small errors that can creep in with naive timing.
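Outside IPython, a rough equivalent using the timeit module might look like this (a minimal sketch; the loop count and the min-of-repeats convention are just illustrative choices):

import timeit

setup = "import random; a, b = random.random(), random.random()"
loops = 10_000_000

# timeit.repeat returns one total time per run; take the minimum as the
# least-noisy estimate and divide by the loop count for a per-loop figure.
t_ab = min(timeit.repeat("a * b", setup=setup, number=loops, repeat=7))
t_a0 = min(timeit.repeat("a * 0.0", setup=setup, number=loops, repeat=7))

print(f"a * b  : {t_ab / loops * 1e9:.2f} ns per loop")
print(f"a * 0.0: {t_a0 / loops * 1e9:.2f} ns per loop")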
Your a and b are floating-point values, not Python ints, so type conversions get involved only when multiplying by 0, not by b. It wouldn't surprise me if mismatched types were more expensive than matched types. Changing your zero literal to 0. or 0.0 would likely reduce the runtime for the zero case a bit.
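As a rough check of that claim with the question's own setup (a sketch, assuming NumPy is installed, since a comes from np.random.rand() there):

import timeit

setup = "import numpy as np; a = np.random.rand()"  # a is a NumPy float64
loops = 1_000_000

# Mismatched types: the int literal 0 has to go through type coercion first.
t_int = min(timeit.repeat("a * 0", setup=setup, number=loops, repeat=7))
# Matched types: the float literal 0.0 multiplies directly.
t_float = min(timeit.repeat("a * 0.0", setup=setup, number=loops, repeat=7))

print(f"a * 0   (int literal):   {t_int / loops * 1e9:.1f} ns per loop")
print(f"a * 0.0 (float literal): {t_float / loops * 1e9:.1f} ns per loop")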
If you want a legitimate comparison, here's an example using pure Python floats:
In [1]: %%timeit import random; a, b = random.random(), random.random()
...: a*b
...:
...:
13.6 ns ± 0.13 ns per loop (mean ± std. dev. of 7 runs, 100,000,000 loops each)
In [2]: %%timeit import random; a, b = random.random(), random.random()
...: a*0.0
...:
...:
13.2 ns ± 0.105 ns per loop (mean ± std. dev. of 7 runs, 100,000,000 loops each)
(note I used 0.0 so the literal was a float literal, not an int) and with pure Python ints:
In [3]: %%timeit import random; a, b = random.randrange(16), random.randrange(16)
...: a*b
...:
...:
11.4 ns ± 0.986 ns per loop (mean ± std. dev. of 7 runs, 100,000,000 loops each)
In [4]: %%timeit import random; a, b = random.randrange(16), random.randrange(16)
...: a*0
...:
...:
11.3 ns ± 0.33 ns per loop (mean ± std. dev. of 7 runs, 100,000,000 loops each)
In both cases, multiplying by zero was slightly faster, but not by enough to matter (in the second case, the timings were so close I suspect they're statistically insignificant; I'd only see a win if I'd allowed the randrange to go higher than 16, causing the integer math to produce new ints rather than pulling from the small int cache).
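That small int cache is easy to observe directly (a minimal sketch; CPython caches ints in roughly the range -5 to 256, which is an implementation detail, not a language guarantee):

x = 7
a = x * 9        # 63 is inside the small-int cache...
b = x * 9
print(a is b)    # True: both multiplications return the same cached object

c = x * 90       # ...but 630 is not,
d = x * 90
print(c is d)    # False in CPython: each multiplication allocates a new int

This is also why, for ints in CPython, multiplying by zero never pays an allocation cost: the result is always the cached zero object.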