I'm new to using Python's timeit module to benchmark code, but the numbers I'm getting make me think I'm misinterpreting the results.
This question has two parts:
Part A. In the code below, I'm using timeit to measure the speed of Python's sort() method for lists. In this example, I'm sorting an empty list:
import timeit
setup_code = '''
array = []
'''
test_code = '''
array.sort()
'''
print(timeit.timeit(stmt=test_code, setup=setup_code, number=1))
I get results like 4.437984898686409e-06 and 2.2110179997980595e-06. My understanding is that this is a number of seconds. But here's the thing: I get these results instantaneously; as soon as I hit enter, the number appears on my screen. Shouldn't I, by definition, have to wait somewhere between 2 and 4 seconds before I see those results?
Part B. Below, I measure the speed of sorting a list of 100,000 random integers:
import timeit
setup_code = '''
import random
array = []
for i in range(100000):
    n = random.randint(1, 100000)
    array.append(n)
'''
test_code = '''
array.sort()
'''
print(timeit.timeit(stmt=test_code, setup=setup_code, number=1))
Here, I get what seem to be more accurate results, like 0.02303651801776141. But why is timeit telling me that sorting a large list is much faster than sorting an empty list?
Thank you in advance for your help! I'm using a MacBook Air, and I get the same results in both Python 2.7 and Python 3.11.
4e-06 is in scientific notation; it means 4 * 10**-6. timeit is telling you that sorting an empty list takes two to four microseconds.
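A quick way to see this for yourself (a minimal sketch; it assumes Python 3 for the f-string, and the variable names are just illustrative): print the result as a plain decimal, and raise the number argument until the total run time is long enough to notice. Note that timeit returns the total time for all number executions combined, not the per-call time.
import timeit

# Print the Part A measurement as a plain decimal rather than
# scientific notation; 4.437984898686409e-06 shows up as 0.000004438.
t = timeit.timeit(stmt='array.sort()', setup='array = []', number=1)
print(f'{t:.9f} seconds per run')

# Make the wait noticeable by running the sort a million times. At
# roughly two to four microseconds per call, the total should take a
# couple of seconds of wall-clock time to come back.
total = timeit.timeit(stmt='array.sort()', setup='array = []', number=1000000)
print(total, 'seconds for all 1,000,000 runs')
print(total / 1000000, 'seconds per run')
The same reading resolves Part B: 0.023 seconds is far larger than 4e-06 seconds, so timeit is actually reporting that sorting the 100,000-element list is thousands of times slower than sorting the empty one, not faster.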