I want to compare execution time of two snippets and see which one is faster. So, I want an accurate method to measure execution time of my python snippets.
I already tried using time.time(), time.process_time(), and time.perf_counter_ns(), as well as timeit.timeit(), but I am facing the same issue with all of them: when I use any of these methods to measure execution time of THE SAME snippet, it returns a different value each time I run it. The variation is significant enough that I cannot reliably use them to compare the execution times of two snippets.
As an example, I am running the following code in Google Colab:
import time

t1 = time.perf_counter()
sample_list = []
for i in range(1000000):
    sample_list.append(i)
t2 = time.perf_counter()
print(t2 - t1)
I ran the above code 10 times, and the variation in my results is about 50% (min value = 0.14, max value = 0.28).
Any alternatives?
The execution time of a given code snippet will almost always differ from run to run. Most tools available for profiling a single function or snippet take this into account: they run the code many times and report an average (or best) execution time. The variation happens because other processes are running on your computer and resources are not allocated the same way every time, so it is impossible to control every variable and get an identical measurement for each run.
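One way to smooth out this noise without leaving plain Python is `timeit.repeat`, which runs the snippet several times and lets you compare the minimum (the least-disturbed run) instead of a single noisy measurement. A sketch using the loop from the question; the `repeat` and `number` values here are illustrative, not prescriptive:

```python
import timeit

# The snippet from the question, passed to timeit as a string.
snippet = """
sample_list = []
for i in range(1000000):
    sample_list.append(i)
"""

# 5 independent runs, each executing the snippet 3 times.
# repeat() returns one total time per run.
times = timeit.repeat(snippet, repeat=5, number=3)

# The minimum run is the one least disturbed by other processes,
# so it is usually the most stable number to compare against.
per_execution = min(times) / 3
print(f"best of 5 runs: {per_execution:.4f} s per execution")
```

The maximum is rarely useful (it mostly measures interference from the rest of the system); the minimum or the mean of the lowest few runs is what you want when comparing two snippets.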
One of the easiest ways to profile a given function or short snippet of code is the %timeit "magic" command in IPython. Example:

>>> %timeit 1 + 1
8.41 ns ± 0.0181 ns per loop (mean ± std. dev. of 7 runs, 100,000,000 loops each)

It also allows you to time a multi-line block of code if you use %%timeit instead of %timeit.
The timeit library can be used independently, but it is often easier to use in an interactive ipython session.
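Outside an interactive session, the same library can time a callable directly, which avoids passing code around as a string. A minimal sketch; the function name and `number` value are just for illustration:

```python
import timeit

def build_list():
    # Equivalent work to the snippet in the question.
    return [i for i in range(1000000)]

# Total wall time for 10 calls; divide to get time per call.
total = timeit.timeit(build_list, number=10)
print(f"{total / 10:.4f} s per call")
```

This is handy in ordinary scripts or CI checks where no IPython magic is available.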