I am trying to get timestamps that are accurate down to the microsecond on Windows and macOS in Python 3.10+.

On Windows, I have noticed that Python's built-in time.time() (paired with datetime.fromtimestamp()) and datetime.datetime.now() seem to use a slower clock: they don't have enough resolution to differentiate microsecond-level events. The good news is that functions like time.perf_counter() and time.time_ns() do use a clock fast enough to measure microsecond-level events. Sadly, I can't figure out how to get their output into datetime objects. How can I get the output of time.perf_counter() or PEP 564's nanosecond-resolution time functions into a datetime object?

Note: I don't need nanosecond resolution, so it's fine to throw away precision below 1 μs.
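To illustrate the resolution gap, a minimal sketch (results vary per machine; on Windows the system clock commonly ticks at roughly 1-16 ms):

import time
from datetime import datetime

# Repeated datetime.now() calls can return identical timestamps on Windows,
# while time.perf_counter_ns() typically advances between every call.
stamps = [datetime.now() for _ in range(10)]
print(len(set(stamps)))    # often < 10 on Windows

counters = [time.perf_counter_ns() for _ in range(10)]
print(len(set(counters)))  # typically 10: all distinct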
Current Solution
This is my current (hacky) solution, which actually works fine, but I am wondering if there's a cleaner way:
import time
from datetime import datetime, timedelta
from typing import Final

IMPORT_TIMESTAMP: Final[datetime] = datetime.now()
INITIAL_PERF_COUNTER: Final[float] = time.perf_counter()

def get_timestamp() -> datetime:
    """Get a high resolution timestamp with μs-level precision."""
    dt_sec = time.perf_counter() - INITIAL_PERF_COUNTER
    return IMPORT_TIMESTAMP + timedelta(seconds=dt_sec)
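Example usage (output values are illustrative; they depend on the machine and on when the module was imported):

ts1 = get_timestamp()
ts2 = get_timestamp()
print(ts1)        # e.g. 2023-03-01 10:23:45.123456
print(ts2 - ts1)  # a microsecond-level timedelta, e.g. 0:00:00.000002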
That's almost as good as it gets, since the C module, if available, overrides all classes defined in the pure-Python implementation of the datetime module with the fast C implementation, and there are no hooks. Reference: python/cpython@cf86e36
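You can see that override in action with CPython's own test helper, which can block the C accelerator module (a sketch for experimentation only; import_fresh_module is an internal test utility, not a public API):

from test.support.import_helper import import_fresh_module

# Import datetime with the C accelerator (_datetime) blocked vs. forced fresh.
py_datetime = import_fresh_module("datetime", blocked=["_datetime"])
c_datetime = import_fresh_module("datetime", fresh=["_datetime"])
print(py_datetime.datetime is c_datetime.datetime)  # False: two implementations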
Note that:

- there is a delay between obtaining the anchor timestamp with datetime.now() and obtaining the performance counter (see the sketch below), and
- every call to get_timestamp() constructs a datetime and a timedelta.

Depending on your specific use case, if calling it multiple times, that may or may not matter.
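A rough way to gauge that initialization skew (a sketch; the printed bound varies per run and machine):

import time
from datetime import datetime

# Sample the performance counter immediately before and after the
# datetime.now() anchor to bound the offset between the two clocks.
before = time.perf_counter()
anchor = datetime.now()
after = time.perf_counter()
print(f"anchor skew bounded by {(after - before) * 1e6:.2f} μs")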
A slight improvement would be:
import time
from datetime import datetime
from typing import Final

INITIAL_TIMESTAMP: Final[float] = time.time()
INITIAL_TIMESTAMP_PERF_COUNTER: Final[float] = time.perf_counter()

def get_timestamp_float() -> float:
    dt_sec = time.perf_counter() - INITIAL_TIMESTAMP_PERF_COUNTER
    return INITIAL_TIMESTAMP + dt_sec

def get_timestamp_now() -> datetime:
    dt_sec = time.perf_counter() - INITIAL_TIMESTAMP_PERF_COUNTER
    return datetime.fromtimestamp(INITIAL_TIMESTAMP + dt_sec)
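The float variant avoids constructing datetime and timedelta objects on every call; you only pay for datetime.fromtimestamp() when a datetime is actually needed. Example usage (output values are illustrative):

print(get_timestamp_float())  # e.g. 1677666225.1234567 (Unix seconds as a float)
print(get_timestamp_now())    # e.g. 2023-03-01 10:23:45.123456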
Windows and macOS:

| | Windows | macOS |
|---|---|---|
| # Intrinsic error | | |
| timeit.timeit('datetime.now()', setup='from datetime import datetime')/1000000 | 0.31 μs | 0.61 μs |
| timeit.timeit('time.time()', setup='import time')/1000000 | 0.07 μs | 0.08 μs |
| # Performance cost | | |
| setup = 'from datetime import datetime, timedelta; import time' | - | - |
| timeit.timeit('datetime.now() + timedelta(1.000001)', setup=setup)/1000000 | 0.79 μs | 1.26 μs |
| timeit.timeit('datetime.fromtimestamp(time.time() + 1.000001)', setup=setup)/1000000 | 0.44 μs | 0.69 μs |
| # Resolution | | |
| min time() delta (benchmark) | x ms | 716 ns |
| min get_timestamp_float() delta | 239 ns | 239 ns |
239 ns is the smallest difference that a float allows at the magnitude of current Unix time, as noted by Kelly Bundy in the comments:

import math
import time

x = time.time()
print((math.nextafter(x, 2 * x) - x) * 1e9)  # 238.4185791015625
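Equivalently, math.ulp() (available since Python 3.9) returns that float spacing directly:

import math
import time

# ulp(x) is the gap between x and the next larger float, ≈ 238.4 ns
# at the magnitude of present-day Unix timestamps.
print(math.ulp(time.time()) * 1e9)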
Resolution script, based on https://www.python.org/dev/peps/pep-0564/#script:
import math
import time
from typing import Final

LOOPS = 10 ** 6

INITIAL_TIMESTAMP: Final[float] = time.time()
INITIAL_TIMESTAMP_PERF_COUNTER: Final[float] = time.perf_counter()

def get_timestamp_float() -> float:
    dt_sec = time.perf_counter() - INITIAL_TIMESTAMP_PERF_COUNTER
    return INITIAL_TIMESTAMP + dt_sec

# Smallest nonzero difference between two back-to-back time.time() calls.
deltas = [abs(time.time() - time.time()) for _ in range(LOOPS)]
min_dt = min(filter(bool, deltas))
print("min time() delta: %s ns" % math.ceil(min_dt * 1e9))

# Smallest nonzero difference between two back-to-back get_timestamp_float() calls.
deltas = [abs(get_timestamp_float() - get_timestamp_float()) for _ in range(LOOPS)]
min_dt = min(filter(bool, deltas))
print("min get_timestamp_float() delta: %s ns" % math.ceil(min_dt * 1e9))