I'm doing a lot of calculations with times, building time objects relative to other time objects by adding seconds. The code is supposed to run on embedded devices and servers. Most documentation says that time_t is some arithmetic type, usually storing the time since the epoch. How safe is it to assume that time_t stores a number of seconds since something? If we can assume that, then we can just use addition and subtraction rather than localtime, mktime and difftime.
So far I've solved the problem by using a constexpr bool time_tUsesSeconds, denoting whether it is safe to assume that time_t uses seconds. If it's non-portable to assume time_t is in seconds, is there a way to initialize that constant automatically?
time_t timeByAddingSeconds(time_t theTime, int timeIntervalSeconds) {
    if (time_tUsesSeconds) {
        // Fast path: time_t counts seconds, so plain addition works.
        return theTime + timeIntervalSeconds;
    } else {
        // Portable path: go through the calendar representation.
        tm timeComponents = *localtime(&theTime);
        timeComponents.tm_sec += timeIntervalSeconds;
        return mktime(&timeComponents);
    }
}
The fact that time_t is in seconds is stated by the POSIX specification, so if you're coding for POSIX-compliant environments, you can rely on that.
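As a rough sketch of how you could lean on that guarantee only where it actually holds (the helper name addSecondsPosix is just for illustration, and the _POSIX_VERSION feature-test macro comes from <unistd.h> on POSIX systems):

#include <ctime>
#if defined(__unix__) || defined(__APPLE__)
#include <unistd.h>   // defines _POSIX_VERSION on POSIX systems
#endif

std::time_t addSecondsPosix(std::time_t theTime, int timeIntervalSeconds) {
#ifdef _POSIX_VERSION
    // POSIX guarantees that time_t counts seconds since the Epoch,
    // so plain integer arithmetic is well defined here.
    return theTime + timeIntervalSeconds;
#else
    // Elsewhere, fall back to going through the calendar representation.
    std::tm timeComponents = *std::localtime(&theTime);
    timeComponents.tm_sec += timeIntervalSeconds;
    return std::mktime(&timeComponents);
#endif
}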
The C++ standard also states that time_t must be an arithmetic type.
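That guarantee, and nothing more, can be checked at compile time; a minimal sketch using std::is_arithmetic from <type_traits>:

#include <ctime>
#include <type_traits>

// The standard only promises an arithmetic type; it says nothing about
// the unit or the epoch, so this is all you can assert portably.
static_assert(std::is_arithmetic<std::time_t>::value,
              "time_t is required to be an arithmetic type");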
Anyway, the Unix timing system (seconds since the Epoch) is going to overflow in 2038. So it's very likely that, before that date, C++ implementations will switch to data types other than a 32-bit int (either a 64-bit integer or a more complex datatype). Switching to a 64-bit integer would break binary compatibility with previous code (since it requires bigger variables), and everything would have to be recompiled. Using 32-bit opaque handles, on the other hand, would not break binary compatibility: you could change the underlying library and everything would still work, but time_t would no longer be a time in seconds; it would be an index into an array of times in seconds. For this reason, it's suggested that you use the functions you mentioned to manipulate time_t values, and do not assume anything about time_t.
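For example, the difference between two time_t values can be computed without assuming their representation, since difftime is defined to return seconds as a double whatever arithmetic type time_t happens to be (the helper name secondsBetween is just for illustration):

#include <ctime>

// Portable: difftime returns the difference in seconds regardless of
// how time_t is represented on this platform.
double secondsBetween(std::time_t later, std::time_t earlier) {
    return std::difftime(later, earlier);
}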