
What is the precision of the gettimeofday function?


I am reading the single.dvi chapter of OSTEP. In the homework section, it says:

One thing you’ll have to take into account is the precision and accuracy of your timer. A typical timer that you can use is gettimeofday(); read the man page for details. What you’ll see there is that gettimeofday() returns the time in microseconds since 1970; however, this does not mean that the timer is precise to the microsecond. Measure back-to-back calls to gettimeofday() to learn something about how precise the timer really is; this will tell you how many iterations of your null system-call test you’ll have to run in order to get a good measurement result. If gettimeofday() is not precise enough for you, you might look into using the rdtsc instruction available on x86 machines.

I wrote some code to test the cost of calling the gettimeofday() function, shown below:

#include <stdio.h>
#include <sys/time.h>

#define MAX_TIMES 100000

void m_gettimeofday() {
    struct timeval current_time[MAX_TIMES];
    int i;
    for (i = 0; i < MAX_TIMES; ++i) {
        gettimeofday(&current_time[i], NULL);
    }
    printf("seconds: %ld\nmicro_seconds: %ld\n", current_time[0].tv_sec, current_time[0].tv_usec);
    printf("seconds: %ld\nmicro_seconds: %ld\n", current_time[MAX_TIMES - 1].tv_sec, current_time[MAX_TIMES - 1].tv_usec);
    printf("the average time of a gettimeofday function call is: %ld us\n", (current_time[MAX_TIMES - 1].tv_usec - current_time[0].tv_usec) / MAX_TIMES);
}

int main(int argc, char *argv[]) {
    m_gettimeofday();
    return 0;
}

However, the reported average is always 0 microseconds. It seems as if the precision of the gettimeofday() function is exactly one microsecond. What's wrong with my test code? Or have I misunderstood the author's meaning? Thanks for the help!


Solution

  • The average number of microseconds that elapses between consecutive calls to gettimeofday is usually less than one; on my machine it is somewhere between 0.05 and 0.15.

    Modern CPUs usually run at GHz speeds, i.e. billions of instructions per second, so two consecutive instructions should take on the order of nanoseconds, not microseconds (obviously two calls to a function like gettimeofday are more complex than two simple opcodes, but they should still take on the order of tens of nanoseconds, not more).

    But you are performing an integer division: you divide (current_time[MAX_TIMES - 1].tv_usec - current_time[0].tv_usec) by MAX_TIMES, and since both operands are integer types, C truncates the result to an integer as well - in this case 0.


    To get the real measurement, divide by (double)MAX_TIMES (and print the result as a double):

    printf("the average time of a gettimeofday function call is: %f us\n", (current_time[MAX_TIMES - 1].tv_usec - current_time[0].tv_usec) / (double)MAX_TIMES);
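
    Note that subtracting only the tv_usec fields assumes tv_sec did not change between the first and the last call; if the seconds field rolls over during the loop, the difference can even come out negative. A slightly more robust variant of the same measurement (a sketch reusing the current_time array from your code) is:

    /* full elapsed time in microseconds, using both the seconds and microseconds fields */
    double elapsed_us = (current_time[MAX_TIMES - 1].tv_sec - current_time[0].tv_sec) * 1000000.0
                      + (current_time[MAX_TIMES - 1].tv_usec - current_time[0].tv_usec);
    printf("the average time of a gettimeofday function call is: %f us\n", elapsed_us / MAX_TIMES);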
    

    As a bonus: on Linux systems the reason gettimeofday is so fast (you might expect it to be a more expensive function that has to call into the kernel and pay the overhead of a syscall) is a special mechanism called the vDSO, which lets the kernel expose the current time information to user space so that the call completes without actually entering the kernel at all.
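
    If gettimeofday still isn't fine-grained enough for your measurement, the rdtsc route the book mentions can be reached through a compiler intrinsic. The following is only a minimal sketch, assuming GCC or Clang on an x86-64 Linux machine: it reports raw CPU cycles rather than wall-clock time (you would need the TSC frequency to convert), and a careful measurement would also serialize the counter reads (e.g. with rdtscp or fences).

    #include <stdio.h>
    #include <sys/time.h>
    #include <x86intrin.h>   /* __rdtsc() with GCC/Clang; MSVC has it in <intrin.h> */

    #define MAX_TIMES 100000

    int main(void) {
        struct timeval tv;

        /* Read the time-stamp counter before and after the loop; the
           difference is in CPU cycles, not microseconds. */
        unsigned long long start = __rdtsc();
        for (int i = 0; i < MAX_TIMES; ++i) {
            gettimeofday(&tv, NULL);
        }
        unsigned long long end = __rdtsc();

        printf("average cycles per gettimeofday call: %f\n",
               (end - start) / (double)MAX_TIMES);
        return 0;
    }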