I have probably missed some important point of the getrusage() function.
When I run the following code, I get:
User time:	0.000000 s
System time:	0.000000 s
Total time:	0.000000 s
void naiveSearch(unsigned char *text, unsigned int textLen, unsigned char *pattern, unsigned int pattLen)
{
    register unsigned int count = 0;
    for(int i = 0; i <= textLen - pattLen; i++)
    {
        int j;
        for(j = 0; j < pattLen; j++)
            if((unsigned char)text[i+j] != (unsigned char)pattern[j])
                break;
        if(j == pattLen)
            count++;
    }
    //printf("naiveSearch: count = %d\n",count);
}
int main(int argc, char * argv[])
{
    struct rusage ruse;
    double ssec1, ssec2, usec1, usec2;

    getrusage(RUSAGE_SELF, &ruse);
    ssec1 = (double)(ruse.ru_stime.tv_sec * 1000000 + ruse.ru_stime.tv_usec);
    usec1 = (double)(ruse.ru_utime.tv_sec * 1000000 + ruse.ru_utime.tv_usec);

    for(int k = 0; k < 1000; k++)
        naiveSearch(writeBuffer, outSize, writeBuffer + rand, 16);  /* writeBuffer, outSize, rand are set up elsewhere */

    getrusage(RUSAGE_SELF, &ruse);
    ssec2 = (double)(ruse.ru_stime.tv_sec * 1000000 + ruse.ru_stime.tv_usec);
    usec2 = (double)(ruse.ru_utime.tv_sec * 1000000 + ruse.ru_utime.tv_usec);

    printf("User time:\t%f s\n", (usec2 - usec1)/(double)1000000);
    printf("System time:\t%f s\n", (ssec2 - ssec1)/(double)1000000);
    printf("Total time:\t%f s\n", ((usec2 + ssec2) - (usec1 + ssec1))/(double)1000000);
    return 0;
}
However, when I run the same code with the printf() call inside naiveSearch uncommented, I get an expected, realistic time. Why is zero time measured in the first case? How can I reliably measure the time?
Thank you for any idea. P.
UPDATE: There must be some safe and reliable way to test algorithms in a loop. I need to keep all the standard compiler optimizations enabled and still have every iteration of the algorithm actually executed. Is there a way to do that? How can I measure time performance and be certain that the results represent all the iterations performed?
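For illustration, this is the kind of harness I have in mind. It is only a sketch: I made naiveSearch return count and added it to a volatile variable, and I do not know whether volatile is really enough to stop the optimizer (hence the question):

#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <sys/resource.h>

/* same search as above, but the result is returned instead of discarded */
static unsigned int naiveSearch(const unsigned char *text, unsigned int textLen,
                                const unsigned char *pattern, unsigned int pattLen)
{
    unsigned int count = 0;
    for(unsigned int i = 0; i + pattLen <= textLen; i++)
    {
        unsigned int j;
        for(j = 0; j < pattLen; j++)
            if(text[i+j] != pattern[j])
                break;
        if(j == pattLen)
            count++;
    }
    return count;
}

int main(void)
{
    static unsigned char text[1 << 20];    /* 1 MiB of dummy text */
    unsigned char pattern[16];
    memset(text, 'a', sizeof text);
    memset(pattern, 'b', sizeof pattern);  /* never matches; every position is still visited */

    volatile unsigned int sink = 0;        /* stores to a volatile should not be removable */

    struct rusage r1, r2;
    getrusage(RUSAGE_SELF, &r1);
    for(int k = 0; k < 1000; k++)
        sink += naiveSearch(text, sizeof text, pattern, 16);
    getrusage(RUSAGE_SELF, &r2);

    long us = (r2.ru_utime.tv_sec - r1.ru_utime.tv_sec) * 1000000L
            + (r2.ru_utime.tv_usec - r1.ru_utime.tv_usec);
    printf("User time: %f s (sink = %u)\n", us / 1e6, sink);
    return 0;
}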
I think your program doesn't use enough CPU time for the timer to register anything; the clock tick probably isn't fine-grained enough for your naiveSearch function to show up. Also, there's no reason to cast the times to doubles. Try something like this:
#include <sys/time.h>
#include <sys/resource.h>
#include <stdio.h>
int runalot()
{
    int i = 0;
    int j = 1;
    for (i = 0; i < 100000000; i++) {
        if (i % j == 4) {
            j += 1;
        }
    }
    return j;
}
int main(int argc, char * argv[])
{
    int j;
    struct rusage ruse1, ruse2;

    getrusage(RUSAGE_SELF, &ruse1);
    j = runalot();
    printf("runalot returns %d\n", j);
    getrusage(RUSAGE_SELF, &ruse2);

    /* parenthesize the start time so the whole of it is subtracted */
    printf("CPU usage is: %ld\n",
           (ruse2.ru_utime.tv_sec * 1000000L + ruse2.ru_utime.tv_usec)
         - (ruse1.ru_utime.tv_sec * 1000000L + ruse1.ru_utime.tv_usec));
    return 0;
}
When I compile it with gcc on Linux with the -g debugging flag and run it, I get:

CPU usage is: 372000
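Regarding your update: the other thing to watch for is that an optimizing compiler is allowed to delete work whose result is never used, and that may be exactly what happens to your naiveSearch, since count is discarded. One GCC/Clang-specific trick (a sketch, not portable ISO C) is an empty asm statement that claims to consume the result, so the call cannot be dead-code-eliminated while everything else is still optimized normally:

/* Empty extended-asm statement that pretends to read x. GCC and Clang
   must then actually compute x, even at -O2, but the barrier itself
   emits no instructions. Compiler-specific, not portable ISO C. */
#define USE(x) __asm__ volatile("" : : "g"(x) : "memory")

static unsigned int work(unsigned int n)
{
    unsigned int s = 0;
    for(unsigned int i = 0; i < n; i++)
        s += i * i;            /* trivially removable if the result were unused */
    return s;
}

int main(void)
{
    for(int k = 0; k < 1000; k++)
        USE(work(1000000));    /* each call survives dead-code elimination */
    return 0;
}

A portable alternative is to accumulate the results into a volatile variable, or to print a final checksum after the timing loop.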