I've been trying to convince a friend of mine to move away from dynamically allocated arrays and over to std::vector. I sent him some sample code to show a couple of things that can be done with the STL and functors/generators:
#include <iostream>
#include <vector>
#include <algorithm>
#include <iterator>
#include <cstdlib>   // for rand() and RAND_MAX

#define EVENTS 10000000

// Generator functor: returns a uniform double in [0, 1].
struct random_double {
    double operator()() { return (double)rand() / RAND_MAX; }
};

int main(int argc, char **argv) {
    std::vector<double> vd(EVENTS);
    // Fill the vector with the generator, then stream it to stdout.
    std::generate(vd.begin(), vd.end(), random_double());
    std::copy(vd.begin(), vd.end(), std::ostream_iterator<double>(std::cout, "\n"));
    return 0;
}
His reply, although he agrees the STL version is more elegant, is that his own code is almost twice as fast. Here's the C code he sent back:
#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>
#include <string.h>

#define EVENTS 10000000

__inline double random_double() {
    return (double)rand() / RAND_MAX;
}

int main(int argc, char **argv) {
    unsigned int i;
    double *vd;

    vd = (double *) malloc(EVENTS * sizeof(double));
    for (i = 0; i < EVENTS; i++) { vd[i] = random_double(); }
    for (i = 0; i < EVENTS; i++) { printf("%lf\n", vd[i]); }
    free(vd);
    return 0;
}
So I ran a simple timing test to see what happens, and here's what I got:
> time ./c++test > /dev/null
real 0m14.665s
user 0m14.577s
sys 0m0.092s
> time ./ctest > /dev/null
real 0m8.070s
user 0m8.001s
sys 0m0.072s
The compiler was g++ and the options were -finline -funroll-loops; nothing too special. Can anyone tell me why the C++/STL version is slower in this case? Where is the bottleneck, and will I ever be able to sell my friend on using STL containers?
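One way I could narrow this down is to time the fill and the output phases separately. A rough sketch of that idea, assuming a C++11 compiler for <chrono> (the timings go to stderr so they're still visible when stdout is redirected to /dev/null):

#include <iostream>
#include <vector>
#include <algorithm>
#include <iterator>
#include <cstdlib>
#include <chrono>

#define EVENTS 10000000

struct random_double {
    double operator()() { return (double)rand() / RAND_MAX; }
};

int main() {
    typedef std::chrono::steady_clock Clock;

    std::vector<double> vd(EVENTS);

    // Time the fill phase by itself.
    Clock::time_point t0 = Clock::now();
    std::generate(vd.begin(), vd.end(), random_double());
    Clock::time_point t1 = Clock::now();

    // Time the output phase by itself.
    std::copy(vd.begin(), vd.end(), std::ostream_iterator<double>(std::cout, "\n"));
    Clock::time_point t2 = Clock::now();

    // Report on stderr so the numbers survive "> /dev/null" on stdout.
    std::cerr << "fill:   "
              << std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count() << " ms\n"
              << "output: "
              << std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count() << " ms\n";
    return 0;
}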
Update: using printf for the output instead of the ostream_iterator:
// Requires #include <cstdio>; replaces the std::copy line above.
for (std::vector<double>::iterator i = vd.begin(); i != vd.end(); ++i)
    printf("%lf\n", *i);
the results are:
koper@elisha ~/b $ time ./cpp > /dev/null
real 0m4.985s
user 0m4.930s
sys 0m0.050s
koper@elisha ~/b $ time ./c > /dev/null
real 0m4.973s
user 0m4.920s
sys 0m0.050s
Flags used: -O2 -funroll-loops -finline
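For completeness, there is also a way to keep the iostream version and still close most of the gap: by default std::cout stays synchronized with C's stdio, and dropping that synchronization (while keeping the "\n" separator rather than std::endl, which would flush every line) is the usual fix. A sketch of that variant, which I haven't benchmarked on this exact setup:

#include <iostream>
#include <vector>
#include <algorithm>
#include <iterator>
#include <cstdlib>

#define EVENTS 10000000

struct random_double {
    double operator()() { return (double)rand() / RAND_MAX; }
};

int main() {
    // Drop the default synchronization between C++ iostreams and C stdio;
    // std::cout no longer has to stay in lock-step with printf after this call.
    std::ios_base::sync_with_stdio(false);

    std::vector<double> vd(EVENTS);
    std::generate(vd.begin(), vd.end(), random_double());

    // The "\n" separator (as opposed to std::endl) avoids a flush per line.
    std::copy(vd.begin(), vd.end(), std::ostream_iterator<double>(std::cout, "\n"));
    return 0;
}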