I want to calculate the big O of the following algorithms for resizing binary images:
Bilinear interpolation:
double scale_x = (double)new_height/(height-1);
double scale_y = (double)new_width/(width-1);
for (int i = 0; i < new_height; i++)
{
    int ii = i / scale_x;                 // source row of the top-left neighbour
    for (int j = 0; j < new_width; j++)
    {
        int jj = j / scale_y;             // source column of the top-left neighbour
        // the four neighbouring source pixels
        double v00 = matrix[ii][jj],      v01 = matrix[ii][jj + 1],
               v10 = matrix[ii + 1][jj],  v11 = matrix[ii + 1][jj + 1];
        double fi = i / scale_x - ii, fj = j / scale_y - jj;
        // weighted average of the four neighbours
        double temp = (1 - fi) * ((1 - fj) * v00 + fj * v01) +
                      fi * ((1 - fj) * v10 + fj * v11);
        // threshold back to a binary value
        if (temp >= 0.5)
            result[i][j] = 1;
        else
            result[i][j] = 0;
    }
}
Nearest neighbour interpolation:
double scale_x = (double)height/new_height;
double scale_y = (double)width/new_width;
for (int i = 0; i < new_height; i++)
{
    int srcx = floor(i * scale_x);        // nearest source row
    for (int j = 0; j < new_width; j++)
    {
        int srcy = floor(j * scale_y);    // nearest source column
        result[i][j] = matrix[srcx][srcy];
    }
}
I assumed that the complexity of both is determined by the loop dimensions, i.e. O(new_height*new_width). However, bilinear interpolation clearly runs much slower than nearest neighbour. Could you please explain how to compute the complexity correctly?
They both run in Theta(new_height*new_width) time, because apart from the loop iterations themselves, every operation in the loop bodies takes constant time.
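To make the constant factor explicit: each iteration of the bilinear loop body does a fixed amount of work (four array reads plus a handful of floating-point multiplications and additions), while each iteration of the nearest-neighbour body does a single array read, one multiplication and a floor. So, roughly,

T_bilinear(new_height, new_width) ≈ c_b * new_height * new_width
T_nearest(new_height, new_width) ≈ c_n * new_height * new_width

for some machine-dependent constants with c_b > c_n. Both are Theta(new_height*new_width); the slowdown you observe lives entirely in the constant factor, which Theta notation deliberately ignores.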
This doesn't in any way imply that the two programs will execute equally fast. It merely means that if you increase new_height and/or new_width to infinity, the ratio of the execution times of the two programs will neither go to infinity nor to zero.
(This assumes that the integer types are unbounded and that all arithmetic operations take constant time, independent of the length of the operands. Otherwise there would be an additional factor accounting for the cost of the arithmetic.)
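If you want to see this empirically rather than take it on faith, a small timing harness makes the point. The sketch below is my own test code, not part of the question: it inlines simplified versions of both loops over flat arrays, doubles the output size a few times, and prints the two running times together with their ratio. Since both times grow like new_height*new_width, the ratio should settle near some constant greater than 1 rather than drift towards zero or infinity.

/* Minimal timing sketch (my own harness, not the original code): both loops
 * are inlined over flat arrays and the output size is doubled repeatedly.
 * Both times should grow ~4x per step, so their ratio stays roughly constant. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>

int main(void)
{
    const int height = 512, width = 512;                  /* source image size */
    unsigned char *src = malloc((size_t)height * width);
    for (int k = 0; k < height * width; k++)
        src[k] = rand() % 2;                              /* random binary image */

    for (int new_height = 2048; new_height <= 8192; new_height *= 2)
    {
        int new_width = new_height;
        unsigned char *dst = malloc((size_t)new_height * new_width);

        /* nearest neighbour: one read per output pixel */
        double sx = (double)height / new_height, sy = (double)width / new_width;
        clock_t t0 = clock();
        for (int i = 0; i < new_height; i++)
        {
            int srcx = (int)floor(i * sx);
            for (int j = 0; j < new_width; j++)
            {
                int srcy = (int)floor(j * sy);
                dst[(size_t)i * new_width + j] = src[srcx * width + srcy];
            }
        }
        double t_nn = (double)(clock() - t0) / CLOCKS_PER_SEC;

        /* bilinear: four reads and a fixed number of float ops per output pixel */
        double bx = (double)new_height / (height - 1), by = (double)new_width / (width - 1);
        t0 = clock();
        for (int i = 0; i < new_height; i++)
        {
            int ii = (int)(i / bx);
            for (int j = 0; j < new_width; j++)
            {
                int jj = (int)(j / by);
                double v00 = src[ii * width + jj],       v01 = src[ii * width + jj + 1];
                double v10 = src[(ii + 1) * width + jj], v11 = src[(ii + 1) * width + jj + 1];
                double fi = i / bx - ii, fj = j / by - jj;
                double v = (1 - fi) * ((1 - fj) * v00 + fj * v01) +
                           fi * ((1 - fj) * v10 + fj * v11);
                dst[(size_t)i * new_width + j] = v >= 0.5 ? 1 : 0;
            }
        }
        double t_bl = (double)(clock() - t0) / CLOCKS_PER_SEC;

        printf("%5d x %5d   nearest %.3f s   bilinear %.3f s   ratio %.2f\n",
               new_height, new_width, t_nn, t_bl,
               t_nn > 0 ? t_bl / t_nn : 0.0);
        free(dst);
    }
    free(src);
    return 0;
}

Compile with something like gcc -O2 bench.c -lm. The exact ratio depends on the machine and compiler, but it should stay roughly flat as the size doubles, which is exactly what "same Theta, different constant" predicts.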