I'm making a game engine in C and I have a rotation function that rotates an image by a given number of degrees, but there are a few problems with it. I previously took coordinates x and y, transformed them into nx, ny with a rotation transform, and then assigned dest[x, y] = source[nx, ny], which worked nicely and had no dot effect. But because this method reads from the transformed coordinates, it reads from locations outside the width and height of the source image, bringing in garbage memory. I can easily remedy that by checking whether nx or ny exceeds the width or height or is less than zero, but then parts of the sprite get cut off during rotation, because sometimes the height of the image is larger than the width (or vice versa), and the diagonals are longer than either. I know I can resize the image to accommodate this, but that seems inefficient.
(Example image: the rotation without bounds checking.)
My new method was to read from the normal source coordinates and transform them before writing to the destination, so the source size is no longer a limitation: dest[nx, ny] = source[x, y]. This works in that it can rotate an image of any size without clipping, but instead there's a dotted dithering effect that gets more pronounced near 45 degree angles and goes away at the cardinal angles. Here's what that looks like.
And here is my code, with the first method commented out:
void drawRotation(Sprite src, int xloc, int yloc, int degrees)
{
    float radians = Deg2Rad(degrees);
    float cosr = cos(radians);
    float sinr = sin(radians);

    /* Destination pointer offset so the sprite origin (ox, oy) lands at (xloc, yloc). */
    uint32_t *dest = (uint32_t *)Screen.memory + (xloc - src.ox) + (yloc - src.oy) * Screen.width;

    for (int y = 0; y < src.height; y++)
    {
        for (int x = 0; x < src.width; x++)
        {
            /* Rotate the coordinate about the sprite origin. */
            int nx = (cosr * (x - src.ox) + sinr * (y - src.oy));
            int ny = (-sinr * (x - src.ox) + cosr * (y - src.oy));

            /* Second method: write the source pixel at the rotated destination position. */
            dest[(nx + src.ox) + (ny + src.oy) * Screen.width] = ((uint32_t *)src.memory)[y * src.width + x];

            /* First method: read the source pixel from the rotated position instead. */
            // dest[x + y * Screen.width] = ((uint32_t *)src.memory)[(ny + src.oy) * src.width + (nx + src.ox)];
        }
    }
}
I tried those two methods and prefer the dotted one, if only it didn't have the dots. But I'm open to other approaches as long as they're efficient. I'd rather not use interpolation for something seemingly so small, but if there's no other way...
I'm using the wingdi32 graphics library. Images are stored with malloc and held as void pointers; when cast to 32-bit integers, the colour format is ARGB.
The problem is that you loop over every pixel in the source image, but the pixel mapping isn't 1:1, meaning there can be more pixels in the displayed image than in the source image, or the other way around. This means you have to expect gaps and pixels that are drawn multiple times.
One solution is to loop over every display pixel, rotate it back to the source image (i.e. calculate the corresponding position on the source image), check whether that position lies inside the source image, and if it does, read the colour from there. That should work, but it isn't the most efficient way, especially when you have multiple source images that contribute to one displayed image.
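A minimal sketch of that inverse mapping, reusing the Sprite and Screen fields and the Deg2Rad helper from your code; the function name drawRotationInverse and the square loop bounds are just choices made for this example, and it assumes <math.h> and <stdint.h> are included:

void drawRotationInverse(Sprite src, int xloc, int yloc, int degrees)
{
    float radians = Deg2Rad(degrees);
    float cosr = cos(radians);
    float sinr = sin(radians);

    /* Half-size of a square around (xloc, yloc) that always contains the rotated sprite. */
    int r = (int)ceil(sqrt((double)(src.width * src.width + src.height * src.height)));

    uint32_t *dest   = (uint32_t *)Screen.memory;
    uint32_t *srcpix = (uint32_t *)src.memory;

    for (int dy = -r; dy <= r; dy++)
    {
        for (int dx = -r; dx <= r; dx++)
        {
            /* Inverse rotation: where does this destination pixel come from in the source? */
            int sx = (int)( cosr * dx - sinr * dy) + src.ox;
            int sy = (int)( sinr * dx + cosr * dy) + src.oy;

            if (sx < 0 || sx >= src.width || sy < 0 || sy >= src.height)
                continue; /* outside the source image: nothing to draw */

            int px = xloc + dx;
            int py = yloc + dy;
            if (px < 0 || px >= Screen.width || py < 0 || py >= Screen.height)
                continue; /* clip against the screen as well */

            dest[px + py * Screen.width] = srcpix[sx + sy * src.width];
        }
    }
}

Because every destination pixel inside the bounds gets exactly one source lookup, there are no gaps. Looping over a square whose half-size is the sprite's diagonal is the simplest bound that always covers the rotated sprite; you could shrink it to the bounding box of the four rotated corners if the extra iterations matter.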
The more efficient way when you have more than a single image: split your source images into triangles (for a rectangular image, you would make 2 triangles). Rotate each vertex (corner) and store the new position. Then loop over the display image and check whether a pixel is inside a triangle; if it is, determine (in case there is more than one) which source image/triangle it belongs to, and you can calculate the corresponding coordinates on the source image from the position of the display pixel relative to the vertices, as sketched below.
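A rough sketch of that idea under the same assumptions as above (Vec2, barycentric and drawRotationTriangles are names invented for this example): rotate the four corners once, then use barycentric weights both as the point-in-triangle test and to map each display pixel back to source coordinates.

typedef struct { float x, y; } Vec2;

/* Barycentric weights of point p with respect to triangle (a, b, c).
   The point is inside the triangle when all three weights are >= 0. */
static void barycentric(Vec2 p, Vec2 a, Vec2 b, Vec2 c, float *u, float *v, float *w)
{
    float d = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
    *u = ((b.y - c.y) * (p.x - c.x) + (c.x - b.x) * (p.y - c.y)) / d;
    *v = ((c.y - a.y) * (p.x - c.x) + (a.x - c.x) * (p.y - c.y)) / d;
    *w = 1.0f - *u - *v;
}

void drawRotationTriangles(Sprite src, int xloc, int yloc, int degrees)
{
    float radians = Deg2Rad(degrees);
    float cosr = cos(radians);
    float sinr = sin(radians);

    /* Corners relative to the sprite origin, and the texel each corner maps to. */
    Vec2 corner[4] = {
        { (float)-src.ox,              (float)-src.oy },
        { (float)(src.width - src.ox), (float)-src.oy },
        { (float)(src.width - src.ox), (float)(src.height - src.oy) },
        { (float)-src.ox,              (float)(src.height - src.oy) }
    };
    Vec2 tex[4] = {
        { 0, 0 }, { (float)src.width, 0 },
        { (float)src.width, (float)src.height }, { 0, (float)src.height }
    };

    /* Rotate each corner and translate it onto the screen. */
    Vec2 rot[4];
    for (int i = 0; i < 4; i++)
    {
        rot[i].x =  cosr * corner[i].x + sinr * corner[i].y + xloc;
        rot[i].y = -sinr * corner[i].x + cosr * corner[i].y + yloc;
    }

    uint32_t *dest   = (uint32_t *)Screen.memory;
    uint32_t *srcpix = (uint32_t *)src.memory;

    /* Two triangles covering the quad: 0-1-2 and 0-2-3. */
    int tri[2][3] = { { 0, 1, 2 }, { 0, 2, 3 } };
    for (int t = 0; t < 2; t++)
    {
        Vec2 a = rot[tri[t][0]], b = rot[tri[t][1]], c = rot[tri[t][2]];

        /* Screen-space bounding box of this triangle, clipped to the screen. */
        int minx = (int)floor(fmin(a.x, fmin(b.x, c.x))); if (minx < 0) minx = 0;
        int miny = (int)floor(fmin(a.y, fmin(b.y, c.y))); if (miny < 0) miny = 0;
        int maxx = (int)ceil(fmax(a.x, fmax(b.x, c.x)));  if (maxx >= Screen.width)  maxx = Screen.width - 1;
        int maxy = (int)ceil(fmax(a.y, fmax(b.y, c.y)));  if (maxy >= Screen.height) maxy = Screen.height - 1;

        for (int py = miny; py <= maxy; py++)
        {
            for (int px = minx; px <= maxx; px++)
            {
                float u, v, w;
                barycentric((Vec2){ (float)px, (float)py }, a, b, c, &u, &v, &w);
                if (u < 0 || v < 0 || w < 0)
                    continue; /* outside this triangle */

                /* Interpolate the source coordinates with the same weights. */
                Vec2 ta = tex[tri[t][0]], tb = tex[tri[t][1]], tc = tex[tri[t][2]];
                int sx = (int)(u * ta.x + v * tb.x + w * tc.x);
                int sy = (int)(u * ta.y + v * tb.y + w * tc.y);
                if (sx < 0) sx = 0; if (sx >= src.width)  sx = src.width - 1;
                if (sy < 0) sy = 0; if (sy >= src.height) sy = src.height - 1;

                dest[px + py * Screen.width] = srcpix[sx + sy * src.width];
            }
        }
    }
}

This is essentially a tiny software rasterizer: each triangle only touches the pixels in its own bounding box, and the same barycentric weights could later be used for interpolation if you decide you want it.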
A similar thing is done when using a low-level graphics API like OpenGL (I assume it is the same for Vulkan and DirectX) to display 2D and 3D graphics. If you are not familiar with any of them, I suggest learning how to use such an API or a similar library. That way you can improve your understanding of how computer graphics works, how to use matrices (since you didn't use them in your code), and so on.