Tags: ios, core-graphics, cgcontext, compression, cgcontextdrawimage

iOS: Redrawing an image to prevent deferred decompression results in a bigger image


I've noticed some people redraw images on a CGContext to prevent deferred decompression and this has caused a bug in our app.

The bug is that the size of the image appears to remain the same, but the data returned by the image's CGDataProvider has extra bytes appended to it.

For example, we have a 797x500 PNG image downloaded from the Internet, and AsyncImageView redraws it and returns the redrawn image.

Here is the code:

UIImage *image = [[UIImage alloc] initWithData:data];
if (image)
{
    // Log to compare size and data length...
    NSLog(@"BEFORE: %f %f", image.size.width, image.size.height);
    NSLog(@"LEN  %ld", CFDataGetLength(CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage))));

    // Original code from AsyncImageView
    //redraw to prevent deferred decompression
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawAtPoint:CGPointZero];
    image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Log to compare size and data length...
    NSLog(@"AFTER:  %f %f", image.size.width, image.size.height);
    NSLog(@"LEN  %ld", CFDataGetLength(CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage))));

    // Some other code...
}

The log shows as follows:

BEFORE: 797.000000 500.000000
LEN  1594000
AFTER:  797.000000 500.000000
LEN  1600000

I decided to print the bytes one by one, and sure enough there were twelve zero bytes appended to each row.

Basically, the redrawing was producing image data with the layout of an 800x500 image: 797 pixels * 4 bytes = 3188 bytes per row, padded up to 3200, and 3200 * 500 = 1,600,000, which matches the AFTER log. Because of this, our app was reading the wrong pixel whenever it computed a pixel's offset as 797 * row + column.
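
For anyone who wants to verify this, here is a minimal sketch (the before/after variable names are illustrative, standing for the image before and after the redraw): the row stride reported by CGImageGetBytesPerRow shows the padding directly.

size_t strideBefore = CGImageGetBytesPerRow(before.CGImage);   // 3188 = 797 * 4, judging by the BEFORE log
size_t strideAfter  = CGImageGetBytesPerRow(after.CGImage);    // 3200 = 800 * 4, judging by the AFTER log
NSLog(@"bytesPerRow before: %zu, after: %zu", strideBefore, strideAfter);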

We're not using any big images, so deferred decompression doesn't pose a problem for us, but if I do decide to use this method to redraw images, there's a chance I'll introduce a subtle bug.

Does anyone have a solution to this? Or is this behavior baked into Apple's frameworks, with nothing we can really do about it?


Solution

  • As you've discovered, rows are padded out to a convenient size. This is generally done to make vector algorithms more efficient; you just need to adapt to that layout if you're going to use CGImage this way. Call CGImageGetBytesPerRow to find out the actual number of bytes allocated per row, and then compute a pixel's byte offset as bytesPerRow * row + bytesPerPixel * column (see the first sketch below).

    That's probably best for you, but if you need to get rid of the padding, you can do so by creating your own CGBitmapContext and rendering into it (see the second sketch below). That's a heavily covered topic around Stack Overflow if you're not familiar with it. For example: How to get pixel data from a UIImage (Cocoa Touch) or CGImage (Core Graphics)?
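
Here is a minimal sketch of the first approach, stride-aware pixel access. It assumes a 32-bit RGBA-style layout (check CGImageGetBitmapInfo for the actual channel order and alpha), and the function name and logging are only illustrative:

static void LogPixelAt(CGImageRef cgImage, size_t row, size_t column)
{
    CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
    const UInt8 *bytes = CFDataGetBytePtr(pixelData);

    size_t bytesPerRow = CGImageGetBytesPerRow(cgImage);      // actual stride, may include padding
    size_t bytesPerPixel = CGImageGetBitsPerPixel(cgImage) / 8;

    // Offset with the real stride, not the pixel width
    const UInt8 *pixel = bytes + bytesPerRow * row + bytesPerPixel * column;
    NSLog(@"pixel at (%zu, %zu): %d %d %d %d", column, row, pixel[0], pixel[1], pixel[2], pixel[3]);

    CFRelease(pixelData);
}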
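
And a sketch of the second approach, rendering into your own tightly packed CGBitmapContext so there is no row padding at all. The helper name, the premultiplied-RGBA layout, and the omitted error handling are my own assumptions, not part of AsyncImageView:

static NSData *TightlyPackedRGBAData(CGImageRef cgImage)
{
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    size_t bytesPerRow = width * 4;                            // exactly width * 4, no padding

    NSMutableData *buffer = [NSMutableData dataWithLength:bytesPerRow * height];
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate([buffer mutableBytes], width, height, 8,
                                                 bytesPerRow, colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    if (!context) return nil;

    // Drawing decompresses the image into our buffer with the stride we chose
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(context);
    return buffer;
}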