ios · uiimage · catiledlayer · cgimageref · uiimagepngrepresentation

Any way to encode a PNG faster than UIImagePNGRepresentation?


I'm generating a bunch of tiles for CATiledLayer. It takes about 11 seconds to generate 120 tiles at 256 x 256 with 4 levels of detail on an iPhone 4S. The image itself fits within 2048 x 2048.
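The per-tile work is essentially crop-then-encode, roughly like this (simplified sketch; `row`, `col`, and `tilePath` stand in for my actual bookkeeping):

    // Simplified per-tile work: crop a 256 x 256 region, then PNG-encode it.
    CGRect tileRect = CGRectMake(col * 256, row * 256, 256, 256);
    CGImageRef tileImage = CGImageCreateWithImageInRect(sourceImage.CGImage, tileRect);
    NSData *pngData = UIImagePNGRepresentation([UIImage imageWithCGImage:tileImage]);
    [pngData writeToFile:tilePath atomically:YES];
    CGImageRelease(tileImage);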

My bottleneck is UIImagePNGRepresentation. It takes about 0.10-0.15 seconds to encode each 256 x 256 tile.

I've tried generating multiple tiles on different background queues, but this only cuts it down to about 9-10 seconds.

I've also tried using the ImageIO framework with code like this:

- (void)writeCGImage:(CGImageRef)image toURL:(NSURL *)url andOptions:(CFDictionaryRef)options
{
    // kUTTypePNG (from <MobileCoreServices/MobileCoreServices.h>) is the
    // canonical constant for the @"public.png" UTI.
    CGImageDestinationRef myImageDest = CGImageDestinationCreateWithURL((__bridge CFURLRef)url, kUTTypePNG, 1, NULL);
    if (!myImageDest) return;
    CGImageDestinationAddImage(myImageDest, image, options);
    CGImageDestinationFinalize(myImageDest);
    CFRelease(myImageDest);
}
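Called like so (`documentsURL` and `tileImage` here are placeholders for my actual values):

    NSURL *tileURL = [documentsURL URLByAppendingPathComponent:@"tile_0_0.png"];
    [self writeCGImage:tileImage toURL:tileURL andOptions:NULL];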

While this produces smaller PNG files (win!), it takes about 13 seconds, 2 seconds more than before.

Is there any way to encode a PNG from a CGImage faster? Perhaps a library that makes use of the ARM NEON extensions (iPhone 3GS and later), the way libjpeg-turbo does?

Is there perhaps a better format than PNG for saving tiles that doesn't take up a lot of space?

The only viable option I've been able to come up with is to increase the tile size to 512 x 512. This cuts the encoding time in half. I'm not sure what that will do to my scroll view, though. The app is for iPad 2+, and only supports iOS 6 (using the iPhone 4S as a baseline).


Solution

  • It turns out the reason UIImagePNGRepresentation was performing so poorly is that it was decompressing the original image every time, even though I thought CGImageCreateWithImageInRect gave me an independent image.

    You can see the results from Instruments here:

    [Instruments screenshot of the time profile]

    Notice _cg_jpeg_read_scanlines and decompress_onepass.

    I was force-decompressing the image with this:

    UIImage *image = [UIImage imageWithContentsOfFile:path];
    UIGraphicsBeginImageContext(CGSizeMake(1, 1));
    [image drawAtPoint:CGPointZero];
    UIGraphicsEndImageContext();
    

    The timing of this was about 0.10 seconds, almost equivalent to the time taken by each UIImagePNGRepresentation call.

    Numerous articles around the internet recommend drawing into a context as a way of force-decompressing an image.

    There's an article on Cocoanetics, Avoiding Image Decompression Sickness, which provides an alternate way of loading the image:

    NSDictionary *dict = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                                     forKey:(id)kCGImageSourceShouldCache];
    CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)[[NSURL alloc] initFileURLWithPath:path], NULL);
    CGImageRef cgImage = CGImageSourceCreateImageAtIndex(source, 0, (__bridge CFDictionaryRef)dict);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CFRelease(source);
    

    And now the same process takes about 3 seconds! Using GCD to generate tiles in parallel reduces the time even further.
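    The GCD version is essentially this (a sketch; encodeTileAtRow:col: stands in for the per-tile crop-and-encode work):

        // Fan tile encoding out over a concurrent queue, then wait for all tiles.
        dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
        dispatch_group_t group = dispatch_group_create();
        for (int row = 0; row < rows; row++) {
            for (int col = 0; col < cols; col++) {
                dispatch_group_async(group, queue, ^{
                    [self encodeTileAtRow:row col:col];
                });
            }
        }
        dispatch_group_wait(group, DISPATCH_TIME_FOREVER);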

    The writeCGImage function above now takes about 5 seconds. Since its files are smaller, I suspect ImageIO is using a higher zlib compression level.