Tags: ios, performance, core-graphics, cgcontextdrawimage

CGContextDrawImage is EXTREMELY slow after a large UIImage is drawn into it


It seems that CGContextDrawImage(CGContextRef, CGRect, CGImageRef) performs MUCH WORSE when drawing a CGImage that was created by Core Graphics (i.e. with CGBitmapContextCreateImage) than when drawing the CGImage that backs a UIImage. See this testing method:

-(void)showStrangePerformanceOfCGContextDrawImage
{
    ///Setup : Load an image and start a context:
    UIImage *theImage = [UIImage imageNamed:@"reallyBigImage.png"];
    UIGraphicsBeginImageContext(theImage.size);
    CGContextRef ctxt = UIGraphicsGetCurrentContext();
    CGRect imgRec = CGRectMake(0, 0, theImage.size.width, theImage.size.height);

    ///Why is this SO MUCH faster...
    NSDate *startingTimeForUIImageDrawing = [NSDate date];
    CGContextDrawImage(ctxt, imgRec, theImage.CGImage);  //Draw the existing image into the context using the UIImage's backing CGImage
    NSLog(@"Time was %f", [[NSDate date] timeIntervalSinceDate:startingTimeForUIImageDrawing]);

    ///Create a new image from the context to use this time in CGContextDrawImage:
    CGImageRef theImageConverted = CGBitmapContextCreateImage(ctxt);

    ///This is WAY slower, but why?? Using a pure CGImageRef (as opposed to one behind a UIImage) seems like it should be faster, but AT LEAST it should be the same speed!?
    NSDate *startingTimeForNakedCGImageDrawing = [NSDate date];
    CGContextDrawImage(ctxt, imgRec, theImageConverted);
    NSLog(@"Time was %f", [[NSDate date] timeIntervalSinceDate:startingTimeForNakedCGImageDrawing]);

    ///Cleanup: release the copied image and end the context
    CGImageRelease(theImageConverted);
    UIGraphicsEndImageContext();
}

So I guess the questions are: #1, what may be causing this, and #2, is there a way around it, i.e. other ways to create a CGImageRef which may be faster? I realize I could convert everything to UIImages first, but that is such an ugly solution; I already have the CGContextRef sitting there.
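
To give an example of the kind of alternative I mean: one other way I know of to get a CGImageRef out of a bitmap context is to wrap the context's pixel buffer directly. This is just a sketch and I haven't benchmarked it; note that the resulting image ALIASES the context's memory rather than copying it, and that CGBitmapContextGetData returns NULL if ctxt isn't a bitmap context:

static CGImageRef CreateImageAliasingBitmapContext(CGContextRef ctxt)
{
    size_t height = CGBitmapContextGetHeight(ctxt);
    size_t stride = CGBitmapContextGetBytesPerRow(ctxt);
    ///Wrap the live pixel buffer; no copy is made (NULL release callback):
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,
        CGBitmapContextGetData(ctxt), stride * height, NULL);
    CGImageRef image = CGImageCreate(CGBitmapContextGetWidth(ctxt), height,
        CGBitmapContextGetBitsPerComponent(ctxt),
        CGBitmapContextGetBitsPerPixel(ctxt), stride,
        CGBitmapContextGetColorSpace(ctxt),
        CGBitmapContextGetBitmapInfo(ctxt),
        provider, NULL, false, kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);
    return image;   //Caller must CGImageRelease this
}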

UPDATE : This doesn't necessarily seem to hold when drawing small images? That may be a clue: the problem is amplified when large images (i.e. full-size camera pics) are used. At 640x480, execution time is pretty similar with either method.

UPDATE 2 : Ok, so I've discovered something new.. It's actually NOT the backing of the CGImage that is changing the performance. I can flip-flop the order of the 2 steps and make the UIImage method behave slowly, whereas the "naked" CGImage will be super fast. It seems whichever you perform second suffers from terrible performance. This seems to be the case UNLESS I free memory by calling CGImageRelease on the image I created with CGBitmapContextCreateImage. Then the UIImage-backed method will be fast subsequently. The inverse is not true. What gives? "Crowded" memory shouldn't affect performance like this, should it?
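
In code, the flip-flopped ordering from this update looks roughly like this (a sketch reusing theImage, ctxt, and imgRec from the method above):

CGImageRef theImageConverted = CGBitmapContextCreateImage(ctxt);

///Drawn FIRST this time: the "naked" CGImage is super fast
NSDate *t0 = [NSDate date];
CGContextDrawImage(ctxt, imgRec, theImageConverted);
NSLog(@"Naked CGImage: %f", [[NSDate date] timeIntervalSinceDate:t0]);

///Without this release, the draw below suffers the terrible performance
CGImageRelease(theImageConverted);

///Drawn SECOND: fast only because of the release above
NSDate *t1 = [NSDate date];
CGContextDrawImage(ctxt, imgRec, theImage.CGImage);
NSLog(@"UIImage-backed: %f", [[NSDate date] timeIntervalSinceDate:t1]);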

UPDATE 3 : Spoke too soon. The previous update holds true for images at size 2048x2048, but stepping up to 1936x2592 (camera size), the naked CGImage method is still way slower, regardless of order of operations or memory situation. Maybe there are some CG internal limits that make a 16MB image efficient whereas the 21MB image can't be handled efficiently. It's literally 20 times slower to draw the camera size than a 2048x2048. Somehow UIImage provides its CGImage data much faster than a pure CGImage object does. o.O
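
For the sizes involved, a quick back-of-envelope at the usual 4 bytes per pixel:

//2048 x 2048 x 4 = 16,777,216 bytes  (exactly 16MB)
//1936 x 2592 x 4 = 20,072,448 bytes  (roughly 20MB, the camera size)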

UPDATE 4 : I thought this might have to do with some memory caching thing, but the results are the same whether the UIImage is loaded with the non-caching [UIImage imageWithContentsOfFile:] or with [UIImage imageNamed:].
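
Concretely, the two loading paths I compared (same image file as in the test method above):

///Cached by the system:
UIImage *cachedImage = [UIImage imageNamed:@"reallyBigImage.png"];

///Not cached; reads straight from disk:
NSString *path = [[NSBundle mainBundle] pathForResource:@"reallyBigImage"
                                                 ofType:@"png"];
UIImage *uncachedImage = [UIImage imageWithContentsOfFile:path];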

UPDATE 5 (Day 2) : After creating more questions than were answered yesterday, I have something solid today. What I can say for sure is the following: somehow, drawing a (large) UIImage into a context drastically slows all subsequent drawing into that context.

I proved this by 1) first creating a non-alpha context (1936x2592); 2) filling it with randomly colored 2x2 squares; 3) full-frame drawing a CGImage into that context, which was FAST (.17 seconds); 4) repeating the experiment, but filling the context with a drawn CGImage backing a UIImage. Subsequent full-frame image drawing took 6+ seconds. SLOWWWWW.
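
For reference, here's a minimal sketch of that experiment (the constants and the square-fill loop are illustrative, not my exact code):

-(void)reproduceContextSlowdown
{
    CGSize size = CGSizeMake(1936, 2592);
    CGRect full = CGRectMake(0, 0, size.width, size.height);

    ///1) Create a non-alpha bitmap context:
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctxt = CGBitmapContextCreate(NULL, size.width, size.height, 8,
                                              size.width * 4, space,
                                              kCGImageAlphaNoneSkipLast);
    CGColorSpaceRelease(space);

    ///2) Fill it with randomly colored 2x2 squares:
    for (size_t y = 0; y < 2592; y += 2) {
        for (size_t x = 0; x < 1936; x += 2) {
            CGContextSetRGBFillColor(ctxt,
                                     arc4random_uniform(256) / 255.0,
                                     arc4random_uniform(256) / 255.0,
                                     arc4random_uniform(256) / 255.0, 1.0);
            CGContextFillRect(ctxt, CGRectMake(x, y, 2, 2));
        }
    }

    ///3) Full-frame draw of a CGImage into that context: FAST (~.17 seconds)
    CGImageRef snapshot = CGBitmapContextCreateImage(ctxt);
    NSDate *start = [NSDate date];
    CGContextDrawImage(ctxt, full, snapshot);
    NSLog(@"Draw took %f", [[NSDate date] timeIntervalSinceDate:start]);
    CGImageRelease(snapshot);

    ///4) Repeat, but fill the context by drawing a UIImage's backing CGImage
    ///   instead of the squares; the same timed draw then takes 6+ seconds.

    CGContextRelease(ctxt);
}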


Solution

  • Well, after a TON of experimentation I think I have found the fastest way to handle situations like this. The drawing operation above, which was taking 6+ seconds, now takes .1 seconds. YES. Here's what I discovered: