Tags: objective-c, ios, opengl-es, uiimageview, eaglview

Draw UIImage (or JPEG) onto EAGLView


I am making a PDF annotator, and when the user switches pages it has to redraw all of the previously drawn OpenGL content (which was saved to a file in JSON format). The problem is that this takes longer the more content there is to draw. I have a UIImage saved to disk for each page, so I was hoping to speed up this process by drawing that UIImage onto the EAGLContext in one big stroke.

I want to know how to take a UIImage (or a JPEG/PNG file) and draw it directly onto the screen. The reason it has to be on the EAGLView is that it needs to support the eraser, and the regular UIKit way wouldn't work with that.

I assume there's some way to set a brush as the whole image and just stamp the screen with it once. Any suggestions?


Solution

  • As a pedantic note, there is no standard class named EAGLView, but I assume you're referring to one of Apple's sample UIView subclasses that host OpenGL ES content.

    The first step in doing this would be to load the UIImage into a texture. The following is some code that I've used for this in my image processing framework (newImageSource is the input UIImage; pixelSizeOfImage and outputTexture are instance variables of the class this was taken from):

    CGSize pointSizeOfImage = [newImageSource size];
    CGFloat scaleOfImage = [newImageSource scale];
    pixelSizeOfImage = CGSizeMake(scaleOfImage * pointSizeOfImage.width, scaleOfImage * pointSizeOfImage.height);
    CGSize pixelSizeToUseForTexture = pixelSizeOfImage;
    
    // For now, always redraw through Core Graphics; the direct-bytes branch below only runs if this ends up NO
    BOOL shouldRedrawUsingCoreGraphics = YES;
    
    // For now, deal with images larger than the maximum texture size by resizing to be within that limit
    CGSize scaledImageSizeToFitOnGPU = [GPUImageOpenGLESContext sizeThatFitsWithinATextureForSize:pixelSizeOfImage];
    if (!CGSizeEqualToSize(scaledImageSizeToFitOnGPU, pixelSizeOfImage))
    {
        pixelSizeOfImage = scaledImageSizeToFitOnGPU;
        pixelSizeToUseForTexture = pixelSizeOfImage;
        shouldRedrawUsingCoreGraphics = YES;
    }
    
    if (self.shouldSmoothlyScaleOutput)
    {
        // In order to use mipmaps, you need to provide power-of-two textures, so convert to the next largest power of two and stretch to fill
        CGFloat powerClosestToWidth = ceil(log2(pixelSizeOfImage.width));
        CGFloat powerClosestToHeight = ceil(log2(pixelSizeOfImage.height));
    
        pixelSizeToUseForTexture = CGSizeMake(pow(2.0, powerClosestToWidth), pow(2.0, powerClosestToHeight));
    
        shouldRedrawUsingCoreGraphics = YES;
    }
    
    GLubyte *imageData = NULL;
    CFDataRef dataFromImageDataProvider;
    
    if (shouldRedrawUsingCoreGraphics)
    {
        // For resized image, redraw
        imageData = (GLubyte *) calloc(1, (int)pixelSizeToUseForTexture.width * (int)pixelSizeToUseForTexture.height * 4);
    
        CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();    
        CGContextRef imageContext = CGBitmapContextCreate(imageData, (int)pixelSizeToUseForTexture.width, (int)pixelSizeToUseForTexture.height, 8, (int)pixelSizeToUseForTexture.width * 4, genericRGBColorspace,  kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, pixelSizeToUseForTexture.width, pixelSizeToUseForTexture.height), [newImageSource CGImage]);
        CGContextRelease(imageContext);
        CGColorSpaceRelease(genericRGBColorspace);
    }
    else
    {
        // Access the raw image bytes directly (this assumes the bitmap data is already laid out to match the GL_BGRA upload below)
        dataFromImageDataProvider = CGDataProviderCopyData(CGImageGetDataProvider([newImageSource CGImage]));
        imageData = (GLubyte *)CFDataGetBytePtr(dataFromImageDataProvider);
    }    
    
    // outputTexture is assumed to be a texture name already created with glGenTextures(), with the EAGL context current on this thread
    glBindTexture(GL_TEXTURE_2D, outputTexture);
    if (self.shouldSmoothlyScaleOutput)
    {
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    }
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)pixelSizeToUseForTexture.width, (int)pixelSizeToUseForTexture.height, 0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);
    
    if (self.shouldSmoothlyScaleOutput)
    {
        glGenerateMipmap(GL_TEXTURE_2D);
    }
    
    if (shouldRedrawUsingCoreGraphics)
    {
        free(imageData);
    }
    else
    {
        CFRelease(dataFromImageDataProvider);
    }
    

    As you can see, this includes code for resizing images that exceed the maximum texture size of the device (the class method in the above code merely queries the maximum texture size), as well as a Boolean flag for whether or not to generate mipmaps for the texture for smoother downsampling. These can be removed if you don't care about those cases. This is also OpenGL ES 2.0 code, so you might need to add an OES suffix to one or two of the functions above for them to work with 1.1.
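
    If you don't want to pull in that class, the maximum texture size is easy to query from the current OpenGL ES context yourself. The following is only a rough sketch of what such a helper could look like (the method name mirrors the one used above, but this body is my assumption, not the framework's actual implementation):

    + (CGSize)sizeThatFitsWithinATextureForSize:(CGSize)inputSize
    {
        // A current EAGLContext is required on this thread before glGetIntegerv() returns anything useful
        GLint maxTextureSize = 0;
        glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);
    
        if ((inputSize.width <= (CGFloat)maxTextureSize) && (inputSize.height <= (CGFloat)maxTextureSize))
        {
            return inputSize;
        }
    
        // Scale down proportionally so that the larger dimension fits within the limit
        CGSize adjustedSize;
        if (inputSize.width > inputSize.height)
        {
            adjustedSize.width = (CGFloat)maxTextureSize;
            adjustedSize.height = ((CGFloat)maxTextureSize / inputSize.width) * inputSize.height;
        }
        else
        {
            adjustedSize.height = (CGFloat)maxTextureSize;
            adjustedSize.width = ((CGFloat)maxTextureSize / inputSize.height) * inputSize.width;
        }
    
        return adjustedSize;
    }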

    Once you have the UIImage in a texture, you can draw it to the screen using a textured quad (two triangles that make up a rectangle, with appropriate texture coordinates at the corners). How you do this differs between OpenGL ES 1.1 and 2.0: for 2.0, you use a passthrough shader program that simply reads the color from that location in the texture and draws it to the screen; for 1.1, you just set up the texture coordinates for your geometry and draw the two triangles. A rough sketch of the 2.0 path is included at the end of this answer.

    I have some OpenGL ES 2.0 code for this in this answer.
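
    In the meantime, here is a minimal sketch of that 2.0 drawing step, assuming you've already compiled and linked a passthrough program and looked up its attribute and uniform locations (passthroughProgram, positionAttribute, textureCoordinateAttribute, and inputImageTextureUniform are placeholder names, not anything defined above):

    static const GLfloat squareVertices[] = {
        -1.0f, -1.0f,
         1.0f, -1.0f,
        -1.0f,  1.0f,
         1.0f,  1.0f,
    };
    
    // Flip these vertically if the image comes out upside down;
    // Core Graphics and OpenGL ES disagree on the vertical axis
    static const GLfloat textureCoordinates[] = {
        0.0f, 0.0f,
        1.0f, 0.0f,
        0.0f, 1.0f,
        1.0f, 1.0f,
    };
    
    glUseProgram(passthroughProgram);
    
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, outputTexture);
    glUniform1i(inputImageTextureUniform, 0);
    
    glVertexAttribPointer(positionAttribute, 2, GL_FLOAT, GL_FALSE, 0, squareVertices);
    glEnableVertexAttribArray(positionAttribute);
    glVertexAttribPointer(textureCoordinateAttribute, 2, GL_FLOAT, GL_FALSE, 0, textureCoordinates);
    glEnableVertexAttribArray(textureCoordinateAttribute);
    
    // Two triangles as a strip cover the full viewport with the texture
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    The passthrough fragment shader just samples the texture at the interpolated coordinate and writes that color to gl_FragColor.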