I have a raw bitmap image of RGBA malloc'd data; rows are naturally a multiple of 4 bytes. The data actually originates from an AVI (24-bit BGR format), but I convert it to 32-bit ARGB. At 1920x1080, that's about 8 MB of 32-bit data per frame.
For each frame:

1. I create an `NSData` object via `initWithBytes:length:`.
2. I then create a `CIImage` object via `imageWithBitmapData:bytesPerRow:size:format:colorSpace:`.
3. From that `CIImage`, I draw into my final `NSOpenGLView` context using `drawImage:inRect:fromRect:`. Due to the "mosaic" nature of the target images, there are approximately 15-20 of these draw calls per frame, with various source/destination rects.

Using a 30 Hz `NSTimer` that calls `[self setNeedsDisplay:YES]` on the `NSOpenGLView`, I can attain about 20-25 fps on a 2012 Mac Mini (2.6 GHz i7) -- it's not rock solid at 30 Hz. This is to be expected with an `NSTimer` instead of a `CVDisplayLink`.
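In simplified form, the per-frame path looks roughly like this (`frameBytes`, `bytesPerRow`, `colorSpace`, `ciContext`, and the rect values are placeholders for my actual variables):

```objc
// Per frame (simplified): wrap the converted pixels, build a CIImage, draw tiles.
NSData *bitmapData = [[NSData alloc] initWithBytes:frameBytes
                                            length:1080 * bytesPerRow]; // ~8 MB alloc + copy

CIImage *image = [CIImage imageWithBitmapData:bitmapData
                                  bytesPerRow:bytesPerRow
                                         size:CGSizeMake(1920, 1080)
                                       format:kCIFormatARGB8
                                   colorSpace:colorSpace];

// ~15-20 "mosaic" draws with various source/destination rects:
[ciContext drawImage:image inRect:destRect fromRect:srcRect];
```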
But... ignoring the `NSTimer` issue for now, are there any suggestions/pointers on making this frame-by-frame rendering a little more efficient?
Thanks!
NB: I would like to stick with `CIImage` objects, as I'll want to access transition effects at some point.
Every frame, the call to `NSData`'s `initWithBytes:length:` causes an 8 MB memory allocation & an 8 MB copy.
You can get rid of this per-frame allocation/copy by replacing the `NSData` object with a persistent `NSMutableData` object (set up once at the beginning), and using its `mutableBytes` as the destination buffer for the frame's 24- to 32-bit conversion.
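A minimal sketch of that change, assuming a hypothetical `ConvertBGR24ToARGB32()` standing in for whatever conversion routine you already use:

```objc
// One-time setup: a persistent ~8 MB buffer, reused for every frame.
NSMutableData *frameData =
    [[NSMutableData alloc] initWithLength:1080 * bytesPerRow];

// Per frame: convert straight into the persistent buffer -- no alloc, no copy.
// ConvertBGR24ToARGB32 is a placeholder for your existing conversion code.
ConvertBGR24ToARGB32(bgrSource, [frameData mutableBytes], 1920, 1080);

CIImage *image = [CIImage imageWithBitmapData:frameData
                                  bytesPerRow:bytesPerRow
                                         size:CGSizeMake(1920, 1080)
                                       format:kCIFormatARGB8
                                   colorSpace:colorSpace];
```

Since `NSMutableData` is a subclass of `NSData`, it drops straight into your existing `imageWithBitmapData:` call.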
(Alternatively, if you prefer to manage the destination-buffer memory yourself, keep it an `NSData`, but initialize it with `initWithBytesNoCopy:length:freeWhenDone:` & pass `NO` as the last parameter.)
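A sketch of that variant (placeholder sizes again; passing `NO` means `NSData` won't call `free()` on the buffer, so you keep ownership):

```objc
// You own this buffer and are responsible for freeing it yourself.
void *frameBuffer = malloc(1080 * bytesPerRow);

// Wrap the buffer without copying; NO = NSData won't free it on dealloc.
NSData *frameData = [[NSData alloc] initWithBytesNoCopy:frameBuffer
                                                 length:1080 * bytesPerRow
                                           freeWhenDone:NO];
```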