I am writing an Android app in which a native thread keeps generating image frames (as raw pixel data) at about 60 fps, which are then supposed to be displayed in a SurfaceView. Currently, the thread writes pixel data to a direct ByteBuffer shared between the native thread and the JVM, then invokes a callback via JNI to notify the JVM side that a frame is ready; the buffer is then read into a Bitmap and drawn on the SurfaceView. My code looks roughly like this (the full source code would be too big to share):
// this is a call into native code that retrieves
// the width and height of the frame in pixels;
// returns something on the order of 320×200
private external fun getFrameDimensions(): Pair<Int, Int>

// a direct ByteBuffer shared between the JVM and the native thread
private val frameBuffer: ByteBuffer

private fun getFrame(): Bitmap {
    val (width, height) = getFrameDimensions()
    return Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888).apply {
        setHasAlpha(false)
        copyPixelsFromBuffer(frameBuffer.apply {
            order(ByteOrder.nativeOrder())
            rewind()
        })
    }
}
// the SurfaceView widget which should display the image
private lateinit var surfaceView: SurfaceView

private val SCALE = 8

// callback invoked by the native thread
public fun onFrameReady() {
    val canvas: Canvas = surfaceView.holder.lockCanvas() ?: return
    val bitmap = getFrame()
    try {
        canvas.drawBitmap(
            bitmap.scale(bitmap.width * SCALE, bitmap.height * SCALE),
            0.0f, 0.0f, null
        )
    } finally {
        surfaceView.holder.unlockCanvasAndPost(canvas)
    }
}
The good news is that the above works: the screen updates at roughly the expected frame rate. However, performance is really poor, to the point where the app locks up: it stops responding to e.g. pressing the ◁ button, and eventually an ANR message pops up. Clearly I am doing something wrong, but I am not quite able to tell what exactly.
Is there a way to make the above run faster? Preferably I would like to avoid writing more native code than necessary, and I would especially like to avoid putting anything Android-specific on the native side of things.
Well, there are several problems here. The main one in the Kotlin code is that you create a new Bitmap roughly 60 times per second and never call bitmap.recycle(), which releases the native memory backing a bitmap once you no longer need it. At 60 fps it is admittedly hard to find the right time and place to release it, but it should be somewhere after the frame has been drawn.
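To illustrate the placement, here is a rough sketch based on your onFrameReady() (whether recycling alone is enough to fix the slowdown is something you would have to measure):
// Sketch: recycle the per-frame bitmaps as soon as the draw has been posted,
// so their backing memory is released promptly instead of piling up until GC.
public fun onFrameReady() {
    val canvas: Canvas = surfaceView.holder.lockCanvas() ?: return
    val bitmap = getFrame()
    // scale() produces a second, separate Bitmap here, so it needs releasing too
    val scaled = bitmap.scale(bitmap.width * SCALE, bitmap.height * SCALE)
    try {
        canvas.drawBitmap(scaled, 0.0f, 0.0f, null)
    } finally {
        surfaceView.holder.unlockCanvasAndPost(canvas)
        scaled.recycle()
        bitmap.recycle()
    }
}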
But the bigger issue in your overall code is that there is a proper way to work with a SurfaceView from native code, and you are not using it. To do it correctly, you have to pass your Surface from the Android side directly into the native code. There, obtain an ANativeWindow from it via ANativeWindow_fromSurface(), use ANativeWindow_setBuffersGeometry() to set the size and pixel format, then lock the buffer, copy the pixels in, and unlock the buffer to post it. All of this happens in native code, without any further Android interactions. It will look something like this:
ANativeWindow* window = ANativeWindow_fromSurface(env, javaSurface);
ANativeWindow_setBuffersGeometry(window, width, height, WINDOW_FORMAT_RGBA_8888);
ANativeWindow_Buffer buffer;
if (ANativeWindow_lock(window, &buffer, NULL) == 0) {
    // RGBA_8888 is 4 bytes per pixel, and the window buffer may be padded,
    // so copy row by row using buffer.stride (which is given in pixels).
    const uint8_t* src = (const uint8_t*) pixels;
    uint8_t* dst = (uint8_t*) buffer.bits;
    for (int y = 0; y < height; ++y) {
        memcpy(dst + y * buffer.stride * 4, src + y * width * 4, width * 4);
    }
    ANativeWindow_unlockAndPost(window);
}
ANativeWindow_release(window);
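On the Kotlin side, handing the Surface over could look roughly like this (just a sketch: nativeSetSurface() and nativeReleaseSurface() are made-up names whose native implementations would do the ANativeWindow work shown above):
// Hypothetical native entry points; the native side would call
// ANativeWindow_fromSurface() on the Surface it receives here, and
// ANativeWindow_release() when told to let go of it.
private external fun nativeSetSurface(surface: Surface)
private external fun nativeReleaseSurface()

// Hand the Surface to native code once it exists, and take it back before it
// is destroyed, by listening to the SurfaceHolder callbacks.
surfaceView.holder.addCallback(object : SurfaceHolder.Callback {
    override fun surfaceCreated(holder: SurfaceHolder) =
        nativeSetSurface(holder.surface)

    override fun surfaceChanged(holder: SurfaceHolder, format: Int, width: Int, height: Int) {
        // nothing to do here: the native side sets its own buffer geometry
    }

    override fun surfaceDestroyed(holder: SurfaceHolder) =
        nativeReleaseSurface()
})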
Notice that this way you won't be creating a huge pile of bitmaps, which was the main cause of your performance problem: you will be writing pixels directly into your Surface without any intermediate containers. There is plenty of information about this online, and there is even a Google sample among the NDK samples that demonstrates a similar approach (it renders video via codecs, but still). If something is unclear, ping me and I will try to give you more details.
One more approach you can use is the OpenGL ES API that Android provides for drawing images onto a SurfaceView efficiently; there is also plenty of information about that online. It will be somewhat less efficient than the native drawing, though.
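If you go the GL route, the renderer can consume the same shared direct ByteBuffer by streaming it into a texture every frame. Below is a minimal sketch using a GLES 2.0 renderer on a GLSurfaceView; the FrameRenderer class is made up for illustration, and it assumes the buffer holds tightly packed RGBA pixels with the top row first, and that access to the buffer is synchronised with the producer thread.
import android.opengl.GLES20
import android.opengl.GLSurfaceView
import java.nio.ByteBuffer
import java.nio.ByteOrder
import javax.microedition.khronos.egl.EGLConfig
import javax.microedition.khronos.opengles.GL10

class FrameRenderer(
    private val frameBuffer: ByteBuffer,
    private val width: Int,
    private val height: Int
) : GLSurfaceView.Renderer {

    private var program = 0
    private var textureId = 0

    // Full-screen quad, interleaved as x, y, u, v; the v coordinate is flipped
    // so that the first row in the buffer ends up at the top of the screen.
    private val quad = ByteBuffer.allocateDirect(16 * 4)
        .order(ByteOrder.nativeOrder()).asFloatBuffer().apply {
            put(floatArrayOf(
                -1f, -1f, 0f, 1f,
                 1f, -1f, 1f, 1f,
                -1f,  1f, 0f, 0f,
                 1f,  1f, 1f, 0f))
            position(0)
        }

    override fun onSurfaceCreated(gl: GL10?, config: EGLConfig?) {
        program = GLES20.glCreateProgram().also { p ->
            GLES20.glAttachShader(p, compile(GLES20.GL_VERTEX_SHADER, """
                attribute vec2 aPos; attribute vec2 aTex; varying vec2 vTex;
                void main() { gl_Position = vec4(aPos, 0.0, 1.0); vTex = aTex; }"""))
            GLES20.glAttachShader(p, compile(GLES20.GL_FRAGMENT_SHADER, """
                precision mediump float; varying vec2 vTex; uniform sampler2D uTex;
                void main() { gl_FragColor = texture2D(uTex, vTex); }"""))
            GLES20.glLinkProgram(p)
        }
        val ids = IntArray(1)
        GLES20.glGenTextures(1, ids, 0)
        textureId = ids[0]
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId)
        // CLAMP_TO_EDGE is required for non-power-of-two sizes such as 320x200
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE)
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE)
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST)
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST)
        // Allocate the texture storage once; pixels are streamed into it per frame.
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null)
    }

    override fun onSurfaceChanged(gl: GL10?, w: Int, h: Int) = GLES20.glViewport(0, 0, w, h)

    override fun onDrawFrame(gl: GL10?) {
        GLES20.glUseProgram(program)
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId)
        // Upload the latest frame straight from the shared direct buffer.
        frameBuffer.rewind()
        GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, width, height,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, frameBuffer)
        // The uTex sampler uniform defaults to 0, i.e. texture unit 0, so no
        // explicit glUniform1i call is needed for this single-texture case.
        val aPos = GLES20.glGetAttribLocation(program, "aPos")
        val aTex = GLES20.glGetAttribLocation(program, "aTex")
        quad.position(0)
        GLES20.glVertexAttribPointer(aPos, 2, GLES20.GL_FLOAT, false, 16, quad)
        quad.position(2)
        GLES20.glVertexAttribPointer(aTex, 2, GLES20.GL_FLOAT, false, 16, quad)
        GLES20.glEnableVertexAttribArray(aPos)
        GLES20.glEnableVertexAttribArray(aTex)
        GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4)
    }

    private fun compile(type: Int, source: String): Int =
        GLES20.glCreateShader(type).also {
            GLES20.glShaderSource(it, source)
            GLES20.glCompileShader(it)
        }
}
You would then use a GLSurfaceView instead of the plain SurfaceView: call setEGLContextClientVersion(2), set the renderer to FrameRenderer(frameBuffer, width, height), set renderMode = GLSurfaceView.RENDERMODE_WHEN_DIRTY, and have onFrameReady() simply call requestRender().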