Tags: android, graphics, android-ndk, android-gui, surfaceflinger

How to upscale and render a remote (RGB565) frame buffer on Android native?


I have ported remote-frame-buffer receive code, written in C, to Android 4.2.2; it receives a frame buffer from the host in RGB565 format. I am able to render the received framebuffer by following the standard Android example frameworks/native/services/surfaceflinger/tests/resize/resize.cpp. The following is the code snippet used:

sp<Surface> surface = client->createSurface(String8("resize"),
        800, 480, PIXEL_FORMAT_RGB_565, 0);


SurfaceComposerClient::openGlobalTransaction();
surface->setLayer(100000);
SurfaceComposerClient::closeGlobalTransaction();

Surface::SurfaceInfo info;
surface->lock(&info);
ssize_t bpr = info.s * bytesPerPixel(info.format);
/* rfb is the remote framebuffer filled by the C stack; note this copy
   assumes the surface stride (info.s) matches the 800-pixel width */
memcpy((uint16_t*)info.bits, rfb, 800*480*2);
surface->unlockAndPost();

But I am not able to upscale the received buffer to render full screen on the Android device. For example, the host sends 800*480 but the device screen is 1024*768. I also have the following doubts:
1. Is creating the surface in native code the right way to handle this kind of problem?
2. How do I upscale the raw image and render it on Android native?
3. While writing an app, can the app control this surface, given that it was created in native code?

I am new to Android, and it would be great if someone could guide me on the right path to handle this problem.


Solution

  • You're currently using private SurfaceFlinger APIs, which require privileged access. If you need to do that, I think you want to use the setSize() call to change the size of the window (which is independent of the size of the underlying Surface). This section in the arch doc shows how to read part of the adb shell dumpsys SurfaceFlinger output to see what the actual sizes are -- that'll tell you if the call is working. (Normally you'd go through the Window Manager for this, but you're bypassing most of the Android framework.)
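    For the setSize() route, the call goes in the same kind of transaction as the setLayer() call in the question. A minimal sketch, using the same private APIs as the question's snippet (exact signatures vary between Android releases, so treat this as illustrative):

        // Sketch only: same privileged SurfaceFlinger APIs as above.
        SurfaceComposerClient::openGlobalTransaction();
        // Resize the on-screen window to the display size; the 800x480
        // buffer contents are scaled to fit by the compositor.
        surface->setSize(1024, 768);
        SurfaceComposerClient::closeGlobalTransaction();

    Then check adb shell dumpsys SurfaceFlinger to confirm the window size actually changed.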

    If you can do what you need in an unprivileged app, your code will be more portable and far less likely to break with changes to the operating system. The best way to go about it would be to create an OpenGL ES texture and "upload" the pixels with glTexImage2D() to a GL_UNSIGNED_SHORT_5_6_5 texture. (I'm reasonably confident that the GLES 565 format matches the Android gralloc format, but I haven't tried it.) Once you have the image in a GLES texture you can render it however you like -- you're no longer limited to rectangles.
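    To make the 565 layout concrete: each 16-bit pixel packs red into the top 5 bits, green into the middle 6, and blue into the low 5, which is what glTexImage2D() expects for GL_UNSIGNED_SHORT_5_6_5. A small standalone sketch (pack565 is a hypothetical helper, not an Android or GLES API):

    ```cpp
    #include <cstdint>
    #include <cstdio>

    // Pack 8-bit RGB channels into the RGB565 layout used by
    // GL_UNSIGNED_SHORT_5_6_5: RRRRRGGG GGGBBBBB.
    static uint16_t pack565(uint8_t r, uint8_t g, uint8_t b) {
        return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
    }

    int main() {
        // The actual upload of an 800x480 RGB565 buffer would look like:
        // glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 800, 480, 0,
        //              GL_RGB, GL_UNSIGNED_SHORT_5_6_5, rfb);
        printf("%04x\n", pack565(0xFF, 0x00, 0x00)); // pure red   -> f800
        printf("%04x\n", pack565(0x00, 0xFF, 0x00)); // pure green -> 07e0
        printf("%04x\n", pack565(0x00, 0x00, 0xFF)); // pure blue  -> 001f
        return 0;
    }
    ```

    Once the texture is uploaded, drawing it as a screen-sized quad gives you the upscaling for free, done by the GPU.
    
    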

    Some examples can be found in Grafika. In particular, the "texture upload benchmark" activity demonstrates uploading and rendering textures. (It's a benchmark, so it's using an off-screen texture, but other activities such as "texture from camera" show how to do stuff on-screen.)

    The GLES-based approach is significantly more work, but you can pull most of the pieces out of Grafika. The Java-language GLES code is generally just a thin wrapper around the native equivalents, so if you're determined to use the NDK for the GLES work it's a fairly straight conversion. Since all of the heavy lifting is done by the graphics driver, though, there's not much point in using the NDK for this. (If the pixels are arriving through native-only code, wrap the buffer with a "direct" ByteBuffer to get access from Java-language code.)
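    For that last point, JNI's NewDirectByteBuffer is the standard way to wrap a native buffer. A sketch, assuming a hypothetical Java class com.example.rfb.RfbClient with a native getFrameBuffer() method:

        #include <jni.h>
        #include <cstdint>

        // rfb is the remote framebuffer filled by the native RFB stack,
        // as in the question.
        extern uint16_t *rfb;

        // Hypothetical JNI binding: exposes the native framebuffer to
        // Java as a direct ByteBuffer, so Java-language GLES code can
        // hand it to glTexImage2D() without copying.
        extern "C" JNIEXPORT jobject JNICALL
        Java_com_example_rfb_RfbClient_getFrameBuffer(JNIEnv *env, jclass) {
            return env->NewDirectByteBuffer(rfb, 800 * 480 * 2);
        }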