Background: I demux a video file, decode the video track, apply some changes to the decoded frames, then encode and mux them again.
The known issue with doing this on Android is the number of vendor-specific encoder/decoder color formats. Android 4.3 introduced surfaces to achieve device independence, but I found them hard to work with, as my frame-changing routines require a Canvas to write to.
Since Android 5.0, the flexible YUV420 color formats look promising. Together with getOutputImage for decoding and getInputImage for encoding, Image objects can be used in the format retrieved from a decoding MediaCodec. I got decoding working using getOutputImage and could visualize the result after RGB conversion. For encoding, however - filling a YUV Image and queuing it into a MediaCodec (encoder) - there seems to be a missing link:
After dequeuing an input buffer from MediaCodec
int inputBufferId = encoder.dequeueInputBuffer(5000);
I can get access to a proper image returned by
encoder.getInputImage(inputBufferId);
I fill in the image's plane buffers - which works too - but I do not see a way to queue the input buffer back into the codec for encoding. There is only a
encoder.queueInputBuffer(inputBufferId, position, size, presentationUs, 0);
method available, but nothing that accepts an Image. The size required for the call can be retrieved using
ByteBuffer byteBuffer = encoder.getInputBuffer(inputBufferId);
and
byteBuffer.remaining();
But this seems to screw up the encoder when called in addition to getInputImage().
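As a sanity check on whatever size value ends up being used: for a tightly packed YUV 4:2:0 frame the nominal payload can be computed from the geometry alone. A minimal plain-Java sketch (assumption: no row or plane padding, which real codecs often add via strides, so the value reported by the codec may be larger):

```java
// Nominal byte count of a tightly packed YUV 4:2:0 frame: one
// full-resolution Y plane plus quarter-resolution U and V planes,
// i.e. w*h + 2 * (w/2 * h/2) = w*h * 3/2.
static int yuv420Size(int width, int height) {
    return width * height * 3 / 2;
}
```

If remaining() reports substantially more than this, the difference is typically stride/alignment padding.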
Is this another missing piece of documentation, or am I just getting something wrong?
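Condensed, the flow described above looks roughly like this (a sketch only: fillYuvImage is a hypothetical helper that writes a frame into the Image's planes, and the choice of size is exactly the open question):

```java
import android.media.Image;
import android.media.MediaCodec;

// Sketch of the encode-side flow from the question (API 21+).
void queueFrame(MediaCodec encoder, long presentationUs, int size) {
    int inputBufferId = encoder.dequeueInputBuffer(5000);
    if (inputBufferId >= 0) {
        Image image = encoder.getInputImage(inputBufferId);
        fillYuvImage(image);  // hypothetical: write Y, U and V plane buffers
        // Which 'size' to pass for an Image-filled buffer is unclear:
        encoder.queueInputBuffer(inputBufferId, 0, size, presentationUs, 0);
    }
}
```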
This is indeed a bit problematic - the most foolproof way probably is to calculate the maximum distance from the start pointer of any plane in the Image to the last byte of any plane, but you need native code to do this (in order to get the actual pointer values of the direct byte buffers).
A second alternative is to use getInputBuffer as you show, but with one caveat: first call getInputBuffer to get the ByteBuffer, and call remaining() on it (or perhaps capacity() works better?). Only after this, call getInputImage. The detail is that calling getInputImage invalidates the ByteBuffer returned by getInputBuffer, and vice versa. (The docs for MediaCodec.getInputBuffer(int) say: "After calling this method any ByteBuffer or Image object previously returned for the same input index MUST no longer be used.")
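Putting that ordering into code, a sketch might look like this (assumptions: fillYuvImage is an illustrative helper, not a framework method, and error handling is elided):

```java
import java.nio.ByteBuffer;
import android.media.Image;
import android.media.MediaCodec;

// Query the size via getInputBuffer() FIRST, then switch to the Image.
void queueFrame(MediaCodec encoder, long presentationUs) {
    int inputBufferId = encoder.dequeueInputBuffer(5000);
    if (inputBufferId >= 0) {
        ByteBuffer byteBuffer = encoder.getInputBuffer(inputBufferId);
        int size = byteBuffer.remaining();  // or byteBuffer.capacity()
        // getInputImage() invalidates byteBuffer - that is fine, since we
        // only needed its size. Do NOT touch byteBuffer after this call.
        Image image = encoder.getInputImage(inputBufferId);
        fillYuvImage(image);  // hypothetical: write Y, U and V plane buffers
        encoder.queueInputBuffer(inputBufferId, 0, size, presentationUs, 0);
    }
}
```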