I was able to decode an mp4 video. If I configure the decoder using a Surface I can see the video on screen. Now I want to edit each frame (adding a yellow line, or even better, overlaying a tiny image) and encode the video as a new video. I don't need to show the video while processing, and I don't care about performance right now (if I showed the frames while editing, there could be a gap whenever the editing function takes a long time). So, what do you recommend: configure the decoder with a GL surface anyway and use OpenGL ES, or configure it with null, somehow convert the ByteBuffer to a Bitmap, modify it, and encode the bitmap as a byte array? I also saw on the Grafika page that you can use a Surface with a custom renderer and use OpenGL ES. Thanks.
You will have to use OpenGL ES; the ByteBuffer/Bitmap approach cannot give realistic performance or features.
Now that you've been able to decode the video (using MediaExtractor and MediaCodec) to a Surface, you need to use the SurfaceTexture that was used to create that Surface as an external texture, and render using GLES to another Surface retrieved from a MediaCodec configured as an encoder.
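A minimal sketch of that wiring might look like the following. This assumes Grafika's EglCore and WindowSurface helper classes are on hand, that encoderFormat, decoderFormat, and mime are set up elsewhere, and that createExternalTexture() is a hypothetical helper that generates a GL_TEXTURE_EXTERNAL_OES texture; error handling and threading are omitted:

    // Encoder first, so its input Surface can back the EGL context.
    MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
    encoder.configure(encoderFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    Surface encoderInputSurface = encoder.createInputSurface();
    encoder.start();

    // EGL context that renders onto the encoder's input surface.
    EglCore eglCore = new EglCore(null, EglCore.FLAG_RECORDABLE);
    WindowSurface encoderSurface = new WindowSurface(eglCore, encoderInputSurface, true);
    encoderSurface.makeCurrent();

    // External (OES) texture that will receive the decoder's frames.
    int texId = createExternalTexture();  // hypothetical: glGenTextures + GL_TEXTURE_EXTERNAL_OES setup
    SurfaceTexture surfaceTexture = new SurfaceTexture(texId);
    Surface decoderOutputSurface = new Surface(surfaceTexture);

    // Decoder renders into the SurfaceTexture-backed Surface.
    MediaCodec decoder = MediaCodec.createDecoderByType(mime);
    decoder.configure(decoderFormat, decoderOutputSurface, null, 0);
    decoder.start();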
Though Grafika doesn't have an exactly matching complete project, you can start with your existing project and then adapt either of these Grafika subprojects: Continuous capture or Show + capture camera, both of which currently render camera frames (fed to a SurfaceTexture) to a video (and to the display).
So essentially, the only change is that the MediaCodec decoder, rather than the camera, feeds frames to the SurfaceTexture.
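Concretely, the per-frame handoff looks roughly like this (a sketch; in real code you must wait for the SurfaceTexture.OnFrameAvailableListener callback before latching, as the awaitNewImage() helper in the CTS test mentioned below does):

    // render == true sends the decoded frame to the Surface backing surfaceTexture.
    decoder.releaseOutputBuffer(outputBufferIndex, true /* render */);

    // After the frame-available callback fires, latch the frame into the
    // external texture and fetch its transform matrix for the shader.
    surfaceTexture.updateTexImage();
    surfaceTexture.getTransformMatrix(texMatrix);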
Google's CTS DecodeEditEncodeTest does exactly this and can be used as a reference to make the learning curve smoother.
Using this approach, you can certainly do all sorts of things: manipulate the playback speed of the video (fast-forward and slow-down), add all sorts of overlays on the scene, play with the colors/pixels in the video using shaders, and so on. Check out the filters in Show + capture camera for an illustration of the same.
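As a toy example of the edit you asked about (a yellow line), a fragment shader sampling the external texture could paint the line directly. This is only a sketch with made-up variable names, not anything taken from Grafika:

    // Hypothetical fragment shader: samples the decoded frame from the external
    // texture and paints a horizontal yellow line across the middle of the frame.
    private static final String FRAGMENT_SHADER =
            "#extension GL_OES_EGL_image_external : require\n" +
            "precision mediump float;\n" +
            "varying vec2 vTextureCoord;\n" +
            "uniform samplerExternalOES sTexture;\n" +
            "void main() {\n" +
            "    vec4 color = texture2D(sTexture, vTextureCoord);\n" +
            "    if (abs(vTextureCoord.y - 0.5) < 0.005) {\n" +
            "        color = vec4(1.0, 1.0, 0.0, 1.0);  // opaque yellow\n" +
            "    }\n" +
            "    gl_FragColor = color;\n" +
            "}\n";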
[Diagram: decode-edit-encode flow]
When using OpenGL ES, the 'editing' of the frame happens by rendering with GLES to the encoder's input surface.
If decoding and rendering+encoding are separated into different threads, you're bound to drop frames, unless you implement some sort of synchronisation between the two threads to keep the decoder waiting until the render+encode for that frame has happened on the other thread.
Although modern hardware codecs support simultaneous video encoding and decoding, I'd suggest doing the decoding, rendering, and encoding on the same thread, especially in your case, where performance is not a major concern right now. That will help you avoid the problems of having to handle the synchronisation on your own and/or frame jumps.
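On a single thread, the skeleton of that loop could look roughly like this (a sketch building on the earlier snippets; feedDecoderFromExtractor(), drawEditedFrame(), and drainEncoder() are hypothetical helpers, TIMEOUT_USEC is an assumed constant, and the frame-available wait is again elided):

    // Sketch of a single-threaded decode -> render -> encode loop.
    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    boolean outputDone = false;
    while (!outputDone) {
        feedDecoderFromExtractor();  // hypothetical: queue input buffers from MediaExtractor

        int index = decoder.dequeueOutputBuffer(info, TIMEOUT_USEC);
        if (index >= 0) {
            boolean render = (info.size != 0);
            decoder.releaseOutputBuffer(index, render);  // render -> SurfaceTexture
            if (render) {
                surfaceTexture.updateTexImage();     // latch the frame (after onFrameAvailable)
                drawEditedFrame();                   // hypothetical: GLES draw, incl. overlays
                encoderSurface.setPresentationTime(info.presentationTimeUs * 1000);
                encoderSurface.swapBuffers();        // submit the frame to the encoder
                drainEncoder();                      // hypothetical: encoder output -> MediaMuxer
            }
            if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                encoder.signalEndOfInputStream();
                drainEncoder();                      // real code would loop until encoder EOS
                outputDone = true;
            }
        }
    }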