I'm trying to understand Android audio buffer management for streaming use cases, e.g. YouTube streaming. As per my understanding from the following URL: http://quandarypeak.com/2013/08/androids-stagefright-media-player-architecture/ , the application sends data to the Stagefright Media Player through the Native Media Player subsystem.
Can someone please explain how the buffer transfer happens between the Native Media Player and the Stagefright Media Player? Does the media data get downloaded at the Native Media Player or at the Stagefright Media Player?
Your question is fairly open-ended, so I will try to summarize the answer and provide an overview. To understand the system better, it is recommended to refer to the source files or to ask more targeted questions. For the life-cycle of a player, please refer to the MediaPlayer documentation.
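For orientation, a minimal example of the application-level MediaPlayer lifecycle for a streamed URI could look like the sketch below (the URL is only a placeholder):

```java
import android.media.AudioManager;
import android.media.MediaPlayer;

import java.io.IOException;

public class StreamingPlayerExample {
    // Placeholder URL; any HTTP(S) stream supported by the framework would do.
    private static final String STREAM_URL = "https://example.com/stream.mp4";

    public static MediaPlayer createPlayer() throws IOException {
        MediaPlayer player = new MediaPlayer();
        player.setAudioStreamType(AudioManager.STREAM_MUSIC);
        // setDataSource() hands the URI down to the native player engine.
        player.setDataSource(STREAM_URL);
        // prepareAsync() lets the engine connect and buffer off the UI thread;
        // playback starts once the engine reports that it is prepared.
        player.setOnPreparedListener(mp -> mp.start());
        player.prepareAsync();
        return player;
    }
}
```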
From an architectural perspective, the native layer creates the corresponding player engine, which in your example is StagefrightPlayer. The interactions between the native layer and StagefrightPlayer are mostly administrative in nature: user commands and requests are passed down, and feedback from the underlying layer is reported back to the user layer.
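To illustrate that administrative control path, the following sketch uses hypothetical interfaces (none of these names are real framework classes) to show commands being delegated to whichever engine was selected, with feedback flowing back up through a listener:

```java
// Hypothetical sketch of the "administrative" control path: commands are
// delegated to the selected engine, and feedback flows back up via a listener.
// None of these names are real framework classes.
interface PlayerEngine {
    void setDataSource(String uri);
    void prepare();
    void start();
    void setListener(EngineListener listener);
}

interface EngineListener {
    void onPrepared();
    void onBufferingUpdate(int percent);
}

class NativePlayerFacade implements EngineListener {
    private final PlayerEngine engine;          // e.g. a Stagefright-based engine

    NativePlayerFacade(PlayerEngine engine) {
        this.engine = engine;
        engine.setListener(this);
    }

    // User commands are simply passed through to the engine.
    void setDataSource(String uri) { engine.setDataSource(uri); }
    void prepare()                 { engine.prepare(); }
    void start()                   { engine.start(); }

    // Feedback from the engine is surfaced back to the application layer.
    @Override public void onPrepared()             { /* notify app: prepared */ }
    @Override public void onBufferingUpdate(int p) { /* notify app: buffering p% */ }
}
```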
The real data transaction happens much further down, i.e. below StagefrightPlayer. When the user creates a player, a URI is provided (setDataSource), which is passed to the player engine during its creation. The Stagefright player creates an AwesomePlayer, and the data source is set on the AwesomePlayer.
In AwesomePlayer, a MediaExtractor is created, and the data source is provided as part of its creation.
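The setup chain up to this point could be sketched roughly as follows, using hypothetical Java types as stand-ins for the C++ classes StagefrightPlayer, AwesomePlayer, and MediaExtractor:

```java
// Hypothetical, much-simplified setup chain. The real classes are C++ code in
// frameworks/av/media/libstagefright; these Java types are only illustrative.
interface UriDataSource {
    int readAt(long offset, byte[] buffer, int size);
}

class TrackExtractor {                         // stands in for MediaExtractor
    private final UriDataSource source;
    TrackExtractor(UriDataSource source) {     // the data source is supplied at creation
        this.source = source;
    }
}

class CorePlayer {                             // stands in for AwesomePlayer
    private TrackExtractor extractor;
    void setDataSource(UriDataSource source) {
        this.extractor = new TrackExtractor(source);   // extractor built around the source
    }
}

class StagefrightEngine {                      // stands in for StagefrightPlayer
    private final CorePlayer core = new CorePlayer();  // engine owns the core player
    void setDataSource(UriDataSource source) {
        core.setDataSource(source);            // the URI/source is handed straight down
    }
}
```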
Now, the data transaction for audio flows from sink to source: AudioTrack pulls data from AudioPlayer, which encompasses an OMXCodec. The codec pulls data from the MediaExtractor, which in turn pulls the data from the source. In the case of streaming data, one can buffer or cache the data via NuCachedSource2, which essentially creates a page cache. When the MediaExtractor requests data, it is served from the page cache instead of waiting for data to be buffered from the network source.
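Putting this together, the pull model could be sketched as below. The class names are hypothetical Java stand-ins for the C++ components (Sink for AudioTrack/AudioPlayer, Decoder for OMXCodec, Extractor for MediaExtractor, CachedSource for NuCachedSource2); the point is only that every read is driven from the sink downwards, with the page cache absorbing network latency:

```java
// Hypothetical pull-model sketch; the class names only mirror the real
// C++ components and are not the actual framework API.
interface Source {
    int readAt(long offset, byte[] buf, int size);
}

// Stands in for NuCachedSource2: serves reads from an in-memory window that
// is filled ahead of the current read position instead of hitting the
// network for every request.
class CachedSource implements Source {
    private final Source network;                        // the real (slow) network source
    private final byte[] cache = new byte[256 * 1024];   // toy "page cache"
    private long cacheStart = 0;
    private int cacheLen = 0;

    CachedSource(Source network) { this.network = network; }

    @Override public int readAt(long offset, byte[] buf, int size) {
        // Serve from the cache when the requested range is already buffered...
        if (offset >= cacheStart && offset + size <= cacheStart + cacheLen) {
            System.arraycopy(cache, (int) (offset - cacheStart), buf, 0, size);
            return size;
        }
        // ...otherwise read a whole window ahead of the request (read-ahead).
        cacheStart = offset;
        cacheLen = Math.max(0, network.readAt(offset, cache, cache.length));
        int n = Math.min(size, cacheLen);
        System.arraycopy(cache, 0, buf, 0, n);
        return n;
    }
}

class Extractor {                                        // ~ MediaExtractor
    private final Source source;
    private long offset = 0;
    Extractor(Source source) { this.source = source; }

    int readSample(byte[] buf) {                         // pulls compressed data from the source
        int n = source.readAt(offset, buf, buf.length);
        if (n > 0) offset += n;
        return n;
    }
}

class Decoder {                                          // ~ OMXCodec
    private final Extractor extractor;
    Decoder(Extractor extractor) { this.extractor = extractor; }

    int readDecoded(byte[] pcm) {                        // pulls from the extractor on demand
        return extractor.readSample(pcm);                // (decode step omitted in this sketch)
    }
}

class Sink {                                             // ~ AudioTrack buffer fill
    private final Decoder decoder;
    Sink(Decoder decoder) { this.decoder = decoder; }

    void fillBuffer(byte[] pcm) {                        // the sink drives the whole chain:
        decoder.readDecoded(pcm);                        // sink -> decoder -> extractor -> source
    }
}
```

Each fillBuffer() call on the sink propagates a read request all the way down, and only the lowest layer blocks on the network when the page cache runs dry.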