javascript media-source mpeg-dash

Dynamically append and remove MPEG-DASH segments from a MediaSource SourceBuffer


I am writing a simple MPEG-DASH streaming player using the HTML5 video element. I create a MediaSource, attach a SourceBuffer to it, and then append DASH fragments into that SourceBuffer; everything works fine.
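
Roughly, my setup looks like this (simplified; the codec string and segment URL are just placeholders):

    var video = document.querySelector('video');
    var mediaSource = new MediaSource();
    video.src = URL.createObjectURL(mediaSource);

    mediaSource.addEventListener('sourceopen', function () {
      var sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.64001f"');

      // Download the init segment and append it; media segments follow the same path.
      fetchSegment('init.mp4', function (data) {
        sourceBuffer.appendBuffer(data);
      });
    });

    // Hypothetical helper: download a segment as an ArrayBuffer.
    function fetchSegment(url, callback) {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', url);
      xhr.responseType = 'arraybuffer';
      xhr.onload = function () { callback(xhr.response); };
      xhr.send();
    }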

Now, what I want to do is pre-fetch those segments dynamically, depending on the current time of the media element. While doing this I have a lot of questions that are not answered by the MediaSource documentation.

  1. Is it possible to know how much data the SourceBuffer can hold at a time? If I have a very large video and append all of its fragments into the SourceBuffer, will it accommodate them all, or will it cause errors or slow down my browser?

  2. How do I compute the number of fragments in the SourceBuffer?

  3. How do I compute the presentation time or end time of the last segment in the SourceBuffer?

  4. How do I remove only a specific set of fragments from the SourceBuffer and replace them with segments of another resolution? (I want to do this to support adaptive resolution switching at run time.)

Thanks.


Solution

    1. The maximum amount of buffered data is an implementation detail and is not exposed to the developer in any way, AFAIK. According to the spec, when appending new data the browser executes the coded frame eviction algorithm, which removes any buffered data it deems unnecessary. Browsers tend to remove parts of the stream that have already been played and not to remove parts that lie in the future relative to the current time. This means that if the stream is very large and the DASH player downloads it very quickly, faster than the media element plays it, a lot of the stream cannot be removed by the coded frame eviction algorithm, and this may cause the appendBuffer method to throw a QuotaExceededError. Of course, a good DASH player should monitor the buffered amount and not download excessive amounts of data.

      In plain text: you have nothing to worry about, unless your player downloads the whole stream as quickly as possible without taking the currently buffered amount into consideration (see the first sketch after this list).

    2. The MSE API works with a stream of data (audio or video); it has no knowledge of segments. Theoretically you could take the buffered time range and map it to a pair of segments using the timing data provided in the MPD, but that is fragile IMHO. It is better to keep track of the downloaded and fed segments yourself.

    3. Look at the buffered property. The easiest way to get the end time, in seconds, of the last appended segments is simply videoElement.buffered.end(0), assuming there is a single contiguous buffered range.

      If by presentation time you mean the presentation timestamp (PTS) of the last buffered frame, then there is no way of getting it apart from parsing the stream itself.

    4. To remove buffered data you can use the remove method (see the removal sketch after this list).

      Quality switching is actually quite easy, although the spec doesn't say much about it. To switch qualities, the only thing you have to do is append the init header (initialization segment) of the new quality to the SourceBuffer. After that you can append the segments of the new quality as usual (see the last sketch after this list).
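
    A rough sketch of the buffering logic from points 1-3 (the buffer-ahead limit, segment list, and helper names are arbitrary examples of mine, not anything mandated by MSE):

        var BUFFER_AHEAD = 30;    // seconds to keep buffered ahead of currentTime (arbitrary)
        var nextSegmentIndex = 0; // point 2: track the fed segments yourself

        function maybeAppendNext(video, sourceBuffer, segments) {
          if (sourceBuffer.updating || nextSegmentIndex >= segments.length) return;

          // Point 3: end of the last buffered range (same as buffered.end(0) when contiguous).
          var buffered = video.buffered;
          var bufferedEnd = buffered.length ? buffered.end(buffered.length - 1) : 0;

          // Point 1: don't append too far ahead of the playhead.
          if (bufferedEnd - video.currentTime > BUFFER_AHEAD) return;

          try {
            sourceBuffer.appendBuffer(segments[nextSegmentIndex]);
            nextSegmentIndex++;
          } catch (e) {
            if (e.name === 'QuotaExceededError') {
              // Buffer is full: retry later, or remove already-played data first.
            } else {
              throw e;
            }
          }
        }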
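
    For the removal part of point 4, note that remove() is asynchronous, so wait for 'updateend' before touching the buffer again. A minimal sketch (the 30-second back-buffer is an arbitrary choice):

        // Evict everything buffered more than 30 seconds behind the playhead.
        function evictPlayedData(video, sourceBuffer) {
          var end = video.currentTime - 30;
          if (end <= 0 || sourceBuffer.updating) return;

          sourceBuffer.addEventListener('updateend', function onDone() {
            sourceBuffer.removeEventListener('updateend', onDone);
            // Safe to append or remove again from here.
          });
          sourceBuffer.remove(0, end);
        }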
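
    And the resolution switch itself, as described in point 4, is just appending the new representation's init segment before its media segments (fetchSegment is the same hypothetical download helper used in the question, and initSegmentUrl is a made-up field):

        function switchQuality(sourceBuffer, newQuality) {
          fetchSegment(newQuality.initSegmentUrl, function (initData) {
            // Append the init header of the new quality...
            sourceBuffer.appendBuffer(initData);
            // ...and once 'updateend' fires, append its media segments as usual.
          });
        }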

    I personally find the YouTube DASH MSE test player a good place to learn.