Tags: ios14, hevc, video-toolbox

VideoToolbox HEVC decoding failing for iOS 14 on device


So while I'm sure I'm not about to provide enough info for anyone to fix my specific code, what I am itching to know is this:

Does anyone know what might have changed in iOS 14 with respect to HEVC decoding requirements?


I have a decoder built using VideoToolbox for an HEVC-encoded video stream coming over the network. It was, and still is, working fine on iOS 13 devices and in iOS 14 simulators, but it fails most of the time on physical devices running iOS 14 (up to 14.4 at the time of writing). "Most of the time", because sometimes it does just work, depending on where in the stream I begin decoding.

An error I'm occasionally getting from my decompression output callback record is OSStatus -12909 (kVTVideoDecoderBadDataErr). So far, so unhelpful.
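For reference, a minimal sketch of the kind of output callback involved, in case the shape of it matters (the names and the rendering hand-off here are placeholders, not my actual code):

```swift
import VideoToolbox

// Sketch of a VTDecompressionOutputCallback; the record is passed to
// VTDecompressionSessionCreate(). The rendering hand-off is hypothetical.
let outputCallback: VTDecompressionOutputCallback = { _, _, status, _, imageBuffer, presentationTimeStamp, _ in
    guard status == noErr else {
        // kVTVideoDecoderBadDataErr is OSStatus -12909.
        print("Decode failed with OSStatus \(status)")
        return
    }
    // ...hand imageBuffer (a CVImageBuffer?) off for rendering at presentationTimeStamp...
}

var callbackRecord = VTDecompressionOutputCallbackRecord(
    decompressionOutputCallback: outputCallback,
    decompressionOutputRefCon: nil)
```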

Or I may get no error output at all, as in a unit test which takes fixed packets of data in and should always generate video frames out. (This test likewise fails to generate the expected frames on iOS 14 devices.)

Has anyone else had issues with HEVC decoding in iOS 14 specifically? I'm literally fishing for clues here... I've tried toggling all the usual input flags for VTDecompressionSessionDecodeFrame() (._EnableAsynchronousDecompression, ._EnableTemporalProcessing, ...).
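For context, the decode call looks roughly like this (a sketch; `session` and `sampleBuffer` come from setup code not shown):

```swift
import VideoToolbox

// Hypothetical decode call showing the flags I've been toggling.
var infoFlags = VTDecodeInfoFlags()
let status = VTDecompressionSessionDecodeFrame(
    session,
    sampleBuffer: sampleBuffer,
    flags: [._EnableAsynchronousDecompression, ._EnableTemporalProcessing],
    frameRefcon: nil,
    infoFlagsOut: &infoFlags)
```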

I've also tried redoing my entire rendering layer to use AVSampleBufferDisplayLayer with the raw CMSampleBuffers. It decodes perfectly!! But I can't use it... because I need to micromanage the timing of the output frames myself (and they're not always in order).
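(That experiment amounted to little more than enqueuing the raw buffers, something along these lines, with `displayLayer` being an AVSampleBufferDisplayLayer in the view hierarchy:)

```swift
import AVFoundation

// Sketch of the AVSampleBufferDisplayLayer experiment: the layer decodes
// and displays internally, so there's no per-frame timing control.
if displayLayer.isReadyForMoreMediaData {
    displayLayer.enqueue(sampleBuffer)
}
```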



(If it helps, the fixed input packets I'm feeding into my unit test include NALUs of the following types, in order: NAL_UNIT_VPS, NAL_UNIT_SPS, NAL_UNIT_PPS, NAL_UNIT_PREFIX_SEI, NAL_UNIT_CODED_SLICE_CRA, and finally NAL_UNIT_CODED_SLICE_TRAIL_N and NAL_UNIT_CODED_SLICE_TRAIL_R. I took these from a working network stream at some point in the past to serve as a basic sanity test.)


Solution

  • So this morning I came across a solution / workaround. It still rather begs the original question of "what happened??", but here it is; may it help someone:

    The kVTVideoDecoderBadDataErr error was occurring on all NALU packets of type RASL_R or RASL_N, which were typically coming in from my video stream immediately after the first content frame (a CRA-type NALU).

    Simply skipping these packets (i.e. not passing them to VTDecompressionSessionDecodeFrame()) has resolved the issue for me, and my decoder now works fine on both iOS 13 and 14. (See the sketch at the end of this answer.)


    The section on "Random Access Support" here says "RASL frames are ... usually discarded." I wonder if iOS 13 and earlier VideoToolbox implementations discarded these frames, while newer implementations don't, leaving that up to the developer in this case?
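    In case it's useful, here's a rough sketch of the skip, assuming Annex B NALUs with start codes already stripped (the helper names are mine, and the decode loop itself is elided):

    ```swift
    import Foundation

    // HEVC nal_unit_type values, per ITU-T H.265 Table 7-1.
    let RASL_N: UInt8 = 8
    let RASL_R: UInt8 = 9

    // The two-byte HEVC NAL unit header stores the type in bits 1-6 of the first byte.
    func hevcNALUnitType(of nalu: Data) -> UInt8 {
        (nalu[nalu.startIndex] >> 1) & 0x3F
    }

    // Filter applied before wrapping a NALU in a CMSampleBuffer and calling
    // VTDecompressionSessionDecodeFrame(): drop RASL slices entirely.
    func shouldDecode(_ nalu: Data) -> Bool {
        let type = hevcNALUnitType(of: nalu)
        return type != RASL_N && type != RASL_R
    }
    ```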