ios decoder hevc h.265 video-toolbox

iOS - fails to decode HEVC (H.265) stream if resolution is over 1080p


I am using Apple's VideoToolbox API to decode an HEVC stream, and an AVSampleBufferDisplayLayer to render the decoded frames.

I can successfully decode frames if the source resolution is 1080p (1920 × 1080) or less.

If the resolution is higher than 1080p, I see a black screen and the following error from AVSampleBufferDisplayLayerFailedToDecodeNotification:

Optional(Error Domain=AVFoundationErrorDomain Code=-11821 "Cannot Decode" UserInfo={AVErrorMediaSubTypeKey=( 1752589105 ), NSLocalizedDescription=Cannot Decode, NSLocalizedFailureReason=The media data could not be decoded. It may be damaged., AVErrorMediaTypeKey=vide, AVErrorPresentationTimeStampKey=CMTime: {INVALID}, NSUnderlyingError=0x2830c3390 {Error Domain=NSOSStatusErrorDomain Code=-12909 "(null)"}})

-11821 = AVErrorDecodeFailed
-12909 = kVTVideoDecoderBadDataErr

Am I missing anything for higher resolutions? Do I need to set the correct HEVC level, profile, or tier? I am not sure what to do.

I would appreciate your input. Thanks!


Solution

  • HEVC has a concept of slices, where a single picture is broken up into multiple slices. Many cameras only use slices at higher resolutions; for example, Hikvision cameras break each image into 3 slices, while Annke cameras break high-resolution images into 4 slices. The last slice of a frame has the marker bit set in the RTP header, and there is also a "first slice" flag in the slice header.

    To decode these slices with the Apple VideoToolbox decoder, you have to put all the slices for a particular frame, each with its own 4-byte length field, into a single CMBlockBuffer. That buffer is then used to create one CMSampleBuffer, which gets passed to VTDecompressionSessionDecodeFrameWithOutputHandler() or [AVSampleBufferDisplayLayer enqueueSampleBuffer:], as your application needs.
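    The length-prefixing step can be sketched in plain Swift. This is a minimal illustration of the byte layout only (the `packSlices` helper name is made up, and it assumes each input slice is a raw NAL unit with its Annex-B start code already stripped); on iOS you would then copy the result into a CMBlockBuffer via CMBlockBufferCreateWithMemoryBlock and wrap it in a CMSampleBuffer before enqueueing:

    ```swift
    // Hypothetical helper: pack all slice NAL units belonging to one frame
    // into a single buffer in the length-prefixed layout VideoToolbox expects,
    // where each NAL unit is preceded by a 4-byte big-endian length field.
    func packSlices(_ slices: [[UInt8]]) -> [UInt8] {
        var out: [UInt8] = []
        for nalu in slices {
            let len = UInt32(nalu.count)
            // 4-byte big-endian length prefix (replaces the Annex-B start code).
            out.append(UInt8((len >> 24) & 0xFF))
            out.append(UInt8((len >> 16) & 0xFF))
            out.append(UInt8((len >> 8) & 0xFF))
            out.append(UInt8(len & 0xFF))
            // The NAL unit payload itself follows its length field.
            out.append(contentsOf: nalu)
        }
        return out
    }
    ```

    Feeding the decoder one slice per CMSampleBuffer is what typically produces kVTVideoDecoderBadDataErr here: the decoder sees an incomplete frame. Packing every slice of the frame into one buffer, as above, gives it the complete picture.
    
    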