Tags: ios, objective-c, audio-recording, avassetwriter, avasset

AVAssetWriterInput appendSampleBuffer succeeds, but logs error kCMSampleBufferError_BufferHasNoSampleSizes from CMSampleBufferGetSampleSize


Starting with the iOS 12.4 betas, calling appendSampleBuffer on an AVAssetWriterInput logs the following error:

CMSampleBufferGetSampleSize signalled err=-12735 (kCMSampleBufferError_BufferHasNoSampleSizes) (sbuf->numSampleSizeEntries == 0) at /BuildRoot/Library/Caches/com.apple.xbs/Sources/EmbeddedCoreMediaFramework/EmbeddedCoreMedia-2290.12/Sources/Core/FigSampleBuffer/FigSampleBuffer.c:4153

We don't see this error on prior versions, nor on the iOS 13 beta. Has anyone else encountered this, and can you provide any information to help us fix it?

More details

Our app records video and audio using two AVAssetWriterInput objects: one for video (appending pixel buffers) and one for audio (appending audio buffers created with CMSampleBufferCreate). See the code below.

Since our audio data is non-interleaved, after creating the buffer we convert the data to interleaved format and pass it to appendSampleBuffer.
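For context, the interleaved format and buffer list used below might be set up roughly like this (a sketch only, assuming stereo float LPCM; _interleavedASBD and _interleavedABL are our members, maxFrames is a placeholder, and the exact initialization is paraphrased):

// Sketch (assumed): derive an interleaved stereo float ASBD from the
// non-interleaved source format, and allocate one interleaved buffer.
_interleavedASBD = _asbdFormat;
_interleavedASBD.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked; // no kAudioFormatFlagIsNonInterleaved
_interleavedASBD.mChannelsPerFrame = 2;
_interleavedASBD.mBytesPerFrame = _interleavedASBD.mChannelsPerFrame * sizeof(float);
_interleavedASBD.mBytesPerPacket = _interleavedASBD.mBytesPerFrame; // LPCM: 1 frame per packet

_interleavedABL.mNumberBuffers = 1;
_interleavedABL.mBuffers[0].mNumberChannels = _interleavedASBD.mChannelsPerFrame;
_interleavedABL.mBuffers[0].mDataByteSize = maxFrames * _interleavedASBD.mBytesPerFrame;
_interleavedABL.mBuffers[0].mData = malloc(_interleavedABL.mBuffers[0].mDataByteSize); // freed in dealloc (not shown)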

Relevant Code

// Creating the audio buffer:
CMSampleBufferRef buff = NULL;
CMSampleTimingInfo timing = {
        CMTimeMake(1, _asbdFormat.mSampleRate), // duration of a single frame
        currentAudioTime,                       // presentationTimeStamp
        kCMTimeInvalid };                       // decodeTimeStamp (none)


OSStatus status = CMSampleBufferCreate(kCFAllocatorDefault,
                                           NULL,                           // dataBuffer (attached later)
                                           false,                          // dataReady
                                           NULL,                           // makeDataReadyCallback
                                           NULL,                           // makeDataReadyRefcon
                                           _cmFormat,                      // formatDescription
                                           (CMItemCount)(*inNumberFrames), // numSamples
                                           1,                              // numSampleTimingEntries
                                           &timing,                        // sampleTimingArray
                                           0,                              // numSampleSizeEntries
                                           NULL,                           // sampleSizeArray
                                           &buff);

// checking for error... (none returned)

// Converting from non-interleaved to interleaved:
// vDSP_vsadd with a zero addend and an output stride of 2 writes each
// source channel into every other slot of the interleaved buffer.
    float zero = 0.f;
    vDSP_vclr(_interleavedABL.mBuffers[0].mData, 1, numFrames * 2);
    // Channel L
    vDSP_vsadd(ioData->mBuffers[0].mData, 1, &zero, _interleavedABL.mBuffers[0].mData, 2, numFrames);
    // Channel R (from the second non-interleaved source buffer)
    vDSP_vsadd(ioData->mBuffers[1].mData, 1, &zero, (float*)(_interleavedABL.mBuffers[0].mData) + 1, 2, numFrames);

    _interleavedABL.mBuffers[0].mDataByteSize = _interleavedASBD.mBytesPerFrame * numFrames;

    // Attach the interleaved data to the sample buffer created above.
    status = CMSampleBufferSetDataBufferFromAudioBufferList(buff,
                                                            kCFAllocatorDefault, // block buffer structure allocator
                                                            kCFAllocatorDefault, // block buffer memory allocator
                                                            0,                   // flags
                                                            &_interleavedABL);

// checking for error... (none returned)

if (_assetWriterAudioInput.readyForMoreMediaData) {

    BOOL success = [_assetWriterAudioInput appendSampleBuffer:buff]; // THIS PRODUCES THE ERROR.

    // success comes back YES, but the error above is logged - on the iOS 12.4 betas (not on 12.3 or earlier).
}
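A way to see the failing check directly is to query the sample sizes ourselves before appending (a diagnostic sketch; CMSampleBufferGetSampleSize is the same call that appears in the logged stack below):

// Diagnostic sketch: with numSampleSizeEntries == 0 these queries return 0,
// and presumably trigger the same -12735 log line on iOS 12.4.
size_t oneSampleSize = CMSampleBufferGetSampleSize(buff, 0);
size_t totalSize = CMSampleBufferGetTotalSampleSize(buff);
NSLog(@"sample size: %zu, total: %zu", oneSampleSize, totalSize);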

Before all that, here's how the _assetWriterAudioInput is created:

-(BOOL) initializeAudioWriting
{
    BOOL success = YES;

    NSDictionary *audioCompressionSettings = [MyClassName audioSettingsForRecording]; // settings dictionary, see below.

    if ([_assetWriter canApplyOutputSettings:audioCompressionSettings forMediaType:AVMediaTypeAudio]) {
        _assetWriterAudioInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio outputSettings:audioCompressionSettings];
        _assetWriterAudioInput.expectsMediaDataInRealTime = YES;

        if ([_assetWriter canAddInput:_assetWriterAudioInput]) {
            [_assetWriter addInput:_assetWriterAudioInput];
        }
        else {
            success = NO; // report error
        }
    }
    else {
        success = NO; // report error
    }

    return success;
}
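For completeness, the writer itself has to be started before any buffers are appended; a minimal sketch of that step (assumed here; startTime stands for the presentation time of the first buffer):

// Sketch (assumed): start the writer once, before the first append.
if ([_assetWriter startWriting]) {
    [_assetWriter startSessionAtSourceTime:startTime];
}
else {
    NSLog(@"startWriting failed: %@", _assetWriter.error);
}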

audioCompressionSettings is defined as:

+ (NSDictionary*)audioSettingsForRecording
{
    AVAudioSession *sharedAudioSession = [AVAudioSession sharedInstance];
    double preferredHardwareSampleRate;

    // -sampleRate replaced the deprecated -currentHardwareSampleRate in iOS 6.
    if ([sharedAudioSession respondsToSelector:@selector(sampleRate)])
    {
        preferredHardwareSampleRate = [sharedAudioSession sampleRate];
    }
    else
    {
        preferredHardwareSampleRate = [[AVAudioSession sharedInstance] currentHardwareSampleRate];
    }

    AudioChannelLayout acl;
    bzero( &acl, sizeof(acl));
    acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;


    return @{
         AVFormatIDKey: @(kAudioFormatMPEG4AAC),
         AVNumberOfChannelsKey: @2,
         AVSampleRateKey: @(preferredHardwareSampleRate),
         AVChannelLayoutKey: [ NSData dataWithBytes: &acl length: sizeof( acl ) ],
         AVEncoderBitRateKey: @160000
         };
}

The appendSampleBuffer call logs the following error and call stack (relevant part):

CMSampleBufferGetSampleSize signalled err=-12735 (kCMSampleBufferError_BufferHasNoSampleSizes) (sbuf->numSampleSizeEntries == 0) at /BuildRoot/Library/Caches/com.apple.xbs/Sources/EmbeddedCoreMediaFramework/EmbeddedCoreMedia-2290.6/Sources/Core/FigSampleBuffer/FigSampleBuffer.c:4153

0 CoreMedia 0x00000001aff75194 CMSampleBufferGetSampleSize + 268 [0x1aff34000 + 266644]

1 My App 0x0000000103212dfc -[MyClassName writeAudioFrames:audioBuffers:] + 1788 [0x102aec000 + 7499260] ...

Any help would be appreciated.

EDIT: Adding the following information: we are passing 0 and NULL for the numSampleSizeEntries and sampleSizeArray parameters of CMSampleBufferCreate, which, according to the documentation, is what must be passed when creating a buffer of non-interleaved data (although that doc is a bit confusing to us).

We have tried passing 1 and a pointer to a size_t value, such as:

size_t sampleSize = 4;

but that didn't help; it logged a different error:

figSampleBufferCheckDataSize signalled err=-12731 (kFigSampleBufferError_RequiredParameterMissing) (bbuf vs. sbuf data size mismatch)

and we are not clear what value should go there (i.e. how to know the size of each sample), or whether this is the correct solution at all.
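As the solution below suggests, each CoreMedia "sample" here is apparently one interleaved frame, so the size entry presumably needs to be the interleaved bytes per frame (channels times bytes per channel sample, i.e. 8 for stereo float rather than 4), which would explain the data size mismatch:

// One interleaved stereo float frame = 2 channels * sizeof(float) = 8 bytes.
// Passing 4 makes CoreMedia expect numFrames * 4 bytes of data, while the
// attached block buffer holds numFrames * 8 - hence the size mismatch.
size_t sampleSize = _interleavedASBD.mBytesPerFrame; // 8 for stereo float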


Solution

  • I think we have the answer:

    Passing the numSampleSizeEntries and sampleSizeArray parameters of CMSampleBufferCreate as follows seems to fix it (still requires full verification).

    To my understanding, the reason is that since we ultimately append the interleaved buffer, it needs to have its sample sizes set (at least on iOS 12.4).

    // _asbdFormat is the AudioStreamBasicDescription.
    size_t sampleSize = _asbdFormat.mBytesPerFrame;
    OSStatus status = CMSampleBufferCreate(kCFAllocatorDefault,
                                               NULL,                           // dataBuffer (attached later)
                                               false,                          // dataReady
                                               NULL,                           // makeDataReadyCallback
                                               NULL,                           // makeDataReadyRefcon
                                               _cmFormat,                      // formatDescription
                                               (CMItemCount)(*inNumberFrames), // numSamples
                                               1,                              // numSampleTimingEntries
                                               &timing,                        // sampleTimingArray
                                               1,                              // numSampleSizeEntries
                                               &sampleSize,                    // sampleSizeArray
                                               &buff);
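
    A quick sanity check after attaching the data with CMSampleBufferSetDataBufferFromAudioBufferList (a sketch; the assertions encode our understanding of the size bookkeeping):

    // Sketch: with a size entry present, the size queries CoreMedia runs
    // internally should now succeed instead of logging err=-12735.
    assert(CMSampleBufferGetSampleSize(buff, 0) == sampleSize);
    assert(CMSampleBufferGetTotalSampleSize(buff) == (size_t)(*inNumberFrames) * sampleSize);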