Tags: ios, audioqueue, audioqueueservices

Last AudioQueueBuffer has exaggerated magnitudes


Whoever solves this one deserves the Sherlock Holmes trophy. Here it goes.

I'm using Audio Queues to record sound (LPCM, SInt16, 4 buffers). In the callback, I tried measuring the mean amplitude by converting the samples to float and using vDSP_meamgv. Here are some example means:

Mean, number of samples
44.400364, 44100
36.077393, 44100
27.672422, 41984
2889.821289, 44100
57.481972, 44100
58.967506, 42872
54.691631, 44100
2894.467285, 44100
62.697800, 42872
63.732948, 44100
66.575623, 44100
2979.566406, 42872

As you can see, every fourth (i.e. the last) buffer is wild. I looked at the individual samples: lots of zeros, lots of huge numbers, and none of the normal values the other buffers contain. It gets more interesting: if I use 3 buffers instead, the third one (again, the last) is the bogey, and the same holds for any number of buffers I choose.

I put an if in the callback so the wild buffers don't get re-enqueued, and once that buffer is gone there are no more huge numbers; the other buffers keep filling normally. I then added a button that re-enqueues the buffer after it has been dropped, and as soon as I re-enqueue it, it fills up with gigantic samples again (that very buffer!).

And now the cherry on top: I dropped my mean-calculating code into other projects, like Apple's SpeakHere sample, and the same thing happens there o.O, even though the app itself works fine, recording and playing back what was recorded.

I just don't get it; I've racked my brain trying to figure this one out. If somebody has a clue...

Here's the callback, if it helps:

void Recorder::MyInputBufferHandler(void *                             inUserData,
                                    AudioQueueRef                          inAQ,
                                    AudioQueueBufferRef                    inBuffer,
                                    const AudioTimeStamp *                 inStartTime,
                                    UInt32                                 inNumPackets,
                                    const AudioStreamPacketDescription*    inPacketDesc) {
    Recorder* eu = (Recorder*)inUserData;

    vDSP_vflt16((SInt16*)inBuffer->mAudioData, 1, eu->conveier, 1, inBuffer->mAudioDataByteSize);
    float mean;
    vDSP_meamgv(eu->conveier, 1, &mean, inBuffer->mAudioDataByteSize);
    printf("values: %f, %d\n",mean,inBuffer->mAudioDataByteSize); 
//    if (mean<2300)
        AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

'conveier' is a float array I've preallocated.


Solution

  • It's also me who gets the trophy. The mistake was that the vDSP functions should never have been given the mAudioDataByteSize parameter, because they expect the number of ELEMENTS in the array. In my case each element (SInt16) is 2 bytes, so I should have passed mAudioDataByteSize / 2. When it processed the last buffer, it ran off the edge by a whole extra buffer's length and swept up random data. Presumably the over-reads on the earlier buffers landed in the next buffer's sample data, which still looks like ordinary audio, which is why only the last buffer ever showed the giant values. Voilà! A very basic mistake, but when you're looking in all the wrong places, it doesn't seem so. A corrected version of the callback is sketched below.

    For anybody who's stepped on the same rake...

    PS. It came to me while taking a bath :)
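
For reference, here's what the corrected callback looks like. This is a minimal sketch based on the snippet above, not the exact original code: conveier is assumed to be preallocated with at least one float per sample, and the includes are what the calls need.

#include <AudioToolbox/AudioToolbox.h>
#include <Accelerate/Accelerate.h>

void Recorder::MyInputBufferHandler(void *                             inUserData,
                                    AudioQueueRef                          inAQ,
                                    AudioQueueBufferRef                    inBuffer,
                                    const AudioTimeStamp *                 inStartTime,
                                    UInt32                                 inNumPackets,
                                    const AudioStreamPacketDescription*    inPacketDesc) {
    Recorder* eu = (Recorder*)inUserData;

    // vDSP takes the number of ELEMENTS, not bytes: each SInt16 sample is 2 bytes.
    vDSP_Length sampleCount = inBuffer->mAudioDataByteSize / sizeof(SInt16);

    // Convert the SInt16 samples to float, then take the mean of the magnitudes.
    vDSP_vflt16((SInt16*)inBuffer->mAudioData, 1, eu->conveier, 1, sampleCount);
    float mean;
    vDSP_meamgv(eu->conveier, 1, &mean, sampleCount);
    printf("mean: %f over %lu samples\n", mean, (unsigned long)sampleCount);

    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

Note that passing the byte count didn't just over-read the audio data; vDSP_vflt16 also wrote twice as many floats into conveier as it should have, so the destination array was being overrun as well.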