Tags: c, audio, signal-processing, pcm, mixing

Mixing 16-bit linear PCM streams and avoiding clipping/overflow


I'm trying to mix together two 16-bit linear PCM audio streams and I can't seem to overcome the noise issues. I think they come from overflow when mixing samples together.

I have the following function ...

short int mix_sample(short int sample1, short int sample2)
{
    return #mixing_algorithm#;
}

... and here's what I have tried as #mixing_algorithm#

sample1/2 + sample2/2
2*(sample1 + sample2) - 2*(sample1*sample2) - 65535
(sample1 + sample2) - sample1*sample2
(sample1 + sample2) - sample1*sample2 - 65535
(sample1 + sample2) - ((sample1*sample2) >> 0x10) // same as dividing by 65536

Some of them have produced better results than others but even the best result contained quite a lot of noise.

Any ideas how to solve it?


Solution

  • here's a descriptive implementation:

    #include <cstdint>
    #include <limits>

    short int mix_sample(short int sample1, short int sample2) {
        // widen to 32 bits so the intermediate sum cannot overflow
        const int32_t result = static_cast<int32_t>(sample1) + static_cast<int32_t>(sample2);
        typedef std::numeric_limits<short int> Range;
        if (result > Range::max())
            return Range::max();
        else if (result < Range::min())
            return Range::min();
        else
            return static_cast<short int>(result);
    }
    

    to mix, it's just add and clip!

    to avoid clipping artifacts, you will want to use saturation or a limiter. ideally, you will have a small int32_t buffer with a small amount of lookahead. this will introduce latency.
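as a rough sketch of that buffer-then-limit idea (the function name here is mine, not a library API): mix into a wider int32_t buffer so the sum can never overflow, find the block's peak, and scale the whole block down only when the peak exceeds the 16-bit range. a real limiter would smooth the gain with an attack/release envelope over the lookahead window rather than scaling each block uniformly.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <limits>
#include <vector>

std::vector<short> mix_and_limit(const std::vector<short>& a,
                                 const std::vector<short>& b)
{
    const std::size_t n = std::min(a.size(), b.size());

    // mix into a wide buffer: int32_t comfortably holds the sum of two int16s
    std::vector<int32_t> mixed(n);
    int32_t peak = 0;
    for (std::size_t i = 0; i < n; ++i) {
        mixed[i] = static_cast<int32_t>(a[i]) + static_cast<int32_t>(b[i]);
        peak = std::max(peak, std::abs(mixed[i]));
    }

    // only attenuate when the block would actually clip
    const int32_t limit = std::numeric_limits<short>::max();
    const double gain = peak > limit ? static_cast<double>(limit) / peak : 1.0;

    std::vector<short> out(n);
    for (std::size_t i = 0; i < n; ++i)
        out[i] = static_cast<short>(std::lround(mixed[i] * gain));
    return out;
}
```

note the trade-off the answer mentions: because the gain for a block depends on samples later in that block, you cannot emit output until the block is full, which is exactly the lookahead latency.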

    more common than limiting everywhere is to leave a few bits' worth of 'headroom' in your signal.
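a crude illustration of headroom (the helper name is mine): drop one bit from each input before summing, as in the question's `sample1/2 + sample2/2` attempt. each halved sample fits in 15 bits, so the sum always fits in 16 bits and can never clip, at the cost of 6 dB of level. keeping your sources recorded a few bits below full scale achieves the same thing without the per-mix attenuation.

```cpp
#include <cassert>
#include <cstdint>

short mix_with_headroom(short a, short b)
{
    // dividing each 16-bit sample by 2 leaves 1 bit of headroom,
    // so the sum stays within [-32768, 32766] and cannot overflow
    return static_cast<short>(static_cast<int32_t>(a) / 2 +
                              static_cast<int32_t>(b) / 2);
}
```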