opengl, alpha-blending, compositing

Refactoring a complicated OpenGL blend


I am trying to reproduce the effect of a fairly complicated blend operation (corrected):

(1 - (1 - src_alpha) * dest_alpha) * src + (1 - src_alpha) * dest_alpha * dest

src and dest refer to the respective rgba components in the source and destination buffers. What I would like to do is turn this into a chain of blends, using glBlendFunc and/or glBlendFuncSeparate, that produces the same result as the expression above, perhaps by rendering the same scene more than once, once after each glBlend* call. While I regrettably cannot express my complicated blend as a single glBlend* call, it seems to me that I may still be able to achieve the effect I am after by rendering my scene multiple times with different glBlend* parameters. I am thinking I may need three such passes, but it may be possible to do it in two.

Edit:

To understand what the blend function is doing, consider first the regular alpha blend src * src_alpha + dest * (1 - src_alpha). This style of blending is fine for virtually all alpha blending where the source does not already contain premultiplied alpha, and in particular when we are rendering to a final target, such as a screen, where the destination either is explicitly opaque or may safely be assumed to be.
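For reference, that regular blend is what the usual OpenGL setup below produces (standard calls, with the default blend equation GL_FUNC_ADD):

    glEnable(GL_BLEND);
    /* Color and alpha both become src * src_alpha + dest * (1 - src_alpha). */
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);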

If the destination is already opaque, the complex blend function yields the same result as the common blend function above while leaving the destination opaque; if the destination happens to be completely transparent, the complex blend function degrades to a source copy and inherits the source's transparency instead. So what the complex blend function actually does is simply linearly interpolate between these two extremes, based on the destination alpha.
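To make those two extremes concrete, here is a quick throwaway check in C (made-up channel values, not part of any rendering code), substituting dest_alpha = 1 and dest_alpha = 0 into the expression above:

    #include <stdio.h>

    int main(void)
    {
        float src = 0.8f, dest = 0.3f, src_alpha = 0.4f;

        /* dest_alpha = 1: the complex blend reduces to the common blend. */
        float opaque = (1.0f - (1.0f - src_alpha) * 1.0f) * src
                     + (1.0f - src_alpha) * 1.0f * dest;

        /* dest_alpha = 0: the complex blend reduces to a plain source copy. */
        float transparent = (1.0f - (1.0f - src_alpha) * 0.0f) * src
                          + (1.0f - src_alpha) * 0.0f * dest;

        printf("opaque dest:      %f (common blend gives %f)\n",
               opaque, src * src_alpha + dest * (1.0f - src_alpha));
        printf("transparent dest: %f (source is %f)\n", transparent, src);
        return 0;
    }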

Edit: I needed to correct the blend function above... I had factored it incorrectly. I show the derivation below for verification:

First, I start with a basic blend function:

src * src_alpha + dest * (1 - src_alpha)

What I want to do is blend that result with the source, depending on dest_alpha: the function above is multiplied by dest_alpha, and the source, multiplied by (1 - dest_alpha), is added to it, as follows:

src * (1-dest_alpha) + (src * src_alpha + dest * (1 - src_alpha)) * dest_alpha

Expanding and collecting the src terms (src - src*dest_alpha + src*src_alpha*dest_alpha = src * (1 - (1 - src_alpha) * dest_alpha)) gives me this:

(1 - (1 - src_alpha) * dest_alpha) * src + (1 - src_alpha) * dest_alpha * dest
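Since I managed to factor this incorrectly once already, here is a small throwaway C check (again not part of the rendering code) confirming that the factored form matches the unfactored one:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        int mismatches = 0;

        /* Sweep all four inputs over [0, 1] in steps of 0.25. */
        for (float src = 0.0f; src <= 1.0f; src += 0.25f)
        for (float dest = 0.0f; dest <= 1.0f; dest += 0.25f)
        for (float sa = 0.0f; sa <= 1.0f; sa += 0.25f)
        for (float da = 0.0f; da <= 1.0f; da += 0.25f)
        {
            float unfactored = src * (1.0f - da) + (src * sa + dest * (1.0f - sa)) * da;
            float factored   = (1.0f - (1.0f - sa) * da) * src + (1.0f - sa) * da * dest;

            if (fabsf(unfactored - factored) > 1e-6f)
                ++mismatches;
        }

        printf("mismatches: %d\n", mismatches);  /* expect 0 */
        return 0;
    }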

Solution

  • I figured it out, so if anyone else has any need of this, here is what I discovered:

    First, I call

    glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ZERO, GL_ONE)
    

    Then I draw the scene normally. This produces the correct style of blend on the color channels, assuming the source has non-premultiplied alpha and is being blended with an opaque background. The original alpha of the destination is preserved, however, so it can be used in the next step:

    glBlendFuncSeparate(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA)
    

    And then I redraw the exact same scene as before. This further linearly interpolates the color channels between the original source and the blended result above, based on the destination alpha. It also combines the alpha components in such a way that the result is always greater than or equal to both the source and destination alpha, yet is still constrained to lie between 0 and 1, just as the source and destination alpha values are.
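
    Putting the two passes together, the whole thing looks roughly like this sketch (drawBlended() and drawScene() are just placeholder names for my own code; I assume a current GL context, loaded entry points, and the default blend equation GL_FUNC_ADD):

    /* Sketch of the two-pass blend; drawScene() stands in for whatever
       actually issues the draw calls. */
    void drawBlended(void)
    {
        glEnable(GL_BLEND);

        /* Pass 1: common non-premultiplied blend on the color channels;
           the destination alpha is left untouched for use in pass 2. */
        glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ZERO, GL_ONE);
        drawScene();

        /* Pass 2: interpolate the color between the original source and the
           pass-1 result by destination alpha; the alpha channels combine as
           src_alpha + dest_alpha * (1 - src_alpha). */
        glBlendFuncSeparate(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
        drawScene();
    }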