As far as I know, alpha compositing normally uses straight (non-premultiplied) alpha. But all the GLES blending sample code I've seen uses premultiplied alpha. Is this correct, and is premultiplied alpha the usual approach in real-time graphics?
If the blending sample code you’ve seen targets iPhone OS, that’s because Core Graphics and Core Animation both operate on premultiplied alpha: drawing into a CGBitmapContext produces premultiplied data, and non-opaque view contents must be in premultiplied-alpha format to composite correctly over other views. However, if you’re just blending content into an opaque view, either approach will work, as long as your blend function matches the format of your image data.
There are upsides and downsides to premultiplied alpha data. Tom Forsyth has a good explanation of the many useful properties of premultiplied alpha. The primary downside is that some stages of fixed-function fragment processing (texture environment, fog, etc.) are built around non-premultiplied alpha, and may not produce the results you want if you try to use them with premultiplied data.