> Consider a candle flame that exists as mostly emission and low to no occlusion.
I don't think associated alpha helps much here. You can special case pixels that are doing pure emission, but when a pixel is doing both you need the lighting to affect the color of the occlusion but not affect the color of the emission.
But that means you need to dedicate specific objects to being purely emissive, to avoid blurring at the boundaries.
And once you've separated the objects, you don't really need to have purely-emissive textures and non-emissive textures in the same file, with the same exact format. You might as well store emissive textures as RGB and save on memory.
Using a different format can even benefit you. You're less likely to accidentally blend emissive and non-emissive pixels, and you're less likely to accidentally apply lighting calculations to emissions.
Except associated alpha models emission and occlusion via the compositing operation itself. You don't need zero alpha either, as any ratio can partially occlude and partially emit at the same time.
You can't do both in a single pixel, unless you turn off lighting entirely and make everything fullbright.
Let's take a blue-tinted pane of glass, (0, 0, .5, .5), and a red-tinted pane, (.5, 0, 0, .5).
Then a blue glow, (0, 0, .5, 0). And a red glow, (.5, 0, 0, 0).
If you have your blue glass glow red, and your red glass glow blue, both combinations come out as (.5, 0, .5, .5).
Under 99% of lighting conditions, it will look wrong. If you put it in darkness, it will look overwhelmingly wrong.
Even if occlusion and emission are the same color it doesn't work. A dark blue object that glows brightly, and a bright blue object that glows dimly, both will have the same RGBA.
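Both collapses can be checked with a few lines of arithmetic. A hypothetical sketch (the `pack` helper is mine, not from any real API): with associated alpha, the RGB channel already carries emission, so folding a glow into a tinted occluder is just component-wise addition, and distinct combinations land on the same RGBA.

```python
# Hypothetical sketch: folding an emission term into an associated-alpha
# pixel. RGB = premultiplied tint + emission; alpha = coverage.
def pack(tint_rgb, alpha, emission_rgb):
    rgb = tuple(t + e for t, e in zip(tint_rgb, emission_rgb))
    return rgb + (alpha,)

# Blue glass with a red glow vs. red glass with a blue glow:
blue_glass_red_glow = pack((0.0, 0.0, 0.5), 0.5, (0.5, 0.0, 0.0))
red_glass_blue_glow = pack((0.5, 0.0, 0.0), 0.5, (0.0, 0.0, 0.5))
print(blue_glass_red_glow == red_glass_blue_glow)  # True: both (0.5, 0.0, 0.5, 0.5)

# Dark blue object glowing brightly vs. bright blue object glowing dimly:
dark_bright = pack((0.0, 0.0, 0.1), 0.5, (0.0, 0.0, 0.4))
bright_dim  = pack((0.0, 0.0, 0.4), 0.5, (0.0, 0.0, 0.1))
print(dark_bright == bright_dim)  # True: the distinction is gone
```

Once the addition has happened, no renderer can recover which part of the RGB was tint and which was glow.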
Objects that block a certain amount of light and then emit a certain amount of light: you can only make that simplification if the entire world is evenly lit by white light.
Blur the entire matchstick into one pixel. Under white light it has to emit brown plus yellow. In a dark room it has to emit just yellow.
It's a clever technique but I can't figure out any way it's not fundamentally incompatible with having lighting. If one pixel has to both be lit and emit extra light, you need two RGB values.
RGBA with associated alpha is unique in that it represents distilled geometry. If you were to zoom in on a pixel partially covered by geometry, the coverage is represented by the alpha ratio, while the RGB is purely emission.
It is complementary to lighting.
The math works because of the form of the associated-alpha over formula: `FG.RGB + (1.0 - FG.Alpha) * BG.RGB`, which reduces to a pure add at the extreme case of alpha being zero, and to ratios of addition when it's non-zero.
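That over operation can be sketched in a few lines (hypothetical helper, operating on premultiplied RGBA tuples), showing the pure-add behaviour at alpha = 0:

```python
# Associated-alpha "over": out.RGB = FG.RGB + (1 - FG.A) * BG.RGB
def over(fg, bg):
    fa = fg[3]
    return tuple(f + (1.0 - fa) * b for f, b in zip(fg, bg))

bg = (0.2, 0.2, 0.2, 1.0)

glow = (0.5, 0.0, 0.0, 0.0)    # alpha 0: occludes nothing, only emits
print(over(glow, bg))          # (0.7, 0.2, 0.2, 1.0) -- pure addition

glass = (0.0, 0.0, 0.5, 0.5)   # alpha 0.5: half occlusion plus emission
print(over(glass, bg))         # (0.1, 0.1, 0.6, 1.0)
```

The same formula handles the alpha channel too, since alpha is just coverage composited over coverage.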
Within the limitations of the RGB model, it works extremely well.
The folks cited are extremely adept imaging people, with years of experience and several Academy Achievement Awards between them.
They're plenty smart but they're talking about a different use case.
You don't know what color will be emitted unless you know what light is hitting the surface.
A dark glowing surface and a bright non-glowing surface have the same emissions under white light, but different emissions under other kinds of light. The method you're talking about requires the emissions be precalculated, which means you can't apply lighting at runtime.
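The distinction is easy to see with a toy shading model (hypothetical, and deliberately simplified: reflected light plus unlit emission, `lit = light * albedo + emission`):

```python
# Hypothetical toy shading: the lighting term scales the albedo,
# while emission passes through unaffected by the light.
def shade(light, albedo, emission):
    return tuple(l * a + e for l, a, e in zip(light, albedo, emission))

dark_glowing = ((0.2, 0.2, 0.2), (0.3, 0.3, 0.3))  # (albedo, emission)
bright_unlit = ((0.5, 0.5, 0.5), (0.0, 0.0, 0.0))

white = (1.0, 1.0, 1.0)
print(shade(white, *dark_glowing))  # (0.5, 0.5, 0.5)
print(shade(white, *bright_unlit))  # (0.5, 0.5, 0.5) -- identical under white light

red = (1.0, 0.0, 0.0)
print(shade(red, *dark_glowing))    # (0.5, 0.3, 0.3)
print(shade(red, *bright_unlit))    # (0.5, 0.0, 0.0) -- now they differ
```

If the emission had been baked into a single precalculated RGB, the red-light case would be unreachable: there would be no separate albedo left for the light to modulate.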
I agree with you: to do it properly you need two different RGBA objects, and the emissive one will have A = 0. But the same format suffices to represent both.