
> pay attention to what color you put inside the transparent pixels

I don't understand this. When I make transparency I don't use any color? I use the Eraser tool or Ctrl-X, not a color with 0 opacity.




This is actually a very common problem with 3-D stuff and transparency in textures. This isn't an issue with the colors of the pixels themselves, it's an issue with texture filtering. nVidia has a pretty good explanation as it applies to games and 3d graphics: https://developer.nvidia.com/content/alpha-blending-pre-or-n...

Say you have two adjacent pixels using floating point RGBA values of (0,0,0,0) and (1,1,1,1), and you apply it to a 3-d shape. Because of the rasterization algorithm, you will be sampling weighted averages of the two pixels, either because you're scaling up and need to interpolate, or because you're scaling down and need to average.

The average of (0,0,0,0) (fully transparent) and (1,1,1,1) (opaque white) is (0.5,0.5,0.5,0.5), a half transparent gray. But you'd intuitively expect (1,1,1,0.5), half transparent white. This is the essence of the problem. The fix is to make sure that your transparent pixel was (1,1,1,0) and not (0,0,0,0).
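A minimal sketch of the naive per-channel average a texture filter might compute (RGBA floats in [0, 1]; the function name is illustrative):

```python
# Naive per-channel average of two RGBA pixels.
def average(p, q):
    return tuple((a + b) / 2 for a, b in zip(p, q))

# Transparent black next to opaque white averages to half-transparent gray:
print(average((0, 0, 0, 0), (1, 1, 1, 1)))  # (0.5, 0.5, 0.5, 0.5)

# Storing white in the transparent pixel gives the intuitive result:
print(average((1, 1, 1, 0), (1, 1, 1, 1)))  # (1.0, 1.0, 1.0, 0.5)
```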


Surely the answer if you want this is to weight the final RGB by the transparency. E.g. the final red channel would be (R1xT1 + R2xT2)/(T1+T2)


You're mostly right. The industry-wide accepted answer is to multiply the opacity into the colors before interpolating, so the formula would be (R1xT1 + R2xT2)/2 for the average, and then to do later transparency blending as if the opacity term was already multiplied in.
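A rough sketch of that premultiplied averaging, with illustrative function names:

```python
# Multiply the alpha into the color channels before filtering.
def premultiply(rgba):
    r, g, b, a = rgba
    return (r * a, g * a, b * a, a)

def average(p, q):
    return tuple((x + y) / 2 for x, y in zip(p, q))

# Transparent black and opaque white, premultiplied before averaging:
avg = average(premultiply((0, 0, 0, 0)), premultiply((1, 1, 1, 1)))
# avg is (0.5, 0.5, 0.5, 0.5) in premultiplied form, which decodes to
# white at 50% opacity: divide RGB back out by alpha to get (1.0, 1.0, 1.0).
print(avg)
```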


Which means that your output pixel can't be both white and low transparency. I guess it's a typical graphics 'close enough and better performance' outcome (where mine is marginally more difficult to calculate and needs some more logic to avoid divide by zero)


> Which means that your output pixel can't be both white and low transparency.

Did you mean low opacity?

If so, that's not quite right. (0.1,0.1,0.1,0.1) premultiplied is the same color as (1,1,1,0.1) "normal." They're both white and low opacity, just in different representations. You don't actually lose much granularity because the graphics card has to multiply the color channels by the opacity value sooner or later.
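A sketch of the standard "over" compositing operation in both representations, showing they produce the same result (function names are illustrative; the background is assumed opaque):

```python
# "over" with straight (non-premultiplied) alpha: scale src color by alpha.
def over_straight(src, dst):
    r, g, b, a = src
    return tuple(c * a + d * (1 - a) for c, d in zip((r, g, b), dst))

# "over" with premultiplied alpha: src color channels already carry alpha.
def over_premultiplied(src, dst):
    r, g, b, a = src
    return tuple(c + d * (1 - a) for c, d in zip((r, g, b), dst))

bg = (0.2, 0.4, 0.6)
# (1,1,1,0.1) straight and (0.1,0.1,0.1,0.1) premultiplied composite identically:
print(over_straight((1, 1, 1, 0.1), bg))
print(over_premultiplied((0.1, 0.1, 0.1, 0.1), bg))
```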

Separately, your formula doesn't work for interpolation. It works for averaging, but in order to do texture sampling, you need interpolation, so your formula can't actually be used unless you can adjust it to deal with interpolation gracefully.


Thanks for the answer - I did not know of this premultiplication. This makes it effectively the same or very close? Assuming output transparency/alpha is (T1+T2)/2, dividing by this gives the difference

I don't quite get your point on interpolation, but I'll look it up when I have the chance


A better interpolation would do the premultiply for you automatically to get the proper result. Since that requires a couple of extra multiplies and a divide, it gets skipped most of the time.
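One way such an interpolator could look: premultiply, lerp, then divide the alpha back out, guarding the divide-by-zero case. An illustrative sketch, not how any particular GPU does it:

```python
# Interpolate two straight-alpha RGBA pixels at parameter t in [0, 1].
def lerp_rgba(p, q, t):
    pm = lambda px: (px[0] * px[3], px[1] * px[3], px[2] * px[3], px[3])
    # Lerp in premultiplied space...
    r, g, b, a = (x + (y - x) * t for x, y in zip(pm(p), pm(q)))
    if a == 0:
        return (0.0, 0.0, 0.0, 0.0)
    # ...then convert back to straight alpha.
    return (r / a, g / a, b / a, a)

# Halfway between transparent black and opaque white:
print(lerp_rgba((0, 0, 0, 0), (1, 1, 1, 1), 0.5))  # (1.0, 1.0, 1.0, 0.5)
```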


Each pixel is defined by 32 bits -- 8 R, 8 G, 8 B, 8 A. Even if alpha is 0, there has to be information stored in Red, Green, Blue. There can't be "no information", because then a pixel is not defined by 32 bits of information. There is always a "color" (RGB values) for transparent pixels. Some editors will set the RGB values to 0 or 255 when fully-transparent pixels are saved/output. Others don't.
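A sketch of that 32-bit layout, packing one byte per channel (RGBA byte order assumed here; real formats vary):

```python
# Pack four 8-bit channels into a single 32-bit pixel value.
def pack_rgba(r, g, b, a):
    return (r << 24) | (g << 16) | (b << 8) | a

# Both pixels are fully transparent (alpha 0), but they are
# different 32-bit values: the RGB bytes are still stored.
white_transparent = pack_rgba(255, 255, 255, 0)  # 0xFFFFFF00
black_transparent = pack_rgba(0, 0, 0, 0)        # 0x00000000
print(hex(white_transparent), hex(black_transparent))
```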

As the author mentions, certain resampling algorithms might be naive or plain ignorant about how to resample images with potentially transparent pixels. Should transparent pixels not count toward the final pixel? Should all pixels be averaged? Should the output pixel be the median or mode value? If the image is being resampled to 1/3 its size, the resampling can be very cheap if only the middle of each 3x3 pixel cluster is selected as the output value.
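That cheap 1/3 downsample could be sketched like this, picking the center of each 3x3 block instead of filtering (the function name is illustrative):

```python
# Nearest-style 1/3 downsample: take the center pixel of each 3x3 block.
def downsample_third(img):
    h, w = len(img), len(img[0])
    return [[img[y * 3 + 1][x * 3 + 1] for x in range(w // 3)]
            for y in range(h // 3)]

tile = [[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]]
print(downsample_third(tile))  # [[5]]
```

No averaging happens at all, so transparent neighbors can't bleed in -- but fine detail is simply dropped.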


"And be careful to export the RGB values of transparent pixels when you save to PNG for example, many programs will by default discard transparent pixel RGB data and replace it with a solid color (white or black) during the export to help with the compression."

Here is some information for Photoshop and a plugin you can use:

https://graphicdesign.stackexchange.com/questions/63783/how-...


Regardless of how you perceive making it, the representation is still (nearly always) a color underneath an alpha channel.

If you inspect the image in a full-featured editor like Photoshop or GIMP, you can inspect the channels individually or remove the transparency entirely to see this fact.

I think with most GUI programs, the eraser and cut tool will leave color as what it was before it became transparent.

EDIT: saurik has a good point that I forgot about -- many editors may actually throw the colors away when you export unless you ask them not to.


An eraser tool and a paintbrush tool do essentially the same thing—overwrite an area of the image with new pixel values, typically blended in some way with the old values. It’s just that the eraser is for painting in the alpha channel, while the paintbrush also affects colour channels.

An alpha mask (on a layer with no alpha channel) is essentially a different way of viewing & editing the same data, and there you probably have no logical trouble with using a paintbrush tool.


When you erase, it has a soft edge, right? And in the edge, you see the colour that was there before is still there, just more transparent. Well similarly, in the area that is fully erased, it is fully transparent.





