What an excellent article. I can't think of anything (except the very esoteric) that it did not cover.
If I had to criticize, I wish it had talked a bit more about printer color profiles (although in this paperless, web-world we live in, perhaps printing is in fact esoteric).
Unlike displays, a printer can't simply be defined with three primaries and a white point. Printer profiles can be quite large because they rely on someone having printed a copious number of swatches on a given paper type and then measured (with some kind of colorimeter) device-independent color values for each swatch. Those measurements are used to build a large table mapping from device-independent color spaces to the printer's gamut.
Those tables are what make the profiles so big. And of course interpolation is still required when mapping through the table from a device-independent color space to the printer's. (Now imagine too that you need a different profile for each type of paper you might print to, since each renders color differently; plain paper can't reach the saturation levels that a coated paper can.)
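To make the interpolation point concrete, here's a minimal sketch of sampling such a table. This is a hypothetical stand-in, not how any particular CMM does it: real ICC profiles wrap the table in input/output curves and matrices, but the core is still a multi-dimensional lookup plus trilinear (or tetrahedral) interpolation.

```python
import numpy as np

def sample_clut(lut: np.ndarray, color: np.ndarray) -> np.ndarray:
    """Trilinearly interpolate a 3D color lookup table.

    lut   -- shape (N, N, N, C): N grid points per input axis,
             C output channels (e.g. 4 for CMYK).
    color -- 3 input values, each normalized to [0, 1]
             (e.g. Lab rescaled to the unit cube).
    """
    n = lut.shape[0]
    pos = np.clip(color, 0.0, 1.0) * (n - 1)   # position on the grid
    lo = np.minimum(pos.astype(int), n - 2)    # lower corner of the containing cell
    f = pos - lo                               # fractional offset within the cell

    out = np.zeros(lut.shape[-1])
    for corner in range(8):                    # blend the 8 surrounding grid points
        idx = [(corner >> axis) & 1 for axis in range(3)]
        weight = np.prod([f[a] if idx[a] else 1.0 - f[a] for a in range(3)])
        out += weight * lut[lo[0] + idx[0], lo[1] + idx[1], lo[2] + idx[2]]
    return out

# Toy 17x17x17 Lab -> CMYK table with random contents, just to exercise it.
toy_lut = np.random.rand(17, 17, 17, 4)
print(sample_clut(toy_lut, np.array([0.5, 0.25, 0.75])))
```

A modest 33x33x33 grid with four output channels is already ~144,000 values, which goes a long way toward explaining why printer profiles run to hundreds of kilobytes while a simple matrix-based display profile fits in a few hundred bytes.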
What was shocking to me was just how small the gamut of a printer typically is when seen alongside that of a decent display.
Consider that, in print, you'll never see an image as vivid as you can display on a nice, modern display. (And then consider that there are colors in nature so vivid that even a modern display cannot represent them. Just look at how much color is outside the triangle on the CIE "shark fin" color representation.)
Also not touched on (did I miss it?): all the math presented for mapping from one color space to another also allows for "soft proofing", where you match to a printer's ICC profile and then take the result and match again to the user's display, giving a "preview" of what will be lost when going to said printer.
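For anyone curious what soft proofing looks like in code, here's a rough sketch using Pillow's ImageCms bindings to LittleCMS. The file names are made up, and I'm using sRGB as a stand-in for a real calibrated monitor profile; you'd substitute your own assets:

```python
from PIL import Image, ImageCms

im = Image.open("photo.jpg").convert("RGB")                    # hypothetical image
display = ImageCms.createProfile("sRGB")                       # stand-in for the real monitor profile
printer = ImageCms.getOpenProfile("GlossyPaper_PrinterX.icc")  # hypothetical printer/paper profile

# One transform that simulates the round trip: image -> printer gamut -> display.
proof = ImageCms.buildProofTransform(
    display, display, printer,   # input, output, and the device being proofed
    "RGB", "RGB",
    renderingIntent=ImageCms.INTENT_PERCEPTUAL,
    proofRenderingIntent=ImageCms.INTENT_RELATIVE_COLORIMETRIC,
    flags=ImageCms.FLAGS["SOFTPROOFING"],
)
preview = ImageCms.applyTransform(im, proof)
preview.show()   # roughly what the print will look like, on screen
```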
> What was shocking to me was just how small the gamut of a printer typically is when seen alongside that of a decent display.
What was a nightmare for me when I worked in prepress was how hard it is to get a convincing purple out of a $20K printer. I resorted to tricks that produced something nothing like an actual purple but gave customers a good purple impression, because the mixes that more closely resembled purple always looked awful.
Purple and orange were/are common spot colours for that reason.
My father used to work on all sorts of R&D involving things like how much K to use as a substitute for CMY without things getting desaturated, etc. It's a real rabbit hole, especially if you want to reduce the amount of ink used to prevent soaking the paper.
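The basic idea behind that K substitution (gray-component replacement) is easy to sketch, even if production implementations are anything but. A naive version, ignoring dot gain, ink limits, and everything the actual R&D was about:

```python
def naive_gcr(c: float, m: float, y: float, strength: float = 1.0):
    """Replace the gray component of CMY with black (K) ink.

    c, m, y  -- ink coverages in [0, 1]
    strength -- fraction of the shared gray moved into K
                (1.0 = full replacement, 0.0 = none)
    """
    gray = min(c, m, y) * strength
    return c - gray, m - gray, y - gray, gray

# A muddy brown has lots of shared gray to harvest:
print(naive_gcr(0.9, 0.7, 0.6))   # -> (0.3, 0.1, 0.0, 0.6)
```

Total ink coverage there drops from 2.2 to 1.0, which is exactly the less-soaked-paper win; the rabbit hole is in how far you can push `strength` before the result goes desaturated or the shadows go flat.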
One of the tricks we used, in situations that allowed it, was to get paper that matched the most important color. That has its own downsides, but they are more manageable. Those jobs were always the last run before everything was taken apart and cleaned for maintenance.
That reminded me of a time when a printer manufacturer approached my old team with this problem. They needed a custom driver for a certain region of the world: in that region, a certain industry liked the highly saturated 'bad' colours from a competitor's machines and wanted theirs to match. Much paper and ink was spent on this.
There's a lot to cover. HDR is coming to the web (it already exists on native), and there are certainly lots of issues in doing it correctly and in learning all of the various parts and how to deal with them: HDR input data, HDR processing, HDR output, and the display itself, which may or may not be HDR and, even if it is, might only have so much "headroom", etc. There are tradeoffs at each step.
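As a tiny illustration of the headroom problem: suppose the OS tells you how far above SDR white the display can currently go (macOS exposes this kind of value; the numbers and function below are made-up stand-ins, not any real API). You still have to decide how to squeeze your HDR pixels into it:

```python
def fit_to_headroom(linear: float, headroom: float) -> float:
    """Map a scene-linear value (1.0 == SDR white) into the display's
    available headroom, rolling off highlights smoothly rather than
    clipping them at the brightest level the display can reach."""
    if linear <= 1.0 or headroom <= 1.0:
        return min(linear, headroom)       # SDR content, or no headroom at all
    excess = linear - 1.0                  # simple Reinhard-style shoulder
    scale = headroom - 1.0                 # above 1.0, asymptotic to `headroom`
    return 1.0 + scale * excess / (excess + scale)

for v in (0.5, 1.0, 2.0, 8.0):
    print(v, "->", round(fit_to_headroom(v, headroom=4.0), 3))
# 0.5 -> 0.5, 1.0 -> 1.0, 2.0 -> 1.75, 8.0 -> 3.1
```

And the headroom can change out from under you (ambient light, power state), so whatever curve you pick has to be cheap to recompute.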
Maybe not appropriate for that particular article, but definitely appropriate for the site.
Even native still has tons of issues, like the fact that, AFAICT, no OS does HDR screen capture. You're viewing an HDR image, you ask the OS to capture the screen, and it gives you an SDR capture :( On macOS and iOS that's certainly true. On Windows, the Xbox Game Bar thingy will actually capture HDR, but the OS-level PrintScreen method will not, and the popular ShareX will not either.
I suspect that the biggest limitation of printing vs. emissive displays is the simple fact that your contrast ratio and color reproduction are severely limited in print, because the dye is merely modifying ambient illumination.
This affects brightness and contrast: For emissive displays, you can have emissive values that are several to many orders of magnitude brighter than the 'black point', and more importantly, the primaries are defined by the display, not by ambient illumination.
Part of the magic of HDR displays is exploiting local masking (a human perceptual quirk) to drive bright regions of the display much brighter than the darker regions, achieving even higher contrast ratios than the base technology could on its own (LED back-illuminated LCD panels, for many consumer TVs). Basically, a bright pixel will cause nearby pixels to be driven brighter, because you can't see the dark details near a bright region anyway, while other regions can stay darker, where you can perceive more detail in the blacks. This is achieved by illuminating sections of the display at significantly higher or lower levels based on what your eyes/brain can actually perceive, which yields significantly higher contrast ratios. (There's a toy sketch of the zone logic below.)
(As a heuristic, photographers generally say you can only get ~5 stops of contrast out of a print; that is, the brightest areas are 2^5 = 32 times brighter than the darkest regions. Modern HDR displays can do 2^10 ≈ 1000:1 or better. YMMV.)
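Here's the toy sketch of that zone logic mentioned above, just to make the mechanism concrete (the zone size and everything else here are made up; real controllers also filter across zones and frames to hide halo artifacts):

```python
import numpy as np

def local_dimming(target: np.ndarray, zone: int = 8):
    """Split desired linear luminance (H, W) into backlight zones.
    H and W are assumed divisible by `zone` for simplicity.
    Returns per-zone backlight levels and the per-pixel LCD
    transmittance that compensates for them."""
    h, w = target.shape
    zones = target.reshape(h // zone, zone, w // zone, zone)
    backlight = zones.max(axis=(1, 3))          # each zone: just bright enough for its brightest pixel
    per_pixel = np.repeat(np.repeat(backlight, zone, axis=0), zone, axis=1)
    lcd = target / np.maximum(per_pixel, 1e-6)  # LCD opens only as far as needed on top of the zone
    return backlight, lcd

# Dark frame with one bright highlight: most zones idle near black,
# so their blacks go far deeper than one global backlight would allow.
frame = np.full((32, 32), 0.02)
frame[4:8, 4:8] = 1.0
backlight, lcd = local_dimming(frame)
print(backlight.min(), backlight.max())   # -> 0.02 1.0
```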
But this also affects color... much of the complexity in getting printers to match derives from the interaction between the imperfect gamut caused by differing primaries, as filtered through human perception (and/or perceptual models). But you can't control the ambient illumination, so you're at the mercy of whatever the spectrum of your illumination is, plus whatever adaptation the viewer has. This feels fundamentally impossible to do "correctly" under all circumstances.
Which is to say, the original sin of color theory is the dimensional collapse from a continuous spectrum to a 3-dimensional, discretized representation. It's a miracle we can see color at all...!
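That dimensional collapse has a precise form, for the record: the CIE 1931 tristimulus projection, which integrates a stimulus spectrum S(λ) against three fixed observer curves and discards everything else:

```latex
X = \int S(\lambda)\,\bar{x}(\lambda)\,d\lambda, \qquad
Y = \int S(\lambda)\,\bar{y}(\lambda)\,d\lambda, \qquad
Z = \int S(\lambda)\,\bar{z}(\lambda)\,d\lambda
```

Any two spectra that land on the same (X, Y, Z) are metamers, indistinguishable to the standard observer; that's both why three-primary displays work at all and why they can never cover everything the eye can see.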
> the primaries are defined by the display, not by ambient illumination
In itself that is correct, but as you've noted, our own visual system doesn't operate like that. The same display brightness and colors will be perceived very differently depending on the ambient light's brightness and color, and ambient light can also severely reduce the dynamic range a display can actually make visible.
And this ambient light also clearly impacts how prints are seen.
Language does not affect perception. This is one of those early anthro/cogsci results that said more about the authors' cultural bias than about the people being studied, up there with "Eskimos have a thousand words for snow".
It affects communication. People can still discern the difference between colors; they just don't have an easy way to communicate that difference to others.
The Japanese language until relatively recently didn't have a clear verbal distinction between what we call green and blue in English. That doesn't mean Japanese people can't tell the difference between green and blue. It just means that there is a kind of "blue" that is the sky and a kind of "blue" that is for traffic control lights, and in context nobody is confused.
The same issue can occur within a language between people with differing levels of study of color. A graphic designer might call a particular shade of green "chartreuse" that his boss would instead call "yellowish green".
This article says almost exactly the same things I said in my post. I also don't see where it definitively says the Inuit language has a richer vocabulary for snow than other languages; it just ends with a joke about how such a thing might come to pass. A casual observer here who doesn't bother reading the link might, from your wording, take it as a refutation, but it actually very strongly supports what I said.
As for the "Russian blue" study, I find it strange that the article is so skeptical of unreplicated results in linguistics yet seems to accept the "Russian blue" study uncritically. I can see at least one glaring flaw: all of the Russian speakers were bilingual in English, with at least some of them having been so since early childhood. The researchers also discarded 16% of their test data because they deemed responses "too slow", with the discarding more heavily weighted towards the Russian speakers.
I wonder how this applies to animals, since their color discrimination would not be impacted by language. Did humans evolve linguistic abilities that alter sensory processing? It seems odd that animals would be able to discriminate colors their eyes can see just fine, but humans would need words to do so.
Good question. I'm no expert, but I guess that the key issue is one of categorization. Without language, it is impossible to effectively categorize our perceptual domain.
It is also true that among mammals, full trichromatic vision is pretty much restricted to primates. The ability to perceive differences in light level is a must if you don't want to become someone else's food; in contrast, chromatic vision is an 'extra' that in many ways serves as a (literally) florid extension to our lives. To me it is no surprise that range of emotion and range of hue are so often associated with each other. Interestingly enough, they are similarly mapped: as a set of differences rather than as a degree of intensity.