
90%, really? What color information gets discarded, exactly? For the sensor part, are you talking about the fact that the photosites don't cover the whole surface? Or that we only capture a short band of wavelengths? Or that the lens only focuses rays onto specific points, making the rest blurry so we lose the 3D?


Cameras capture linear brightness data, proportional to the number of photons that hit each pixel. Human eyes (film cameras too) basically process the logarithm of brightness data. So one of the first things a digital camera can do to throw out a bunch of unneeded data is to take the log of the linear values it records, and save that to disk. You lose a bunch of fine gradations of lightness in the brightest parts of the image. But humans can't tell.

Gamma encoding, which has been around since the earliest CRTs, was a very basic solution to this. Nowadays it's silly for any high-dynamic-range image recording format not to encode data in a log format, because it's so much more representative of human vision.
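As a rough Python/numpy sketch of that idea (the 14-bit and 10-bit depths and the bare log curve are illustrative assumptions, not any real camera's transfer function):

    import numpy as np

    # Simulate 14-bit linear sensor values (proportional to photon counts).
    linear = np.random.randint(0, 2**14, size=1_000_000).astype(np.float64)

    # Toy log encoding into 10 bits: spend fewer codes on the highlights,
    # where the eye cannot distinguish fine gradations anyway. Real cameras
    # use carefully tuned log or gamma curves, but the principle is the same.
    encoded = np.round(np.log2(linear + 1) / 14 * (2**10 - 1)).astype(np.uint16)

    # Decoding inverts the curve; highlight steps come back coarser than
    # shadow steps, which is exactly the data we chose to throw away.
    decoded = 2 ** (encoded / (2**10 - 1) * 14) - 1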


Ok, so similar to the other commenter then, thanks. According to that metric it's much more than 90% we're throwing out then (:


well technically there's a bunch of stuff that happens after the sensor gets raw data. (also excluding the fact that normal sensors do not capture light phase)

demosaicing is a first point of data loss (the sensor is a tiling of small monochrome photosites; you reconstruct color from little bunches of them with various algorithms, for example as sketched below)
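for illustration, a very crude bilinear demosaic in Python/numpy (the RGGB layout is an assumption; real pipelines use much smarter edge-aware algorithms):

    import numpy as np
    from scipy.signal import convolve2d

    def naive_demosaic(bayer):
        # crude bilinear demosaic of an RGGB mosaic (h, w) -> (h, w, 3):
        # each colour plane keeps only its own photosites, then the gaps
        # are filled by averaging the nearest measured neighbours.
        h, w = bayer.shape
        rgb = np.zeros((h, w, 3))
        mask = np.zeros((h, w, 3))
        rgb[0::2, 0::2, 0] = bayer[0::2, 0::2]; mask[0::2, 0::2, 0] = 1  # R
        rgb[0::2, 1::2, 1] = bayer[0::2, 1::2]; mask[0::2, 1::2, 1] = 1  # G
        rgb[1::2, 0::2, 1] = bayer[1::2, 0::2]; mask[1::2, 0::2, 1] = 1  # G
        rgb[1::2, 1::2, 2] = bayer[1::2, 1::2]; mask[1::2, 1::2, 2] = 1  # B
        kernel = np.ones((3, 3))
        for c in range(3):
            num = convolve2d(rgb[:, :, c], kernel, mode="same")
            den = convolve2d(mask[:, :, c], kernel, mode="same")
            rgb[:, :, c] = num / np.maximum(den, 1)
        return rgb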

there is also a mapping to a color space of your choosing (probably mentioned in the op video, i apologize for i have not watched it yet...). the sensor color space does not need to match the rendered color space...
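the simplest form of that mapping is a 3x3 matrix per pixel; the matrix below is a made-up placeholder, the real one comes from profiling the specific sensor against known colour targets:

    import numpy as np

    # hypothetical camera-RGB -> linear-sRGB matrix (placeholder values;
    # rows sum to 1 so that white is preserved). real values come from
    # calibrating the sensor's colour filter responses.
    CAM_TO_SRGB = np.array([
        [ 1.7, -0.5, -0.2],
        [-0.3,  1.6, -0.3],
        [ 0.0, -0.6,  1.6],
    ])

    def to_srgb_linear(cam_rgb):
        # map demosaiced camera RGB (h, w, 3) onto linear sRGB primaries.
        # values outside the destination gamut simply clip: another place
        # where captured information gets discarded.
        return np.clip(cam_rgb @ CAM_TO_SRGB.T, 0.0, 1.0)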

note of interest being that sensors actually capture some infrared light (modulo physical filters to remove that). so yeah if you count that as color, it gets removed. (infrared photography is super cool!)

then there is denoising/sharpening etc. that mess with your image.

there might be more stuff i am not aware of too. i have very limited knowledge of the domain...


But even before the sensor data, we go from 100 bits of photon data to 42 bits counted by photosites. Mh, well, maybe my calculations are too rough.


The amount of captured sensor data thrown out when editing depends heavily on the scene and shooting settings, but as I wrote, it is probably almost always 90%+, even with the worst cameras and the widest-dynamic-range display technology available today.

In a typical scene shot with existing light outdoors it is probably 98%+.


Raw photographs don't do that?


Third blind man touching the elephant here: the other commenters are wrong! It's not about bit depth or linear-to-gamma. It's the fact that the human eye can detect way more "stops" (the word doesn't make sense, you just have to look it up) of brightness (I guess you could say "a wider range of brightness", but photography people all say "stops") than the camera, and the camera can detect more stops of brightness than current formats can properly represent!

So you have to decide whether to lose the darker parts of the image or the brighter parts of the image you’re capturing. Either way, you’re losing information.

(In reality we’re all kind of right)


This was what I meant primarily.

A camera sensor can get <1% of what we can see, and any display medium (whether paper or screen, SDR or HDR, etc.) can show <1% of what the camera sensor can get.

(That 1% figure is very rough and will vary by scene conditions, but it is not far off.)
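One way to arrive at figures in that rough ballpark, with the stop counts below being ballpark assumptions rather than specs:

    # each stop is a doubling of brightness, so N stops span a 2**N ratio;
    # the stop counts here are rough assumptions, not measurements
    eye_stops, sensor_stops, display_stops = 20, 14, 8

    sensor_vs_eye = 2**sensor_stops / 2**eye_stops          # ~1.6%
    display_vs_sensor = 2**display_stops / 2**sensor_stops  # ~1.6%
    print(f"sensor covers ~{sensor_vs_eye:.1%} of the eye's linear range")
    print(f"display covers ~{display_vs_sensor:.1%} of the sensor's linear range")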

Add to that, what each of us sees is always subjective and depends on our preceding experience as well as shared cultural baggage.

As a result, it is a creative task. We selectively amplify and suppress aspects of the raw data according to what fits the display space, what we think should be seen, and what our audience expects to see.

People in this thread claiming there to be some objective standard reference process for compressing/discarding extra data for display space completely miss the fundamental aspect of perception. There is no reference process for even a basic task of determining what counts as neutral grey.

(As a bonus point, think how as more and more of our visual input from the youngest ages comes from looking at bland JPEGs on shining rectangles with tiny dynamic ranges this shapes our common perception of reality, makes it less subjective and more universal. Compare with how before photography we really did not have any equivalent of some “standard”—not really, but we mistake it for such—representation of reality we must all adhere to.)


Ok I get it but I doubt photographers have full control over that 1%, so it’s not just a creative task, we’re constrained by physics too


There is a lot of control: at shooting time, exposure/aperture settings, various filter and lens choices, etc., and scene lighting if you can change it; at processing time, many different ways to change which parts of the raw captured spectrum should map to which parts of the display colour space.

There are technical limitations, but my point is that the process is inherently subjective; there is no way by which you can capture light from a scene and purely automatically obtain some sort of reference objective representation of it in the narrow display space.


A 4K 30fps video sensor capturing an 8-bit-per-pixel (Bayer pattern) image produces about 2 gigabits per second. That same 4K 30fps video on YouTube will be 20 megabits per second or less.

Luckily, it turns out relatively few people need to record random noise, so when we lower the data rate by 99% we get away with it.
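The arithmetic, assuming 3840x2160 and the 20 Mbit/s figure above:

    width, height, fps, bits_per_pixel = 3840, 2160, 30, 8

    sensor_rate = width * height * fps * bits_per_pixel  # bits per second
    youtube_rate = 20e6                                   # figure quoted above

    print(f"sensor: {sensor_rate / 1e9:.2f} Gbit/s")           # ~1.99 Gbit/s
    print(f"reduction: {1 - youtube_rate / sensor_rate:.1%}")  # ~99.0%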


1. I believe in modern cameras it’s 10+ bits per pixel, undebayered, but willing to be corrected. Raw-capable cameras capture 12+ bits of usable range. Data rates far exceed 5 gigabit per second.

2. Your second paragraph is a misunderstanding. Unless you really screw up shooting settings, it is not random noise but pretty useful scene data, available for mapping to the narrow display space in whatever way you see fit.


I read the second paragraph as a reference to compressibility of the resulting stream, not the contents of the encoded/discarded data.

Only random noise is incompressible, so realistic scenes allow compression rates over 100X without a 100X quality loss.


It is not even about YouTube data rates but about display media limitations. There is not going to be any sort of realistic scene data going over the wire just because of that. 99% of it has to be discarded because it cannot be displayed. It cannot be discarded automatically, because what should be discarded is a creative decision. Even if you could compress 5+ gigabit per second into 20 megabit per second losslessly, it is a pure waste of CPU.

Also, noise is desirable. Even if you could magically discern on the fly, at 30 or 60 fps and 5 gigabit/second, what is noise and what is fine detail and texture in a real scene (which is technically impossible: remember, it is a creative task because you cannot automatically determine even neutral grey), eliminating noise would result in a fake-ish, washed-out look.


Presumably they're referring to the fact that most cameras capture ~12-14 bits of brightness vs the 8 that (non-hdr) displays show.
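A hedged sketch of what that reduction looks like if done naively (real raw converters use carefully shaped tone curves rather than a bare gamma; the 14-bit depth and gamma value here are just assumptions):

    import numpy as np

    def naive_14bit_to_8bit(raw, gamma=2.2):
        # collapse 14-bit linear sensor values into 8-bit display values
        # with a plain gamma curve: 16384 input codes fold into 256 output
        # codes, so most of the captured gradations are merged away.
        normalised = raw.astype(np.float64) / (2**14 - 1)
        return np.round(normalised ** (1 / gamma) * 255).astype(np.uint8)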


Oh, that's normal then. There are mandatory steps of dynamic range reduction in the video editing / color grading pipeline (like a compressor in audio production). So the information is not entirely lost, but the precision / detail can be, yes. But that's a weird definition; there are so many photons in a daylight capture that you could easily say we really need a minimum of 21 bits per channel (light intensity of sun / light intensity of moon).
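Back-of-the-envelope version of that figure, using commonly quoted illuminance values (rough assumptions, not measurements):

    import math

    sunlight_lux = 100_000   # bright direct sunlight, commonly quoted figure
    moonlight_lux = 0.1      # scene lit by a full-ish moon, rough figure

    bits = math.log2(sunlight_lux / moonlight_lux)
    print(f"~{bits:.0f} bits (stops) to span moonlight to sunlight")  # ~20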


But that's not seen at the sensor, at least not all at once. Look at the sun and then look immediately at the moon against a dark sky (if that were possible): the only reason you get the detail on the moon is the aperture adjusting in front. You couldn't see the same detail if they were next to each other. The precision you need is for the darkest thing in the scene next to the brightest thing in the scene, as opposed to the darkest possible next to the brightest possible. That's the difference.


Hum, I can look at a crescent moon and the sun at the same time.


Do you not find it takes your eyes time to adjust to different brightness levels? There’s a good reason boats use red lights inside at night.


> So the information is not entirely lost, but the precision / detail can be, yes.

That does not seem a meaningful statement. Information, and by far most of it, is necessarily discarded. The creative task of the photographer is in deciding what is to be discarded (both at shooting time and at editing time) and shaping the remaining data to make optimal use of the available display space. Various ways of compressing dynamic range are often part of this process.

> like a compressor in audio production

Audio is a decent analogy and an illustration of why it is a subjective and creative process. You don't want to just naively compress everything into a wall of illegible sound; you want to make some things pop at the expense of others, which is a similar task in photography. As with photography, you have to lose a lot of information in the process, because if you preserved all the finest details, no one would be able to hear much in real-life circumstances.
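To make the analogy concrete, a toy "compressor-style" tone curve for images; the knee and ratio values are arbitrary illustrations, not a recommended grading curve:

    import numpy as np

    def soft_knee_compress(x, knee=0.7, ratio=4.0):
        # toy highlight compressor for linear image values in [0, 1]:
        # below the knee, values pass through untouched; above it, each
        # extra unit of brightness is divided by `ratio`, much like an
        # audio compressor squeezing loud passages.
        over = np.maximum(x - knee, 0.0)
        return np.minimum(x, knee) + over / ratio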



