That's weird. Whenever I tried to take a picture of the moon, it would look great in the camera preview on the screen, but terrible once I actually took the shot.
Fair enough.
> Polarizing filters
Yeah, I see it. This one is about as pure a case of signal removal as you get in the analog world. And polarizers can indeed drop significant information - not just reflections, but also e.g. by blacking out computer screens - but they don't introduce fake information, and the lost information could in principle be recovered, because in reality everything is correlated with everything else.
> But yes, I agree that computational photography offers a different kind of reality distortion.
A polarizing filter or a choice of photographic paper won't make, e.g., shadows come out the wrong way. Conversely, if you get handed a photo with wrong shadows, you can not only be sure it was 'shopped, but could use those shadows and other details to infer what was removed from the original. If you tried the same trick with a computational photograph, your math would not converge: the information in the image is no longer self-consistent.
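To make the "self-consistency" point concrete, here's a toy sketch of the idea in Python. It estimates the apparent 2D light direction in two image patches from their intensity gradients and flags them as inconsistent if the directions disagree - a crude stand-in for real lighting-consistency forensics (e.g. Johnson & Farid's lighting-direction work, which is far more sophisticated). The function names, the threshold, and the Lambertian hand-waving are all mine, not from any particular tool:

```python
import numpy as np

def light_direction(patch: np.ndarray) -> np.ndarray:
    """Mean intensity-gradient direction of a patch, as a unit vector.

    Under a crude Lambertian-shading assumption, the average gradient
    points roughly toward the brighter (lit) side of the patch.
    """
    gy, gx = np.gradient(patch.astype(float))
    v = np.array([gx.mean(), gy.mean()])
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def consistent(a: np.ndarray, b: np.ndarray, min_cos: float = 0.9) -> bool:
    """True if the estimated light directions of two patches roughly agree."""
    return float(np.dot(light_direction(a), light_direction(b))) >= min_cos

# Synthetic demo: two patches "lit" from opposite sides. A patch pasted in
# from a differently-lit photo fails the consistency check.
ramp = np.linspace(0.0, 1.0, 64)
lit_from_right = np.tile(ramp, (64, 1))   # brightness increases to the right
lit_from_left = lit_from_right[:, ::-1]   # brightness increases to the left

print(consistent(lit_from_right, lit_from_right.copy()))  # True: same lighting
print(consistent(lit_from_right, lit_from_left))          # False: inconsistent
```

An optical filter changes what reaches the sensor, but both patches still pass this kind of check; a composite or a hallucinated region is exactly where such cross-checks start disagreeing with each other.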
That's as close as I can come to describing the difference between the two kinds of reality distortion; there's probably some mathematical framework that classifies it better.