
This is only true if you take just one capture. On Pixel, iPhone, and others we take many short captures and merge them together. There's a lot of cleverness in how that's done, and it gets you less noise without motion blur or blown-out highlights.
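Roughly, the merge works like this toy sketch (not the actual Pixel/iPhone pipeline, which also aligns frames and rejects outliers; the frame count and noise level here are made up for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    scene = rng.uniform(0.0, 1.0, size=(480, 640))   # idealized noise-free image

    def capture(read_noise=0.05):
        # Simulate one short exposure: the scene plus random sensor noise.
        return scene + rng.normal(0.0, read_noise, scene.shape)

    single = capture()                                        # one noisy frame
    merged = np.mean([capture() for _ in range(8)], axis=0)   # eight frames averaged

    # Uncorrelated noise shrinks by roughly sqrt(8) in the merged result,
    # while each individual exposure stays short.
    print(np.std(single - scene), np.std(merged - scene))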


Computational photography is coming to "real" cameras now, too.

I just upgraded to an Olympus OM-D E-M1 Mark III. Unlike most manufacturers, Olympus puts image stabilization in the body (IBIS): tiny actuators move the sensor around to offset shaky photographer hands. Doing the stabilization in-body has some other nifty side effects.

Olympus cameras have had a "super resolution" mode for the past several years that uses this. My sensor is only 20MP, but with super resolution I can get 50MP (handheld) or 80MP (on a tripod) out of it. The camera accomplishes that by using the IBIS system to shift the sensor by fractional pixels between captures and combining the frames as you describe. And as you note, that merging has the side effect of cutting noise too.
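The pixel-shift part is roughly like this toy sketch (noise-free and 2x in each axis for simplicity; Olympus's actual mode takes more frames and, as I understand it, also shifts by whole pixels to fill in the Bayer colors, which is part of the noise benefit):

    import numpy as np

    rng = np.random.default_rng(1)
    fine = rng.uniform(0.0, 1.0, size=(512, 512))   # stand-in for the real scene

    def shifted_capture(dy, dx):
        # One low-res frame taken with the sensor offset by (dy, dx) fine-grid
        # pixels, i.e. half a sensor pixel in each shifted direction.
        return fine[dy::2, dx::2]

    # Four captures, each offset by half a pixel vertically and/or horizontally.
    frames = {(dy, dx): shifted_capture(dy, dx) for dy in (0, 1) for dx in (0, 1)}

    # Interleave them onto the finer grid: double the resolution in each axis.
    merged = np.empty_like(fine)
    for (dy, dx), frame in frames.items():
        merged[dy::2, dx::2] = frame

    assert np.allclose(merged, fine)   # exact in this noise-free toy case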

I was taking advantage of this just a couple of weeks ago out at Fort Davis, TX, near the McDonald Observatory, which has some of the darkest skies in the continental USA. Astrophotography is notorious for noise headaches, but this technique goes a long way toward cutting that down with zero extra work from the photographer (except that the shot takes 9 times longer to record).
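Back of the envelope, assuming the ~9x record time means roughly nine exposures get merged (my assumption, not a spec): uncorrelated noise averages down with the square root of the frame count.

    import math

    frames = 9   # assumed from the ~9x longer record time
    print(f"~{math.sqrt(frames):.1f}x less random noise")   # ~3.0x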


I think it does more than just lower noise. On my Pen-F, there is a noticeable difference in how the tones are rendered between a regular capture and the hi-res mode.



