If anyone wants to move beyond using the "auto" setting on their camera (or phone), I would recommend the book Understanding Exposure by Bryan Peterson, the first edition of which was published in 1990:

* https://www.goodreads.com/book/show/142239.Understanding_Exp...

The principles involved haven't changed much in the intervening decades; the current fourth edition was published in 2016.

If all you have is a phone, you don't have to get new equipment: perhaps just a third-party camera app that gives you manual control of aperture, shutter speed, and ISO/sensitivity.

Once you know how each of these settings alters the resulting photo, you can use them to alter the composition of your photos, which is a whole other craft.

Edit: it seems recent smartphones expose few, if any, adjustable camera settings.



I think when you break down all the variables there is really very little to play with, because no phones have variable apertures.

ISO is basically a linear gain that's done on the sensor. As long as you aren't blowing out your photo and losing information, it basically makes no difference whether you do it on the sensor or later while editing.

So the only variable left is the shutter speed, which is basically dictated by the amount of light you have. You try to get as much light as you can without blowing anything out; that's how you get the most information. You can shorten the exposure to get faster shots with less blur, at the cost of more noise.

So it all boils down to basically one "slider"/variable between blur and noise.
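
You can see that single slider in a toy simulation. The sketch below (Python/NumPy, assuming an idealized ISO-invariant sensor where photon shot noise is the only noise source) shows that a shorter exposure collects fewer photons and so has a lower SNR, and that multiplying the result afterwards doesn't buy any of it back:

    # Toy model: photon arrivals are Poisson-distributed, so SNR ~ sqrt(photons).
    import numpy as np

    rng = np.random.default_rng(0)

    def capture_snr(photons_per_pixel, pixels=1_000_000):
        signal = rng.poisson(photons_per_pixel, size=pixels)
        return signal.mean() / signal.std()   # SNR of the capture

    for shutter_fraction in (1.0, 0.5, 0.25):   # full, half, quarter exposure time
        snr = capture_snr(1000 * shutter_fraction)
        print(f"exposure x{shutter_fraction} -> SNR ~ {snr:.1f}")

    # Scaling a short exposure up afterwards ("digital ISO") brightens it, but the
    # noise is multiplied by the same factor as the signal, so the SNR stays put.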


> ISO is basically a linear gain that's done on the sensor. As long as you aren't blowing out your photo and losing information, it basically makes no difference whether you do it on the sensor or later while editing.

This is true for some cameras, but certainly not all. Many cameras, especially pro or prosumer grade, have non-linear ISO. That is, there are ranges over which it behaves linearly - say from the minimum up to 1600 or so - and then the next setting up from that (where the settings are typically in 1/3-stop steps) will reset to a lower snr. (And yes, that does imply that in such cases it often yields better results to go up by one or even two clicks in ISO.)

I'm not sure if there are any camera-phones that behave this way, though.


This is a great resource for finding out which ISO ranges various cameras are linear over:

https://photonstophotos.net/Charts/RN_e.htm


Taking one of the few recent-ish mainstream phones on this list since the subthread is about smartphone sensors: Samsung Galaxy S7 has an ISO range of 50–800 and basically all the noise values (measured in log2(electrons)) are between 2 and 3. There is a downward trend from 50 to ~300, above that it's all around 2. Other phones have similarly shaped graphs with different absolute values.

That sounds like the opposite of what GP (CWuestefeld) described. Am I misinterpreting the graph?

Lower sounds better to me, so the downward trend on a scale called "Input-referred read noise" sounds like it is tuning the Signal to Noise Ratio (SNR) on the sensor rather than just multiplying the sensor's output value, and it stops doing that above ~300 ISO. GP described that it would be a linear multiplier up until (for many cameras, not specifically smartphones) ~1600 ISO and after that it would be tuning the SNR. Do smartphones behave differently for some reason or am I misunderstanding something?

(It doesn't seem as though the absolute value says anything about the quality by the way, as a 10th gen Apple phone has a much lower value on this "noise" scale than a 12th gen one. The page does remark "raw values are not appropriate for comparing camera models because they are not adjusted for area", so this is probably that.)


> That sounds like the opposite of what GP (CWuestefeld) described. Am I misinterpreting the graph?

No. I was stupid in what I said above, getting the direction wrong. Where I said, "...will reset to a lower snr", I should have said HIGHER.


Ah, got it! Thanks for confirming! And it's definitely not stupid: I didn't even know of this concept, so I learned something today thanks to you :)


I'm not an expert, but I think phone camera sensors really are that different from camera camera sensors, presumably (going out on a limb) because of tradeoffs they make to get good quality from small sensors. The sensors in top phones are about the same size as in the smallest cameras, and way smaller than in the cameras GP is thinking of.


"then the next setting up from that will reset to a lower snr"

How does it magically make more photons fit on the sensor...?

And why wouldn't you use that same magic at lower ISO gain factors?


No magic and the same photons, but you can have the hardware sensor read them out differently. Specifically using varying amounts of analog gain / amplification before doing analog-to-digital conversion, minimizing noise. This varies based on camera design.

See the "ISO-Invariance and Downstream Electronic Noise" part here for a better explanation: https://www.lonelyspeck.com/how-to-find-the-best-iso-for-ast...

The article mentions the Sony A7S as an example, with the sensor showing marked improvements in SNR when reaching ISO 100, 200, 1600 and 3200, while behaving ISO-invariant wrt. noise in between those values.
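
A toy model of why amplifying before readout helps (the noise figures below are made up, not from any particular sensor): the downstream read noise is a fixed amount, so boosting the signal first makes that noise relatively smaller.

    # Compare pushing a dim exposure in post vs. applying analog gain in hardware,
    # when a fixed amount of noise is added after the amplifier (readout/ADC stage).
    import numpy as np

    rng = np.random.default_rng(1)
    pixels = 1_000_000
    signal_e = rng.poisson(50, pixels)        # photoelectrons from a dim exposure
    downstream_noise = 5.0                    # fixed post-amplifier noise (made up)

    def snr_after_readout(analog_gain):
        raw = signal_e * analog_gain + rng.normal(0, downstream_noise, pixels)
        out = raw / analog_gain               # normalize to the same brightness
        return out.mean() / out.std()

    print("gain 1 (low ISO), pushed later:", round(snr_after_readout(1.0), 2))
    print("gain 8 (high ISO) in hardware :", round(snr_after_readout(8.0), 2))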


The base stops are ISO 100, 200, 400, etc.

Many cameras let you set the ISO in 1/3 stop increments, but if I recall correctly, many camera manufacturers just keep the sensitivity at the base stops and adjust the brightness via software.

So shooting at ISO 250 really means ISO 200 (underexposing what you requested) but then adding a third stop equivalent of brightening to the digital file. Conversely, using ISO 160 actually means the camera is using ISO 200 (overexposing) and lowering the brightness in software.

What this means, at least 10 years ago when I was more in tune with the photography world, is that people would prefer to shoot at the [base ISO stop - 1/3] levels, because those were the levels with the least noise near that exposure setting. The cost is you risk saturating more pixels in the highlights.

And for the same reasoning, the ISO settings 1/3 over the base stops were typically avoided as they were noisier, albeit with slightly more dynamic range.
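
The stop arithmetic behind those intermediate values is easy to check (whether a given body actually implements them as a software push or pull is model-specific):

    # "ISO 250" is one third of a stop above base ISO 200; "ISO 160" is one third below.
    base = 200
    print(round(base * 2 ** (1/3)))    # ~252 -> shot at 200, pushed +1/3 stop in software
    print(round(base * 2 ** (-1/3)))   # ~159 -> shot at 200, pulled -1/3 stop in software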


> ISO is basically a linear gain that's done on the sensor.

It's usually not done on the sensor - it's usually done by the ADC that performs sensor readout.

Some cameras use a technology like Aptina's DR-Pix to dynamically change the capacitance of the sensor FETs, but (as far as I'm aware) this only exists in a simple binary form right now. E.g. one of my cameras reduces sensor gate capacitance when the ISO exceeds 800, but otherwise any ISO changes only affect off-sensor hardware.


Right, sorry. Not on the sensor, but in the hardware that reads it out... which I guess is not technically the sensor... haha

You're still multiplying what is ultimately the photon count, shot noise and all.

Didn't know there was technology built on top of that... Does changing the capacitance increase the sensitivity somehow? I guess then the question is: why isn't that always enabled? There must be some downside to it.


U = Q/C

The pixel's output voltage is the charge (number of photoelectrons) divided by the integrating capacitance. Reducing the capacitance increases sensitivity at the expense of full-well capacity, because the saturation voltage is reached earlier. That's the point at which highlights blow out.

The advantage of increased sensitivity at the integration node is that all downstream noise is applied to a larger signal, so SNR is improved.
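
Plugging hypothetical numbers into U = Q/C makes the trade-off concrete (the capacitance and saturation voltage below are illustrative, not taken from any real sensor):

    # Halving the integration capacitance doubles the conversion gain (uV per electron)
    # but halves the full-well capacity for the same saturation voltage.
    e_charge = 1.602e-19                 # coulombs per electron
    v_sat = 1.0                          # hypothetical saturation voltage (volts)

    for cap_fF in (2.0, 1.0):
        cap = cap_fF * 1e-15
        gain_uV_per_e = e_charge / cap * 1e6
        full_well_e = v_sat * cap / e_charge
        print(f"C = {cap_fF} fF: {gain_uV_per_e:.0f} uV/e-, full well ~ {full_well_e:,.0f} e-")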


If you underexpose too severely, the JPEG compression will eat up all the detail in the shadows.


Pretty much any serious camera, and I think even serious camera-phones these days, will optionally record the image as RAW data rather than encoding as jpeg.

That you want to shoot RAW is pretty much covered in day 1 of intro to digital photography. The only exception to the rule is when you need super-fast frame capture rate, and the bandwidth to storage of much larger RAW files interferes. This is typical of folks shooting sports.


Even then, entry-level full-frame cameras will shoot something like 10 photos per second in RAW, which is usually sufficient for sports or motorsports photography. There is a compressed RAW mode that's required to enable the fastest capture mode, but when you get into more expensive cameras like the Sony A1, it can shoot full-resolution (50 megapixel) RAW with autofocus at 30 frames per second, which is mind-blowing. It costs $6,500 vs. $2,000 for the A7 IV though, so you're definitely paying for it.


You can push shadows in Lightroom before exporting the JPEG.


This is only true if you take just one capture. On Pixel, iPhone, and others we take many small captures and merge them together. There's lots of cleverness there and it allows you to have less noise without motion blur or blowing out highlights.
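
A stripped-down illustration of the merging idea (pure frame averaging, ignoring the alignment, motion rejection, and tone mapping that real pipelines do): averaging N short captures cuts random noise by roughly sqrt(N).

    import numpy as np

    rng = np.random.default_rng(2)
    pixels = 1_000_000
    photons_per_frame = 30                    # each short frame is individually very noisy

    def snr_of_merge(frames):
        merged = rng.poisson(photons_per_frame, (frames, pixels)).mean(axis=0)
        return merged.mean() / merged.std()

    for n in (1, 4, 16):
        print(f"{n:>2} merged frames -> SNR ~ {snr_of_merge(n):.1f}")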


Computational photography is coming to "real" cameras now, too.

I just upgraded to an Olympus OM-D E-M1.3. Olympus has, unlike most manufacturers, in-body image stabilization. That means that there are tiny actuators moving the sensor around to offset shaky photographer hands. Doing the stabilization in-body gives some other nifty side effects.

Olympus cameras over the past several years have a "super resolution" mode that uses this. My sensor is only 20MP, but using super resolution I can get 50MP or 80MP (on a tripod) out of it. The camera accomplishes that by using the IBIS system to move the sensor by fractional pixels and combining them as you describe. And as you note, this in turn has the side effect of cutting noise.

I was just taking advantage of this a couple weeks ago out at Fort Davis, TX, near the McDonald Observatory, which has some of the darkest skies in the continental USA. Astrophotography is notorious for causing headaches with noise, but this technique goes a long way toward cutting that down with zero extra work from the photographer (except that the shot takes 9 times longer to record).


I think it does more than just lower noise. On my pen-f, there is a noticeable difference in the way the tones are rendered between a regular capture and the hi-res mode.


> no phones have variable apertures.

Even my almost five-year-old Galaxy S9+ has a variable aperture. Surely something better has come out since.


Wikipedia says the S9/S9+ was the first phone since 2009 to have a variable aperture. The S10 series also had it, but it was gone again in the S20. So it's definitely not common, and even in the handful of phones that had it, it was only one stop of adjustment.


> […] because no phones have variable apertures.

:(


Aperture Size, Shutter Speed and ISO. Just understand what they are. And their units.

Actually look at pictures taken varying one setting while keeping the others constant, to get the hang of things.

Then go backwards: check professional pictures and guess the values. On professional photography forums, most photos have these values published.

Night/Day photography, moving/still and background focus are the only 3 skills you need as an amateur photographer. They rely on the 3 settings above.
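
If it helps to see how the three settings trade off numerically, the standard exposure-value formula ties them together; the sketch below (arbitrary example settings) just checks that three different combinations are equivalent exposures of the same scene:

    # EV100 = log2(N^2 / t) - log2(ISO / 100); equivalent exposures give the same number.
    from math import log2

    def ev100(aperture_n, shutter_s, iso):
        return log2(aperture_n ** 2 / shutter_s) - log2(iso / 100)

    print(ev100(8.0, 1/250, 100))   # ~14: a bright scene
    print(ev100(5.6, 1/500, 100))   # ~14: one stop wider aperture, one stop faster shutter
    print(ev100(8.0, 1/500, 200))   # ~14: one stop faster shutter, one stop more ISO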

Beyond that lies the rabbit hole; if you venture in, speak of it not to people you wish to keep as friends. They hate it when you try to tell them.

Don't spend over 2k on lenses.

Have photos printed on glossy and matte paper. Touch and feel them. Worth the time.


+1 for printing, which is a rare habit in the social media era.

Virtually any image looks great on FB/Instagram/etc., but once you print it big, you'll notice how good or bad it actually is.


> The principles involved haven't changed much in the intervening decades.

My problem with many old-time tutors is that they refuse to recognize that photography has gotten a lot easier. We don't need to learn the craft the way they did.

For example, you don't need stuff like the "sunny 16" rule of exposure if you have real-time previews in the camera. You use visual feedback, usually with better accuracy.

In the same vein, you probably don't need to learn about flash guide numbers when modern continuous LED illumination covers 99% of use cases without any guesswork.

Or, you don't need to learn about optical filters (perhaps except for the polarizer) when almost all their functions can be accomplished in software without loss in fidelity.


> For example, you don't need stuff like the "sunny 16" rule of exposure if you have real-time previews in the camera. You use visual feedback, usually with better accuracy.

Except that it may not, unless you know what you are doing and press the right button:

> With the monitor or viewfinder, you may see an image with an aperture that differs from the shooting result. Since the blurring of a subject changes if the aperture is changed, the blurriness of the actual picture will differ from the image you were viewing prior to shooting.

> While you press and hold the key to which you assigned the [Aperture Preview] function, the aperture is stepped down to the set aperture value and you can check the blurriness prior to shooting.

* https://helpguide.sony.net/ilc/1420/v1/en/contents/TP0000226...

* https://www.cnet.com/tech/computing/how-to-use-the-depth-of-...

* https://www.slrphotographyguide.com/depthfield-preview-butto...

> In the same vein, you probably don't need to learn about flash guide numbers when modern continuous LED illumination covers 99% of use cases without any guesswork.

And leaving your camera in "auto" also probably "covers 99% of use cases without any guesswork"… but you give up creative control to the software. Why bother learning what aperture is at all if 99% of the time it won't ever matter to taking a photo?

The whole point of reducing the use of "auto" is to make creative choices yourself.


The point of learning Sunny 16 is that once you’ve internalized thinking in full stops, you don’t need visual feedback, which makes you faster, which can make the difference between getting your shot or not and having a happy or angry client.
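
For anyone who hasn't internalized it, Sunny 16 is just counting stops (a quick sketch, assuming ISO 100 in full sun):

    # At f/16 in full sun, shutter time ~ 1/ISO. For each stop you open the aperture,
    # halve the exposure time to keep the same exposure.
    iso = 100
    shutter = 1 / iso                        # f/16 baseline: 1/100 s at ISO 100
    for f_number in (16, 11, 8, 5.6, 4):     # each step is one full stop wider
        print(f"f/{f_number} -> 1/{round(1 / shutter)} s")
        shutter /= 2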

> Or, you don't need to learn about optical filters (perhaps except for the polarizer) when almost all their functions can be accomplished in software without loss in fidelity.

I still think it’s a good idea to learn what they do, so you know when to use a (digital) BW red filter because you want brighter skin.
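
As a toy example of what a "digital red filter" amounts to, weight the red channel heavily before collapsing to grayscale (the weights below are illustrative, not from any particular editor):

    import numpy as np

    def to_bw(rgb, weights):
        w = np.asarray(weights, dtype=float)
        return rgb @ (w / w.sum())           # weighted average of the channels

    skin_tone = np.array([180.0, 120.0, 100.0])        # a warm, reddish pixel
    print(to_bw(skin_tone, (0.30, 0.59, 0.11)))        # neutral luminance: ~136
    print(to_bw(skin_tone, (0.80, 0.15, 0.05)))        # red-filter look:   ~167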


A quite technical but fascinating book on lighting and photography: "Light Science & Magic" by Fil Hunter and Paul Fuqua (https://www.goodreads.com/book/show/290153.Light)


I bought that book months back. Have yet to read it but thanks for the reminder.


Is there any good book on photo editing?



