For decades, people have been able to get great photos from cell phones with useful features like autofocus, auto-exposure, and auto-flash, all without their phones inventing total misrepresentations of what the camera was pointed at.
I mean, at a certain point, taking a less-than-perfect photo is more important than getting a fake image that looks good. If I see a pretty flower and want to take a picture of it, the result might look a lot better if my phone just searched for online images of similar flowers, selected one, and saved that image to my iCloud, but I wouldn't want that.
The shot in the article is obviously one that any idiot could have taken without these tools.
But "in difficult scenarios", as the GP comment put it, your mistake is assuming people have been taking those photos all along with no problem. They have not. People have been filling their photo albums and memory cards with underexposed, blurry photos that look more like abstract art than reality. That's where this sort of technology shines.
I'm pretty reasonable at getting what I want out of a camera. But at some point you just hit the limitations of the hardware. In "difficult scenarios" like a fairly dark situation, I can open the lens on my Nikon DSLR up to f/1.4 (the depth of field is so shallow I can focus on your eyes while your nose stays blurry, so it's basically impossible to focus), crank the ISO up to 6400 (basically more grain than photo at that point), and still not get the shutter speed up to something I can shoot handheld. I'd need a tripod and a very still subject to get a reasonably sharp photo. The hardware cannot do what I want in this situation. I can throw a speedlight on top, but besides making the camera closer to a foot tall than not and upping the weight to something like 4 lbs, a flash isn't appropriate or acceptable in every situation. And it's not exactly something I carry with me everywhere.
These photos _cannot_ be saved because there just isn't the data there to save. You can't pull data back out of a stream of zeros. You can't un-motion-blur a photo using basic corrections.
Or I can pull out my iPhone and press a button and it does an extremely passable job of it.
The right tool for the right job. These tools are very much the "right" tool in a lot of difficult scenarios.
In circumstances where it really matters, having a prettied-up image might be worse than having no image at all. If you rely on the image being correct to make some consequential decision, you could convict someone of a crime, or cause damage to a machine whose problem you were trying to diagnose. Whereas if the camera gave an honest but uninterpretable picture, you would be forced to try again.
- Photographing serial numbers or readouts on hard-to-reach labels and displays, e.g. your water meter.
- Photographing damage to walls, surfaces, or goods for the purposes of a warranty or insurance claim.
- DIY / citizen science / school science experiments of all kinds.
- Workshops, auto repair, manufacturing, tradespeople - all heavily relying on COTS cameras for documenting, calibrating, sometimes even automation, because it's cheap, available, and it works. Well, it worked.
Imagine your camera fighting you on any of that, giving you bullshit numbers or actively removing the very details you're trying to capture. Or insurance rejecting your claim on the possibility of that happening.
Also let's not forget that plenty of science and even military ops are done using mass-market cameras, because ain't anyone have money to spend on Dedicated Professional Stuff.
Can people taking documentary photos disable the feature? Obviously casual users won't be aware of the option, if it exists at all.
I've often wished for an image format that is a container for [original, modified]. Maybe with multiple versions. I hate having to manage separate files to keep things together.
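A crude version of that container is easy to sketch as a plain zip archive holding the original plus any number of named edited versions, with a manifest tying them together. This is a purely hypothetical format (nothing any camera app actually writes); the real-world analogues would be multi-image HEIF or multi-page TIFF:

```python
import json
import zipfile


def write_photo_bundle(path, original_bytes, versions):
    """Pack an original image plus named edited versions into one file.

    `versions` maps a label (e.g. "hdr-merge") to image bytes.
    Hypothetical format for illustration only.
    """
    with zipfile.ZipFile(path, "w") as z:
        z.writestr("original.jpg", original_bytes)
        for name, data in versions.items():
            z.writestr(f"versions/{name}.jpg", data)
        z.writestr("manifest.json", json.dumps(
            {"original": "original.jpg", "versions": sorted(versions)}))


def read_photo_bundle(path):
    """Return (original_bytes, {label: version_bytes}) from a bundle."""
    with zipfile.ZipFile(path) as z:
        manifest = json.loads(z.read("manifest.json"))
        original = z.read(manifest["original"])
        versions = {n: z.read(f"versions/{n}.jpg")
                    for n in manifest["versions"]}
    return original, versions
```

One file on disk, but the untouched original is always recoverable, which is the point.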
Man, I just wanna take pictures of my kid when she's sleeping and looks super adorable. It sucks that me doing so is going to send people to jail and delay machinery diagnostics and cause insurance fraud, but I'mma keep doing it anyway.
That’s a pretty hand-wavy argument. You can frame a picture however you want to give a very different impression of a situation; cue that overused media-satire picture of a foot vs. a knife being shown.
> For decades people have had the ability to get great photos using cell phones
Not in these light conditions. Simple as that. What iPhones are doing nowadays gives you the ability to take some photos you couldn’t have in the past. Try shooting a few photos with an iPhone and the app Halide. It can give you a single RAW of a single exposure. Try it in some mildly inconvenient light conditions, like in a forest. Where any big boy camera wouldn’t bat an eye, what the tiny phone sensor sees is a noisy pixel soup that, if it came from my big boy camera, I’d consider unsalvageable.
Again, decades of people photographing themselves in wedding dresses while in dress shops (which tend to be pretty well lit) would disagree with you. Also, the things that help most with lighting (like auto-exposure) aren't the problem here. That's not why her arms ended up in three different positions at once.
> Apple's horrible tech featured in the article had nothing to do with the lighting.
Of course it did.
iPhones take an “exposure” (scare quotes quite intentional) of a certain length. A conventional camera taking an exposure literally integrates the light hitting each sensor pixel (or region of film) during the exposure. iPhones do not — instead (for long enough exposures), iPhones take many pictures, aka a video, and apply fancy algorithms to squash that video back to a still image.
But all the data comes from the video, with length equal to the “exposure”. Apple is not doing Samsung-style “it looks like an arm/moon, so paint one in”. So this image had the subject moving her arms such that all the arm positions in the final image happened during the “exposure”.
Which means the “exposure” was moderately long, which means the light was dim. In bright light, iPhones take a short exposure just like any other camera, and the effect in question won’t happen.
(Okay, I’m extrapolating from observed behavior and from reading descriptions from Google of similar tech and from reading descriptions of astrophotography techniques. But I’m fairly confident that I’m right.)
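The squash-a-burst-into-one-still idea can be sketched very roughly: per pixel, average the frames, but throw out samples that diverge too far from the median (a crude stand-in for motion rejection). This is a toy illustration of the general technique, not Apple's actual pipeline, and the `reject_sigma` threshold is an arbitrary assumption:

```python
import numpy as np


def merge_frames(frames, reject_sigma=2.0):
    """Merge a burst of noisy frames into one still image.

    Per pixel: reject samples more than `reject_sigma` standard
    deviations from the median (crude motion/outlier rejection),
    then average whatever survives. Toy sketch only.
    """
    stack = np.stack(frames).astype(np.float64)   # shape (N, H, W)
    med = np.median(stack, axis=0)
    sigma = stack.std(axis=0) + 1e-9              # avoid divide-by-zero
    keep = np.abs(stack - med) <= reject_sigma * sigma
    counts = keep.sum(axis=0)
    summed = np.where(keep, stack, 0.0).sum(axis=0)
    # Fall back to the median where every sample was rejected.
    return np.where(counts > 0, summed / np.maximum(counts, 1), med)
```

Averaging N frames cuts random sensor noise by roughly a factor of sqrt(N), which is why stacking works at all in dim light, and the rejection step is what produces artifacts like the multiple-arms photo when the subject moves mid-"exposure".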
The phone is operating under way less harsh conditions since there's usually quite a bit of light even in most night scenes.
The iPhone is actually too good at this. You can't do light trails: it over-weights the first image and removes anything too divergent when stacking, so you get a well-lit frozen scene of vehicles on the road. I can get around it by shooting in burst mode and stacking in something like Affinity Photo, but that's work.
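The manual stacking for light trails is the opposite blend: instead of averaging and rejecting outliers, you keep the brightest sample each pixel ever saw across the burst ("lighten" mode in most editors). A minimal sketch of that blend, assuming the burst frames are already aligned:

```python
import numpy as np


def lighten_stack(frames):
    """Max-blend a burst: each output pixel keeps the brightest value
    seen across all frames, so moving lights leave trails."""
    return np.maximum.reduce(
        [np.asarray(f, dtype=np.float64) for f in frames])
```

A headlight moving across the frame stays bright in the output at every position it occupied, which is exactly what the iPhone's divergence rejection deliberately suppresses.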
I would argue that it’s both more and less harsh. The available light is probably better, but the astrophotography use case generally benefits from tripods and compensated or at least predictable movement of the subject. On the other hand, astrophotography needs to deal with atmospheric effects, and people aren’t usually taking night mode iPhone photos through the turbulent air above a flame.
It could also have been taken with too much motion (either the phone or the subject), meaning some of the closer-in-time exposures would be rejected because they're blurry/have rolling shutter.
The claim was that any camera without this feature is "entirely unusable" and the photos couldn't be saved even with "hours of manual edits". The fact is that for decades countless beautiful photos have been captured with cell phone cameras without this feature, and most of them needed no manual edits at all. Many of those perfectly fine pictures were taken by people who would not consider themselves photographers.
Anyone who, by their own admission, is incapable of taking a photo without this new technology must be an extraordinarily poor photographer by common standards. I honestly wasn't trying to shame them for that though (I'll edit that if I still can), I just wasn't sure what else they could mean. Maybe it was hyperbole?
The two of you might have a different threshold of what you consider to be usable photos, and that’s fine. However, there is no way around physics. Under indoor lighting, a single exposure from a cellphone camera will be either a blurry mess, a noisy mess, or both. Cellphones used to get around that by adding flashes and strong noise suppression, and it was up to the photographer to make sure that the subject didn’t move too much. Modern smartphones let you take pretty decent photos without flash and without any special considerations, by combining many exposures automatically. I think it’s quite amazing. The hardware itself has also improved a lot, and you can take a better single-exposure photo than ever, but it won’t be anywhere near the same quality straight out of the camera.
And, yes, I took a lot of pictures with my Sony Ericsson K750i almost two decades ago, and I did like them enough to print them back then, but even the photos taken under perfect lighting conditions don’t stand a chance against the quality of the average indoor photo nowadays. The indoor photos were all taken with the xenon flash and were very noisy regardless.
Traditionally, phones took good photos in good light, but as the light decreases, so does photo quality (quickly). The point of the AI photography isn't to get the best photograph when you control the lighting and have two minutes beforehand to pick options; it's to get the best photograph when you realize you need the best possible photo in the next two seconds.
> The fact is that for decades countless of beautiful photos have been captured with cell phone cameras
Which phones are you thinking of? Because you definitely have a very rose-tinted view if you mean literal cell phones rather than smartphones, and even for smartphones, only in the past couple of years has the quality been good enough to pass even a rudimentary inspection as a proper photograph. Everything else was just noise.