Super Resolution (adobe.com)
340 points by giuliomagnifico on March 18, 2021 | 193 comments



Every time someone brings up super-resolution, I like to pull up this hilarious example: https://petapixel.com/2020/08/17/gigapixel-ai-accidentally-a...

Super-resolution is only guessing. It's ok for art, not for critical tasks.


One of those annoying things is that the name "super resolution" stuck here.

Originally super-resolution was a hardware technique, and not "guessing". If you can [edit: this was poorly worded "control an imager positioning"] control imaging with finer resolution than the sensor has, you can take multiple images and reconstruct a higher resolution image in a principled way for say 2x resolution gain (cf super-resolution microscopy), also some telescope systems. Some modern photographic systems actually do this directly (piezo motors?) on the sensor.

Of course this only works if what you are imaging is reasonably static over the time needed to take all the images.

You can do an approximate version of this with video, with caveats because you don't control the motion. The key thing is, though, you actually have more data to work with.
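
A minimal sketch of that principled multi-frame reconstruction (my own illustration, not any particular product's pipeline), assuming four captures offset by exactly half a pixel, as a sensor-shift system would produce:

    import numpy as np

    def shift_and_add(frames):
        """Interleave four half-pixel-shifted captures into a 2x grid.

        frames: four (H, W) arrays captured at sensor offsets
        (0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5) pixels, so each frame
        samples different points of the same static scene.
        """
        h, w = frames[0].shape
        hi = np.empty((2 * h, 2 * w), dtype=frames[0].dtype)
        hi[0::2, 0::2] = frames[0]  # offset (0,   0)
        hi[0::2, 1::2] = frames[1]  # offset (0,   0.5)
        hi[1::2, 0::2] = frames[2]  # offset (0.5, 0)
        hi[1::2, 1::2] = frames[3]  # offset (0.5, 0.5)
        return hi

Every output pixel is a real measurement rather than a guess; a real system would still have to deconvolve the photosite aperture, since each sample integrates light over the full pixel area.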

For a while this idea ran in parallel with image-processing people attempting to estimate higher resolution from a single image, and unfortunately the terminology stuck in image processing too. Something like "resolution extrapolation" would probably be better, but that ship sailed ages ago.


This is the actual original meaning of the word "super resolution". The idea is that you use a prior about a specific image, domain, type of image, etc. to enhance the resolution.

Super-resolution was already the name for this field in this 1987 review! https://kmh-lanl.hansonhub.com/publications/imrecov.pdf But 1987 is pretty recent as far as this field is concerned. A. W. Lohmann and D. P. Paris talked about super-resolution this way in 1964! https://www.osapublishing.org/ao/abstract.cfm?uri=ao-3-9-103...

So no. It's not annoying that super-resolution is the name of this field. The name predates the entire field of image processing; it predates the invention of the digital camera! And it is the original use of the title "super resolution"; people who do this in hardware adopted the name later.


I think you misread me (so clearly I articulated poorly); the thing that is annoying is that super resolution in the sense you refer to is categorically a different thing than the common usage of super resolution in (particularly single-image) digital processing. The usage existed, as you and I noted, before it came into digital image processing.

Because of this, conversations around it are often confused, and it would be clearer if there were a different terminology for the latter.


What would be a better term for this type, then? "Inferred/approximated super resolution", or something?


You can generalize this to signals by introducing noise to get higher resolution from ADCs [1]; a quick numeric sketch follows the links below. Human vision does this too [3]. There was a really interesting theory that the reason 48 FPS in The Hobbit looked so bad was that it falls right near the Nyquist line, leaving the brain unable to decide between a sequence of pictures (24 FPS) and continuous motion (60 FPS) [2].

[1] http://imajeenyus.com/electronics/20120908_improving_adc_res...

[2] https://accidentalscientist.com/2014/12/why-movies-look-weir...

[3] https://en.wikipedia.org/wiki/Ocular_tremor
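
To make the ADC trick from [1] concrete, here's a toy NumPy sketch: a value sitting between two ADC codes is unrecoverable from noiseless samples, but adding roughly one LSB of noise before quantization and averaging many samples recovers it.

    import numpy as np

    rng = np.random.default_rng(0)
    true_value = 2.3   # volts; sits between the codes of a coarse ADC
    lsb = 1.0          # quantization step

    # Without dither: every sample quantizes to the same wrong code.
    no_dither = np.round(true_value / lsb) * lsb             # 2.0

    # With dither: quantize many noisy samples, then average.
    n = 100_000
    noisy = true_value + rng.uniform(-0.5, 0.5, n) * lsb
    with_dither = np.mean(np.round(noisy / lsb) * lsb)       # ~2.3

    print(no_dither, with_dither)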


Doesn't strike me as true: 50 fps video is very common all over the world and doesn't have many issues besides noticeable flicker on CRTs.

Note that 24 fps film cinema runs at 72 Hz flicker due to triple-exposing each frame. That's why they could get away with 24 fps: since they were dealing with stored images, multiple exposures were possible to avoid flickering. Using as low an fps as possible was desirable because film costs money, film transports get more finicky as fps increases, and exposure gets tricky: doubling or even tripling the frame rate leaves you with a much shorter exposure time, which you need to compensate for with either a larger aperture (which may not be available, or if it is, costs a lot of money), more sensitive film (which may not have been available, or had much worse quality), or more light in the scene -- all of these are either undesirable or expensive.

So they chose 24 fps because motion perception isn't quite in the shitter at that fps, though it's pretty bad (nowadays idolized as the "cinematic look"; even more hip and cool if you do it on YouTube and deliver 24p, which gets converted to 60 fps by means of 3:2 alternation, resulting in a super janky look known as "That Pro Youtuber Look"). This is the reason why people did not like The Hobbit.

Meanwhile video people went on to invent interlacing, because video was fundamentally about a non-stored image. Here the reason why a low fps is desirable is again cost, since higher fps needs higher video bandwidth, and that makes everything more expensive and lets you have fewer channels (likely not a concern in the early days). Interlacing lets you have a higher frame rate (a CRT at 25 Hz or even 30 Hz will trigger nausea in everyone) without incurring higher costs, just like multiple exposures in a cinema projector.


> Originally super-resolution was a hardware technique, and not "guessing". If you can [edit: this was poorly worded "control an imager positioning"] control imaging with finer resolution than the sensor has, you can take multiple images and reconstruct a higher resolution image in a principled way for say 2x resolution gain (cf super-resolution microscopy), also some telescope systems.

Isn't this also how our eyes work?


Not really. Our visual cortex does something closer to the new fancy Photoshop algorithm (filling in imaginary detail from a mental model in memory). Super-resolution techniques such as the one described above are more like a mathematical function based on statistical values: they extract real information hidden in the differences between the images.


It does both. Your brain interpolates the image, but the eye also uses interferometry to generate extra detail by moving the eye back and forth really quickly (microtremors). The result is that we glean more true information than can be determined by just measuring rod or cone frequency.


Citation? I've read over 50 papers in MISR and SISR (I wrote a lit review) and I have never seen mention of actual hardware that would shift the imaging system; indeed such a system would have been my dissertation topic if I hadn't switched areas.


I think what he's talking about is what Sony calls Pixel Shift Multi Shooting. Other camera manufacturers do it too. https://support.d-imaging.sony.co.jp/support/ilc/psms/ilce7r...


Wow that is in fact exactly it. I'm actually quite impressed they're able to shift the sensor array by exactly one pixel (or even close to) since that's on the order of microns.


In fact the Olympus implementation of this feature moves the sensor by half a photosite diagonally. They do it by re-purposing the existing in-body stabilization mechanism to move the sensor around.

User 'twic' posted a link to a very interesting article that describes this and also explains the difference between photosites and pixels:

https://chriseyrewalker.com/the-hi-res-mode-of-the-olympus-o...


Piezo actuators can make very fine movements. You can get positioning systems that can adjust position by nanometers, even sub-nm [1].

Also, cameras have had stabilization systems for a while now; I would assume they need similar pixel-scale precision. Some cameras shift the lens, some shift the sensor, but either way they need to shift the image-on-sensor by a very small amount, and also do it very rapidly.

[1] For example: https://www.pro-lite.co.uk/File/psj_piezoelectric_nanopositi...


Some projector manufacturers are shifting a DMD around 4 times per frame, driving a 1080p chip at 4x the refresh rate to build up a 4K image.


All of them are shifting either by 4 or by 2; you don't have native 4K DMDs yet, AFAIK.


There are some, but they're crazy expensive


Pentax did it with the K70 back in 2016.


They also did it earlier, in 2015, with the Pentax K-3 II. The Olympus OM-D E-M5 Mark II also had a similar feature in 2015.


Most modern mirrorless cameras have such hardware (usually denoted "in-body image stabilization"). The "repurposing" of it to acquire multiple captures at various slight offsets and combine them intelligently is maybe a bit more recent, but still pretty common at this point.

https://en.wikipedia.org/wiki/Image_stabilization#Sensor-shi...

https://support.d-imaging.sony.co.jp/support/ilc/psms/ilce7r...

https://www.nikonimgsupport.com/na/NSG_article?articleNo=000...

https://www.canon.co.uk/pro/stories/8-stops-image-stabilizat...


I think the technique was used a lot earlier in astronomy, which is where I first encountered it.

There it's called stacking, and the subpixel offsets come naturally from path distortion in the atmosphere itself.

I remember using RegiStax when the first batch of consumer telezoom point-and-shoot cameras came out; I got a 30x long exposure of the moon, with, well, mediocre results.


Google uses it in Google camera. They're not shifting the sensor themselves, but they take advantage of the camera shake users introduce by taking handheld photos.

https://ai.googleblog.com/2018/10/see-better-and-further-wit...


Yes, I'm very familiar with this, but it's just MISR, i.e. a purely software solution (Peyman Milanfar is one of the original researchers associated with MISR). Fortunately, hardware implementations have been demonstrated elsewhere in this thread.


There are hardware implementations referenced in the blog post, and in the linked published paper.

> In the early 2000s, Farsiu et al. [2006] and Gotoh and Okutomi [2004] formulated super-resolution from arbitrary motion as an optimization problem that would be infeasible for interactive rates. Ben-Ezra et al. [2005] created a jitter camera prototype to do super-resolution using controlled subpixel detector shifts. This and other works inspired some commercial cameras (e.g., Sony A6000, Pentax FF K1, Olympus OM-D E-M1 or Panasonic Lumix DC-G9) to adopt multi-frame techniques, using controlled pixel shifting of the physical sensor. However, these approaches require the use of a tripod or a static scene.

https://arxiv.org/pdf/1905.03277.pdf


Damn, the Ben-Ezra paper is exactly what I imagined being the scope of work for my planned dissertation. Guess it's a good thing I didn't pursue it lol.


You can also do it in software by just recording IMU data with high-precision timestamps (a lot of camera sensors have a GPIO they can interrupt on, even every scan line) and post-processing. There are cool techniques where they can remove various rolling-shutter issues this way to get global-shutter-like quality, and remove camera-movement-induced motion blur. I haven't heard of it applied to super resolution, but I don't see why not. I think Google uses similar techniques to implement their software HDR solution, which takes three back-to-back snapshots at different exposure levels and merges them.


Newer Sony mirrorless cameras embed gyroscope data into videos for further stabilization of the image when IBIS is not enough.

Theoretically, with 30 FPS cameras like the Sony A1 and said gyroscope data, you can create super-resolution images.

IIRC, some of Olympus' handheld super-resolution modes use both shake and sensor shift to increase resolution.


Synthetic aperture radar [1][2] uses this principle. In that case the imaging system is fixed on a moving aircraft or satellite.

I think in general satellite imaging is a good place to look for such implementations, since they have a naturally and predictably moving imaging system.

1: https://en.wikipedia.org/wiki/Synthetic-aperture_radar

2: https://www.youtube.com/watch?v=u2bUKEi9It4


SAR is similar but not the same, since there you're super-resolving in time rather than space. Also, in that instance it's just conventional MISR, since you're not driving the imaging system (more information is being passively collected as the target passes).


I think it’s more useful to think in terms of how well sampled the observations are relative to the size of the output space. SISR is very undersampled, and MISR is oversampled. SAR reconstruction techniques can fall in either bucket.


SAR is similar to the video technique mentioned; I agree it's not quite the same, but if the underlying assumptions hold it's still more estimate than "guess".


A major difference here is that SAR uses phase information, whereas to my knowledge optical techniques are not doing that.


Pentax DSLRs have this feature too. The same motors that are used to position the sensor and counteract camera shake are used to offset the sensor slightly while multiple images are taken.


Certain Hasselblad cameras have had this for a long time, and Pentax, Olympus, Panasonic, and Sony have it on various models, using the sensor shift image stabilization to implement it.


I just bought a camera that does it, so it definitely exists.

https://chriseyrewalker.com/the-hi-res-mode-of-the-olympus-o...


That's a really informative article; thanks for posting it.

I also have an Olympus E-M1 MkII, but I haven't tried the high resolution mode yet. You just gave me a TODO item!


I haven't tried it yet either. It really needs a tripod, and I don't have one.


Probably not exactly what he is talking about, but this also sounds similar to dithering, where with repeated measurements and random noise you can statistically estimate the value of a signal below the quantization level.


Indeed, it includes things like the Drizzle algorithm that has been used by Hubble Space Telescope astronomers for a while: https://www.stsci.edu/ftp/science/hdf/combination/drizzle.ht...


The iPhone 12 Pro physically shifts the sensor for image stabilization, so precise positioning is possible.


An ASUS phone (the ZenFone 2) had this hardware super-resolution feature you are talking about.


Every time someone brings up enhancing images I like to pull up this Red Dwarf clip:

https://www.youtube.com/watch?v=2aINa6tg3fo


That is great. I like to bring up this clip from Castle:

https://www.youtube.com/watch?v=PaMdXjTn9rc


I really loved the way Castle played around with typical police drama tropes, though I got a little confused about what the show was trying to be/do in later seasons.


Haha. I’d never even heard of this show. Will be checking it out.


Such a good show and highly underrated and unknown by so many. What a great clip.


I love the gag at the end too.

“Wouldn’t it have been easier to just look them up in the phone book?”

Pure genius


Lolol!


There are forms of super-resolution that certainly aren't guessing. For example, you can take a video of a subject and integrate over time, so that the motion of the subject over the sensor allows you to infer sub-pixel detail.

https://www.cs.huji.ac.il/~peleg/papers/icpr90-SuperResoluti...


They started their model with the RAW format, so the model should encode some interactions between the red/blue/green light sensors, and that can help generate genuine sub-pixel details. OTOH, this is machine learning: unless you specifically have some discriminators (just an idea) to counteract it, you don't really know how much of the result is genuine sub-pixel detail and how much is hallucination.


Turning a regular video into a super photo is different from turning a regular photo into a super photo.


Unfortunately, someone is going to wrap up super-resolution for critical tasks and sell it, likely causing many people harm, or at least inconvenience. I have already tried to talk some companies out of using it for police/surveillance-type work. People who do not understand the technology are determined to use it, and someone is going to.


I wish Adobe would use a different name for this that made it more obvious what was happening, something like "detail fill" or "detail interpolation".

I worry this is going to be a case where the marketing is at direct odds with public education efforts.


Lots of things are "only guessing." Auto color correction is only guessing. Unsharp mask is only guessing. Smart selection is only guessing. Content-aware fill is only guessing.

They're still useful tools to have in your toolbox as a photographer or designer, even for critical tasks, and I don't really see how this is different. There may be certain failure cases, but everything has failure cases.


The difference is that those kinds of guesses are not inventive.

It’s like how modern word-guessing while texting can have really weird results because the guess of what you meant to say has been turned into real words which creates new meaning.

Where previously you’d just have had a few mangled words it’s now been corrected into proper words, often with an unfortunate sexual innuendo.



Is there any proof that Ryan Gosling's face (or perhaps a photograph of Ryan Gosling's face) was in fact not there when the original photo was taken? :)


What I find interesting about that is that, after seeing the face in the super-resolution image, you can kinda see it in the original.


Nobody has billed it as anything more than just guessing. In the literature, it is frequently mentioned as “perceptually plausible” upscaling.


> Super-resolution is only guessing.

Machine learning is educated guessing based on previously seen data. As mentioned by others, there are ways to do super resolution that only use the data available. I can't think of any that can upscale a single image, although I have vague memories of seeing something about using moiré patterns to infer the higher-resolution texture of some features.


I feel like it's also ok for critical tasks if you're willing to accept that it isn't perfect. If all you have is a grainy photo, you'll only be able to make guesses yourself; why not have a superhuman guess too? (Because the people putting it to use would be morons about it, I know, let me dream)


For me, super resolution means combining multiple lower-resolution images to gain additional information and, from that, higher resolution.

It is especially something that some cameras can do by deliberately shifting the sensor.

They should not call this super resolution; at best call it emulated super resolution or artificial super resolution.


That reminds me of the old CSI episodes where they'd have grainy footage from a CCTV running at around SIF/240p, and the lead investigator would say "Enhance... Enhance... Enhance... There!" And the face of the killer would be clear as day, from inside a moving car a block away.


Nitpicky, but I think "It's okay for x, not for y" when describing nascent technology is a bit shortsighted.

Who knows how this evolves and what new applications people may devise? For today, I agree: it's just art.


>Photographer Jomppe Vaarakallio has been a professional retoucher for 30 years…

>To be clear, this isn’t a knock on the Gigapixel software. Vaarakallio tells PetaPixel that the software is “amazing” and he uses it all the time.


I don't think that it's a knock on the software, I think it's a knock on the common interpretation of what that software is doing.

Professional photo retouching is art. It's okay to use Gigapixel for an artistic task, it's not OK to use it to enhance a photo that you're going to show to a jury. That's what GP means by 'critical': use cases where it matters whether or not the pixels being added map to an objective reality rather than an algorithmic guess about what would look good.


There are serious use cases for super resolution in medical imaging for example.


I’d think medical imaging is exactly where you wouldn’t want to use this technique. If something isn’t clear on an X-ray, you don’t want to fill in the details by guessing like this software does.



Is this single example of someone using different software (by Topaz Labs) relevant to this article specifically? Or just every article about enhancement?


Example of critical tasks?


From the linked article:

>you may want to uncheck detect faces… unless you want Ryan Gosling popping up all over the place.

Sooo not really a case against super-resolution, just a funny result of having used the wrong settings


"Guessing", I think, is too strong a way to phrase what super resolution does. The broader concept of, for example, regularized solving of inverse problems is used widely in things like CT and MRI, where the reconstructed imagery is used for analysis. The regularization is effectively the part you're calling guessing, but I would phrase it as enforcing assumptions about the data. Neural-network-based approaches are similarly learning the distribution of the output data.
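
A toy version of that framing (my sketch, not how any CT/MRI product is implemented): reconstruct a signal from blurred, noisy measurements by minimizing a data-fit term plus a regularizer that encodes the smoothness assumption.

    import numpy as np

    n = 64
    x_true = np.zeros(n)
    x_true[20:40] = 1.0                       # ground-truth signal

    # Forward model A: each measurement averages 5 adjacent samples.
    A = sum(np.eye(n, k=k) for k in range(-2, 3)) / 5.0
    y = A @ x_true + 0.01 * np.random.default_rng(1).standard_normal(n)

    # Solve argmin_x ||Ax - y||^2 + lam * ||Dx||^2, where D penalizes
    # roughness -- the "assumptions about the data" part.
    D = np.eye(n) - np.eye(n, k=1)
    lam = 0.1
    x_hat = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)

Learned-prior approaches effectively swap the hand-written roughness penalty for a prior learned from data, which is where the "guessing" flavor comes in.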


For anyone interested in 'real' super resolution, we use these techniques to overcome the diffraction limit in microscopy (my field is neuroscience):

- https://en.wikipedia.org/wiki/Super-resolution_microscopy

- Stimulated emission depletion microscopy (STED): https://en.wikipedia.org/wiki/STED_microscopy

- Stochastic optical reconstruction microscopy (PALM/STORM)

- Structured illumination microscopy (SIM)

Here is one of my favorite STED imaging papers, looking at the skeleton of neurons: https://www.sciencedirect.com/science/article/pii/S221112471...


Another "real" technique I've seen is having sensors that do not have the individual phototransistors layed out in a nice periodic grid pattern but with aperiodic pattern like Penrose tiling (put PTs as vertices of kites/darts).

The interpolation techniques (making a photo out of hardware input) manage to get 10-15x more resolution out of that sensor layout compared to normal grid.

You also avoid Moire patterns with that too.


Wow! The 3D-Sim of the nuclear envelope at the wikipedia article was amazing but seeing the structure in the neuron is astounding. Do you know how long imaging takes for this? I assume the post-processing is slow but will video be possible someday?


I am least familiar with SIM, but you can do live-cell SIM imaging for sure. The processing is a bit computationally intensive but not so bad (and can always be done post-hoc).

The big thing you have to look out for is the light intensity killing the cells or bleaching your signals. One of our collaborators is actively working on on-the-fly SIM processing for live cell imaging.

A quick glance at PubMed suggests 11 Hz was doable several years ago https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2895555/ and also this (sorry, paywalled; I've heard Sci-Hub has the paper...) https://pubmed.ncbi.nlm.nih.gov/30478322/

STED is promising for live imaging too. Lots of beautiful pictures out there!


Yes, I'm disappointed by the name collision. Is there a tool that makes "real" superresolution photos easy? Is that built into photoshop as well under a different name?


Unfortunately not. For real super-resolution (i.e. resolving below the diffraction limit of light), all the current methods require expensive (and very dangerous) lasers and microscopes with all sorts of optics widgets, mirrors and computers. Lots of 'high resolution' imaging things are available for cameras, as well as some AI systems that will make up data for you so it looks better too!


I just wish Adobe CC wasn't the buggiest piece of software I've ever used.

I've had a number of issues over the years, but my current issue is that when I try to open CC the interface elements all freeze and are unclickable (even though the window is still scrollable – very strange behavior). So I went to uninstall it, but I can't because Photoshop is installed. So I went to uninstall Photoshop, but you guessed it, I can only uninstall PS through CC, which is unresponsive.

Smh.


They have a tool specifically crafted to remove their buggy software:

https://helpx.adobe.com/creative-cloud/kb/cc-cleaner-tool-in...

Note that this isn't the same as uninstalling everything via the official process since this leaves behind stuff like Adobe's Genuine client which verifies you're not using pirated software.


Nice, thanks for the tip, will give it a go!


People are doing this to themselves. Stop buying their shitty software, and see how quickly they start to fix it.

I don't understand why people are so obsessed with Adobe, since their software nowadays isn't that good. There are tons of alternatives out there that work better and do the same thing, if not more.

Is it just laziness/reluctance to learn something new?


Some of Adobe's other products have superior competition, but Photoshop is the flagship and despite its stable of bugs I don't think there's anything better. If you have alternatives to suggest I'm interested (I really hate Adobe). Ones I'm aware of:

Affinity Photo. Much more stable than Photoshop and I like it a lot, but there are things Photoshop does that Affinity doesn't and I can't think of anything that goes the other way.

Krita. Fairly sleek, especially for OSS. Becoming very competitive for digital illustration, but not (and not intended to be) a great photo editor.

paint.net. Good for fast edits but simplified, not a true competitor.

GIMP. Ancient, slow, ugly, clunky, severely lacking in features, and somehow even less stable than Photoshop.


A large part of it is that commercial use involves sharing files with other people. It's very hard to get consistent results if you are not using the same software. The file formats are mostly proprietary and complex. The effects applied are dynamic and probably also proprietary. It's just a mess, so everyone puts up with it and continues to use "the standard" software.

If you are just doing graphic work yourself, there are other applications like Affinity that can work, as long as you are not collaborating.


Well, I can't. I didn't choose to use this shitty software; my company is using it. Even if I make a case for replacing it with something better, my clients are still using it. I hate it with a passion, but it is hard (impossible?) to change the habits of others.


It's not only Creative Cloud itself but all the newer versions of their apps I recently used.

I needed to update my CV recently and expected to spend 1h in InDesign. I spent 6h in the end.

– InDesign crashes while saving and destroys my document. 1h lost.

– InDesign crashes while exporting a PDF of my document (9 pages). I hadn't saved. 1h lost.

– InDesign crashes (reproducible) when adding/inserting a page (mind you, that's page 10). First time this happened I hadn't saved for half an hour. I was really considering changing the text because I couldn't solve this. Then I found [1]. Quote:

> [...] after speaking to Adobe chat help, they asked me to send my file to them. They sent it back to me and everything went back to normal. [...] "File was corrupted , we recovered it by using scripts and then saved as IDML."

– Because of the above I had the idea of exporting to IDML. Re-importing then allowed me to add the page but I had subtle formatting errors where the last character before a tab or a newline on lines that had the font changed via a character style had the wrong style. Fixing this: 1h.

– When I re-arranged parts of the CV via copy & paste entire sections I copied lost the small caps/italic styles they had assigned (acronyms/names). Going through the entire document to fix this: 1.5h.

I should have known better. Less than two years ago I helped a friend do a snail mail mass mailing where we used a CSV file with addresses to create hundreds of (two page) letters. All in InDesign. Everything worked until we tried to export as PDF, for printing. The solution was to export as 'interactive' PDF and only export about ~100 pages at a time.

I bought Affinity Publisher already when the thing with the letters happened. But I naively believed updating my CV would be quick in InDesign.

In retrospect typesetting the CV from scratch in Publisher would have been the better choice.

Last week I helped a friend with a commercial that was mostly 3D and some motion graphics done in After Effects (Ae). We couldn't get it to render in After Effects 2019. It would run out of memory and then just not render the frame, or crash. In the end we exported the project for an older version and went back to an Ae CC version from six years before. That worked without any issues.

All this is just shocking. I used InDesign from 1.0 and it was not that bad, a decade ago. Ae ... the same. See above.

As of a recent update, Acrobat Reader (free version) refuses to let me open any document w/o signing into CC first. Another wtf.

What a friend of mine replied when he heard about my InDesign adventure:

> I'm on CS6 for anything Adobe. Just junk now.

[1] https://community.adobe.com/t5/indesign/indesign-crashes-whe...


Pixelmator Pro (happy customer here) has great superresolution without all the cloud subscription baggage. I think it’s fair to make comparisons, which I will leave to those who have CC subscriptions, but anyone doing so should realize that Adobe is being compared to a moving target as outside options and even DIY options are only getting better.

Yes we’re not talking about accuracy here, just perceived resolution, no need to hammer on that.


I agree with this. I’ve used the Pixelmator Pro feature a lot and it’s damn good.

I hear it’s just monstrously fast on the M1 too.


This sounds like the same approach as Gigapixel AI from Topaz Labs.

I haven't tried Gigapixel but I have used Topaz' Video Enhance AI, which is phenomenal. I've been using it to upscale old TV shows which never got an HD remaster, to UHD.

Right now it's running through the first episode of Firefly, converting from 540p to 2160p (540p because the Blu-ray rip was basically upscaled to 1080p from its original production, so I converted it to 540p first in Handbrake with zero noticeable loss in quality, since I used a near-lossless compression factor; this provides better upscaling):

https://i.imgur.com/hcRYM5n.jpg

When it's done I'll run it through Flowframes for framerate interpolation. Then maybe another pass in Handbrake to figure out an optimal size for the end file.

Then I'll run through the rest of the season using the same settings I tested with this first episode.


Videos can use inter-frame information to help infer sub-pixel details.

This post about "Super Resolution" is interesting because it starts with the RAW format (which contains information about camera sensor arrangements); hence, the machine-learned model should not only memorize artificial details (what hair should look like, what a tree leaf should look like, etc., using that to "hallucinate" higher-resolution details; I like to call it "hallucination" for that reason), but also the relationships in the complex interference of different sensors in their corresponding arrangements.

You can read more about the RAW format and why exposing RAW for photography is exciting (on everyday cameras, i.e. your phone) in this post: https://blog.halide.cam/understanding-proraw-4eed556d4c54


It looks like the character changed into a silk version of his shirt with no chest pocket.


Yeah, I'm not sure I have the settings quite right on this one yet. There are several AI models to choose from to get an optimal result, some of which have configurable parameters to control for this.

I noticed with this model that really fine lines will have a tendency to get smoothed out a little. There's a similar model which should pull a little more detail but typically this one seems to work best. It's less noticeable once the video is in motion, compared to a still image.

I also probably removed too much grain from this, hence the more 'silky' look. It's nice for skin but less so for textures.


I might be old and grumpy, but I prefer the left image to the right one? The right one is watercolory and overly smoothed, like a cheap beautify photo filter.


You can adjust the grain and denoise settings to change that, or switch AI models. Probably some other settings I'm not expert with as well. This was just what I had running at the moment, which was a first pass on the first episode.

I agree on the smoothness, it takes a little tweaking to get right.


That’s cool, thanks for sharing. How long does a conversion like that usually take?


In my personal experience with Topaz and a 3060 ti, usually 4-5 frames per second. Although it depends on the input and output resolutions.


A great misinterpretation of photography is that it's an objective medium. Any combination of lenses and film stocks (or equivalent) is going to produce but a flat, skewed representation of the three-dimensional world; it's been interpreted before anyone performs any processing, computational or analog.

Susan Sontag's "On Photography" is a great read on this topic for anyone marginally interested in not just photography, but art in general.


>Any combination of lenses and film stocks (or equivalent) is going to represent but a flat, skewed representation of the three-dimensional world

This is no different than any other sensor. It doesn't mean replacing data with guesses is better than the sensor's representation of the world, which is the issue at hand today.


An interesting thing is happening here.

Previously there was a clear line between scene-referred image data, which was treated as an objective record of a 2D slice of a 3D world by way of measuring light, and output-referred image data—one of the countless lossy adaptations of that data to fit the limitations of some particular medium (display, paper, etc.) in order to be actually viewed.

The scene- to output-referred data conversion is where objectivity inevitably went out the window, but not earlier—the original scene-referred data was mostly treated as immutable.

What these guys are doing actually happens at the demosaicing stage, and from what I understand the resulting “super resolution” image is still scene-referred—but it isn’t representing the actual captured light anymore! In other words, we’ll have raw images that are partially “guesses” and no longer an objective record.

This isn’t necessarily good or bad, but is somewhat of a paradigm shift I’d say.

As a side note, I wish Adobe released the mechanism so that it could be made one of the demosaicing methods available in open-source raw processors, but I take it this won’t be likely.


Interesting thought, does using a physical fisheye lens that warps in different way than the eye actually sees the same using a computer-generated fisheye effect on a source image?

Or any other sort of analog filter/lens that changes the picture for that matter


> Does using a physical fisheye lens that warps in different way than the eye actually sees the same using a computer-generated fisheye effect on a source image?

I can’t parse this for some reason, could you rephrase?

> Or any other sort of analog filter/lens that changes the picture for that matter

The glass on the camera affects the shape of the 2D slice of the 3D world captured by the camera and can attenuate light of different frequencies; technically the better you know which equipment was used, the stronger the element of objectivity to raw data captured by camera sensor.

Addendum: I take back most of my original comment. Adobe’s super resolution tool operates at the demosaicing stage, so image data it produces is no longer strictly scene-referred (it can’t be both demosaiced and scene-referred). The actual sensor capture is still scene-referred.

(I think we’ll see tools that take scene-referred data and output scene-referred data eventually though.)


More or less the same thing happens with sound and audiophiles. It's a lucrative business of selling placebos, dreams and hope, though.


Super resolution is adding data that is not there; it's asking an algorithm to produce part of the art.

At what point do we go from picture to painting?


My worry is when this gets used for something forensic.

I imagine this will tend to reproduce things in the dataset. E.g., upscaled blurry text may look more like fonts the model has memorized than like the original; upscaling a feather will fill in details from the feathers of more common birds; and upscaling blurred-out numbers will pick some numbers at random [1].

We need to make sure people don't rely on these details, e.g. in courts, HR reviews, when reddit sleuths try and investigate an incident, when someone looks for cheating partners etc.

[1] https://www.theregister.com/2013/08/06/xerox_copier_flaw_mea...


Yeah, the "uncrop" joke from red dwarf doesn't seem like a joke anymore (https://www.dailymotion.com/video/x2qlmuy).

Except it's going to be a disaster if it's used in security cameras. Those things have super low resolution, and software will be cheaper than upgrading. And models have huge biases.

Tons of fun.


Yeah, so much of forensic "science" is notoriously flawed, that this seems like a likely addition the usual junk-science pantheon of "gunshot analysis, footprint analysis, hair comparison and bite mark comparison" - https://innocenceproject.org/forensic-science-problems-and-s...

Personally, I think one reason we still do this has a lot to do with detective shows being so ridiculously popular that people think it's some sort of scientific process, when it's not. As a thought experiment, we'd probably have flat-eartherism as the dominant belief if something like 15 out of 20 broadcast TV shows were dedicated to glorifying it. This means the most dangerous thing about "Super Resolution" is that the public has already been "primed" by cop shows with the "enhance" feature.


You’re worrying about the wrong thing. In formal settings, the problem will be taken care of.

The bigger problem is informal settings. Propaganda, for one.


The way DNA testing is misused, I wouldn't be so sure about formal settings.


Good point! You have me reconsidering what I said.


Although you have already replied to the DNA comment, one must take care with bureaucracies always. They will drag on with some preconception and the citizen always loses.


> In formal settings, the problem will be taken care of

Forensic evidence has been and still is systematically abused:

> * a 2002 FBI re-examination of microscopic hair comparisons the agency’s scientists had performed in criminal cases, in which DNA testing revealed that 11 percent of hair samples found to match microscopically actually came from different individuals;

> * a 2004 National Research Council report, commissioned by the FBI, on bullet-lead evidence, which found that there was insufficient research and data to support drawing a definitive connection between two bullets based on compositional similarity of the lead they contain;

> * a 2005 report of an international committee established by the FBI to review the use of latent fingerprint evidence in the case of a terrorist bombing in Spain, in which the committee found that “confirmation bias”—the inclination to confirm a suspicion based on other grounds—contributed to a misidentification and improper detention; and

> * studies reported in 2009 and 2010 on bitemark evidence, which found that current procedures for comparing bitemarks are unable to reliably exclude or include a suspect as a potential biter.

> Beyond these kinds of shortfalls with respect to “reliable methods” in forensic feature-comparison disciplines, reviews have found that expert witnesses have often overstated the probative value of their evidence, going far beyond what the relevant science can justify.

(https://web.archive.org/web/20170120002449/https://www.white... page 16)

Even more:

* Tire and shoe prints: https://www.apmreports.org/story/2016/09/27/questionable-sci...

* Lie detector tests: https://en.wikipedia.org/wiki/Polygraph#Effectiveness

* Burn patterns: https://www.pbs.org/wgbh/frontline/article/forensic-tools-wh...


I am not worried about it this at all. It’s not likely to randomly incriminate a real person.


I am from Brazil.

On the first day of trials of deep-learning based facial recognition here, a random person was arrested because the algorithm confused that person with another one.

Even more stupid, is that the person with "outstanding warrant" was actually ALREADY in prison.

So yes, AI managed to arrest the same person, twice, one time the real person, one time a random look-alike.


Seems like a different situation - the facial recognition algo was wrong. Not the same as an AI resolving a face that resembles someone and then prosecuting that person on the basis of the image.


You realize that the idea is the same, right? AI made an incorrect determination and people ran with it.


You do understand the difference between arrest and prosecution? People are arrested on the basis of a weak mistaken identity all the time, no AI needed


I remember that time a guy's life got turned upside down because his fingerprints "matched" those found on a bomb. Despite the fact that he had no motive, disposition, or access - they were determined to convict, because to do otherwise would be to admit that fingerprint analysis doesn't enjoy the scientific foundation that DNA evidence does.

So yeah, real people have been harmed by bad matching algorithms.



Ok that’s pretty far removed from a prosecutor relying on that image though


Imagine a low-resolution face photo. Police find 3 men whose faces are similar to that photo. Now they super-resolve the photo, and suddenly the first person looks very similar to it, while the second and third do not. The first person is in danger because of an algorithm that was trained on some specific set of photos.


One can imagine a lot of future scenarios. In fact that describes is an entire genre of story writing known as “science fiction”


It isn't "science fiction" though, we have a mountain of examples that demonstrate how probabilistic AI fails when the training dataset doesn't perfectly align with the eventual application. It isn't restricted to AI either, there have been numerous studies showing that cross-race identification to be garbage data.

<Insert other race> all look the same: https://onlinelibrary.wiley.com/doi/abs/10.1002/acp.898

This bus is an ostrich: https://arxiv.org/abs/1312.6199

Racist autofocus: https://sitn.hms.harvard.edu/flash/2020/racial-discriminatio...

Google thinks black people look like gorillas: https://www.wired.com/story/when-it-comes-to-gorillas-google...

And here is an argument to look forward to: should the training dataset be built to represent the general population, or the specific subgroup that the software would most often encounter? Because that FBI UCR...


Out of curiosity, do you think forensic science methods in general are likely to randomly incriminate people?


Gosling will probably go to jail for a lot of crimes...


Photography has always been more artistry than reality. From film selection, lens choice, framing and cropping, color correction, and "dodging" and "burning" details in or out, it's always been a compromise between what the camera sees (which itself may not be reality if you're using physical in-front-of-the-lens filters or other distortive techniques) and what the photographer wants to convey. That's one of the reasons photojournalism often holds itself to ethical limitations with regards to how a photo can be manipulated after it's been taken. I'm almost certain that this kind of thing would at least walk that fine line, if not fall right over it.


Never mind the manipulations which can occur before the shutter is ever pressed. I'm having trouble finding links now, but I know there was controversy a few years ago about a picture of bombing rubble with an object (a dress form or something, maybe?) standing up in the middle of it, apparently by chance, which created a very artistic contrast; there were later questions about whether the photographer had in fact arranged the object that way rather than discovering and capturing a preexisting scene.

EDIT: Still can't find it, but here's a list which includes a number of other wartime photos which are proven or suspected to have been staged in various ways: https://militaryhistorynow.com/2015/09/25/famous-fakes-10-ce...


Exactly. That is also why claiming to only do "SOOC JPG" is ill-informed. People claiming it don't understand the process: to even produce a JPG, there has to be a process of interpretation already taking place.

When I volunteered for a small newspaper I did my usual, sometimes significant, editing (Lightroom-level, not PS) and didn't see anything wrong with it (neither did they; of course edits look better than plain JPGs). But I can appreciate how this becomes much more important with increasing range. Imagine if Pete Souza spoiled 8 years' worth of Obama presidency imagery just because he edited them in some obscure way.


On one hand as Ansel Adams said "You don't take a photograph, you make it.", and that still holds true today. You're influencing the picture when you choose your camera, your lens, your sensitivity (ISO), your color mode, your white balance, your raw software, your editor, etc etc all the way from the camera to the print.

On the other hand, there certainly is a difference between working with the information (pixels) you've captured, and inventing information by either drawing on the image or creating new data.

This methodology falls squarely into the gray area between those two.


Yeah, although technically the goal behind things like 'super resolution' is to add data that is not there anymore but probably was there before.

i.e. you had the original scene that was captured by a digital camera (a lossy operation) and then saved as an image file (often also a lossy operation), and then a tool like this makes an educated guess as to what information was lost in the 1st and 2nd steps.


You have to demosaic things anyway (as mentioned in the article) so it's not like you can escape algorithmic fudging of details.


Anyone who edits photos in Lightroom understands that modern photography captures both reality and artistry.

My goal when editing photos is to make them more clearly express how I felt or how I saw. This is often quite divorced from what shows up on the back screen of my camera.


I think photography has always been about artistry. You are taking a 2-dimensional slice of a 4-dimensional world. Interpretation is inherent.

Ansel Adams was surely no stranger to post-processing.

https://photofocus.com/photography/a-look-inside-ansel-adams...


There is a difference between normal post-production (cropping, defining colors, adjusting brightness, removing noise, ...) and adding details that just aren't there in the raw image.


How so?


One involves geometric meaning (e.g. edges, Bayer artifacts, clipped highlights), and the other involves semantic meaning (e.g. sky, faces, eyes). Dodging and burning is in the latter category and is equally deceptive.


Noise removal isn't semantic meaning?


It’s also very useful to be able to take a tighter crop of an existing image without worrying about pixelation


That was true of film photography and development too


A good experiment (which I cannot do): take a true photo of somebody, reduce it in size, and then "super-resolve" it. How much must you reduce it in order to produce a different "person" (or a non-person)?


You can also repeat this indefinitely: 1. scale down, 2. "super-resolve", 3. go to 1. In Germany we have this game "Stille Post", where you whisper a phrase to one child and then multiple children try to whisper the exact phrase to the next. Most of the time a completely different phrase comes out in the end.
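
The experiment is easy to script (a sketch with Pillow; super_resolve is a placeholder for whatever upscaler is under test, since none of these tools expose a standard API, and "portrait.jpg" is a hypothetical input file):

    from PIL import Image

    def super_resolve(img, factor):
        # Placeholder: swap in the upscaler being tested. Plain
        # Lanczos resampling stands in so the loop actually runs.
        return img.resize((img.width * factor, img.height * factor),
                          Image.LANCZOS)

    img = Image.open("portrait.jpg")          # hypothetical input
    for i in range(5):
        small = img.resize((img.width // 4, img.height // 4),
                           Image.LANCZOS)
        img = super_resolve(small, 4)
        img.save(f"round_{i}.jpg")            # watch the drift per round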


In the US we also play the same game, but call it "Telephone."


And we call it "Chinese Whispers" in the UK


It's always been more like painting than anything resembling a truthful representation of reality. In fact, the first ever photograph [1] much more closely resembles a painting than anything else, which is ironic.

All advancements have simply given us more control in how painterly we render our photographs, but have never _really_ brought us closer to the truth.

[1]: https://en.wikipedia.org/wiki/History_of_photography#/media/...


It is sort of. Their argument is that by taking into account the subpixel pattern of the sensor they can actually extract more detail than is readily visible in the picture.

Basically you can imagine that the blue subpixel is always at the top-left of the pixel. If you shifted the blue down and right by half a pixel, you would have a more "accurate" reproduction. In this way you can add a new pixel with the blue value closer to the right spot, then interpolate the blue of the original.

Of course you can also do logic such as detecting lines of different lightness and applying those on top.

So yes, especially with their machine learning they are adding new detail, but it is also likely detail that was already there and just could not be conveyed at the lower resolution. I wonder how different this would be from the simple approach of realigning the subpixels on a higher-resolution grid and interpolating the "missing" subpixels. This approach may look better but wouldn't add any data.
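
A rough sketch of that geometry (assuming an RGGB Bayer mosaic; NumPy): keep each raw sample at its true photosite location in a full-resolution RGB frame instead of pretending all three colors were measured at the same spot. The NaN holes are exactly what any demosaicing or super-resolution step has to fill in.

    import numpy as np

    def bayer_to_sparse_rgb(raw):
        """raw: (H, W) RGGB mosaic. Returns (H, W, 3) with each
        measured value at its true location and NaN elsewhere."""
        h, w = raw.shape
        rgb = np.full((h, w, 3), np.nan)
        rgb[0::2, 0::2, 0] = raw[0::2, 0::2]  # red photosites
        rgb[0::2, 1::2, 1] = raw[0::2, 1::2]  # green, on red rows
        rgb[1::2, 0::2, 1] = raw[1::2, 0::2]  # green, on blue rows
        rgb[1::2, 1::2, 2] = raw[1::2, 1::2]  # blue photosites
        return rgb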


Ansel Adams answering a similar question https://youtu.be/Ml__B0l9GIs?t=1514


Presumably you go from picture to painting the moment you open photoshop.

Photography for record keeping/science and photography for aesthetics/artistry diverge long before you get to techniques like super resolution. Which is still a fuzzy boundary because anyone who has taken a picture including a sunny sky can tell you raw photos generally don't capture how it looks to your eyeballs.


Not true; there's always far more data in an image than a quick glance can decipher. The astronomy field obviously pioneered numerous methodologies over the decades (many of which biologists do more shittily, speaking as a biologist who did so). In the end I'd argue a lot of these "super resolution" methods are just glorified deconvolution; not that that's a knock on these methods, but apparently it's not cool to call deconvolution deconvolution anymore.


Most are, but in biology/physics STED[1] and STORM are physics based methods for overcoming the diffraction limit[2]. STED is pure physics, no math/deconvolution/AI tricks.

[1] https://en.wikipedia.org/wiki/STED_microscopy

[2] https://en.wikipedia.org/wiki/Super-resolution_microscopy


They use extra tricks at the image-capture level to supercharge how much information you can load into the captured images (and then decipher them), but the methods are still related, at least in STORM: you're effectively deconvolving lots of sparse images and then merging them! Gaussian fitting of point sources is literally deconvolution, right? You're just estimating the PSF as a 2D Gaussian!


I am not qualified to get too in the weeds on the physics, but 'Resolution' is... complicated. Usually, when we talk about resolution we are talking about the ability to distinguish two points.

The 'resolution limit' (Abbe diffraction limit [1]) is related to a few things, but practically by the wavelength of the excitation light and the numerical aperture (NA) of the lens (d = wavelength/2NA). When we (physicists/biologists) say 'super resolution', we mean resolving things smaller than what was previously possible based on the Abbe diffraction limit. So rather than only being able to resolve two points separated by a minimum of 174nm with a 488nm laser and a 1.4NA objective, we can resolve particles separated by as little as 40-70nm with STED (but it varies in practice).

STED does not accomplish this by estimating PSFs and fitting Gaussians; it uses a doughnut-shaped depletion laser to force surrounding fluorescence sources into a 'depleted' state, and an excitation laser to excite a much smaller point in the middle of the depletion (see the doughnut in the STED Wikipedia page; Stefan Hell and Thomas Klar demonstrated this in 1999, and Hell later shared the Nobel Prize in Chemistry for it [2]).

I know PALM/STORM uses statistics, blinking fluorescence point sources, and long imaging times to build up a super resolution image based on the point sources and computational reconstruction.

Not as familiar with that one or SIM, but I know the "Pure physics/optics" folks I work with regard STED as the most pure physics based one that doesn't rely on fitting, deconvolution, or tricks (not that any of that is bad or wrong!).

[1] https://en.wikipedia.org/wiki/Diffraction-limited_system#The... [2] https://en.wikipedia.org/wiki/STED_microscopy


No, these are not deconvolution at all. They are using AI to add information to the image.


Super-resolution in consumer cameras is definitely adding detail to the image using prior information about the universe (hair has a pattern, what looks like an edge has to be sharper than what the image says, etc.). This is definitely the questionable, artsy aspect of this super-res boom. But the question I answered was more specific: it supposed that you can only make up info which doesn't exist, and that's definitely not true either. Modern super-res tech (especially the non-deep-learning kind) can extract more info from the image if you systematically account for the PSF of the camera, distortions, etc.


What makes you so confident that this "AI" doesn't end up just doing deconvolution?


Because the images being enhanced are in focus


We talked about this a few days ago?

https://news.ycombinator.com/item?id=26448986


I wish they would add a subscription plan or a pricing model for people who only use the software several hours per month.


Back in the day when they first had their sub model it was soooo much cheaper than it is now :(. Here's hoping some guy that writes super-resolution ML models on the side is a big fan of Gimp.


On a semi-serious note: would GIMP accept a merge request that depends on something like LibTorch [1] or the huge binary blobs like the ones produced by TFLite? I ask as an enthusiastic hobbyist in the ML space who is always looking for productive ways to procrastinate on my grad research...

[1] = https://pytorch.org/tutorials/advanced/cpp_export.html


GIMP allows plugins for features that don't belong in the main application. I've found a lot of researchers have written GIMP plugins for their work; I am under the impression that that's how many of them do their testing.


I randomly just found this but I don't know if it's available or where it's hosted:

https://deepai.org/publication/gimp-ml-python-plugins-for-us...

edit:

https://github.com/kritiksoman/GIMP-ML


There are non-Adobe alternatives with ML super-resolution features that are much cheaper. For example, Pixelmator Photo can do it for $8 (iPad, no subscription). They have a Mac version as well but I don't use macOS so I can't judge that.


I’ve used this a few times already and it works perfectly about 80% of the time. If your picture has a lot of grain or low-level noise it’s just going to make the noise worse, then you throw in a median to de-noise and you’ve lost the benefit. But it’s otherwise a nice tool to have, especially for older low-res photos (like 800x600 stuff you want to print).

I think longer term, stuff like neural rendering will make super resolution less relevant. If you can re-create a 3D scene from a single photo, or otherwise reconstruct the photo in a less resolution-dependent way, then playing the super resolution game is less interesting (for users and researchers alike).


Who has old 800x600 camera raw images lying around?


According to TFA, camera raw isn't required to use "Super Resolution", but regardless the answer to your question is "people who cropped their original photo".


I can’t get the Enhance feature to work without using the Camera Raw plugin, but yes it does work for ordinary jpegs and such. You have to open the file through Bridge. If you open Camera Raw through Photoshop (in the Filters menu) then the Enhance option doesn’t show up :P


I had a bunch of 25-year-old photos I wanted to print! At that res the enhancement really helps.


It seems like "Enhance!" is now an actual thing https://www.youtube.com/watch?v=Vxq9yj2pVWk


How is this different to Gigapixel AI ?

https://topazlabs.com/gigapixel-ai/


I enjoyed this review https://stephenbayphotography.com/blog/adobe-super-resolutio...

Both of the options had winners.


That is a really good comparison, although there might be a bit of confirmation bias on my part. For the most part I felt like the improvements are not really worth it compared to the artifacts when it fails. The rusty car texture is maybe the one place where it seemed to make a distinct improvement.


Does it have to be? I think the point is it's built into Adobe products now, which will be fantastic for Photoshop/Lightroom etc users.

Unless ofc you're genuinely curious as to how it's different to Gigapixel and not just knocking Adobe :)


Looking at the results so far, it's strictly better (less noisy, fewer subjective errors on fine detail).

If I want to up-rez strictly for printing purposes, the PS one looks like the winner. But, obviously, it's subjective.


If you’re having an existential crisis over interpolated/extrapolated/hallucinated images, and have been assuming that every stage of a camera throws away bits instead of interpolating, here is a list of stages in most camera pipelines that already try to interpolate information:

* demosaicing: interpolates color from nearby pixels. Each pixel gets just one of the three color components; the other two are interpolated (see the sketch after this list).

* decompressing JPEG: tries to guess information the compressor lost.

* black field correction: adjusts the brightness at every pixel to compensate for the different sensitivity of each pixel.

* de-vignetting: compensates for the border of the image being darker than the center.

* auto white balance: compensates for the fact that your eye’s color constancy doesn’t work as it would in a natural setting. This is a complicated way to get you to see the color you would have seen had you seen the full scene.

All of these try to recover some aspect of the signal that was irretrievably lost by a previous step. They do this by making plausible guesses.
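
As a concrete example of the first item, here is about the simplest possible demosaicing step (a hedged sketch; real pipelines use far more elaborate, edge-aware methods): fill each missing green value with the mean of its measured 4-neighbors.

    import numpy as np

    def interp_green(green):
        """green: (H, W) float array with measured values and NaN
        holes in a checkerboard pattern (as on a Bayer sensor).
        Each hole is filled by averaging its up/down/left/right
        neighbors."""
        g = green.copy()
        pad = np.pad(g, 1, constant_values=np.nan)
        neighbors = np.stack([pad[:-2, 1:-1], pad[2:, 1:-1],
                              pad[1:-1, :-2], pad[1:-1, 2:]])
        holes = np.isnan(g)
        g[holes] = np.nanmean(neighbors, axis=0)[holes]
        return g

Two of the three color values at every pixel of an ordinary photo come from a guess of roughly this kind.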


True. Our brain also makes similar guesses, e.g. in the blind spot, and in most optical illusions.


One thing I've always wanted was an AI algorithm for extracting additional detail from multiple RAW camera frames of the same scene. Many photographers will typically take 10-100 shots of a subject to ensure that at least one picture is a "keeper".

Keeping one frame and discarding the rest is a bit wasteful in a sense. The other frames have useful information that could be extracted by a well-trained AI to provide super-resolution, increased DoF, additional blur or shake reduction, etc...

I've deliberately kept all of my RAW frames, even the not-so-sharp or slightly shaky ones, because I foresee that at some point in the future this will be an automatic thing that tools like Adobe Lightroom will do that maximise the available image quality.

Storage is cheap, but I can never go back in time and photograph my memorable occasions with a better camera from the future...


Vaguely related: there is/was stuff like Photosynth [1], which (as far as I recall from researching this stuff as a CS undergrad) made use of Bundle Adjustment [2] and detected landmarks in images using the Scale-Invariant Feature Transform (SIFT) [3] to estimate the camera poses of photos relative to each other in a 3D scene, even from completely different cameras.

By projecting the discarded frames onto your keeper frame, you've potentially got multiple samples of the same pixels and their neighbours... A minimal sketch of this follows the links below.

(Although I guess the "every image is a plane" 3D transform is a bit simple and doesn't account for lens distortion)

[1] https://en.wikipedia.org/wiki/Photosynth

[2] https://en.wikipedia.org/wiki/Bundle_adjustment

[3] https://en.wikipedia.org/wiki/Scale-invariant_feature_transf...
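
That sketch, using OpenCV (a hedged illustration with hypothetical filenames; this is the standard feature-matching recipe, not Photosynth's actual code, and the homography bakes in the "every image is a plane" simplification mentioned above):

    import cv2
    import numpy as np

    a = cv2.imread("keeper.jpg", cv2.IMREAD_GRAYSCALE)    # hypothetical
    b = cv2.imread("discard.jpg", cv2.IMREAD_GRAYSCALE)   # filenames

    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(a, None)
    kp_b, des_b = sift.detectAndCompute(b, None)

    # Keep only unambiguous matches (Lowe's ratio test).
    matches = cv2.BFMatcher().knnMatch(des_b, des_a, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    src = np.float32([kp_b[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Project the would-be-discarded frame onto the keeper's pixel
    # grid, giving multiple samples of (roughly) the same scene points.
    warped = cv2.warpPerspective(b, H, (a.shape[1], a.shape[0]))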


When I see software enhanced photography like this, and look at the relatively primitive processing that is happening on my DSLR and my high priced mirrorless cameras, I realize that despite their huge sensors and amazing glass (which I paid a small fortune for), they will soon be outclassed by the simple smartphone in my pocket.

My wife routinely shoots photos on her Pixel 3 that get a better response on our family whatsapp group than the painstakingly post-processed DSLR shots I create and post.

This could be an indictment of my failures as a photographer. Or perhaps my family has no taste in photos. But it's also entirely possible that a Pixel 3 is all the camera you really need for family documentary work ... and I've wasted so much money on unnecessary hobby gear.


Well, is it a hobby, or is it family documentary? Feels like similar sentiment if you said that money spent on woodworking gear is wasted because family prefers IKEA furniture.


This is a good point to tell my wife. She wants me to sell my camera gear.


Previous discussion on the subject, five days ago: https://news.ycombinator.com/item?id=26448986


This sounds similar to what Nvidia's Deep Learning SuperSampling (DLSS) is doing in real time. It boggles the mind.


I would have liked it if they had more comparisons to ground-truth images instead of resampled ones. The foliage and bear comparisons also look like the "super resolution" images had their contrast boosted, which is either an awkward artifact of the scaling or misleading pre/post-processing.


Right. I wish somebody would do a proper technical analysis of this instead of all these current "well the super-res version looks good to me" reports.


Fast forward 10 years, and there's a jury for a supposed crime. The only evidence is an old and grainy picture of the suspect taken from far away.

And then the jury decides to use "Super Resolution" to "enhance" the picture, and the ML model decides that what it saw was a gun instead of a rose.


One of those happy moments where science fiction becomes real: https://blog.adobe.com/hlx_ea7b90bf2b9492a9fdfdcbe74b3197ca1...


> Using Super Resolution is easy — right-click on a photo (or hold the Control key while clicking normally) and c̶h̶o̶o̶s̶e̶ ̶“̶E̶n̶h̶a̶n̶c̶e̶…̶”̶ ̶f̶r̶o̶m̶ ̶t̶h̶e̶ ̶c̶o̶n̶t̶e̶x̶t̶ ̶m̶e̶n̶u̶ say "Enhance"


Unmesh from Piximperfect did a nice review and comparison https://www.youtube.com/watch?v=cfTbrJP5TXs


There are many open source projects on github which achieve comparable results (with the added flexibility of being able to train your own model).


Finally, reality catches up with spy movies and police-procedural TV shows.


ENHANCE!!


CSI technology





