
Author here -- Generally, in single-image super-resolution we want to learn a prior over natural high-resolution images, and for that a large and diverse training set is beneficial. Your suggestion sounds interesting, though it's more reminiscent of multi-image super-resolution, where additional images contribute additional information that has to be registered appropriately.

That said, our approach is actually trained on a (by modern standards) rather small dataset, consisting only of 800 images. :)




It feels like it's multishot NL-means, then immediately those pre-trained "AI upscale" tools like Topaz, with nothing in between. Like, if I have 500 shots from a single session and I would like to pile the data together to remove noise and increase detail, preferably starting from the raw data, then: nothing? The only people doing something like that are astrophotographers, but their tools are... specific.

But for "normal" photography, it is either pre-trained ML, pulling external data in, or something "dumb" like anisotropic blurring.
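The core of the burst-stacking idea described above is simple: once the frames are registered, averaging N independent noisy exposures cuts the noise standard deviation by a factor of sqrt(N). A minimal NumPy sketch (assuming pre-aligned frames; `stack_frames` is a hypothetical helper, and a real pipeline would first register the frames, e.g. via phase correlation, and work on the raw mosaic):

```python
import numpy as np

def stack_frames(frames):
    """Average a burst of pre-aligned exposures to suppress noise.

    Averaging N independent noisy frames reduces noise variance by N
    (standard deviation by sqrt(N)). Registration is assumed to have
    been done already; this only does the merge step.
    """
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    return stack.mean(axis=0)

# Toy demo: 500 noisy copies of a flat gray frame.
rng = np.random.default_rng(0)
clean = np.full((64, 64), 0.5)
frames = [clean + rng.normal(0.0, 0.1, clean.shape) for _ in range(500)]
merged = stack_frames(frames)

# Residual noise should be roughly 0.1 / sqrt(500), i.e. far below
# the per-frame noise level of 0.1.
print(float(np.abs(merged - clean).std()))
```

This recovers detail only in the sense of averaging out noise; unlike single-image ML upscalers, it adds no information beyond what the burst actually contains.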


I'm not a data scientist, but I assume that having more information about the subject would yield better results. In particular, upscaling faces doesn't produce convincing outcomes; the results tend to look eerie and uncanny.



