Edge detection is used a lot in 2D and 3D graphics, microscopy, telescopy, photography, and a ton of other fields.
If this could be done almost instantaneously and without additional energy, it would be very useful.
I can imagine having tiny analogue edge-detection components in cameras, microscopes, telescopes, and maybe even graphics cards.
This could be a part of the future of computing - tiny specialized analogue components for specific tasks.
On the other hand, maybe digital compute, whether silicon or futuristic alternatives, will always be cheap enough that the economics never work out - it'll always just be cheaper to throw more general-purpose compute at a problem.
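For reference, here's a minimal sketch of the digital baseline such a component would be competing against - a plain NumPy Sobel filter (the choice of Sobel and NumPy is just mine for illustration, not anything from the article):

    import numpy as np

    def sobel_edges(img: np.ndarray) -> np.ndarray:
        """Gradient-magnitude edge map of a 2D grayscale image."""
        p = np.pad(img, 1, mode="edge")  # pad so output keeps the input shape
        # Horizontal and vertical Sobel responses built from shifted views.
        gx = (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]
              - p[:-2, :-2] - 2 * p[1:-1, :-2] - p[2:, :-2])
        gy = (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:]
              - p[:-2, :-2] - 2 * p[:-2, 1:-1] - p[:-2, 2:])
        return np.hypot(gx, gy)  # edge strength per pixel

Cheap per-pixel arithmetic like this is exactly the kind of work that's easy to throw general-purpose compute at, which is why the economics question matters.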
To add to this... some of what they are calling edge detection here seems to overlap with what has already been implemented in microscopy using phase contrast... which has been around since 1932.
I'm not really sure how any of this applies to software development, since they detected edges in actual physical films, which have different layers of chemicals. It's still a very interesting approach, and that knowledge can probably be transferred to other fields (humans consist of layers of chemicals that differ from those of cars, houses, and trees, for example), but could you and other people actually read the paper before writing anything here?
PS The title of the article is borderline clickbait, since what "images" means nowadays does not correspond to what the article actually works with.
You do raise a good point, but for certain applications like photography, microscopy, and telescopy this can be avoided by putting the analogue chip before the digital conversion happens, so that you receive an image with layers: one containing color data, another containing edge data.
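As a rough sketch of what consuming that kind of output could look like (the 4-channel layout here is entirely hypothetical, not something any real sensor emits today):

    import numpy as np

    # Hypothetical frame from such a sensor: H x W x 4, where channels 0-2
    # are ordinary RGB after the ADC and channel 3 is an edge-strength map
    # produced in the analogue domain before digitisation.
    def split_frame(frame: np.ndarray):
        color = frame[..., :3]       # regular color image
        edges = frame[..., 3]        # precomputed edge layer
        mask = edges > edges.mean()  # quick binary edge mask, no software filtering pass
        return color, edges, mask

The point is just that downstream software would read the edge layer instead of recomputing it.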
I'm fairly sure that modern cameras already have multiple layers for depth and color.