
Kind of interesting. Somebody has to do it, I guess. But it's not as flashy as they think - refocus the picture? Or just focus right the first time. Zoom? Enough megapixels covers that, and what else? Nothing, I suppose.

We've seen some really interesting stuff on HN about tracking through crowds, reconstructing images from fragments, etc. If these folks can do anything like that, they aren't showing it.



It's extremely interesting, for two groups of reasons: (1) creative possibilities (playing with depth of field is my favorite part of photography); (2) market applications.

Regarding (2), if I understand the paper properly, this should allow for a massive increase in lens quality while also allowing lenses to be much smaller. Each is worth tons of money; together, it's massive. As a guy who carries around a $1900 lens that weighs 2.5 lbs because of the creative options it gives me that no other lens can match, this appeals to me greatly!


Big lenses are always better, as they collect more light. And if one uses a 10-megapixel sensor in a light field camera, one has to reduce the resolution of the output image by roughly a factor of 10 in each direction (depending on what microlenses one chooses).
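
To put rough numbers on that tradeoff (the 10x10 pixels-per-microlens figure here is just an illustrative assumption; real designs vary):

    # back-of-the-envelope plenoptic resolution tradeoff (illustrative numbers only)
    sensor_pixels = 10_000_000      # 10 MP sensor
    pixels_per_lenslet = 10         # assume 10 x 10 sensor pixels behind each microlens

    output_pixels = sensor_pixels / pixels_per_lenslet**2   # one output pixel per lenslet
    print(f"output image: ~{output_pixels / 1e6:.1f} MP, "
          f"with {pixels_per_lenslet}x{pixels_per_lenslet} directional samples each")
    # -> output image: ~0.1 MP, with 10x10 directional samples each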


Before I read the paper, I was adamant that "more light is better." But read the paper:

"We show that a linear increase in the resolution of images under each microlens results in a linear increase in the sharpness of the refocused photographs. This property allows us to extend the depth of field of the camera without reducing the aperture, enabling shorter exposures and lower image noise."

You're right that you still need good, small sensors to enable good, small lenses, but my ultimate point is that digital camera sensors scale with advances in silicon. Lens technology is much, much slower to advance. The more of this we can do in software (and thus, silicon) the better.
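
For anyone curious what "doing it in software" looks like: refocusing a light field is conceptually just shifting and summing the sub-aperture views. A minimal NumPy sketch, assuming the raw capture has already been decoded into a (U, V, H, W) array of views (integer-pixel shifts only; a real implementation would interpolate):

    import numpy as np

    def refocus(views, alpha):
        """Shift-and-add refocus. views: (U, V, H, W) sub-aperture images,
        alpha: ratio of the new focal plane to the original one."""
        U, V, H, W = views.shape
        out = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                # each view is shifted in proportion to its angular offset from the center
                du = int(round((u - U // 2) * (1 - 1 / alpha)))
                dv = int(round((v - V // 2) * (1 - 1 / alpha)))
                out += np.roll(views[u, v], (du, dv), axis=(0, 1))
        return out / (U * V)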


Related to this, and presumably not common knowledge, is that there was recently a breakthrough in camera CMOS sensor technology. A few companies now offer scientific cameras (price tag around 10,000 USD, for example http://www.andor.com/neo_scmos) that allow readout at 560 MHz with 1 electron per pixel of readout noise, as opposed to 6 electrons per pixel in the best CCD chips at 10 MHz.

This means one can use the CMOS in low-light conditions and at an extremely fast frame rate (the above camera delivers 2560 x 2160 at 100 fps). You will actually see the Poisson noise of the photons.
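
A quick way to see why 1 e- vs. 6 e- of read noise matters at those light levels (made-up signal level, just to show which noise source dominates):

    import numpy as np

    rng = np.random.default_rng(0)
    mean_photons = 5        # a very dim pixel: ~5 photoelectrons per frame

    shot  = rng.poisson(mean_photons, 100_000)       # Poisson photon noise only
    scmos = shot + rng.normal(0, 1, shot.shape)      # + ~1 e- read noise (sCMOS)
    ccd   = shot + rng.normal(0, 6, shot.shape)      # + ~6 e- read noise (CCD)

    for name, x in [("shot only", shot), ("sCMOS", scmos), ("CCD", ccd)]:
        print(f"{name}: SNR ~ {mean_photons / x.std():.2f}")
    # shot only ~2.2, sCMOS ~2.0, CCD ~0.8 -- the read noise buries the signal on the CCD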

Unfortunately, representatives of those companies (the few I spoke with) don't seem too eager to bring these sensors to mobile phones.


Sounds like you guys should be writing their marketing prose.


Refocusing the picture after the fact isn't just about being able to focus "right" after the fact, so it's not fair to just compare it with "focusing right the first time". With normal cameras, when you focus, you lose information, and it is simply not possible to do what they do in their demos, namely, capture a continuous range of focal planes. (With several cameras or more than one lens, you can capture multiple discrete focal planes, but not a continuous range like this.) This is only possible because they're capturing 3-dimensional information about where each object is.

Granted, their demo isn't impressive; they're underutilizing their technology, and honestly I can't think of a better demo either. But don't be misled. This light field camera is capturing far more information. Meaningful information. I wonder if it's possible to, like, create 3D models of objects in these images? That would probably be more "computational camerawork". What's impressive is that it could be done after the fact.
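
If you want to poke at that extra information yourself, the raw sensor image is basically an H x W grid of tiny n x n "directional" patches, one per microlens. A rough sketch of pulling the sub-aperture views back out (this assumes a perfectly aligned square lenslet grid, which real hardware doesn't give you):

    import numpy as np

    def decode_lenslets(raw, n=10):
        """raw: (H*n, W*n) sensor image with n x n pixels under each microlens.
        Returns (n, n, H, W): one low-resolution view per viewing direction."""
        H, W = raw.shape[0] // n, raw.shape[1] // n
        return raw.reshape(H, n, W, n).transpose(1, 3, 0, 2)

The differences between neighbouring views are (roughly) parallax, which is exactly the depth information you'd need to start fitting 3D geometry.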


How about macro photography? When you get really close to something, your depth of field shrinks. This led to the invention of focus stacking. If, instead of having to take 4 pictures, you can now take only 1, you can capture incredible things.

https://secure.wikimedia.org/wikipedia/en/wiki/Focus_stackin...
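
Focus stacking itself is easy to prototype, which gives a feel for what a single light-field exposure would replace. A naive per-pixel version for grayscale frames (no alignment between frames, which a real stack needs):

    import numpy as np
    from scipy.ndimage import laplace

    def focus_stack(images):
        """For each pixel, keep the frame with the strongest local detail."""
        stack = np.stack(images).astype(float)                 # (N, H, W)
        sharpness = np.stack([np.abs(laplace(im)) for im in stack])
        best = np.argmax(sharpness, axis=0)                    # sharpest frame per pixel
        return np.take_along_axis(stack, best[None], axis=0)[0]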


This is indeed an interesting topic. Especially with high numerical aperture objectives, incredible things are possible.

One can put a diffractive optical element in front of the sensor and obtain 25 instantaneous images, each at a different depth.

http://waf.eps.hw.ac.uk/research/live_cell_imaging.html

Couple this with high-resolution techniques and you're at the current research frontier, possibly able to answer the question:

What happens in a synapse?

It is theoretically possible to image at 40 nm resolution at 100 fps and therefore see the transport of vesicles.

One can expect important discoveries about how the brain works using these techniques.


Mark Levoy (and his group) does all these things. However, if one wants to track through crowds, one needs a much bigger aperture. So one would combine many small cameras into an array, as opposed to putting a microlens array in front of the sensor.



