Could this be used for completely automated extraction (cutting out) of objects based on the focus range? I think with a clever algorithm that analyzes the sharpness of all these layers it might be possible.
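To make that concrete, here is a minimal sketch of such a sharpness analysis, assuming the light field has already been rendered into a focal stack (one image per refocus depth). The function and parameter names are hypothetical, not taken from any particular light-field tool: each pixel is labeled with the slice in which it appears sharpest, and a mask is built from the slices that fall inside the wanted focus range.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def sharpness_map(image):
    """Local focus measure: mean of the squared Laplacian in a small window."""
    lap = laplace(image.astype(np.float64))
    return uniform_filter(lap ** 2, size=9)

def depth_labels(focal_stack):
    """Assign each pixel the index of the stack slice where it is sharpest.

    focal_stack: array of shape (n_slices, height, width), grayscale,
    one slice per refocus depth (a hypothetical pre-rendered stack).
    """
    sharpness = np.stack([sharpness_map(s) for s in focal_stack])
    return np.argmax(sharpness, axis=0)

def mask_for_focus_range(focal_stack, near_slice, far_slice):
    """Binary mask of pixels whose sharpest slice lies in [near_slice, far_slice]."""
    labels = depth_labels(focal_stack)
    return (labels >= near_slice) & (labels <= far_slice)
```

A truly clever algorithm would need more than this: the focus measure is unreliable in textureless regions, so the label map would have to be smoothed or regularized before it could drive an automatic cut-out.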
I don't know whether this technique also extends to moving images, but if so, maybe it could be used to composite them automatically, without the need for a green screen at all. Basically, you would be separating the image layers based on distance instead of chroma.
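As a rough sketch of what that could look like per frame (again hypothetical, building on the mask from the snippet above): the matte comes from a distance range instead of from a color distance, so no green screen is involved.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_key_composite(foreground, background, depth_mask, softness=2.0):
    """Composite a frame over a new background using a depth-derived matte.

    foreground, background: float RGB images of shape (H, W, 3) in [0, 1].
    depth_mask: boolean mask of pixels inside the wanted distance range
                (e.g. from mask_for_focus_range above), computed per frame.
    softness: Gaussian blur of the matte edges, standing in for the
              color-distance falloff a chroma key would use.
    """
    alpha = gaussian_filter(depth_mask.astype(np.float64), softness)
    alpha = np.clip(alpha, 0.0, 1.0)[..., None]   # (H, W, 1) for broadcasting
    return alpha * foreground + (1.0 - alpha) * background
```

The open question is whether a depth-derived matte can be made clean enough at object edges and around hair, where a chroma key is usually very forgiving.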
So for sports photography it seems very useful.
Replacing the green screen doesn't seem to make sense.
For macroscopic objects, depth reconstruction with two cameras, as in the Kinect, seems like the better alternative.
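For reference, the triangulation behind such a two-camera setup (and behind projector-plus-camera sensors like the Kinect) is just depth = focal_length x baseline / disparity. A minimal sketch, assuming rectified views and a disparity map computed elsewhere (e.g. by block matching):

```python
import numpy as np

def disparity_to_depth(disparity, focal_length_px, baseline_m):
    """Triangulate per-pixel depth from a stereo disparity map.

    disparity: horizontal pixel shift between the two rectified views
               (computed elsewhere, e.g. by block matching).
    focal_length_px: focal length of the rectified cameras, in pixels.
    baseline_m: distance between the two camera centers, in meters.
    Returns depth in meters; zero-disparity pixels are set to infinity.
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth
```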
Object extraction seems like a nice idea. I can't remember having read anything about this. In principle it should be possible to recover an unsectioned stack of images.
One can then use an iterative algorithm to subtract the in-focus information of one slice of the stack from each of the other slices and end up with a deconvolved stack of sectioned images.
Then one could delete one object from the stack and recalculate the superposition of blurred sectioned images to obtain a reconstruction of the scene without that object.
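Here is a toy sketch of that idea, assuming a simple Gaussian defocus model whose blur grows with the separation between planes; the PSF model and all of the names are illustrative, not taken from any published sectioning method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_for_defocus(image, plane_distance, sigma_per_plane=1.5):
    """Toy defocus PSF: a Gaussian whose width grows with the plane separation."""
    if plane_distance == 0:
        return image
    return gaussian_filter(image, sigma_per_plane * plane_distance)

def section_stack(observed, n_iter=10):
    """Iteratively estimate sectioned (in-focus-only) slices from a blurred stack.

    observed: array (n_slices, H, W); each slice contains its own plane in
    focus plus defocused light from every other plane.  Each iteration
    subtracts the estimated out-of-focus contributions of the other slices.
    """
    n = observed.shape[0]
    sectioned = observed.astype(np.float64).copy()
    for _ in range(n_iter):
        new = np.empty_like(sectioned)
        for k in range(n):
            others = sum(blur_for_defocus(sectioned[j], abs(j - k))
                         for j in range(n) if j != k)
            new[k] = np.clip(observed[k] - others, 0.0, None)
        sectioned = new
    return sectioned

def resynthesize_without_object(sectioned, object_mask, object_slice):
    """Delete one object from its sectioned slice and re-blur the whole stack."""
    edited = sectioned.copy()
    edited[object_slice][object_mask] = 0.0   # erase the object in its own plane
    n = edited.shape[0]
    resynth = []
    for k in range(n):
        resynth.append(sum(blur_for_defocus(edited[j], abs(j - k)) for j in range(n)))
    return np.stack(resynth)
```

In practice the defocus blur of a real camera is not Gaussian and this kind of iterative subtraction tends to amplify noise, so a real implementation would need a measured PSF and some regularization.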
Doing this properly is quite complicated, though. Just imagine removing a wine glass from a scene: one would need to delete all the rays that went through the wine glass and bend them as though the glass weren't there.
One can argue that polarization and absorption effects would be very hard or even impossible to handle correctly.
Certainly light fields contain A LOT of potential.