While the described window is not a hologram per se, and I think the hologram-like effect could easily break for other images (i.e. it cannot reliably represent depth), it should be noted that it has a certain relation to holograms. A hologram modifies a plane wave in a way that reproduces the wavefront from a recorded scene. Usually this is done by capturing an interference pattern, but alternatively a hologram can introduce phase shifts by varying its thickness (ideally one would use both phase shift and absorption). I think it should be possible to create a hologram with the same setup by changing the optimization function, although I am not sure whether your setup can carve precisely enough to reproduce sufficiently complex scenes. Try starting with several dots, each at a different distance from the hologram plane and each radiating a radial wave. Thinking of a diverging lens as a hologram representing one such point should help with intuition.
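To make the "several dots" exercise concrete, here is a rough sketch (my own; the wavelength, refractive index, and dot positions are all made-up parameters) of turning the summed radial waves into a carved thickness profile:

```python
import numpy as np

# Sketch: thickness map for a phase-only hologram of a few point sources.
# All parameters (wavelength, index, dot positions) are made up.
lam = 550e-9               # wavelength in metres (green light)
n = 1.5                    # refractive index of the carved material
k = 2 * np.pi / lam

# Point sources behind the hologram plane: (x, y, depth) in metres
points = [(0.0, 0.0, 0.20), (0.01, 0.0, 0.30)]

x = np.linspace(-0.01, 0.01, 501)
X, Y = np.meshgrid(x, x)

# The desired field is a sum of radial (spherical) waves, one per point;
# the plate must imprint the phase of that field onto an incoming plane wave.
field = sum(np.exp(1j * k * np.sqrt((X - px)**2 + (Y - py)**2 + pz**2))
            for px, py, pz in points)
phase = np.angle(field)

# Extra optical path through thickness t is (n - 1) * t, so the carved
# depth profile is the wrapped phase divided by k * (n - 1), Fresnel-style.
t = (phase % (2 * np.pi)) / (k * (n - 1))
print(f"max carving depth: {t.max() * 1e6:.2f} microns")
```

With these numbers the wrapped profile is at most lam / (n - 1), about 1.1 microns deep, which hints at why holographic-grade carving is so much more demanding than the caustic approach.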
That brings to mind this description of a technique for hand-engraved holograms[1].
> The required tools are so simple that ancient peoples could have drawn these images in hardened sooty resin pools with wooden tools, had they but known the trick.
You can form this hologram trivially by considering each pixel as a source of rays in a variety of directions. You might then expect a series of hemispherical shapes on the surface, with areas of the dome missing where rays should not be fired. In this case you hit the age-old problem of holograms: the sheer volume of information that needs to be encoded onto the surface for a hologram of reasonable resolution. Whether there is some more subtle surface that reproduces the light field but exploits its compressibility to reduce the surface complexity, I don't know. It would be fun to try to optimise for it, but alas I have too many things to do already. How does one get grad students?
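For the curious, the arc-per-point idea behind those hand-engraved holograms can be sketched in a toy model (my own simplification; taking arc radius proportional to depth is an assumption, but it captures why the glint carries a depth cue):

```python
import math

# Toy model of the scratch-hologram idea: each 3D scene point becomes one
# circular scratch, and the specular glint on that scratch slides along the
# arc as the viewer moves.  Radius proportional to depth is my own
# simplification; the point is that glint parallax scales with arc radius.

def glint_x(cx, radius, view_deg):
    """Horizontal glint position on an arc centred at cx, for a viewer
    rotated view_deg degrees off axis (toy small-scene geometry)."""
    return cx + radius * math.sin(math.radians(view_deg))

# Two points at the same (x, y) but different depths (arc radii 1 and 3):
near, far = 1.0, 3.0
for theta in (-20, 0, 20):
    print(theta, glint_x(0.0, near, theta), glint_x(0.0, far, theta))
```

In this toy model the deeper point's glint moves three times as far for the same head motion, which is exactly the parallax cue the engraved arcs exploit.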
For more information, the parent PDF describes a similar process using reflection discovered in China.
If you cast and polish a bronze mirror with raised letters/shapes on the back, the very slight deformations caused by polishing over the different thicknesses will create an image in the reflected caustics, despite the fact that the mirror appears smooth when using it as a mirror.
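A toy 1D ray model (my own, with invented bump dimensions) shows how a deformation far too slight to see can still produce visible bright bands in the reflection:

```python
import numpy as np

# Toy 1D model of the mirror: a nominally flat surface with a bump h(x)
# reflects parallel light onto a screen at distance d.  A ray hitting x is
# deflected by twice the local slope, landing near x + 2*d*h'(x), so the
# local brightness scales like 1 / |1 + 2*d*h''(x)|.  Bump size is invented.
d = 3.0                                  # screen distance, metres
x = np.linspace(-0.05, 0.05, 200001)     # 10 cm wide mirror, dense rays
h = 1e-6 * np.exp(-(x / 0.005)**2)       # a 1-micron-tall polishing bump

slope = np.gradient(h, x)
landing = x + 2 * d * slope              # small-slope reflection

counts, _ = np.histogram(landing, bins=400, range=(-0.05, 0.05))
print("brightness ratio (max/mean):", counts.max() / counts.mean())
```

Even though the bump is only a micron tall over half a centimetre (invisible when you use it as a mirror), in this sketch the ray density on the screen nearly doubles in the bright band.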
I've thought about it for an hour now, but I don't understand how the given algorithm works.
I understand Step 1, growing and shrinking the cells such that the area of the cell on the window is proportional to the brightness of the corresponding cell on the image plane. Intuitively this feels like the cells on the window represent the total light budget, and growing a cell means taking a larger proportion of the incoming light to aim towards a given point.
But I'm stuck at Step 2. If a window cell has a single normal, i.e. if it's a flat plane, I don't see how it would increase the brightness. As the author describes earlier, the brightness depends on the second derivative of the height, not the first. A larger flat window cell will not make for a brighter image cell; it will simply make the image cell larger. Brightness could be increased by having more window cells aimed at the same point, not by larger cells.
The way I understand this could work is if the second step of converting the window cells into normals is done at a much higher resolution than the given map, and the target point is fixed for each cell. That way, the normal could vary over the course of a window cell, and every cell would function as a tiny lens, where now bigger lenses would in fact lead to brighter spots.
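A quick 1D toy ray trace (my own, using a small-angle refraction approximation) seems to back this up: a tilted flat facet shifts the bundle without shrinking it, while a curved facet of the same size focuses it:

```python
import numpy as np

# Toy 1D check: a flat (single-normal) facet only redirects a parallel
# bundle, while a curved facet concentrates it.  Refraction is approximated
# by "deflection angle = (n - 1) * surface slope" (small angles throughout).
n, L = 1.5, 0.20                       # index and image distance, my guesses
x = np.linspace(-1e-4, 1e-4, 1001)     # one 0.2 mm facet

def landing_spread(height):
    slope = np.gradient(height, x)
    landing = x - L * (n - 1) * slope  # where each ray lands on the image
    return landing.max() - landing.min()

flat = 0.01 * x                        # tilted plane: one normal everywhere
curved = x**2 / (2 * (n - 1) * L)      # parabola chosen to focus at distance L

print("flat facet spread:  ", landing_spread(flat))
print("curved facet spread:", landing_spread(curved))
```

The flat facet's bundle stays 0.2 mm wide (just shifted), while the curved one collapses to nearly a point, matching the "every cell as a tiny lens" picture.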
I assume I'm missing something, can anyone tell me where I go wrong?
Author here. That's a fantastic question! I think you're right that a better solution would include adding curvature to the individual "pixels" in the lens mesh. Unfortunately I don't know how to manufacture anything with microstructure that small! The microlenses would need to be of order .2mm by .2mm square and have curvature that is very slight, because the image plane pixel is about 20 cm away. Perhaps this could be achieved by choosing a ball-nose machine tool of the exact right radius and sweeping it back and forth in the channels formed between the "pixels"? It would leave a grid of tiny features that might just get the job done!
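A quick back-of-envelope (assuming acrylic with n = 1.5 and treating each pixel as a thin plano-convex lens) gives a sense of just how slight that curvature is:

```python
# Back-of-envelope for the proposed microlenses, assuming acrylic (n = 1.5)
# and treating each 0.2 mm pixel as a thin plano-convex lens focused on the
# image plane 20 cm away.  Thin-lens approximation: R = (n - 1) * f.
n = 1.5
f = 0.20                    # focal length ~ image distance, metres
a = 0.2e-3 / 2              # half-aperture of one 0.2 mm pixel

R = (n - 1) * f             # required radius of curvature of each lenslet
sag = a**2 / (2 * R)        # depth of the spherical cap across one pixel
print(f"radius of curvature: {R * 100:.0f} cm")
print(f"sag across one pixel: {sag * 1e9:.0f} nm")
```

So each lenslet face would need roughly a 10 cm radius of curvature, with only about 50 nm of depth variation per pixel, which shows why a ball-nose pass of "the exact right radius" is such a delicate proposition.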
As is, I just gloss over that completely and don't address it. So tiny, lone, bright pixels end up more smeared than I'd like.
I don't think you really need to carve individual pixels. If you've got the logic worked out to ensure the light falling at (x,y) ends up at (u,v) then the part you're missing (assuming we're dealing with sunlight for now) is a function f(x,y) = (u,v) that transforms a constant density into the density corresponding to the image you desire.
Using the change-of-variables formula, this basically means that we want h(f(x,y)) |det Df(x,y)| to be constant: the uniform incoming density gets divided by the Jacobian determinant, so it lands on the target density h exactly when that product is constant. This is quite easy in 1D (it's just the inverse cumulative distribution function), but significantly trickier in 2D.
In 2D the problem seems to be underdetermined. One solution would be to first solve the horizontal problem for each row and then solve the vertical problem for the row totals, or the other way around. There might be a way to make this optimal in some sense, but I'm not quite sure what to optimize for.
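For the 1D case, the inverse-CDF construction looks something like this sketch (the target density here is invented for the demo; applying it per row and then once more to the row totals gives the separable 2D heuristic):

```python
import numpy as np

# Sketch of the 1D construction mentioned above: map uniform incoming rays
# onto a target density h via the inverse CDF, f = H^{-1} applied to the
# source's (uniform) CDF.  The target density here is invented for the demo.

def transport_1d(h, x):
    """Monotone map sending uniform density on [x[0], x[-1]] to density h."""
    H = np.cumsum(h)
    H = (H - H[0]) / (H[-1] - H[0])     # normalised target CDF, 0..1
    u = (x - x[0]) / (x[-1] - x[0])     # uniform source CDF, 0..1
    return np.interp(u, H, x)           # f(x) = H^{-1}(u(x))

x = np.linspace(0.0, 1.0, 1001)
h = 0.1 + np.exp(-((x - 0.7) / 0.1)**2)  # target: a bright spot near 0.7

f = transport_1d(h, x)
spacing = np.diff(f)                     # landing gaps; density ~ 1/spacing
print(f"spacing: min {spacing.min():.2e} (bright), max {spacing.max():.2e} (dim)")
```

Rays land densely where h is large and sparsely where it is small, which is exactly the brightness remapping wanted; as noted, though, this per-axis trick only pins down the marginals of the 2D problem.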
Yeah I think that part is a bit iffy, and if you look at the resulting image it does seem to have some fringes around the outside of the cat, indicating that it's showing something related to the curvature of the heightmap, rather than the intended image.
It's probably a good enough approximation to at least generate a recognizable image. Especially if you use a point source, because then each 'window' will reflect all its light in more or less the right direction, converging on a single point as well. I'm not sure they'll all focus on the same plane, but it's probably close enough.
Yep, that paper is extremely impressive! I would love to implement that approach as well, but it is much more computationally expensive. In the Q&A after a talk by that paper's author, he mentions that it takes about 6 hours for the algorithm to run. The approach I took, modeled on Yue et al., takes about 30 seconds. The trade-off is that the Schwartzburg paper is capable of much more general-purpose remapping. It does not require continuity, which is why that paper produces ray folding, which you can see as creasing in the lens.