This is what Doom looks like on a holographic display (twitter.com/jankais3r)
196 points by ttflee on April 19, 2021 | 63 comments


I've built a non-trivial prototype application for the Looking Glass, and once you're past the initial 'cool' moment the downsides start to rear their heads:

- Splitting a 4K signal into 45 view planes means the effective resolution sucks, badly.

- Likewise, if your content is complicated at all you need a monster GPU, because you are rendering your scene 45 times per frame, meaning 45 times the draw calls (see the toy sketch at the end of this comment).

- Field of view is very limited

- The amount of z-depth you can put content in without major blurring is much lower than even this video would indicate.

- You need to design your scene so important content never reaches the edges or things look yuck.

- Some patterns lead to artifacts like moire, and it can be hard to predict. Your content needs to work around this.

- They are (understandably) very expensive.

All that said, these things are cool and would be a perfect fit for some use cases. After the prototype I built was green-lit for full production, the decision was made to ditch these displays for all the reasons above.
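
To put a rough number on the draw-call bullet above, here is a toy Python sketch (all names and counts are made up for illustration, not from any real SDK):

    NUM_VIEWS = 45

    def draw_calls_per_frame(mesh_count, calls_per_mesh=1):
        # Naive multi-view rendering: the whole scene is re-submitted per view.
        calls = 0
        for _view in range(NUM_VIEWS):
            calls += mesh_count * calls_per_mesh
        return calls

    print(draw_calls_per_frame(200))        # 9,000 draw calls per frame
    print(draw_calls_per_frame(200) * 60)   # 540,000 per second at 60 fps

Even a modest 200-mesh scene lands in the hundreds of thousands of draw calls per second, before any state changes or shadow passes.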


Uninformed outsider coming in with questions:

1) Isn't this just based on "naive" implementations? Surely there are optimizations to avoid rendering all 45 view planes at all times, similar to how 3D rendering doesn't render parts of the scene that aren't visible until the character moves to change perspective?

2) The difficulty with layout and design doesn't seem that different from any 3D or game design challenge of making sure the person can't see the wizardry behind the curtain. Is it?

I guess what I'm thinking is: if you take an Introduction to Computer Graphics course, you can learn all the math, all the concepts, and all the abstract ideas about building 3D worlds. Then you try to implement them and you immediately realize that none of the ideas are performant enough to actually put into practice in a video game without MAJOR optimizations that on the surface seem 1/ hacky, 2/ extraordinarily difficult. We've been doing a range of them for 30 years, but some are only TODAY entering the mainstream despite being the simplest possible concept to explain to a student (ray tracing).

Is it not the same situation with these kinds of displays (and other VR/AR)? We're in the super early stages and we need the John Carmack equivalents to identify all the super insanely clever optimizations to squeeze something practical out of them?


The big difference is that, unlike VR, you don't know where the viewer's eyes are, so you can't jump from 3D to 2x 2D so easily; you genuinely need a volume resolution instead of an area resolution. Another way to think about it: if they're splitting 4K into 45 planes, you're chopping 8.3 MP into 0.184 MP (~430x430 px) x 45 "pixels"/voxels of depth resolution. 3840x2160 pixels is the same number of samples as a 202x202x202 voxel cube. A 1000^3 voxel display equals 31600x31600 pixels of resolution, which is maybe possible to render at 30 fps with a very, very simple application and a top-end GPU. 3000^3 = 164300x164300.

We would need to rethink the entire graphics stack to figure out a "GPU" that could more efficiently "draw" to a hologram at high resolution and refresh rate. Maybe something like DLSS, but 3D.

Even then, the bandwidth will be insane. 1000^3 voxels (1 billion) * 32 bits of color/alpha is 32 gigabits per frame; 30 fps would need 960 gigabits per second of bandwidth. DisplayPort 2.0 is 80 Gbps. The fastest commercial optical transceivers are 100-400 Gbps, although top-end GPU memory bandwidth is approaching 8000 Gbps, good enough for 2000^3 @ 32 bit @ 30 fps or 1000^3 @ 240 fps. The "GPU" will need to be tightly integrated into the display.
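
Spelled out as a quick sanity check (plain Python, just reproducing the back-of-envelope numbers above):

    views = 45
    panel_px = 3840 * 2160                   # 8.29 MP
    per_view_px = panel_px / views           # ~184k px per view
    print(per_view_px, per_view_px ** 0.5)   # ~429x429 per view

    side = 1000                              # a hypothetical 1000^3 voxel cube
    voxels = side ** 3                       # 1 billion voxels
    gbits_per_frame = voxels * 32 / 1e9      # 32 bits of color/alpha per voxel
    print(gbits_per_frame)                   # 32 Gb per frame
    print(gbits_per_frame * 30)              # 960 Gb/s at 30 fps vs. DP 2.0's ~80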

It would be better if we could just have 3x 2D projectors that somehow interfere in a medium to produce the display, but I don't know of a way to accomplish this. Not to mention that none of this allows you to be inside the displayed content the way VR does.


> 1) Isn't this just based on "naive" implementations? Surely there are optimizations to avoid rendering all 45 view planes at all times, similar to how 3D rendering doesn't render parts of the scene that aren't visible until the character moves to change perspective?

Maybe you could add head tracking to prune some angles, but if you were going to do head tracking, other comments report good results without a lenticular setup. You'd probably lose out on the multiple-viewer aspect.

Naively, I would think there's some amount of work that could be shared between so many renders from nearly the same viewpoint, but that probably requires driver changes to make it available. I did find an article from NVIDIA about rendering four viewpoints in one pass, which might be applicable and help somewhat: https://developer.nvidia.com/blog/turing-multi-view-renderin...
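
To illustrate the sharing intuition (a toy numpy sketch, not anything a real driver or NVIDIA's multi-view rendering API exposes in this form): view-independent per-vertex work could in principle be done once, with only the per-view transform repeated 45 times.

    import numpy as np

    NUM_VIEWS = 45
    verts = np.random.rand(100_000, 3)          # world-space vertices (toy data)

    def animate(v):
        # Stand-in for expensive view-independent work (skinning, morphs).
        return v + 0.01 * np.sin(v)

    world = animate(verts)                      # done once, shared by all views

    for dx in np.linspace(-0.5, 0.5, NUM_VIEWS):
        eye = world - np.array([dx, 0.0, 5.0])  # per-view camera offset (cheap)
        ndc = eye[:, :2] / -eye[:, 2:3]         # per-view perspective divide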


For some context: I've been making games across console/mobile/VR for about 12 years, so I can hopefully provide some educated responses.

For 1)

- Their SDK implements this rendering strategy, so either they haven't been able to justify the effort for additional optimization, or (much more likely) there isn't a tenable general-purpose solution that doesn't fall apart horribly in many circumstances. This latter point is true of many optimizations in games: they work because you can very narrowly scope them to what you are doing.

- Frustum and occlusion culling are used, like you mention, to avoid rendering meshes that are off-screen. This isn't a panacea though; there can be significant CPU cost, to the point where it can be more performant to disable it. Case in point: I used Umbra occlusion culling in my VR game in only a few scenes, because my CPU budget for 90 FPS was so small that the PS4 couldn't keep up.

- Those 45 planes are a series of slices projected very closely in space, so the odds that there isn't any work to do for a given view are almost nil. That's even more true when you consider dynamic scene elements, such as lighting and shadow casting.
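
A toy 2D illustration of that last point (Python, made-up numbers): with 45 cameras spread across a ~10 cm baseline, almost any object visible to one view is visible to all of them, so per-view culling buys very little.

    import math, random

    random.seed(0)
    half_fov = math.radians(25)
    cams = [-0.05 + 0.1 * i / 44 for i in range(45)]   # ~10 cm camera baseline
    objs = [(random.uniform(-3, 3), random.uniform(0.5, 6)) for _ in range(1000)]

    def visible(cam_x, obj):
        x, z = obj
        return abs(x - cam_x) <= z * math.tan(half_fov)

    in_any = sum(any(visible(c, o) for c in cams) for o in objs)
    in_all = sum(all(visible(c, o) for c in cams) for o in objs)
    print(in_any, in_all)   # the two counts come out nearly identical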

For 2)

- All of the challenges present in 3D content creation are unchanged with this device; it just adds more on top (limited FOV, depth blur, and fringe blur). You can't wizard away seeing color separation at the border of the screen; it's just a fundamental limitation of the hardware.

Another example: the PSVR headset's OLED panels cause a purple blur on edges that have too much light/dark contrast. The solution? Call an API and change a floating-point "base brightness" level for the headset, or change your lighting -- for each level (this is Sony's recommendation!). It's entirely game and scene dependent, and not something an automatic solution could accomplish.

- Some of the "magic" techniques invented for VR performance may be applicable (skipped-frame re-projection, foveated rendering), but those are based on the math between two camera projections, not 45.

- Any improvements that could be created by a Carmack-level person are going to be market driven. The people who fit in that category have the opportunity to work on whatever they want, and VR was compelling to many of them because of how damned cool it is (was?). By comparison, this tech is a novelty that a regular consumer will probably never see.


Just use 45 GPUs. No big deal.


This is autostereoscopic, not holographic. It is, however, really nice. Ten years ago we were playing Quake on a WOWvx display from Philips when not working on autostereoscopic digital signage content. Philips stopped with AS3D and spun off Dimenco, which also sells AS3D products; Newsight, Tridelity, and a few others are around... but all of this is NOT holography. It's a high-res display with a lenticular sheet splitting viewpoints.


Thanks for the info -- I had assumed this was the case[0].

When I looked at the animations/video, I thought it was really slick and as I read more about the tech it appeared to be as I expected, but I'm not sure if I should be disappointed by that.

I think it was quite intelligent for the company to offer a smaller, relatively inexpensive ($249 early-bird isn't bad) display, because I fully expect it'll be impossible to properly evaluate the quality of the tech without actually seeing it live. The few times I've played with some of the more exotic screens (even oddball LCDs like the promising but short-lived IGZO panels), they've been very difficult to evaluate. The videos are usually far from what it looks like in person (and I'd wager about 75% of the time, the video makes the display look worse than it is).

I'm curious if you can speak to the downsides of doing things this way. Were this screen a "real holographic display" but with similar constraints[1], would it be substantially better? From the videos, I get the impression that this screen can only display 3D within the bounds of the "box"; if a real holographic display behaved similarly, what could it do that an autostereoscopic screen cannot?

I'm curious because I was left behind during the 3D craze (which disappeared, as I predicted, a few years later). I can handle about 30 minutes of 3D glasses before I start getting the early symptoms of a migraine. I was hopeful when the TVs came out that I might finally be able to watch Avatar[2], but I tried various sets with different types of 3D glasses (active/passive, I recall?) and they felt more uncomfortable than at the theaters[3]. I've never been diagnosed with lazy eye or other eye problems, but I feel like my eyes go cross-eyed with the glasses on.

I'd have to see this, physically, to be comfortable with buying it. At the price-point of their smaller option, that's getting pretty close, though. If they had a very convenient/complete return policy, I'd probably check it out.

[0] That's not to diminish the value of the comment, I had only assumed it was some form of "Glasses-free 3D", but know nothing about the tech or how it works.

[1] As in, the dream of a "projector-like" screen, where a hologram could just be displayed at an arbitrary location in a room, is out.

[2] I have not seen it, yet. When it had finally died down in theatres, I had read that the story was "not very interesting/creative" and "kind of dumb". This was supported by the fact that everyone I know who went to see it told me nothing about what the movie was about and not one of them mentioned anything they liked about the "story". I think the best summary I received was from a close friend who said "I left the theater and 'the world' seemed a little less real" and I was interested to see the result of this camera that was invented for the purpose of filming that movie, but I haven't seen a second of it, yet.

[3] I thought it was normal to feel "off". I call it "almost dizzy" because I don't feel off balance, it just feels like it takes an amazing amount of effort to pinpoint objects with my eyes -- screen or otherwise -- with any kind of 3D glasses on. I wear a very low prescription pair of glasses (not required for driving, I'm nearly 20/20) and other than really cheap sunglasses, I generally have no difficulties otherwise.


The migraines are due to several factors, for example:

1. Your brain recreates the 3D environment from physical objects and is "trained" to certain distances, movement, etc. When you use 3D glasses, whether for a movie or VR, the "stereo" effects are anything but natural. There's a lot of stuff flying around at "non-natural" distances, especially near you. The effects are exaggerated, otherwise at some point you would "stop noticing it's 3D unless you pay attention". The exaggerations are compensated for by your brain, which also causes eyestrain. Look at a pen; bring it closer and closer, and at some point you see two pens. Now do that back and forth for 30 minutes and you will have the same pain.

2. Perceived movement vs. the inner ear and balancing your body. VR is awesome at screwing everything up. I love VR but can't stand it for long periods because of that. Your brain expects you to fall, or the thing coming at you to hit; walking should have some motor feedback, driving too. Nope. No G's, no wind, no nothing. To put it in simple terms... brain tilts.

You can look up a lot of stuff about this on the internet under VR sickness, motion sickness, or 3D sickness... lots of articles, e.g. https://en.wikipedia.org/wiki/Virtual_reality_sickness

I had problems playing Quake on a 2D screen. It went away with intense exposure: I would do 30 minutes, get sick, start again the next day, and eventually I managed to pull off 12 hours in a row. Life was fun and simple back then ;). I don't have that luxury anymore for VR, but I'm sure it would be the same process of desensitising through gradually increased exposure.


This is... way cheaper than I thought it would be at $250 (pre-order price?).

But I don't understand how it works. Seems like... head/hand tracking? But then why the weird-looking screen?

A bit disappointed: at first I thought it was something similar to those cards we had when we were kids, that change image depending on which angle you look at them (forgot the name).

Edit: seems like it's a bit of both: https://docs.lookingglassfactory.com/KeyConcepts/how-it-work...

Edit2: Okay, so it does work like the cards we had when we were kids. The screen shows 45 different views of the scene at once. I'm back to being non-disappointed.
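
For the curious, here's the interleaving idea in miniature (a toy Python sketch with made-up numbers; real Looking Glass panels use a slanted lens and per-unit calibration values for pitch, tilt, and center):

    NUM_VIEWS = 45
    LENS_PITCH_PX = 45.0      # hypothetical: panel columns under one lenticule

    def view_index(x):
        # Which of the 45 rendered views panel column x should show.
        phase = (x % LENS_PITCH_PX) / LENS_PITCH_PX   # position under the lens
        return int(phase * NUM_VIEWS)

    # Fill each "view" with its own index so the interleave pattern is visible.
    quilt = [[v] * 90 for v in range(NUM_VIEWS)]      # 45 one-row views
    panel_row = [quilt[view_index(x)][x] for x in range(90)]
    print(panel_row)   # cycles 0..44: adjacent columns go to adjacent views

The lens then bends each column's light toward a different horizontal angle, so each eye picks up a different one of the 45 views.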


The display shown in the video is the 15.6" display which costs $3,000 (https://lookingglassfactory.com/product/15-6).


It probably takes less than $300 to DIY this from an off-the-shelf 15.6" display and a piece of dot lenticular sheet...


Which would look entirely different.


Me: Mom can we have Dropbox?

Mom: We have Dropbox at home

Dropbox at home: ftp + curlftpfs + svn


I laughed so hard when I read this, but then I realized something. Dude, you have a really cool mom.

But I'm "that Dad". For home-related projects, about 25% of the stuff I replace with "home-built" ends up being the things my kids show off to their friends[0]. The other half is duct-tape and rubber bands. Unfortunately, the duct tape exists in the worst places -- like DHCP, and my routers, so while they don't require extra effort to use, they flake out on some of their devices[1].

My favorite is when I have to be called over to explain that, no, Son is not lying, we did that ourselves. He used to walk into his room and hit a few keys on the remote, which would trigger Plex to play a song, dim the regular lights, and enable some really slick music visualizations. Hit pause/quit the app and the room goes back to its previous state. If his phone started ringing, the app would auto-pause[2] and resume playback after the phone returned to an idle state.

[0] It goes something like: "Son: Can I buy this (overpriced) LED/gaming PC thing?" and usually ends with me saying "... or, we can buy $10 worth of things I don't already have and make something that, when your friends ask 'where can I buy that', you can say 'You can't, but I sell them for (overpriced) if you want to buy one!'".

[1] Recently better -- ESP8266 devices and Android devices didn't play well with one of my DD-WRT based routers.

[2] Home Assistant and Android allowed me to read phone state. It worked, but there was a horrible delay, and after a phone update it just broke entirely, causing it to interrupt playback constantly. I ended up wiping out every related piece of that automation with the intent of doing it right (with a Bluetooth sensor in his room that would allow me to trigger actions when he arrives/exits the room) but haven't completed that, yet.


Isn't this exactly how it works (a lenticular lens)? Why would it look different?


I've used a Looking Glass and it doesn't look like a lenticular display. It looks closer to a hologram (despite the fact that it's not).

The image appears suspended inside the block of glass and responds sufficiently well to head movements to be convincing. You can't see the giveaway "ruled lines" that lenticular sheets have.

Once you know it's not truly holographic it becomes fairly obvious and you notice that up/down doesn't provide any depth (only left/right) but the effect is convincing and magical at first.

So although it might be lenticular in principle, it's far from what you could achieve with your proposed method.


Thanks for the information -- this was one of the concerns I expressed in a previous comment. It's going to be very difficult to evaluate this product via video. The videos/animations look so good that I had a hard time believing the display performs that well in person[0].

I can understand how this might really look a bit better now, with the clarification about left/right vs. up/down depth (I recall reading something about depth perception that makes this less important, but I don't recall where/when), so it's sounding promising.

[0] I'm not saying it doesn't -- this is just based simply on the fact that I have yet to see one that does (and, in fact, if this isn't the technology I think it is, I haven't seen one at all).


It's not really my proposed method; it's just how I understood them to work. IIRC they had a diagram with the original models that showed the lens, and also labelled things like a diffuser, etc. I'm not sure how much the technology has changed since their last generation.


Found the diagram I mentioned (from their original Kickstarter): https://ksr-ugc.imgix.net/assets/021/999/384/67b20ae703c7636...


... Ah.



Cool, but it looks exactly like watching a display with a Wii remote on your head [0, 2007]. What's so special? Is it actually 3D this time?

[0] https://www.youtube.com/watch?v=Jd3-eiid-Uw


> https://www.youtube.com/watch?v=Jd3-eiid-Uw

I did this at the time. I downloaded his Wii app that shows the little red/white discs, bought a battery-powered Wii-bar thing that had the IR LEDs in it, and strapped it to my head.

Can confirm: the 3D effect is 100% spot on. Even the smallest move of your head causes the display to adjust perspective, and your brain is totally fooled into thinking the Wii display is just a window into a 3D scene.

tbh, it was the only cool thing I actually did with my Wii!
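
For anyone wanting to recreate it: the core of the trick is an off-axis (asymmetric) perspective frustum built from the tracked head position relative to the physical screen. A minimal Python sketch with simplified conventions (units in meters, eye position relative to the screen center):

    def off_axis_frustum(eye, screen_w, screen_h, near, far):
        # eye = (x, y, z): head position relative to screen center, z > 0.
        ex, ey, ez = eye
        scale = near / ez                 # project screen edges onto near plane
        left   = (-screen_w / 2 - ex) * scale
        right  = ( screen_w / 2 - ex) * scale
        bottom = (-screen_h / 2 - ey) * scale
        top    = ( screen_h / 2 - ey) * scale
        return left, right, bottom, top, near, far   # glFrustum-style params

    # Head centered vs. 20 cm to the right of a ~1 m wide screen:
    print(off_axis_frustum((0.0, 0.0, 0.6), 1.0, 0.56, 0.1, 100.0))
    print(off_axis_frustum((0.2, 0.0, 0.6), 1.0, 0.56, 0.1, 100.0))

As the head moves, the frustum skews while the screen rectangle stays fixed in the virtual world, which is exactly the "window" illusion.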


And that video is over 13 years old! Too bad this idea didn't catch on more widely.


It doesn't seem to exist anymore, but I had a variant (literally the same graphics) on my iPhone a decade ago. No fancy headgear, and the effect was just as good, but instead of moving your head, you held still and tilted your iPhone.

Apple later added the effect to the home screen in iOS 7, but much more subtly.


TrackIR is a consumer product using the exact same tech for head tracking in video games. I think it might actually be older than the Wii.


Thanks. I just created a nice storage box for my Wii, got everything organized and put it on the bottom of a large pile of things to sell.

This sounds way too interesting, so I'll be doing some digging, apparently. Any idea if this would work effectively on a large projector screen?


> Any idea if this would work effectively on a large projector screen?

Funny you should ask that! My setup IS a large projector screen (110"), and yes, it does work. In fact the size enhances the experience. :D


Ha! I'm planning to do the same on the 110" I've got in the basement.


Surely the depth effect only worked with one eye closed?


Nope, works with both eyes.

Your brain takes in a lot more depth cues than stereoscopic vision. The way the perspective changes as you move your head around is sufficient, even on a flat display. It's uncanny.

Only works for one person though. If you stand at an off angle to the sensors, there’s no effect at all.


It's amazing how the brain works...

I am very cross-eyed at the moment (surgery resulted in both my eyeballs moving) and require a little plastic "stick-on" that goes on the left lens of my glasses, which has prisms to redirect the light so that both eyes are looking in the same direction (prisms are fairly normal; it's the _amount_ of correction I need that is abnormal).

The correction on the left lens is so strong that looking through it alone is completely blurry (with lots of rainbows if I'm looking toward a light). However, when looking through my glasses with both eyes, everything appears normal. I can see in stereo (which I cannot do w/o the correction) and everything is in focus. There is still _some_ rainbow effect.

So, my brain is taking the "clear image" from my right eye and overlaying it with the distance information that the left eye adds to that... but ignoring the blurry image from the left eye completely.

Honestly, I find it mind boggling.


TrackIR and variants are quite popular for simulator type games like flight simulator or truck simulator. They let you look around in the game just by moving your head.


Hm. I haven't really looked too deeply into TrackIR, but I always thought it tracked your head movement to simulate head rotation within the game, so you could look around more naturally. The linked Wii "tracking" demo used the tracking to find your position relative to the display and change the rendered perspective accordingly. Does TrackIR also allow this?


As a TrackIR user, it feels to me like it should be possible to do something like this with the 6dof headtracking data from TrackIR, but basically nobody uses a 1-1 mapping that would produce a camera trick like this.


TrackIR DOES do this. In a flight simulator, with full 6DOF, you can move your virtual head around and look under and around things.


In my TrackIR config, if I move my head left an inch, the projected head moves six inches. This is more useful than a 1:1 mapping in practice, but it solidly breaks the 'looking through a window' illusion.


> What’s so special?

That it's not based on head tracking, and it's in a product you'll reasonably be able to buy.


I'm just going to link https://www.cl.cam.ac.uk/research/rainbow/research/autostere... here: an autostereo display that was working way earlier than that. "The first displays were built in the late 1980s and early 1990s". They worked by projecting different images out to different horizontal areas in front of them, so your right and left eyes would see different images, you could move your head and see around objects, and multiple people could look at it and each see an appropriate view for their position.

Not sure if they ever ran Doom on it though.


With a holographic display, two people looking from different angles would each see the image corresponding to their angle.


I think that sums it up pretty perfectly. It's exactly why, when I watched the video, my first thought was "there's no way that works that well".

From the comments here, it sounds like it might. There are some interesting applications here. Thinking back to last night: it would have been really nice if, when I was looking at the split screen in my driving game with my kids, my own corner had that kind of depth to it. Hell, driving/flying games in general suffer from realism issues due to not being able to faithfully produce the effect of looking out a window.

Do this on a larger screen (and throw in a transparent LCD, after you work out how to bend all of this into that, assuming it can be done) and you could have a light switch that changes the scenery from "outside" to "outside somewhere else"[0].

[0] I remember seeing that in Back to the Future II as a kid and thinking it'd be really nice to have on a cold, gloomy, February day in Michigan.


Well, if it's as simple as head tracking with an IR emitter on your headphones or the like, then I'm a little surprised first-person shooters don't enable peeking around corners with head motion.


It doesn't seem to be, or rotating the thing in front of a fixed camera wouldn't produce the effect, I guess?

https://lookingglassfactory.com/tech


Incredible stuff! I first saw Looking Glass Factory [0] at Maker Faire in San Mateo around 2016. It was simple back then, probably a 64 x 64 pixel "display": imagine a bamboo garden made of 64 LED strips standing up, a cube about 3'x3'x3', playing screensaver-type light displays. Entertaining at low res.

Then I saw a more recent version at a coffee shop in Providence. They're up to legit resolution now, and as a poster said, the light is split so all you're doing is moving your head. No head tracking/whatever.

It's one of those things that's tough to "get" without seeing it yourself. It feels like a definitive piece of the future. I joined their Kickstarter a while back as well.

Go Looking Glass crew!

[0] https://lookingglassfactory.com/


Cool, but it's not using the correct FOV to make sure the viewer/eye is at the focal point. That would look way cooler; I've seen a demo like that before. You get the impression you're looking through a window into another world.

Edit: the video linked here by teekert does it correctly


I wish people would stop calling these holographic


Why?


Because they aren't holograms. A hologram is "a recording of an interference pattern which uses diffraction to reproduce a 3D light field" [0]. This is something completely different.

[0] https://en.wikipedia.org/wiki/Holography


Because it's a lenticular screen.


I guess that’s not how people imagine holograms based on sci-fi.


Like other people have said, this looks like a Looking Glass dev unit or something.

And here's a Linus Tech Tips video of it: https://www.youtube.com/watch?v=-EA2FQXs4dw

They have some good footage (though obviously it's set up as a hype reel too).

Edit: Skip to the following for "beauty shots": 4:45 and 8:50. 6:12 has a funky shot of the 'flattened' image, to give you a sense of the 'trick' they're playing.


Cool, but technically the Looking Glass is not a holographic display.


Neither was Time Traveller, but it was also cool.

So far “holographic” seems to describe the experience of the viewer more than a technical spec.

https://en.wikipedia.org/wiki/Time_Traveler_(video_game)


> This is not available to you

¯\_(ツ)_/¯


That's a Twitter bug/feature that requires one or more browser refreshes to find and show a tweet.


This seems to happen a lot. Not the first time I've seen this crop up in an HN thread.


Thanks, not one I'd encountered.


Probably happened at least 20 times to me... the first time was probably a year ago...


I’ve always wondered why it happens. Best guess is they have a short timeout like 1s to retrieve that content.


I love these displays, but the smaller one feels too small for anything other than a photo frame, and the large(r) one is CRAZY expensive (and still too small...).


The next wall hackers


Every hacker's basement[0] (with enough money) is going to be walls of these. Can't wait for the first take on a holodeck[1].

[0] Garage/shed/lab

[1] Not implying this would come even close, especially not being a "true hologram", but it'd still be the closest we could probably get with current tech outside of headgear.



