
I’m confused about what the 3D display screen is. I thought we didn’t have technology like that without glasses?


Appears similar to the tech in the Nintendo 3DS: lenses over the screen so each eye sees a different picture. See https://en.wikipedia.org/wiki/Autostereoscopy


Why does it appear similar to that tech, other than that you get a 3D image? As far as I can tell there's no info on the display other than that it uses "light field technology", which would make it different from the parallax barrier display on the 3DS.


The term "light field technology" is broad enough to cover lenticular arrays in displays. The 3DS had the major limitation that it rendered only two views. If you increase the resolution enough to display more views for more viewing directions within a wider cone, you already have a light field display, simply because all those views combined form a sampled light field representation by definition. This is essentially the same as the glasses-free multi-viewer 3D displays of decades past, but advances in display pixel density and computing power apparently make the resulting illusion much more convincing these days.
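To make that resolution trade-off concrete, here's a toy back-of-the-envelope in Python. All the numbers (panel width, view count, cone angle) are made up for illustration; nothing here is Starline's actual spec:

    import math

    panel_width_px = 7680   # assume an 8K-class panel
    num_views = 45          # assume 45 horizontal views
    cone_deg = 30.0         # assume a 30-degree viewing cone

    # Horizontal resolution left per view: the price of a light field.
    print(round(panel_width_px / num_views))      # ~171 px per view

    # Angular spacing between adjacent views.
    print(round(cone_deg / num_views, 2))         # ~0.67 degrees

    # Eyes ~6.3 cm apart at ~1 m subtend about 3.6 degrees, i.e. several
    # views apart, so each eye reliably gets its own view.
    print(round(math.degrees(math.atan(0.063 / 1.0)), 1))  # ~3.6

That's why pixel density matters: every extra view direction is bought with per-view resolution.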


Isn't it clear glass, rather than a layer over a traditional display like the 3DS? Glasses-based approaches either time-slice actively with shutters or spectrum-slice with passive filters in the lenses, and of course they require glasses.

How could any of those technologies be what is used here?

E: Looking again, perhaps it could be some layer over a traditional screen. You can see through parts of the broadcast, but that could just be the digital far plane showing through.


I'm not sure I'm following. If this is based on a flat panel display, there must be a lens array in front of it. There is no other way to achieve this effect without requiring the users to wear glasses. The lens array can be covered by a protective flat glass pane without issue.


From the Wired story linked in another post above (https://www.wired.com/story/google-project-starline), there is this passage: "Move to the side just a few inches and the illusion of volume disappears. Suddenly you’re looking at a 2D version of your video chat partner again."

This implies, AFAIK, that it uses either lenticular lenses (the tech 3D cards typically use) or a parallax barrier (the screen tech from the 3DS). There are thus sectors fanning out from the screen toward the viewer, and you need your head placed so that one eye sees one sector and the other eye sees another. What the reporter describes is both her eyes ending up in the same sector, which immediately makes the result 2D. Note that there can be more than two sectors, so you can move further sideways and still get a realistic view, but each eye must be in a different sector at all times. It can also use head tracking to correct the view as your head moves: since it evidently constructs a full 3D scene of you and the other side, it can render that scene from any angle.
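A toy geometric model of those sectors, in Python. The view count, cone width, and positions are all assumptions for illustration; nothing here is known about Starline's actual optics:

    import math

    NUM_VIEWS = 8       # assumed number of horizontal views
    CONE_DEG = 40.0     # assumed total width of the viewing cone

    def view_index(eye_x_m, eye_z_m):
        """Which sector an eye at horizontal offset x and distance z
        from the screen center falls into."""
        angle = math.degrees(math.atan2(eye_x_m, eye_z_m))
        half = CONE_DEG / 2
        angle = max(-half, min(half, angle))        # clamp to the cone
        return int((angle + half) / CONE_DEG * (NUM_VIEWS - 1e-9))

    # Head centered, 1 m from the screen, eyes ~6.3 cm apart:
    print(view_index(-0.0315, 1.0), view_index(+0.0315, 1.0))  # 3 4 -> 3D

    # Slide ~4 cm sideways and both eyes can land in the same sector,
    # collapsing the image to 2D, as the Wired reporter describes:
    print(view_index(0.010, 1.0), view_index(0.073, 1.0))      # 4 4 -> 2D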


I think it's something like https://www.youtube.com/watch?v=pI__qNx8Gdk

Track both eyes, and then project an image to each eye based on its position in the room. The part I don't really understand is how it's possible to target the image to each eye. Maybe we have displays now that are like the 3DS screen, but with variable focal locations?
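For the rendering half of that, the standard technique on head/eye-tracked displays is an off-axis ("generalized") perspective projection: the screen is treated as a fixed window in space, and each tracked eye gets its own frustum through that window. A minimal sketch with made-up screen dimensions (this is the generic technique, not anything confirmed about Starline):

    import numpy as np

    def off_axis_projection(eye, screen_w, screen_h, near, far):
        """Frustum for an eye at (x, y, z) in meters, with the screen
        centered at the origin in its plane and z toward the viewer."""
        d = eye[2]                                # eye-to-screen distance
        l = (-screen_w / 2 - eye[0]) * near / d   # frustum bounds at the
        r = ( screen_w / 2 - eye[0]) * near / d   # near plane, aimed
        b = (-screen_h / 2 - eye[1]) * near / d   # through the screen's
        t = ( screen_h / 2 - eye[1]) * near / d   # physical edges
        return np.array([
            [2 * near / (r - l), 0.0, (r + l) / (r - l), 0.0],
            [0.0, 2 * near / (t - b), (t + b) / (t - b), 0.0],
            [0.0, 0.0, -(far + near) / (far - near),
             -2 * far * near / (far - near)],
            [0.0, 0.0, -1.0, 0.0],
        ])

    ipd = 0.063                          # typical eye separation
    head = np.array([0.0, 0.0, 1.0])     # tracked head position, 1 m out
    for dx in (-ipd / 2, +ipd / 2):
        P = off_axis_projection(head + np.array([dx, 0.0, 0.0]),
                                0.9, 0.6, 0.1, 10.0)
        # ...render the scene once per eye with projection P...

Delivering each of those two renders to only the matching eye is then the display's job (lens array, parallax barrier, or similar), which is exactly the part you're asking about.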


If it's that, why does the camera see a gradually different image as it pans around?

See: https://storage.googleapis.com/gweb-uniblog-publish-prod/ori...

Notice how the angle of her face changes as the camera moves: at first you see only her left ear, and at the end of the animation you see only her right ear.


Use a demo mode that disables eye tracking and have it follow a camera tracked with an ARCore/Vive puck? Or just ask the guy to close his eyes and put googly eyes on your camera...



