I think it's something like https://www.youtube.com/watch?v=pI__qNx8Gdk

Track both eyes, and then project a separate image to each eye based on its position in the room. The part I don't really understand is how it's possible to target an image to each eye individually. Maybe we now have displays like the 3DS screen (a parallax barrier), but with viewing zones that can be steered to the tracked eye positions?
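
If it helps, here's a toy sketch of that view-routing idea in Python. Every number and name here (the view count, the fan angle, view_index_for_eye) is an assumption for illustration, not anything Google has published; the real display presumably does this optically per pixel rather than in software:

    import numpy as np

    # Toy model: a display that can emit NUM_VIEWS discrete images across
    # a horizontal fan of angles, plus an eye tracker that reports each
    # eye's 3D position. Pick which view to route to each eye.
    NUM_VIEWS = 45          # assumed view count, not a real spec
    FAN_HALF_ANGLE = 30.0   # assumed half-angle of the fan, in degrees

    def view_index_for_eye(eye_pos, display_center=np.zeros(3)):
        # Horizontal angle of the eye relative to the display normal (+z).
        d = eye_pos - display_center
        angle = np.degrees(np.arctan2(d[0], d[2]))
        # Normalize into [0, 1] across the fan, then quantize to a view.
        t = (angle + FAN_HALF_ANGLE) / (2 * FAN_HALF_ANGLE)
        return int(max(0, min(NUM_VIEWS - 1, round(t * (NUM_VIEWS - 1)))))

    # Eyes ~6.3 cm apart at 60 cm land in different view zones, so each
    # eye can be fed its own off-axis render of the scene.
    print(view_index_for_eye(np.array([-0.0315, 0.0, 0.60])))  # -> 20
    print(view_index_for_eye(np.array([ 0.0315, 0.0, 0.60])))  # -> 24

As long as the two eyes land in different zones, the display can show each one a render from a slightly different virtual camera, which is all stereo needs.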



If it's that, why does the camera see a gradually changing image as it pans around?

See: https://storage.googleapis.com/gweb-uniblog-publish-prod/ori...

Notice how the angle of her face changes as the camera moves: at first you only see her left ear, and by the end of the animation you only see her right ear.
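
That gradual change is what you'd expect if the screen emits many views across a fan of angles (lightfield-style) rather than just two: a panning camera crosses one view zone after another. A toy sweep, reusing the same made-up numbers as the sketch above (45 views over a +/-30 degree fan):

    NUM_VIEWS, HALF_ANGLE = 45, 30.0
    for deg in range(-30, 31, 10):
        # Map the camera's horizontal angle into [0, 1] across the fan,
        # then quantize to the nearest discrete view.
        t = (deg + HALF_ANGLE) / (2 * HALF_ANGLE)
        view = max(0, min(NUM_VIEWS - 1, round(t * (NUM_VIEWS - 1))))
        print(f"camera at {deg:+3d} deg -> view {view}")

Neighbouring angles map to neighbouring views, so the face appears to rotate smoothly instead of snapping between a left image and a right image.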


Use a demo mode that disables eye tracking and instead follows a camera tracked with ARCore or a Vive puck? Or just ask the guy to close his eyes and put googly eyes on your camera...



