I'm going to die on the hill that gaze sucks. I really hate focus-follows-gaze, and not having a workable keyboard or buttons makes it hard to actually use this for productivity. It's fine as an alt mode, but as the main form of input I kind of hate it.

That said, forcing your hands to be full of controllers also sucks, but at least you can play a game. I don't have a solution, but input needs to get solved for these things to be seamless enough to wear and useful enough to want to.

And I would call myself an enthusiast.



I liked the thumb-gesture idea they presented in The Peripheral: enough movement to be discernible by a camera, but it only requires a small amount of energy and dexterity.

Though ideally we probably want some sort of “force gloves” that let us feel the weight of and manipulate objects in a virtual world. We need much higher-resolution input than we have today, which currently amounts to x/y/z position and some button presses.


IMO, for AR purposes gaze is a huge improvement over just pinching (à la HoloLens). I agree, though, that for productivity a more tactile device is needed. In practice I end up connecting my keyboard via Bluetooth, or just mirroring my MacBook, when I need to do real work.

For work purposes, the privacy factor is a huge benefit: nobody in the room can peek at your "screen".


Have you tried the Vision Pro? Not doubting your take here, just curious if it's informed by the actual implementation.


Yes I have. It's the best implementation so far but still very limiting.


^ Agree with this take. If you're slow with tech, then this is amazing! And I think Apple has been great at making interfaces that are natural to use and easy to get into if you're not tech-savvy. But for someone like me, who's used to multitasking, or performing operations without looking at what I'm actually operating on, it sucks. I tried using Keynote to create diagrams in Vision Pro and the experience was so infuriating: I had to painfully look at everything and keep my focus on it while I repeatedly tried to snap my fingers as the Vision Pro failed to detect the gesture half the time. I just want controllers.


Can’t you just use a mouse and keyboard as controllers? It's hard to imagine any hardware peripheral that would improve on them for making a Keynote presentation, or we'd already be using it on our desktops.


This was just one example, but yeah, if the answer is always “just use a mouse and keyboard” then the experiences are going to be quite limited. I guess I'm used to the high interactivity of experiences on the Quest, and the Vision Pro is just a glorified screen to me.




