
I’m highly skeptical about voice for most interactions. It’s inherently inappropriate in most public settings.



+1. I see it being helpful for the differently abled, but having everyone speak their every action out loud would drive people nuts in a public setting. Not to mention I can type “code” much faster than I could speak it.


There are a few ideas about subvocal recognition kicking about which might change that. If your voice assistant is in an earpiece that can (somehow) read what your vocal muscles are doing without you actually needing to make a sound, it makes it practical to the extent that it could become the default. There's a lot of ocean between here and there, though. Particularly in the actual sensor tech. That's got to get to the point where you can wear it in public on a highly visible part of the body without feeling like a loon, and that's not trivial.


That maybe makes it vaguely less anti-social but still imprecise and frankly invasive. Typing by comparison is great. You can visualize the thoughts as you compose something and make edits in a buffer before submitting. The input serves as a proxy for your working memory. Screenless voice interfaces are strictly worse.


That assumes you have to be right first time, that you only have one chance to submit your buffer. We don't make that assumption when we talk to other humans.


We were putting this into classrooms, where teachers were speaking all day anyhow. The system completely automated teaching tools, smart boards, and browsers. I don't think it gained a lot of traction; nonetheless, the company raised $100,000,000 to focus on the automation part of the product as a vertical.

My point is that, as a UI developer, I was moving from output that was all screens to output that was automated tasks. There are different types of output, and they almost all map to the senses, which is where the interface between the human and the machine exists: screen to eyes, sound to ears, and haptic feedback on mobile devices to touch. In my space, the browser, I was using the same JavaScript and browser APIs, but the end result was different.
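
To make that concrete, here's a rough sketch of the distinction (not our actual product code; runAutomation is a made-up stand-in for the automation layer) using the standard Web Speech API:

  // Same browser API either way: listen for a spoken command.
  const SpeechRecognition =
    window.SpeechRecognition || window.webkitSpeechRecognition;
  const recognition = new SpeechRecognition();
  recognition.continuous = true;

  // Hypothetical stand-in for the automation layer.
  function runAutomation(action) {
    console.log("automating:", action);
  }

  recognition.onresult = (event) => {
    const last = event.results[event.results.length - 1];
    const phrase = last[0].transcript.trim().toLowerCase();

    if (phrase.startsWith("show")) {
      // Classic UI output: render to the screen (eyes).
      document.querySelector("#board").textContent = phrase;
    } else {
      // Automation output: the result is an action, not pixels.
      runAutomation({ kind: "command", text: phrase });
    }
  };

  recognition.start();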

Automation as an output is fundamentally different from all the UI I built the decade prior.


What’s the value prop here? That seems antithetical to actual education.

Shit writing on a white board has better physical feedback than some janky smart board.


True, but we could move to lip-reading for that:

https://news.ycombinator.com/item?id=43400636



