
> The other was companies who had tricked people into learning a new kind of handwriting [...] There are interesting parallels with spoken-word interfaces. I don't know what the answer is.

Well, here's a possibility: we'll meet in the middle. Companies will train humans into learning a new kind of speech which is more easily machine-recognisable. These dialects will be very similar to English but more constrained in syntax and vocabulary; they will verge on being spoken programming languages. They'll resemble, and be constructed much like, the jargon and phrasing used in aviation, optimised for clarity and unambiguity over low-quality connections. Throw in optional macros and optimisations which increase expressive power but make it sound like gibberish to the uninitiated. And then kids who grow up with these dialects will be able to voice-activate their devices with an unreal degree of fluency, almost like musical instruments.
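
To make that concrete, here's a toy sketch of what such a constrained dialect might look like once you write its grammar down. Everything here (the verbs, the prepositions, the structure VERB OBJECT [PREPOSITION TARGET]) is invented for the example, not taken from any real assistant:

    # Toy grammar for a hypothetical constrained "spoken dialect".
    # Every utterance has the shape: VERB OBJECT [PREPOSITION TARGET].
    VERBS = {"add", "remove", "set", "play"}
    PREPOSITIONS = {"to", "from", "on"}

    def parse(utterance):
        """Parse a constrained command like 'add milk to shopping list'."""
        words = utterance.lower().split()
        if not words or words[0] not in VERBS:
            raise ValueError("utterance must start with a known verb")
        verb, rest = words[0], words[1:]
        # Split the remainder at the first preposition, if there is one.
        for i, w in enumerate(rest):
            if w in PREPOSITIONS:
                return {"verb": verb,
                        "object": " ".join(rest[:i]),
                        "target": " ".join(rest[i + 1:])}
        return {"verb": verb, "object": " ".join(rest), "target": None}

    print(parse("add oat milk to shopping list"))
    # {'verb': 'add', 'object': 'oat milk', 'target': 'shopping list'}

The point is that a recogniser only has to distinguish a handful of verbs and prepositions, which is exactly the trade-off aviation phraseology makes.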



Of course, voice assistants are already doing this to us a bit. Once I learn the phrasing that works reliably, I stick with it, even though repeating "Add ___ to shopping list" for every item feels like the equivalent of GOTO statements.

The challenge with learning anything new from our voicebots is the same as it's always been: discoverability. I don't have the patience to listen to Siri's canned response once it's done what I've asked, and I'm probably not going to listen to it tell me "Next time, you can say xyz and I'll work better."

The easiest way to learn a new language is to be a kid and listen to native speakers converse. Unless Amazon produces a kids' show starring a family of Echos speaking their machine code, I don't see that happening.


I miss the days before voice assistants, when the user was expected to train the voice interface. That way I could, if needed, negotiate pronunciation with the machine.


Your comment reminded me of an interesting video I saw a while back: a talk [0] by a programmer explaining and demonstrating how he developed and used a voice-recognition system to program in Emacs after developing severe RSI.

The commands and syntax he uses to interact with the system might be a decent approximation of the custom English syntax you suggest.
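
For a rough idea of the style (this is hypothetical, not the grammar from the talk), such systems tend to map short, acoustically distinct spoken tokens onto editor actions:

    # Hypothetical mapping of spoken tokens to Emacs key sequences,
    # invented here to illustrate the style of voice-coding grammars.
    SPOKEN_TO_KEYS = {
        "slap": "RET",     # newline
        "mark": "C-SPC",   # set mark
        "yank": "C-y",     # paste
    }

    def transcribe(tokens):
        """Turn a sequence of spoken tokens into an Emacs key sequence."""
        return " ".join(SPOKEN_TO_KEYS[t] for t in tokens)

    print(transcribe(["mark", "yank", "slap"]))  # C-SPC C-y RET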

[0]: https://youtu.be/8SkdfdXWYaI?t=537


If that happened, but went the way of Palm Pilot's "Graffiti" system, what would be the analogue of touchscreen keyboards?



