I think you have a good point. One thing I found fascinating was that, after the search engine was acquired by IBM, one of the projects was looking at search queries for common questions people would ask a search engine: taking five years of query strings (no PII[1]) and turning them into a 'question' that someone might have asked.
What was interesting was that it became clear search engines have trained a lot of people to speak 'search engine' rather than stick with English. Instead of "When is Pizza Hut open?" you would see the query "Pizza Hut hours". That works for a search engine because you have presented it with a series of non-stopword terms in priority order.
It reminded me of the constraint-based naming papers that came out of MIT in the late '80s / early '90s. It also suggested to me ways to take dictated speech and reformat it into a search engine query. There are probably a couple of good academic papers in that data somewhere.
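As a toy illustration of that dictation-to-query idea (my sketch, not anything from that project; the stopword list is a made-up miniature one):

```python
# Sketch: reformat a dictated question into "search engine" language by
# dropping stopwords and keeping the remaining terms in their spoken order.
# The stopword list below is a toy subset; real engines use much larger ones.

STOPWORDS = {
    "a", "an", "the", "is", "are", "was", "were", "when", "what", "where",
    "who", "how", "do", "does", "did", "of", "to", "in", "you", "i",
}

def to_query(question: str) -> str:
    words = (w.strip("?!.,:;").lower() for w in question.split())
    return " ".join(w for w in words if w and w not in STOPWORDS)

print(to_query("When is Pizza Hut open?"))  # -> "pizza hut open"
```

A real reformatter would also have to reorder terms by priority rather than keep spoken order, which is presumably where the interesting research in that data would be.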
That experience suggests to me that what you are proposing (a pidgin-type language for AI) is a very real possibility.
One of the bits that Jimmy Fallon does on the Tonight Show is an interviewer who asks people a question but uses nonsense words that sound "right" in the question. The most recent example was changing the question
"Do you think you will see Avengers: Infinity War"
to things like "Do you think you will see Avengers something I swear." The sounds are similar enough, and people have already jumped ahead to the question with enough context, that they always answer as if he had said "Infinity War." How conversational systems will deal with that, I have no idea.
> One of the bits that Jimmy Fallon does on the Tonight Show is an interviewer who asks people a question but uses nonsense words that sound "right" in the question. The most recent example was changing the question
Yeah, that's exactly what I'm talking about. I suspect this works because the interviewees have a model of what they think Jimmy Fallon is going to ask, and when hearing a question, if they can't make out the words, or the words sound wrong, the simplest thing is to assume they heard him wrong and guess what he likely would have asked. To that question, the answer is fairly obvious.
We've all experienced those moments when this fails and we're caught off guard. For example, you're at work and someone you're working with says something that sounds wildly inappropriate, or out of character, or just doesn't follow the conversation at all, and you stop and say "Excuse me? What was that?" because you think it's more likely you heard them wrong than that your fuzzy match of their statement ("Hairy Ass?") was correct.
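You can caricature that "snap to the nearest expected question" behavior in a few lines. A minimal sketch, assuming a known list of expected prompts and an arbitrary 0.6 similarity threshold (both my inventions), using difflib's similarity ratio:

```python
# Sketch: resolve a garbled utterance against the questions the listener
# already expects, falling back to "excuse me?" when nothing matches well.
from difflib import SequenceMatcher

EXPECTED = [
    "do you think you will see avengers infinity war",
    "are you excited for the new star wars movie",
]

def resolve(heard: str, threshold: float = 0.6) -> str | None:
    heard = heard.lower()
    scored = [(SequenceMatcher(None, heard, q).ratio(), q) for q in EXPECTED]
    score, best = max(scored)
    # Below the threshold, give up and ask for clarification ("excuse me?").
    return best if score >= threshold else None

print(resolve("do you think you will see avengers something i swear"))
# -> "do you think you will see avengers infinity war"
```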
I've been using search engines since Lycos and before, so I reflexively use what you call "search engine" language. However, with the evolution of Google, I've started experimenting with more English-like queries, because Google is getting increasingly reluctant to search for what I tell it in a mechanistic way, often just throwing away words, and from what I've read it's increasingly trying to derive information from what used to be stopwords.
One thing that is very frustrating now is if there are two or three things out there related to a query, and Google prefers one of them, it's more difficult than ever to make small tweaks to the query to get the one you want. Of course, sometimes I don't know exactly what I want, but I do know I don't want the topic that Google is finding. The more "intelligent" it becomes, the harder it is to find things that diverge from its prejudices.
Then I stop myself. Isn’t it possible that he expects Alexa to recognize a prompt that’s close enough? A person certainly would. Perhaps Dad isn’t being obstreperous. Maybe he doesn’t know how to interact with a machine pretending to be human—especially after he missed the evolution of personal computing because of his disability.
...
Another problem: While voice-activated devices do understand natural language pretty well, the way most of us speak has been shaped by the syntax of digital searches. Dad’s speech hasn’t. He talks in an old-fashioned manner—one now dotted with the staccato march of time. “Alexa, tell us the origin and, uh, well, the significance, I suppose, of Christmas,” for example.
>One of the bits that Jimmy Fallon does on the Tonight Show is an interviewer who asks people a question but uses nonsense words that sound "right" in the question.
There was a similar thing going around the Japanese Web a few years ago, where you'd try to sneak soundalikes into conversation without anybody noticing; the example I remember vividly was replacing "irasshaimase!" ("welcome!") with "ikakusaimase!" ("it stinks of squid!").
Humans are really good at handling 'fuzzy' input: something that is supposed to have meaning, and would if everything were heard perfectly, but which has some of the content garbled or wrong.
At least, assuming they have the proper context. Humans mostly fill in the blanks based on what they /expect/ to hear.
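That expectation is doing real work; it's essentially the prior in a noisy-channel account of listening. In rough notation (my framing, not anything stated above), the listener settles on the message that maximizes

$$\hat{m} = \arg\max_{m} \; P(m)\,P(\mathrm{heard} \mid m)$$

where P(m) encodes what they expect to hear. When that prior is strong enough, it swamps badly garbled acoustic evidence, which is exactly the Fallon bit.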
Maybe someday we'll get to a point where a device we classify as a 'computer' hosts something we would classify as an 'intelligent life form', irrespective of origin: something that isn't an analog organic processor inside a 'bag of mostly salty water'. At that point, persistent awareness and context might give that life form a human-like sense of meaning in its exchanges with humans.