I'm not so sure. Maybe I'm naive, but I could see imitation and reinforcement learning actually becoming the little bit of magic that's missing to move the area forward.
From what we know, those techniques play an important role in how humans themselves go from baby gibberish to actual language fluency - and from what I understand there is already a lot of ongoing research into how reinforcement learning algorithms can be used to learn walking cycles and similar things.
So I'd say: if you find a way to read the non-verbal and semi-verbal cues humans use to signal understanding or misunderstanding among themselves, and use those as penalties and rewards for computers, you might be on the way to learning some kind of language that humans and computers can communicate in.
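To make that concrete, here's a minimal sketch of what I mean - a toy bandit-style learner that treats a handful of hypothetical human reactions as scalar rewards. The cue names, reward values and interpretation labels are all made up for illustration; the point is only the shape of the loop, not a real implementation.

```python
import random

# Hypothetical mapping from observed human cues to scalar rewards.
# Cue names and reward values are assumptions for illustration only.
CUE_REWARDS = {
    "nod": 1.0,             # clear sign the machine was understood
    "continues_task": 0.5,
    "repeats_phrase": -0.5,
    "frown": -1.0,          # clear sign of misunderstanding
}

class PhrasePolicy:
    """Tiny bandit-style learner: prefer interpretations that earned reward."""
    def __init__(self, interpretations, learning_rate=0.1):
        self.values = {i: 0.0 for i in interpretations}
        self.lr = learning_rate

    def choose(self, epsilon=0.2):
        # Epsilon-greedy: mostly exploit the best guess, occasionally explore.
        if random.random() < epsilon:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def update(self, interpretation, cue):
        # Turn the observed cue into a reward and nudge the value estimate.
        reward = CUE_REWARDS.get(cue, 0.0)
        self.values[interpretation] += self.lr * (reward - self.values[interpretation])

# The machine guesses what "put it there" means, watches the human's
# reaction, and turns that reaction into a learning signal.
policy = PhrasePolicy(["place_on_table", "place_on_shelf", "hand_to_user"])
guess = policy.choose()
observed_cue = "frown"   # pretend a camera or microphone detected this reaction
policy.update(guess, observed_cue)
```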
The same goes in the other direction. There is a sector of software where human users not only routinely master highly intricate interfaces, they even do it of their own motivation and pride themselves on achieving fluency: computer games. The secret here is a blazingly fast feedback loop - which hands out penalty and reward cues much like reinforcement learning does - and a carefully tuned learning curve that starts with simple, easy-to-learn concepts and gradually becomes more complex.
I would argue that if you combine those two techniques - using penalties and rewards both to train the computer and to signal back to the user how well the computer understood them - you might be able to move to a middle-ground representation without it feeling like much effort on the human side.
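Building on the PhrasePolicy sketch above, the two-way version might look something like this - the computer echoes back what it understood (the game-style instant feedback), and the human's reaction feeds the same reward channel. The y/n prompt is just a stand-in I invented for whatever channel would actually carry the human's reaction.

```python
def run_session(policy, phrases, rounds=5):
    for _ in range(rounds):
        phrase = random.choice(phrases)
        guess = policy.choose()
        # Immediate, game-style feedback: tell the human what was understood.
        print(f'You said "{phrase}". I think you meant: {guess}')
        answer = input("Did I get that right? [y/n] ").strip().lower()
        # Turn the human's reaction into the same reward signal as above.
        policy.update(guess, "nod" if answer == "y" else "frown")

run_session(policy, ["put it there", "a bit to the left", "stop"])
```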