A chatbot, voice or text, is a response generator, where responses are prompts, requested information, or state changes (e.g., place an order, cancel a service). It's no more interactive than other tools, and the further the interface is divorced from text, the more layers of complexity are piled on top of it.
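(A sketch of what I mean, in Python purely for concreteness; the names and the single hard-wired action are hypothetical. Every turn is just a function from an utterance and some state to one of those three response kinds.)

    # Hypothetical sketch: a chatbot reduced to its type. Every turn
    # yields a prompt, requested information, or a state change.
    from dataclasses import dataclass

    @dataclass
    class Prompt:        # "Press 1 for billing..."
        text: str

    @dataclass
    class Info:          # "Your balance is $12.40."
        text: str

    @dataclass
    class StateChange:   # place order, cancel service, ...
        action: str

    Response = Prompt | Info | StateChange

    def turn(utterance: str, state: dict) -> tuple[Response, dict]:
        # Voice or text is just a rendering layer over this function.
        if "cancel" in utterance.lower():
            return StateChange("cancel_service"), {**state, "active": False}
        return Prompt("Say 'cancel' to cancel, or ask a question."), state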
Talking at a CLI is no more a "conversation" than typing at one, despite the fact that interactive computer systems were once called "conversational". (Don't believe me? There are books on the subject, from 1968: https://www.worldcat.org/title/conversational-computers/oclc... https://www.worldcat.org/search?q=kw%3Aconversational+comput...)
I've been thinking about world models and communication a great deal. In this context, talking (or typing) with a chatbot is much like arguing with an idiot. If the chatbot's world model doesn't include the actions or questions you're trying to address, there's simply no way to get it to understand. Quite literally, its world doesn't include such things. Though unlike the proverbial wrestling with a pig, in this case you're the one far more likely to be annoyed.
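(Concretely, and again purely as a hypothetical Python sketch: most scripted bots do closed-world matching against a fixed intent table, and anything outside the table falls through to the same deflection forever, however you rephrase it.)

    # Hypothetical sketch of a closed-world chatbot: its "world" is
    # exactly the intent table, nothing more.
    INTENTS = [
        ({"track", "order"},  "Your order is on the way!"),
        ({"cancel", "order"}, "Okay, I've cancelled your order."),
        ({"store", "hours"},  "We're open 9 to 5, Monday to Saturday."),
    ]

    def respond(utterance: str) -> str:
        words = set(utterance.lower().split())
        for keywords, reply in INTENTS:
            if keywords <= words:   # all keywords present in the utterance
                return reply
        # Everything outside the table lands here, no matter how the
        # user rephrases it; the bot's world simply doesn't include it.
        return "Sorry, I didn't get that. Could you rephrase?"

    respond("I want to track my order")   # in its world: matched
    respond("my paper never arrived")     # outside its world: deflected, always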
Which gets to another major problem: such systems are constructed to serve the interests of their creators and clients, and the companies which create them are frequently not the same as those which deploy them (the clients).
Users' interests are at the bottom of the priority stack.
Some weeks ago I had the pleasure of calling the local (and pathetic) newspaper company after our Sunday paper failed to arrive. On three successive phone calls I was greeted with a several-minutes-long pitch for services I had absolutely no interest in, while trying to address a problem the company had created in the first place. To put it mildly, I was not pleased.
And my standing recommendation to cancel the subscription (replacing it with a much superior national paper) seems to be winning out.
Chatbots which treat interactions as captive sales opportunities will, I suspect, not greatly enhance customer affinity for the brands deploying such strategies.