Do we have examples of LLMs being used successfully in these scenarios? I’m skeptical that the insufferable users will actually be satisfied by, and actually helped by, an LLM, unless the LLM is presented as a human, which seems unethical. It also hinges on the LLM being able to get the user to provide the required information accurately, without the user lying or simply getting frustrated, angry, and unwilling to cooperate.
I’m not sure there is a solution to help people who don’t come to the table willing to put in the effort required to get help. This seems like a deep problem present in all kinds of ways in society, and I don’t think smarter chatbots are the solution. I’d love to be wrong.
> Do we have examples of LLMs being used successfully in these scenarios?
If such a dataset exists, I don't have it. The most I have is anecdotal experience: not having to be afraid of asking LLMs silly questions, and learning things I could then cross-validate as correct, without tiring anyone out.