I'm not sure about your experience, but these kinds of channels mostly seem to have the problem of people being too busy to reply. When they do, though, there is often an interesting interaction, and this is how users often become contributors to the project over time.
Sure, if you want to make sure you don't get any more contributors, you can try to replace that with a chatbot that will always reply immediately but might just be wrong like 40% of the time, is not actually working on the project, and will certainly not help in building social interactions between the project and its users.
I participated in such channels on the assisting side for multiple years, and I've kept in touch with some of the folks I knew from there who are still doing it. Also note that the projects I helped out with were more end-user focused.
Most interactions start with users being vague. That alone can set off some helpers, who start getting vaguely snarky, but usually this is resolved with prepared bot commands... which these users sometimes just won't read.
Then the misunderstandings start. Or the misplaced expectations. Or the lies. Or maybe the given helper has been having a bad day, but due to their long-time presence in the project, they won't be moderated out properly. And so on. It's just not a good experience.
Ever since I left, I've been getting screencaps of various kinds of conversations. In some cases, the user was being objectively insufferable - I don't think it's fair to expect a human to put up with that. Other times, the helper was being unnecessarily mean - they did not appreciate my feedback on that. Neither happens with LLMs. People don't grow resentful of the never-ending horde of what feels like increasingly clueless users, and innocent folk don't get randomly chewed out for failing to live up to the optimality expectations of those who tend to thousands of similar cases every week.
I think the solution is neither AI nor human in this case.
While direct human support is invaluable in many cases, I find it really hard to believe that our industry has completely forgotten the value of public support forums. Here are some pure advantages over Discord/Slack/<insert private chat platform of your liking>:
- Much, much better search functionality out of the box, because you can leverage existing search engines.
- From the above it follows that high-value contributors do not need to spend their valuable time answering the same basic questions over and over.
- Your high-value contributors don't have to be employees of the company, as many enthusiastic power users often participate and contribute in such places.
- Conversations are _much_ easier to follow without having to resort to hidden threads and forum posts on Discord that no one will ever read or search.
- Over time you build a living library of supporting documentation, instead of useful information being strewn across many tiny conversations over months.
- No user expectation of being helped immediately. A forum sets the expectation that this is an async method of communication, so you're less likely to see entitled, aggravating behavior (though even on forums you won't see many users asking good questions with the relevant information attached).
I think you're forgetting how e.g. StackOverflow, a Q&A forum, exhibited basically the exact same issues I just ran through. In general, the history of both the unnecessary hostility of helpers and the near-insulting cluelessness and laziness of users on public forums is a long and extensive one. It's not a format issue, I don't think.
I'm surprised you read my post and thought I was saying that more public forums and fewer private chats would solve the so-called "human issue". My argument is not about making customer support more pleasant, or users less hostile. It's about making information more accessible so people can help themselves.
If we make information more accessible, support will shrink in volume. Currently there's a tendency for domain experts to hoard all the relevant information in their heads and dole it out at their discretion in various chat channels - channels whose existence is often not widely known to begin with (not to mention gated behind creating accounts on apps the users may or may not care about or want).
So my point is: instead of trying to automate a decidedly bad solution to make it scalable, and treating that as a selling point of AI, why not make the information more accessible in the first place?
The number of messages in the #help channels I participated in was limited not by the number of participants on either side, but by the speed of the chat. If it moved too quickly, people would hold off from posting.
This meant you had a fairly low and consistent ceiling on messages. What you'd also observe over the years was a gradual decline in question quality - according to every helper, that is. How come?
Admittedly we'll never really know, so this is speculation on my part, but I think it was precisely because of the better availability of information. During those years, we cultivated other resources and implemented features with the specific goal of improving UX. It worked. So the only people still "needing" assistance were those who failed to navigate even this better UX. Hence worse questions, yet a never-ending stream of them.
Another issue with this idea is that navigating the sheer volume of information can become challenging. AWS has pretty decent documentation, for example, but if you don't already know your way around a given service's docs, it's a chore to find anything. Keyword search won't be much help either, because it's a lot of prose and not a lot of structure. Compare this to the autogenerated docs of the AWS CLI, and you'll find a stark difference.
Finding things, especially among a lot of faff, is tiring. Asking a natural language question is trivial. The rest is on people to believe that AI isn't the literal devil, whatever blog posts like the OP would have one believe.
Do we have examples of LLMs being used successfully in these scenarios? I'm skeptical that the insufferable users will actually be satisfied and helped by an LLM, unless the LLM is presented as a human, which seems unethical. It also hinges on the LLM being able to get the user to provide the required information accurately, without lying or simply getting frustrated, angry, and unwilling to cooperate.
I’m not sure there is a solution to help people who don’t come to the table willing to put in the effort required to get help. This seems like a deep problem present in all kinds of ways in society, and I don’t think smarter chatbots are the solution. I’d love to be wrong.
> Do we have examples of LLMs being used successfully in these scenarios?
If such a dataset exists, I don't have it. The most I have is my own anecdotal experience of not having to be afraid of asking LLMs silly questions, and of learning things I could then cross-validate as correct without tiring anyone.