
That wouldn't happen, though, because the program would see that the user didn't in fact change their policy.



Didn't they? Or was it a system error? Perhaps the LLM has no record of it because the user's sister's boyfriend called in with a different phone number and misstated the policy number. But the change was definitely supposed to go through. In fact it's an absurd failure on the insurance company's part for not processing that change correctly. Let me speak to the manager! I'm calling the Department of Insurance and reporting you!

A million variations of this play out every day in call centers. Even with fastidious notes and records, people are able to abuse process and benefit-of-the-doubt to create situations where they benefit. LLMs will definitely catch many situations humans will miss. But currently they are quite gullible and people-pleasing, which is great for their role of "AI Assistant" but less good for the role of "engaged in a never-ending escalating adversarial game with highly motivated fraudsters".


And at the other end of the scale, if you calibrate it to be too suspicious you end up with "You have been a bad customer, I have been a good Bing" style arguments over lots of legitimate claims, and a class action suit. You can absolutely have a bot answer general questions, especially recurring ones, but you're going to want it to escalate to humans pretty early on. Even human agents often escalate to other human agents.
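As a rough illustration of that "escalate early" policy, here's a minimal sketch in Python (every name, topic list, and threshold here is hypothetical, not from any particular product): the bot handles general and recurring questions itself, but hands off as soon as the request touches disputed records or account changes, or once it's clearly going in circles.

    from dataclasses import dataclass, field

    # Topics the bot should never adjudicate on its own (hypothetical list).
    SENSITIVE_TOPICS = {"policy_change", "billing_dispute", "fraud_claim", "legal_threat"}

    @dataclass
    class Turn:
        topic: str        # classified intent of the user's message
        resolved: bool    # did the bot's answer actually address it?
        frustrated: bool  # simple sentiment flag

    @dataclass
    class Conversation:
        turns: list[Turn] = field(default_factory=list)

    def should_escalate(convo: Conversation, max_unresolved: int = 2) -> bool:
        """Hand off to a human agent early rather than arguing with the customer."""
        unresolved = 0
        for turn in convo.turns:
            if turn.topic in SENSITIVE_TOPICS:
                return True                       # never rule on disputed records itself
            if not turn.resolved:
                unresolved += 1
                if unresolved >= max_unresolved:  # the bot is going in circles
                    return True
            if turn.frustrated and not turn.resolved:
                return True                       # don't let a bad interaction get worse
        return False

    # Example: a customer disputing a policy change goes straight to a human.
    convo = Conversation([Turn(topic="policy_change", resolved=False, frustrated=True)])
    print(should_escalate(convo))  # True

The particular thresholds don't matter; the point is that the default behaviour is hand-off, not argument.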



