
I'm a bit torn. My first thought was "If the current state-of-the-art LLMs made the mistakes, it's unlikely an LLM would be able to correct them." But I'm not sure that's true if the support LLM (chat bot) is given very specific instructions so as to limit the possible answers. Still, I think that's gonna break down pretty quickly for other reasons.

Maybe the chat bot can recognize a botched request and even suggest a fix, but what then? It certainly won't be able to convert the user's next request into a working application of even moderate complexity. And do you really want a chat bot to be the first interaction your customers have?

I think this is why we haven't seen these things take off outside of very large organizations that are willing to save money in exchange for making customers ask for a human when they need one.



> I'm a bit torn. My first thought was "If the current state-of-the-art LLMs made the mistakes, it's unlikely an LLM would be able to correct them."

But, I mean, that doesn't make sense even for humans, right? 99% of the errors I make, I can easily correct myself because they're trivial. But you still have to go through the process of fixing them; it doesn't happen on its own. Like, for instance, just now I typoed "stil" and had to backspace a few letters to fix it. But LLMs cannot backspace (!), so you have to put them into a context where they feel justified in going back and re-typing their previous code.

That's why it's a bit silly to have an LLM write code, try to run it, see an error, and immediately run to the nearest human. At least let the AI do a few rounds of "Can you spot any problems or think of improvements?" first!
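For what it's worth, that kind of review loop is easy to wire up yourself. A minimal sketch in Python, assuming a hypothetical call_llm() helper that wraps whatever chat API you're using (the helper name, the review prompt, and the DONE convention are mine, not from any particular library):

    # Minimal sketch of a self-review loop: ask the model to critique and
    # revise its own answer a few times before a human ever looks at it.
    # call_llm() is a hypothetical stand-in for whatever chat API you use.
    def call_llm(messages: list[dict]) -> str:
        raise NotImplementedError("wrap your chat API of choice here")

    def write_with_self_review(task: str, rounds: int = 3) -> str:
        messages = [{"role": "user", "content": task}]
        answer = call_llm(messages)
        for _ in range(rounds):
            messages.append({"role": "assistant", "content": answer})
            messages.append({"role": "user", "content":
                "Can you spot any problems or think of improvements? "
                "If so, rewrite the code in full; otherwise reply DONE."})
            reply = call_llm(messages)
            if reply.strip() == "DONE":
                break
            answer = reply
        return answer

Three rounds is arbitrary; the point is just that the model gets a chance to backspace, in effect, before anyone escalates to a human.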


The first time I asked ChatGPT to write a function, run it, and repeat until the function met the given test cases, all in one shot, was pretty cool.
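That write/run/retry loop is also easy to reproduce outside ChatGPT's own sandbox. A rough sketch under a few assumptions: the same hypothetical call_llm() wrapper as above, an assumed function name ("solution") in the generated code, and the obvious caveat that exec() on model output is something you'd only do in a sandbox:

    # Sketch of the write/run/retry loop: generate a function, run the given
    # test cases against it, feed failures back to the model, and repeat
    # until everything passes or we run out of attempts.
    def call_llm(messages: list[dict]) -> str:
        raise NotImplementedError("wrap your chat API of choice here")

    def generate_until_passing(spec: str, tests: list[tuple], max_tries: int = 5) -> str:
        prompt = spec
        for _ in range(max_tries):
            code = call_llm([{"role": "user", "content": prompt}])
            namespace, failures = {}, []
            try:
                exec(code, namespace)            # define the generated function
                func = namespace["solution"]     # assumed name of that function
                for args, expected in tests:
                    got = func(*args)
                    if got != expected:
                        failures.append((args, expected, got))
            except Exception as exc:
                failures.append(("error", repr(exc)))
            if not failures:
                return code                      # every test case passed
            prompt = (spec + "\n\nYour last attempt failed these cases "
                      "(args, expected, got): " + repr(failures)
                      + "\nPlease fix the function and return it in full.")
        raise RuntimeError("no passing solution within max_tries")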




