
> I might starting calling this "the original sin" with LLMs... not validating the output.

I would rephrase it - the original sin of LLMs is not "understanding" what they output. By "understanding" I mean the "why" of the output - starting from the original problem and reasoning through to the solution, i.e. the causal process behind it. What LLMs do is pattern match the most "plausible" output to the input. The output is not born out of a process of causation - it is born out of pattern matching.

Humans can find meaning in the output of LLMs, but machines choke on it - which is why LLM code looks fine at first glance until someone tries to run it. Another way to put it: LLMs sound persuasive to humans, but at the core they are rote students who don't understand what they are saying.



I agree that it's important to understand they are just predicting what you should expect to see, and that is a more fundamental truth about an LLM itself. But I'm thinking more in terms of using LLMs. That distinction doesn't really matter if the tests pass.
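
For what it's worth, here's a minimal sketch of what I mean by "the tests pass" (plain Python; the hard-coded candidate string stands in for whatever the model actually returns, and the validate helper and its test cases are just illustrative, not any particular library's API):

    # Hypothetical: candidate source as an LLM might return it.
    # In a real pipeline this string would come from a model call.
    candidate = """
    def add(a, b):
        return a + b
    """

    import textwrap

    def validate(source: str) -> bool:
        """Run the candidate code and check it against known cases."""
        namespace = {}
        try:
            exec(textwrap.dedent(source), namespace)   # run the generated code
            fn = namespace["add"]
            return fn(2, 3) == 5 and fn(-1, 1) == 0    # the actual "tests"
        except Exception:
            return False                               # won't even run -> reject

    print("accept" if validate(candidate) else "reject")

Whether the model "understood" anything is invisible to this loop - it only sees whether the output survives being executed and checked.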




