> It is absolutely true: AI cannot think, reason, or comprehend anything it has not seen before.
The amazing thing about LLMs is that we still don’t know how (or why) they work!
Yes, they’re magic mirrors that regurgitate the corpus of human knowledge.
But as it turns out, most human knowledge is already regurgitation (see: the patent system).
Novelty is rare, and LLMs have an incredible ability to pattern match and see issues in “novel” code, because they’ve seen those same patterns elsewhere.
Do they hallucinate? Absolutely.
Does that mean they’re useless? Or does that mean some bespoke code doesn’t provide the most obvious interface?
Having dealt with humans, the confidence problem isn’t unique to LLMs…
What do you object to about it? I don't see an issue with referring to "the corpus of human knowledge". "Corpus" pretty much just means "the collection of".
I mean, as far as a corpus goes, I suppose all text on the internet gets pretty close if most books are included, but even then you’re mostly looking at English-language books that have been OCR’d.
But I look down my nose at the conception that human knowledge is packageable as plain text; our lives, experience, and intelligence are so much more than the cognitive strings we assemble in our heads in order to reason. It’s like in that movie Contact, when Jodie Foster muses that they should have sent a poet. Our empathy, curiosity, and desires are not encoded in UTF-8. You might say these are realms other than knowledge, but woe to the engineer who thinks they’re building anything superhuman while leaving these dimensions out; they’re left with a cold super-rationalist with no impulse to create of its own.