We can hope to actually rely on such models only once they start learning not just in the language domain, but also in the epistemic domain: true vs. false, known vs. unknown, precise vs. vague, agreement vs. contradiction vs. unrelated, things like that.
Achieving that is going to be a serious technical, and also philosophical, challenge for humans.
Today's LLMs are a literary device. They say what sounds plausible within the universe of texts they were fed. Technically, what they say isn't even wrong, because they have no notion of truth, nor any notion of a world beyond the words. Their output should be judged accordingly.