For example, they can confabulate "facts", or make logical or coherence mistakes.
Current LLMs are encouraged to be creative and effectively "make up facts".
That's what created the first wow factor. The models are able to write Star Trek fan fiction in the style of Shakespeare.
They are able to take a poorly written email and make it "sound" better (for some definition of better, e.g. more formal, less formal, etc.).
But then human psychology kicked in: as soon as you have something that can talk like a human, and marketing folks label it "AI", you start expecting it to be useful for other tasks as well, some of which require factual knowledge.
Now, it's in theory possible to have a system that you can converse with which can _also_ search and verify knowledge. My point is that this is not where LLMs start from: you have to add stuff on top of them (and people are actively researching that).
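To make that last point concrete, here is a minimal sketch of the "add stuff on top" idea, usually called retrieval-augmented generation: retrieve sources first, then hand them to the model with instructions to stay within them. The tiny corpus, the keyword matching, and the prompt wording below are all illustrative assumptions, not any particular product's API.

```python
# Hedged sketch: grounding a conversational model in retrieved text.
# Everything here (corpus, retrieval, prompt) is a toy stand-in for
# a real search index and a real LLM call.

CORPUS = {
    "warp drive": "Warp drive is a fictional faster-than-light technology in Star Trek.",
    "william shakespeare": "William Shakespeare (1564-1616) was an English playwright and poet.",
}

def retrieve(question: str) -> list[str]:
    # Toy keyword match: return every entry whose key shares a word
    # with the question. A real system would use a search engine or
    # a vector index here.
    words = set(question.lower().split())
    return [text for key, text in CORPUS.items() if words & set(key.split())]

def build_prompt(question: str) -> str:
    # Prepend retrieved passages and instruct the model to answer
    # only from them, so missing knowledge can be refused instead
    # of confabulated.
    sources = "\n".join(retrieve(question)) or "(no sources found)"
    return (
        "Answer using ONLY the sources below. "
        "If they are not enough, say so.\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {question}\nAnswer:"
    )

print(build_prompt("Who was William Shakespeare?"))
```

The resulting prompt would then go to whatever LLM you're using; the point is that the grounding lives in the retrieval layer you bolted on, not in the model's weights.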