I don't think that's true. It helps to know a few obscure facts about LLMs. For example, they carry a usable internal signal about their own level of uncertainty, even if their answers don't always reflect it. Their eagerness to please appears to be a result of subtle training problems that are correctable in principle.
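You can see a crude version of that signal directly, since the API exposes per-token log probabilities. A minimal sketch, assuming the openai>=1.0 Python SDK and that logprobs is available for your model; the question text is just an illustration, and calibration is imperfect, so treat low logprobs as a rough proxy rather than a guarantee:

    from openai import OpenAI  # assumes the openai>=1.0 Python SDK

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Ask for per-token log probabilities. Low values on the tokens
    # that carry the factual content are a rough uncertainty proxy.
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": "What year was the Rust language announced?"}],
        logprobs=True,
        top_logprobs=3,
    )

    for tok in resp.choices[0].logprobs.content:
        print(f"{tok.token!r}: logprob={tok.logprob:.2f}")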
I've noticed that GPT-4 is much less likely to hallucinate than GPT-3, and it's still early days. I suspect OpenAI is still tweaking the RLHF procedure to make their models less cocksure, at least for the next generation.
The other thing is that it's quite predictable when an LLM will hallucinate. If you directly command it to answer a question it doesn't know or can't do, the strength of its RLHF makes it prefer to BS rather than refuse the command. That's a problem a lot of humans have too, and the same obvious techniques work to resolve it: don't ask for a list of five things if you aren't 100% certain there are actually five answers; let it decide how many to return. Don't demand an answer to X; ask it if it knows how to answer X first, and so on. A sketch of both patterns is below.
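To make that concrete, here's a minimal sketch, again assuming the openai>=1.0 Python SDK; the EBNF question is just a stand-in for any query where you don't know the true answer count in advance:

    from openai import OpenAI  # assumes the openai>=1.0 Python SDK

    client = OpenAI()

    def ask(prompt: str) -> str:
        # Plain single-turn chat completion against GPT-4.
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # Risky: presupposes exactly five answers exist, so the model is
    # pushed to invent entries to fill the quota rather than push back.
    risky = ask("List five Python libraries for parsing EBNF grammars.")

    # Safer: let the model choose the count and give it an explicit out.
    safer = ask(
        "Which Python libraries, if any, can parse EBNF grammars? "
        "List only the ones you are confident exist, and say so if "
        "you are unsure."
    )

    # Safer still: ask whether it can answer before demanding the answer.
    check = ask(
        "Do you know of Python libraries for parsing EBNF grammars? "
        "Answer yes or no first, then briefly explain."
    )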
And finally, stick to questions that other people have likely already solved and discussed on the internet.
I use GPT-4 every day and rarely have problems with hallucinations as a result. It's very useful.