ChatGPT hallucinates more the further removed it is from its training data. I've been asking it about Laravel, and it knows nothing about the Laravel 9 or 10 changes, but if I feed it an entire article or document it hallucinates a lot less because the information is fresh in its context.
Kinda like how we recall things better right after an event than months later.
It knows a ton from its training, but that all came from the web, so always question it. If we can add metadata and other context to strengthen the LLM's understanding, it shouldn't hallucinate much at all.
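For what it's worth, here's a rough sketch of what I mean by "feeding it a document": just paste the fresh article (say, a Laravel upgrade guide) into the prompt before asking. This uses the OpenAI Python client; the model name, file path, and the "answer only from the document" instruction are my own placeholders, not anything official.

```python
# Rough sketch: stuff a fresh Laravel article into the prompt so the model
# answers from that text instead of guessing from stale training data.
# Assumes the openai package is installed and OPENAI_API_KEY is set;
# the model name and file path are placeholders.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

# e.g. the Laravel 10 upgrade guide saved as a local text file (hypothetical path)
article = Path("laravel-10-upgrade-guide.txt").read_text()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Answer only from the document below. "
                "If the document doesn't cover it, say so instead of guessing.\n\n"
                + article
            ),
        },
        {
            "role": "user",
            "content": "What changed in routing between Laravel 9 and Laravel 10?",
        },
    ],
)

print(response.choices[0].message.content)
```

Same idea as pasting the article into the chat box by hand, just scripted.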