I wonder whether, in any of those legal cases, the users had web search turned on. We just don't know -- but in my experience, a thinking LLM with web search on has never just hallucinated nonexistent information.
I'm sorry to be so blunt but this is a massive cope and deeply annoying to see this every. fucking. time. the limitations of LLMs are brought up. There is every single time someone saying yeah you didn't use web search / deep thinking / gpt-5-plus-pro-turbo-420B.
It's absurd. You can trivially spend 2 minutes on chatgpt and it will hallucinate some factually incorrect answer. Why why why always this cope.
Well I agree with you that LLMs really like to answer with stuff that is not grounded in reality, but I also agree with the parent that grounding them in something else absolutely helps. I let the LLM invent garbage however it likes, but then tell it to only ever answer citing valid, existing URLs. Suddenly it generates claims that something doesn't exist or that it truly doesn't know.
For me this really results in zero hallucination (but then the content is also mostly not generated by the LLM).
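The check described above (reject any answer whose cited URLs don't actually resolve) can be sketched in a few lines of Python. This is my own minimal illustration, not any tool mentioned in this thread; the function names (`extract_urls`, `filter_grounded`) are made up, and the liveness check is injectable so it can be stubbed without hitting the network.

```python
import re
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

URL_RE = re.compile(r"https?://[^\s)\]>\"']+")

def extract_urls(text):
    """Pull candidate URLs out of an LLM answer, stripping trailing punctuation."""
    return [u.rstrip(".,;:!?") for u in URL_RE.findall(text)]

def url_exists(url, timeout=5):
    """Best-effort liveness check: HEAD request, 2xx/3xx counts as 'exists'."""
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (HTTPError, URLError, ValueError):
        return False

def filter_grounded(answer, exists=url_exists):
    """Split an answer's citations into (live, dead) URLs.

    `exists` is a callable so the network check can be swapped for a stub."""
    urls = extract_urls(answer)
    live = [u for u in urls if exists(u)]
    dead = [u for u in urls if u not in live]
    return live, dead
```

A wrapper around the LLM could then refuse (or flag) any answer where `dead` is non-empty, which is roughly the "generating web search" step the sibling comment observes.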
Well I don't know what to say, except that this is obviously, trivially, not true. The LLM will plain make up links that don't exist, or at least "summarise" an existing link by just making stuff up that is tangentially (but plausibly) related to the link. It's impossible to have used LLMs for this purpose for more than a quarter of an hour and not have seen this.
I've never had the case that a URL did not exist. For me it shows something like "generating web search", so I guess it tries to fetch the URL before suggesting it. LLMs do like to give tangentially related links, but that is typically paired with a sentence saying the link I actually asked for does not exist.
> It's impossible to have used LLMs for this purpose for more than a quarter of an hour and not have seen this.
You may be generalizing too much from your experience.
Maybe you're seeing this argument come up all the time (and maybe everyone else in this thread is disagreeing with you) because your experiences actually don't reflect everyone else's. The only other alternative is that we're all morons and you're the only smart person here.
Also: if it's so trivially reproducible, then can you provide a ChatGPT transcript link of this happening?
> I'm sorry to be so blunt but this is a massive cope
Coping for what? I don't work for an AI company. If AI vanished tomorrow I wouldn't particularly care.