That's one of the things I like about the current implementation of Gemini - they seem to really be leaning in on grounding, and there are ref links for pretty much all of the stuff I'd normally want to fact-check from a chatbot.
It doesn’t take much effort to verify and cross-check references in most scenarios. But I have no idea how they will fight against LLM-optimized SEO hell. Like, I could see products flat-out lying in their ads, hoping for LLMs to pick that up and suggest it to users. Source of truth will matter even more.
How many times has that happened? Those cases make the headlines, but they’re so rare that, in my opinion, they can be disregarded. Nothing is perfect; just assess your risks and your tolerance for error. That’s subjective, so each person acts accordingly.
It’s really not, though. It goes back to the whole “at that point you shouldn’t drive/fly/cross the road because you might die” argument. Literally millions (billions?) of people commute every day using navigation apps. There will be accidents, but so far they’ve been statistically insignificant, so people keep using them. Everyone has their own risk tolerance.
I mean, that's one of the value propositions these folks have to weigh in their product offerings. At some point you either have a reputation for delivering accurate responses or you don't, and that will dictate who uses your product and how much they're willing to pay for it.