From a user experience perspective, it's interesting that it may be worse to get a set of links, have to pick from them, and spend cognitive effort discerning which sources are reliable, than to simply ask ChatGPT and get an answer. But that step is precisely where verification happens.
I didn't suggest that people are good at making that effort. They aren't. But the irony is that the act of verification lies precisely in that difficulty. The fact that discernment via search is so difficult and inefficient suggests either that it was made that way on purpose, or that Google's UI simply isn't designed for it, or both.
At least with traditional search results, there are some indicia of trustworthiness: does the author or outlet have a history of deception, are their assertions well-sourced, and so on. With a chatbot's answer, it's effectively a black box.
And the web is chock full of incorrect material, both unintentionally wrong and deliberately deceptive.