
On the other hand, the same is also true of Google results.

The web is chock full of incorrect stuff, both unintentional and deliberately deceptive.




From a user-experience perspective, it's interesting that getting a set of links, having to pick from them, and expending cognitive effort to discern which sources are reliable may be a worse experience than simply asking ChatGPT and getting an answer. But that step is precisely where verification happens.


I think you are vastly overestimating the amount of cognitive effort most people apply to Google search results.

It's more like "First one on the list? Well, that one's right -- unless it disagrees with my personal biases, then maybe I'll look for another one."


I didn't suggest that people are good at making that effort. They aren't. But the irony is that it's precisely in that difficulty that the act of verification lies. The fact that it's so inefficient a method of discernment suggests either that the difficulty is deliberate, or that Google's UI simply isn't designed for that purpose, or both.


Internet content is very rarely verifiable. Very few websites cite any sources aside from Wikipedia.


At least with traditional search results, there are some indicia of their trustworthiness: does the author/outlet have a history of deception, are their assertions well-sourced, etc. With a chatbot result, it's effectively a black box.



