I really don't get this dependence problem. People want information to be 100% accurate?

This is the internet -- lies and false claims are the standard. It's up to individuals to verify that the information they're looking at is reliable. Google returns a list of relevant results for a keyword search -- every one of those results can be bogus and no one bats an eye. ChatGPT produces text output from a text input, and that text can be bogus. Why should ChatGPT be held to a higher standard than Google?



It's entirely possible to hit a search result where a real human had a code hallucination of their own. And to the topic: on Stack Overflow, that answer would have been deleted, downvoted, or corrected.

GPT will tell you it's right… well, it'll apologize, then offer another solution that could also be wrong, and now you're in a loop.
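
A minimal sketch of that failure mode, with stubs standing in for the model and the tests (ask_model and passes_tests are hypothetical placeholders, not any real API); the naive version has no termination condition:

    from itertools import cycle

    # Stub standing in for an LLM that keeps confidently offering
    # wrong answers (hypothetical; not a real API).
    _answers = cycle(["fix_b", "fix_c"])

    def ask_model(prompt):
        return next(_answers)

    def passes_tests(solution):
        # Stub test oracle; only "fix_a" would actually pass.
        return solution == "fix_a"

    def fix_until_correct(bug_report):
        # Naive repair loop: the model "apologizes" and re-suggests
        # wrong answers, so this never returns.
        solution = ask_model(bug_report)
        while not passes_tests(solution):
            solution = ask_model(bug_report + "\nThat was wrong, try again.")
        return solution

    def fix_with_budget(bug_report, max_tries=5):
        # Bounded version: gives up after max_tries instead of looping.
        for _ in range(max_tries):
            solution = ask_model(bug_report)
            if passes_tests(solution):
                return solution
            bug_report += "\nThat was wrong, try again."
        raise RuntimeError("no working solution within budget")

With this stub the naive loop never returns; the bounded version at least fails loudly after five tries instead of apologizing indefinitely.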


Only a computer can get stuck in a loop, or a person who is insane. It's not like if you ask a person to divide by zero they will fall into a coma or sit there computing.
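
For what it's worth, a computer asked to divide by zero doesn't sit there computing either; a minimal Python sketch of the fail-fast behavior:

    # Division by zero doesn't hang the interpreter; it raises
    # ZeroDivisionError immediately, which the caller can handle.
    try:
        result = 1 / 0
    except ZeroDivisionError:
        result = None  # handle the error rather than spinning
    print(result)  # prints: None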

I 100% believe that ChatGPT can output absolute garbage, but for the same reason you don't cite Wikipedia or Google, you won't be citing ChatGPT. Just like Wikipedia or Google, it's an excellent starting point for rapidly getting a working draft going.



