
Doesn't it bother you that it sounds just as confident when it's making stuff up as when it's accurately summarizing? Do you just think you can tell when it's wrong?


I can easily tell whether the code that ChatGPT writes for me is wrong by using an IDE. But it’s always closer to meeting my exact requirements than a Google search.
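A minimal sketch of that verify-it-yourself workflow, in Python. The slugify function and its tests are hypothetical, invented here to illustrate the point, not taken from the thread: paste the generated code into an editor and run a couple of quick assertions against it.

    # Hypothetical ChatGPT-generated helper: turn a title into a URL slug.
    def slugify(title: str) -> str:
        return "-".join(title.lower().split())

    # A couple of assertions catch obvious mistakes immediately,
    # much as an IDE's checks and a quick run would.
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim  Me  ") == "trim-me"
    print("slugify passed basic checks")

If the generated code is wrong, the assertions fail on the spot, which is the "try and fail" loop the comments below describe.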


Sometimes I can tell it's wrong. But it's faster to try and fail than to search for an answer on Google.



