I've found that when cross-checked against my own expertise, LLMs have dubious "knowledge" at best. Trusting the output on anything you don't already know would just be Gell-Mann amnesia.
