Hacker News

> things that on the surface seem very plausible but are complete fabrications

LLMs are language models; it's crazy that people expect them to be correct about anything beyond surface-level language.



Yeah, I was probably being a bit too harsh in my original comment. I do find them useful; you just have to be wary of the output.



