
LLMs slurp up a lot of trolling and typical tech sarcasm through their training data. IMO that's one reason for "hallucinations".


That depends on how you define hallucinations. I'd say an AI repeating its training input is doing exactly what it's made for. If a human fails to recognize the linked repo as a joke, they aren't hallucinating either.


That's why I put "hallucinations" in quotes.



