lynx97 | 82 days ago | on: Yes-rs: A fast, memory-safe rewrite of the classic...
LLMs slurp up a lot of trolling and typical tech sarcasm through their training data. IMO that's one reason for "hallucinations".
alpaca128 | 82 days ago
That depends on how you define hallucinations. I'd say an AI repeating its training input is doing exactly what it's made for. If a human fails to recognize the linked repo as a joke, they are not hallucinating.
lynx97 | 82 days ago
That's why I put "hallucinations" in quotes.