
Interesting you say “confidently hallucinate things” - a “hallucination” isn’t any different from any other LLM output except that it happens to be wrong. “Hallucination” is anthropomorphic language; the model is just doing what LLMs do and generating plausible-sounding text.


I'm using the phrase everyone else is using to describe a common phenomenon that the discourse seems to have converged on. I take your point that until now we have used "hallucinate" to describe something humans do, that is, "perceive something that isn't there and believe it is", but seeing as the only way we know someone is hallucinating is if they say something strange to us, I think we could also say there is a sense in which "hallucinate" means "talk about something that isn't there as if it is". LLMs producing text in the style of a conversation is kind of like talking. So we can have a non-conscious, non-human system do something like talking, and if it is talking, it can talk in a way that could be called hallucinating.


Although some people insist (as you do) that "hallucination" is unreasonably anthropomorphic language, it is an extremely common term of art in the field. eg https://dl.acm.org/doi/abs/10.1145/3571730

Secondly, to be anthropomorphic, hallucination would have to be exclusively human, and why should hallucination be a purely human phenomenon? Consider this Stanford study on lab mice: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6711485/ . The stated purpose of the study is to understand hallucination, and the scientists involved informally describe it as involving hallucinating mice, e.g. here: https://www.sciencedaily.com/releases/2019/07/190718145358.h... . It does involve inducing mice to see things which are not there and behave accordingly. Most people would call that a hallucination.


Yes, agreed. I'm sure it's because LLM developers want to ascribe human-like intelligence to their platforms.

Even "AI" I think is a misnomer. It's not intelligence as most people would conceive it, i.e. something akin to human intelligence. It's Simulated Intelligence, SI.


> hallucination ... it’s just doing what LLMs do

So using that term shows the need to implement "processing of thought", as decently developed human intellects do.


GPT4 will often tell you when it isn't confident or doesn't know something.



