Hacker News

Have you heard of "glitch tokens"? To my knowledge, these have all been hand-patched. There are probably more lurking somewhere in the training data.

Also https://news.ycombinator.com/item?id=37054241 has quite a few examples of GPT-4 being broken.
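For anyone curious how glitch tokens get found in the first place: one heuristic people have used is that under-trained tokens tend to sit unusually close to the centroid of the embedding matrix, since their embeddings were barely updated during training. Here's a minimal sketch of that idea with a synthetic embedding matrix (all data and the planted token index are made up for illustration; a real hunt would use an actual model's embeddings):

```python
# Sketch: flag candidate "glitch tokens" as those whose embedding lies
# closest to the centroid of the embedding matrix. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 1000, 64
emb = rng.normal(size=(vocab, dim))   # stand-in for a model's token embeddings
emb[42] = emb.mean(axis=0) + 1e-3     # plant one under-trained-looking token

centroid = emb.mean(axis=0)
dist = np.linalg.norm(emb - centroid, axis=1)
suspects = np.argsort(dist)[:5]       # tokens nearest the centroid
print(suspects)                       # the planted token 42 ranks first
```

Candidates found this way would still need behavioral testing (e.g. asking the model to repeat the token) to confirm they actually glitch.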



Plenty of humans glitch out on random words (and concepts) we just can't get right.

Famously, the way R and L sound the same to many East Asian speakers (and, equivalently but less famously, the way the Chinese words for "four", "stone", and "lion" sound almost indistinguishable to native English speakers).

But there's also plenty of people who act like they think "Democrat" is a synonym for "Communist", or that "Wicca" and "atheism" are both synonyms for "devil worship".

What makes the AI different here is that we can perfectly inspect the inside of their (frozen and unchanging) minds, which we can't do with humans (even if we literally freeze them, we don't know how).


> Plenty of humans glitch out on random words (and concepts) we just can't get right.

We don't lose our marbles the way GPT does when it encounters those words. It's as if it had read the Necronomicon or something and gone mad.


Some of us clearly do.

As we have no way to find them systematically, we can't tell if we all do, or if it's just some of us.


Do we? Only if you think the brain actually works like an LLM.


Not only; it's demonstrable behaviour regardless of mechanism.


>What makes the AI different here is that we can perfectly inspect the inside of their (frozen and unchanging) minds,

Kinda, but not really...

It depends on exactly what you mean. Yes, we can look at any one thing in particular, but there is not enough entropy in the universe to look at everything in even a single large AI model.



