
One interesting way I heard to get around this is by mixing human languages in the prompt in combinations that probably never appear together in any training data, and seeing that ChatGPT can still output sensible replies. That seems to imply something is happening beyond token lookup: if it's taking different languages and mapping them to the same underlying information, that looks a lot more like what people call "understanding".
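A minimal sketch of that probe, assuming the OpenAI Python client (openai >= 1.0); the model name and the particular language mix are illustrative choices, not anything from the comment above:

    # Probe: ask a factual question in a prompt that switches language
    # mid-sentence (English, German, Japanese) and check whether the
    # reply still answers the underlying question.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    mixed_prompt = (
        "Bitte explain, in English, 水の沸点 at sea level, "
        "und warum it changes bei höherem Luftdruck."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model works for this probe
        messages=[{"role": "user", "content": mixed_prompt}],
    )

    # A sensible answer (boiling point of water, effect of pressure)
    # suggests the model mapped three languages onto one concept rather
    # than matching surface token sequences.
    print(response.choices[0].message.content)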

