Hacker News

They seem to have partially fixed it. In the past it would produce a special token "<|endoftext|>" and then immediately after that the LLM would ramble on about a completely random topic, which was either a hallucination or possibly leaking the answer to someone else's query (which would be much more interesting!)

Now it looks like they've added a band-aid to suppress text output, but the chat titles indicate that some kind of text processing is still taking place in the background.
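The band-aid described above could plausibly work like this: instead of fixing whatever makes the model emit the marker, the output stream is scanned for it and everything from that point on is dropped. This is only an illustrative sketch, not the provider's actual implementation; the token string comes from the comment, and the streaming shape and function names are assumptions.

```python
# Hypothetical sketch of the suppression band-aid described above:
# stop emitting output once the special end-of-text marker appears,
# discarding the "ramble" that follows it. The token string is from
# the comment; the streaming interface here is assumed.

END_TOKEN = "<|endoftext|>"

def suppress_after_end(stream_chunks):
    """Yield text chunks, truncating at the first end-of-text marker."""
    buffer = ""
    for chunk in stream_chunks:
        buffer += chunk
        if END_TOKEN in buffer:
            head, _, _ = buffer.partition(END_TOKEN)
            if head:
                yield head
            return  # drop the token and anything generated after it
        # emit what we safely can, holding back a tail in case the
        # marker is split across two chunks
        safe = len(buffer) - len(END_TOKEN) + 1
        if safe > 0:
            yield buffer[:safe]
            buffer = buffer[safe:]
    if buffer:
        yield buffer

chunks = ["Hello wo", "rld<|endof", "text|>unrelated ramble"]
print("".join(suppress_after_end(chunks)))  # → Hello world
```

Note this is purely cosmetic: the model still generates the post-token text, which would explain why background processing (like chat-title generation) still sees it.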


