
Yeah, that stuff generated embarrassingly wrong scientific 'facts' and citations.

That kind of hallucination is somewhat acceptable for something marketed as a chatbot, but less so for an assistant helping you with scientific knowledge and research.



I thought it was weird at the time how much hate Galactica got for its hallucinations compared to the hallucinations of competing models. I get your point, and it partially explains things, but it's not a fully satisfying explanation.


I guess another aspect is that being too early is not so different from being wrong.



