Hacker News

It's a major area of focus to reduce the model's hallucination (or whatever the technically correct term is). I would bet we're pretty close to GPT actually evaluating sources of information and making judgments about how to weight those sources. I suspect this is going to upset a lot of people, especially those in power.



