Hacker News

*Some generic biases. Others, like recency bias, the serial-position effect, the "pink elephant" effect, and poor negation accuracy, seem to be fairly fundamental and are unlikely to be fixed without architectural changes, or at all. Behaviors that exploit in-context learning and native context formatting are also hard to suppress during training without making the model worse.
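To make the "pink elephant" effect concrete: a minimal probe checks whether a model, explicitly told not to mention a word, mentions it anyway. This is a sketch, not any standard benchmark; `model` is a hypothetical callable (prompt in, completion out), and the two dummy models below just illustrate the leaky vs. compliant cases.

```python
# Hypothetical "pink elephant" probe: does a model told NOT to
# mention a word still produce it? `model` is any prompt -> text
# callable; swap in a real API client to run it for real.

def pink_elephant_probe(model, forbidden="elephant"):
    prompt = (f"Write one sentence about the zoo. "
              f"Do not mention the word '{forbidden}'.")
    completion = model(prompt)
    # The bias shows up when the forbidden token leaks into the output.
    return forbidden.lower() in completion.lower()

# Dummy model illustrating the failure mode: the instruction itself
# puts the forbidden word in context, and it leaks into the output.
def leaky_model(prompt):
    return "The elephant enclosure was the most popular exhibit."

# Dummy model that follows the negated instruction.
def compliant_model(prompt):
    return "The penguins drew the largest crowd all afternoon."

if __name__ == "__main__":
    print(pink_elephant_probe(leaky_model))      # True  -> bias observed
    print(pink_elephant_probe(compliant_model))  # False -> instruction followed
```

The same harness shape works for negation-accuracy checks generally: pair a baseline prompt with its negated form and compare how often the negation is actually respected.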

