The Bitter Lesson argues that baking priors into a model (i.e., problem-specific knowledge, as is done here) can compensate for the weaknesses of your current model in the short term, but that you get better long-term results from methods that let the model learn those priors itself.

That seems to have been the case here: compare the prompting tricks people needed with GPT-3 to how Claude Sonnet 3.6 performs today.
