
There is a belief that we've peaked in terms of "bigger model = better results." GPT-4, for instance, is (I think) actually smaller in parameter count than 3.5 yet performs better. There's also thought to be a finite amount of useful training data, and we may have already hit that limit, so adding more parameters doesn't help if you don't have new data to train them on.
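To put rough numbers on the data constraint, here's a quick sketch using the Chinchilla-style heuristic of roughly 20 training tokens per parameter for compute-optimal training. The 20x ratio and the model sizes are ballpark assumptions for illustration only, not exact figures.

    # Rough Chinchilla-style estimate: ~20 training tokens per parameter
    # for compute-optimal training. Ratio and model sizes are assumed
    # ballpark values, purely illustrative.
    def tokens_needed(params: float, tokens_per_param: float = 20.0) -> float:
        return params * tokens_per_param

    for params in (70e9, 400e9, 1e12):
        print(f"{params / 1e9:.0f}B params -> ~{tokens_needed(params) / 1e12:.1f}T tokens")

At the trillion-parameter mark that's on the order of 20T tokens, which is in the same ballpark as common estimates of the total stock of high-quality public text, so the data ceiling starts to bite before parameter count does.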


