I feel we are already in the era of diminishing returns on LLM improvements. Newer models seem to be more sophisticated implementations of LLM technology with more resources thrown at them, but to me they do not seem fundamentally more intelligent.

I don't think this is a problem, though. I think there's a lot of low-hanging fruit in building sophisticated implementations around relatively dumb LLMs. But that sentiment doesn't generate a lot of clicks.
