
Wow, these AI people are really shilling for their “scale is all you need” hypothesis, eh? Now they're planting this stuff to gaslight us into thinking the same? :)

https://lastweekin.ai/p/the-ai-scaling-hypothesis



Looks like it. The linked article does NOT show what the HN title claims. Even the original title doesn't imply what the HN title does. The article is about one specific area of the brain, which is not the most obvious difference.


There were bigger models than GPT-3.5 before ChatGPT was released, and they didn't perform better. The hype isn't built on raw parameter count but on an interactive LLM architecture: ChatGPT and the base GPT models do very different things, yet it's ChatGPT that gets the hype despite having essentially the same parameter count.

The parameter-count framing appeals to people who believe in linear capability gains per parameter.
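
For reference, the empirical scaling laws (Kaplan et al., 2020) have loss falling as a power law in parameter count, nothing close to linear returns. A rough sketch of the headline fit for non-embedding parameters N (constants quoted from memory, so treat them as approximate):

    L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
    \qquad \alpha_N \approx 0.076,\quad N_c \approx 8.8 \times 10^{13}

So doubling N multiplies loss by about 2^{-0.076} ≈ 0.95: roughly a 5% improvement per doubling, with diminishing absolute returns throughout.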


What do you mean by “interactive LLM architecture”?


I assume they mean a model fine-tuned on human feedback (RLHF) so that it understands human intent and answers questions, then glued to a front end that lets us chat with it.
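
The front-end part really is just glue: a loop that keeps a transcript and resends it to the model every turn. A minimal sketch in Python, where generate is a hypothetical stand-in for whatever completion API the underlying model exposes:

    # Minimal chat front end: keep a running transcript and resend it
    # to the model each turn so it sees the whole conversation.

    def generate(prompt: str) -> str:
        # Hypothetical stand-in for the real LLM completion call;
        # returns a canned reply so the sketch runs as-is.
        return "stub reply"

    def chat() -> None:
        transcript = ""
        while True:
            user = input("You: ")
            transcript += f"User: {user}\nAssistant:"
            reply = generate(transcript)
            transcript += f" {reply}\n"
            print("Bot:", reply)

    if __name__ == "__main__":
        chat()

The RLHF training is what makes the model behave well inside that loop; the loop itself is trivial.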



