Wow these AI people are really shilling for their “scale is all you need” hypothesis eh? Now they are planting this stuff to gaslight us into thinking the same? :)
Looks like it. The linked article does NOT show what the HN title claims. Even the original title doesn't imply what the HN title does. The article is about one specific area of the brain, which is not the most obvious difference.
There were bigger models than GPT-3.5 before ChatGPT was released, and they didn't perform better. The hype isn't built on large parameter counts but on an interactive LLM architecture. ChatGPT and the base GPT models do very different things, yet it's ChatGPT that gets the hype despite having essentially the same parameter count.
The parameter-count framing appeals to people who believe in linear scaling per parameter.
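For what it's worth, the published scaling laws don't support linear returns either. Kaplan et al. (2020) fit a power law, L(N) ≈ (N_c / N)^α_N, with α_N ≈ 0.076 and N_c ≈ 8.8e13. A quick illustrative sketch in Python (constants taken from that paper, not a claim about any particular model):

    # Sketch of the Kaplan et al. (2020) parameter scaling law:
    # L(N) ~= (N_c / N)**alpha_N, alpha_N ~ 0.076, N_c ~ 8.8e13.
    # Scaling parameters shrinks loss by a constant *factor* per
    # multiplicative step, not a constant amount per parameter.

    ALPHA_N = 0.076   # fitted exponent from the paper
    N_C = 8.8e13      # fitted constant from the paper

    def loss(n_params: float) -> float:
        """Predicted cross-entropy loss for an n_params-parameter model."""
        return (N_C / n_params) ** ALPHA_N

    for n in [1e9, 1e10, 1e11, 1e12]:
        print(f"{n:.0e} params -> loss ~ {loss(n):.3f}")

    # Each 10x in parameters buys the same ~16% relative loss reduction
    # (10**-0.076 ~= 0.84), i.e. sharply diminishing absolute returns.

So even by the scaling-hypothesis crowd's own numbers, each extra parameter buys less than the last one.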
https://lastweekin.ai/p/the-ai-scaling-hypothesis