
Humans are not very smart individually, within a single lifetime. We became smart as a species over tens of millennia of gathering experience and sharing it through language.

What LLMs learn is exactly the diff between primitive humans and us. It's such a huge jump that no human alone could make it. If we were individually smarter, we would have figured out the germ theory of disease much sooner, while we were dying from infections all along.

So don't overpraise the learning abilities of little children: without language and social support they would not develop very far. We develop not just through our DNA and direct experience but also by assimilating past experience through language. It's a huge cache of crystallized intelligence from previous generations, without which we would not rule this planet.

That's also why I agree LLMs are stalling: we can't quickly scale the supply of organic text by a few more orders of magnitude. So there must be a different way to learn, and that is putting AI in contact with environments, letting it take its own actions and learn from its mistakes, just like us.
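To make "learn from its own actions" concrete, here is a toy sketch in Python. Everything in it (the bandit, the payoff numbers, the agent) is invented for illustration, not any real training setup: an epsilon-greedy agent that improves purely from environment reward, with no text corpus at all.

    # Toy sketch: learning from environment feedback instead of static text.
    # An epsilon-greedy agent on a hypothetical multi-armed bandit.
    import random

    TRUE_PAYOFFS = [0.2, 0.5, 0.8]   # hidden reward probability per action

    def pull(action):
        """Environment feedback: reward 1 with the action's hidden probability."""
        return 1.0 if random.random() < TRUE_PAYOFFS[action] else 0.0

    estimates = [0.0] * len(TRUE_PAYOFFS)  # agent's learned value per action
    counts = [0] * len(TRUE_PAYOFFS)
    epsilon = 0.1                          # fraction of exploratory "mistakes"

    for step in range(10_000):
        if random.random() < epsilon:
            action = random.randrange(len(TRUE_PAYOFFS))  # explore at random
        else:
            action = max(range(len(TRUE_PAYOFFS)), key=lambda a: estimates[a])
        reward = pull(action)
        counts[action] += 1
        # incremental mean update: learn from this single interaction
        estimates[action] += (reward - estimates[action]) / counts[action]

    print(estimates)  # converges toward TRUE_PAYOFFS

The point of the toy: the agent's "knowledge" comes entirely from acting and observing consequences, which is the feedback channel text-only training lacks.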

I believe humans are "just" contextual language-and-action models. We use language to understand, reason, and direct our actions. We are GPTs with better feedback from the outside world, optimized for surviving in this environment. That explains why we need so few samples to learn: the hard work has already been done by many previous generations, and brains are fit for their own culture.

So the path forward will involve creating synthetic data and then somehow separating the good from the bad. The evaluation will be task specific. For coding, we can execute tests; see the sketch below. For math, we can validate with theorem provers. For chemistry, we need simulations or labs, and for physics we need particle accelerators for feedback. For games we can just use the score, which is so easy that it already produced superhuman players like AlphaZero.
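Here is a minimal sketch of that generate-then-verify loop for the coding case. The task, the tests, and sample_candidates (a stand-in for a real model) are all hypothetical:

    # Hedged sketch: sample candidate programs, keep only those that pass
    # executable tests. Survivors become verified synthetic training data.
    def sample_candidates(task, n=4):
        # In practice this would call an LLM; here, fixed stand-ins.
        return [
            "def add(a, b): return a + b",   # correct
            "def add(a, b): return a - b",   # wrong: will be filtered out
        ]

    def passes_tests(source, tests):
        scope = {}
        try:
            exec(source, scope)               # define the candidate function
            return all(eval(t, scope) for t in tests)
        except Exception:
            return False                      # crashes count as failures

    task = "write add(a, b)"
    tests = ["add(1, 2) == 3", "add(-1, 1) == 0"]

    verified = [c for c in sample_candidates(task) if passes_tests(c, tests)]
    print(verified)  # only the correct candidate survives

The design choice is that the verifier, not the generator, supplies the ground truth; the same pattern swaps in a theorem prover, a simulator, or a game score depending on the domain.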

Each domain has its own latency and cost of feedback, so it will be a slow grind ahead. It can't be any other way: AI and AGI are not magic. They must use the scientific method to make progress, just like us.



Humans do more than just enhance predictive capabilities. It is also a very strong assumption that we are optimised for survival in many or all respects (it's not even clear what that would mean). Some traits could be entirely incidental, not optimised at all. I find appeals to evolutionary optimisation very tricky and often fraught.



