
> Doesn't that assume that people will forever be better learners than AI?

Better creators, not learners. AI can't create; it can only remix what humans have already produced. Human progress is created, not learned. The olds who are conditioned to try new things when an existing solution doesn't work still have the capacity to create something new (wholly new, not just remixed).



AlphaGo and AlphaStar both started out trained on human play and then played against versions of themselves, going on to create new strategies in their games. Modern LLMs can't, as far as I know, learn/experiment in exactly the same way, but that may not always be true.
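
Roughly, the self-play part looks like the toy sketch below. This is my own illustration with a made-up counting game and a made-up table-of-weights policy, nothing like the actual AlphaGo/AlphaStar systems: two copies of the same policy play each other, the winner's moves get reinforced, and the policy can drift toward strategies that were never in the human data.

    import random
    from collections import defaultdict

    TARGET = 10          # first player to reach 10 or more wins
    MOVES = (1, 2)       # on each turn, add 1 or 2 to the counter

    # One shared policy: a preference weight for each move at each counter value.
    policy = defaultdict(lambda: {m: 1.0 for m in MOVES})

    def choose(count):
        moves, weights = zip(*policy[count].items())
        return random.choices(moves, weights=weights)[0]

    def play_one_game():
        count, player = 0, 0
        history = {0: [], 1: []}                 # moves each copy of the policy made
        while True:
            move = choose(count)
            history[player].append((count, move))
            count += move
            if count >= TARGET:
                return player, history           # the player who just moved wins
            player = 1 - player

    def reinforce(count, move, factor):
        w = policy[count][move] * factor
        policy[count][move] = min(max(w, 1e-6), 1e6)   # keep weights bounded

    def self_play(games=20000):
        for _ in range(games):
            winner, history = play_one_game()
            for count, move in history[winner]:
                reinforce(count, move, 1.05)     # reinforce the winner's moves
            for count, move in history[1 - winner]:
                reinforce(count, move, 0.95)     # discourage the loser's moves

    self_play()
    print({c: max(policy[c], key=policy[c].get) for c in sorted(policy)})
    # Tends to end up preferring moves that leave the opponent on 1, 4, or 7
    # (the losing counts) without ever being shown a human game.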


Yeah, but they had a limited set of rules to work within (they were just hyper-efficient at calculating the possible outcomes relative to those rules). Humans, in theory, only have the rules they believe in, since technically there are no rules (it's all make-believe). For example, what was the "rule" that told people to make a wheel? There wasn't one. A human had to think about it/conceive it, which AI can't do (and I'd argue never will be able to) without rules.


Reinforcement learning is a completely different strategy from how most LLMs are trained.
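
Very roughly (a toy sketch of my own framing, not either system's real training code): LLM pretraining pushes the model toward whatever token the human-written data contained, while RL samples an action, gets a reward back from the environment, and adjusts based on that reward alone.

    import random

    probs = [1/3, 1/3, 1/3]        # a tiny "model": a distribution over 3 options

    def renormalize(p):
        s = sum(p)
        return [x / s for x in p]

    # LLM-style supervised step: imitate the token the training data contained.
    def supervised_step(p, observed_token, lr=0.1):
        p = p[:]
        p[observed_token] += lr
        return renormalize(p)

    # RL-style step: act, get a reward from the environment, and adjust the
    # sampled action's probability based on that reward alone -- no dataset.
    def rl_step(p, reward_fn, lr=0.1):
        p = p[:]
        action = random.choices(range(len(p)), weights=p)[0]
        p[action] = max(p[action] + lr * reward_fn(action), 1e-6)
        return renormalize(p)

    probs = supervised_step(probs, observed_token=2)           # copy the data
    probs = rl_step(probs, lambda a: 1.0 if a == 0 else -1.0)  # learn from feedback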



