
I wonder if this is one reason why AI seems to have scaling limits. If building stairs decomposes into n steps, and each of those steps decomposes into n steps, and you keep decomposing for L levels, then every extra level of detail multiplies the required training data/training time by n, so in total you need on the order of n^L.
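A minimal sketch of the growth argument above (the function name and n=3 branching factor are illustrative assumptions, not anything from the comment): if each task decomposes into n subtasks at every level, the number of leaf-level steps to cover grows as n**L.

```python
# Hypothetical sketch: n-way decomposition repeated for L levels
# yields n**L leaf-level steps that training data would need to cover.

def examples_needed(n: int, levels: int) -> int:
    """Distinct leaf-level steps for a task decomposed n ways
    at each of `levels` levels of detail."""
    return n ** levels

for L in range(1, 5):
    print(L, examples_needed(3, L))
# Each added level multiplies the count by 3: 3, 9, 27, 81.
```

The point is just that coverage requirements compound multiplicatively with depth, not additively.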



No, it’s fairly obvious why: it’s not a scaling issue, otherwise humans wouldn’t be able to exist. It’s missing data. It’s like trying to learn to swim only by watching other swimmers. You can keep watching more and more of it, but there are limits.

There are two things that enable us to do it. First, we have prebuilt, application-specific brains. Our neural networks are geared toward the specialized thinking human life requires, without any training needed. LLMs are free-form intelligences with no such built-in bias. To bring it back to the analogy: do you really need to teach a beaver to swim if it was raised entirely on land?

Second, text and visual data alone are not enough information to build a model of the world. As humans we have more data, and we can control that data: we see, hear, and listen, and, importantly, we can inject inputs into the world and observe the differences in its outputs.

This is why AlphaGo is so powerful: it can choose its own inputs, observe the resulting outputs, and learn from them. LLMs don’t yet do this in an automated way.
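As a toy illustration of that closed loop (not AlphaGo itself, and all names here are invented for the example): a simple bandit-style agent that picks its own actions, observes rewards, and updates its estimates — something a passive observer of a fixed dataset cannot do.

```python
import random

# Toy closed-loop learner (epsilon-greedy two-armed bandit).
# The agent injects inputs (arm choices) into a hidden environment
# and learns from the observed outputs (rewards).

random.seed(0)
true_payoff = [0.2, 0.8]   # hidden environment: arm 1 pays off more often
counts = [0, 0]
values = [0.0, 0.0]        # running estimate of each arm's payoff

for step in range(1000):
    # mostly exploit the current estimate, occasionally explore
    if random.random() < 0.1:
        arm = random.randrange(2)
    else:
        arm = values.index(max(values))
    reward = 1.0 if random.random() < true_payoff[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

print(values)  # estimates converge toward the hidden payoffs
```

Purely by acting and observing, the agent ends up preferring the better arm; a learner that could only watch a fixed log of someone else’s games has no way to run this kind of experiment.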



