Of course, progress could stall out, but we appear to have sufficient compute to do anything a human brain can do, and in some areas AIs are already far better than humans. With the amount of brainpower working on, and capital pouring into, this area, including work on improving algorithms, I think this essay is fundamentally correct that the takeoff has started.
> in some areas AIs are already far better than humans.
You could say the same thing about a CPU from 40 years ago - they can do math far better than humans. The problem is that there are some very simple problems that LLMs can't seem to reliably solve, but that a child easily could, and this suggests there likely isn't actual intelligence going on under the hood.
I think LLMs are more like human simulators rather than actual intelligent agents. In other words, they can’t know or extrapolate more knowledge than the source material gives them, meaning they could never become more intelligent than humans. They’re like a very efficient search engine of existing human knowledge.
Can anyone give me an example of any true breakthrough that was generated by an LLM?
You might be correct about LLMs. Let's say that you are.
40 years ago we were clearly compute bound. Today, I think it's fairly clear we are not; if there is anything a human can do that an AI can't, it's because we lack the algorithms, not the compute.
So the question becomes, now that we have sufficient compute capacity, how long do you think it will take the army of intelligent, creative humans (comp sci PhDs, now accelerated by AI assistance) to develop the algorithmic improvements to take AI from LLMs to something human level?
Nobody knows the answer to the above, and I could be very wrong, but I'd bet on it being under 30 years, if not dramatically sooner (my money is on under 10).
It seems to me like the building blocks are all here. Computers can now see, process scenes in real time, move through the world as robots, speak and converse with humans in real time, use tools, create images (imagine?), and so forth. Work is continuing to give LLMs memory, expanded context, and other improvements. As those areas all get improved on, tied together, recursively improved, etc., at some point I think it will be hard to argue it is not intelligence.
Where we are with LLMs is Kitty Hawk. The world now knows that flight (true human level intelligence) is possible and within reach, and I strongly believe the progress from here on out will continue to be rapid and extreme.
> So the question becomes, now that we have sufficient compute capacity, how long do you think it will take the army of intelligent, creative humans (comp sci PhDs, now accelerated by AI assistance) to develop the algorithmic improvements to take AI from LLMs to something human level?
This assumes that the eventual breakthroughs start from something like LLMs. It's just as likely or more that LLMs are an evolutionary dead end or wrong turn, and whatever leads to AGI is completely unrelated. I agree that we are no longer compute bound, but that doesn't say anything about any of the other requirements.
They could become better than humans because there's an RL training loop? I don't understand how this isn't directly clear. Even training purely on human data, it's possible to be mildly superhuman (see the experiments with chess AIs trained only on human games), but once you have verifiable tasks and an RL loop, the human data is just a kickstart.
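The "verifiable task + RL loop" idea can be illustrated with a toy sketch: a trivial policy learns "pick the larger of two numbers" purely from a verifier's reward, with no labeled human answers at all. Everything here (the task, the one-weight policy, the REINFORCE-style update) is illustrative, not any real training pipeline.

```python
import math
import random

def verifier(a, b, choice):
    """Verifiable task: reward 1.0 iff the chosen number is the larger one."""
    return 1.0 if (a, b)[choice] == max(a, b) else 0.0

def policy_prob(w, a, b):
    """Probability of choosing index 1; a single-weight logistic policy."""
    return 1.0 / (1.0 + math.exp(-w * (b - a)))

def train(steps=2000, lr=0.1, seed=0):
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        a, b = rng.randint(0, 9), rng.randint(0, 9)
        if a == b:
            continue  # no verifiable winner; skip
        p1 = policy_prob(w, a, b)
        choice = 1 if rng.random() < p1 else 0
        r = verifier(a, b, choice)
        # REINFORCE-style update: d log pi / d w, scaled by reward,
        # pushes the policy toward choices the verifier accepts.
        grad = (choice - p1) * (b - a)
        w += lr * r * grad
    return w
```

The point of the sketch: the only supervision is the verifier, so nothing caps performance at the level of the data that seeded the policy; the same structure (sample, verify, reinforce) is what lets RL on verifiable tasks exceed the human data it starts from.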