Fundamentally, there is a human with limited brain capacity who was trained to do that. It's just a question of time before there are equally capable, and then exceedingly capable, models. There is nothing magical or special about the human brain.
The only question is how fast it's going to happen, i.e. what percentage of jobs will be replaced next year, and so on.
> There is nothing magical or special about the human brain.
There is a lot about the human brain that even the world's top neuroscientists don't know. There's plenty of magic about it if we define magic as undiscovered knowledge.
There's also no consensus among top AI researchers that current techniques like LLMs will get us anywhere close to AGI.
Nothing I've seen from current models (not even o1-preview) suggests to me that AIs can reason about codebases of more than 5k LOC. A top 5% engineer can probably make sense of a codebase of a couple million LOC, given enough time.
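For scale, here's a rough back-of-envelope sketch of that gap; the tokens-per-line and context-window figures are assumptions for illustration, not measurements:

    # Rough back-of-envelope: a multi-million-LOC codebase vs. an LLM context
    # window. Both constants below are assumptions, not measured values.
    TOKENS_PER_LOC = 10        # assumed average tokens per line of code
    CONTEXT_WINDOW = 128_000   # assumed context size of a current frontier model

    for loc in (5_000, 2_000_000):
        tokens = loc * TOKENS_PER_LOC
        print(f"{loc:>9,} LOC ~= {tokens:>10,} tokens, "
              f"{tokens / CONTEXT_WINDOW:,.1f}x a {CONTEXT_WINDOW:,}-token window")

However you tune those constants, a couple million LOC sits orders of magnitude beyond what fits in one context window.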
Which models specifically have you seen that look like they will surmount any time soon the challenges of software design and architecture I laid out in my previous comment?
Defining AGI as “can reason about 5M LOC” is ridiculous. When do the goalposts stop moving? When a computer can solve time travel? Babies routinely exhibit behavior that is indistinguishable from what an LLM does (including terrible logic and hallucinations).
The majority of people on the planet can barely reason about how any given politician will affect them, even when there are a billion resources out there telling them exactly that. No reasonable human would ever define AGI as having anything to do with coding at all, since that’s not even “general intelligence”… it’s learned facts and logic.
Babies can at least manipulate the physical world. A large language model can never be defined as AGI until it can control a general-purpose robot, similar to how the human brain controls our body's motor functions.
As generally intelligent beings, we can adapt to reading and producing 5M LOC, to living in arctic climates, or to building in colonial or classical style as dictated by cost, taste, and other factors. That is generality in intelligence.
I haven't moved any goalposts; it is your definition that is way too narrow.
You’re literally moving the goalposts right now. These models _are_ adapting to what you’re talking about. When Claude writes a haiku, how is that different from a poet who knows literally nothing about math but is fantastic at poetry?
I’m sure as soon as Claude can handle 5M LOC you’ll say it should be 10M, and that it needs to be able to serve you a Michelin-star dinner as well.