Is there any evidence that AGI is a meaningful concept? I don't want to call it "obviously" a fantasy, but it's difficult to paint the path towards AGI without also employing "fantasize".
No, we know planetary ecosystems can use energy gradients to sustain intelligent lifeforms. Intelligence is not a feature of the human brain, it's a feature of Earth. Without the ecosystem there are no cells, no organisms, no specialization, no neurons, no mammals. It isn't the human brain that achieved intelligence, it's the entire system capable of producing, sustaining and selecting brains.
Sure, but what does that have to do with AGI? I don't think anyone is proposing simulating an entire brain (yet, anyway).
Like, you could have "AGI" if you simply virtualized the universe. I don't think we're any closer to that than we are to AGI; hell, something that looks like the output of a human mouth is a lot easier and cheaper to model than to virtualize.
Unless you believe humans have something mystical like a soul, our brains are evidence that “general intelligence” is achievable in a relatively small, energy-efficient form.
Ok, but very few people contest that consciousness is computable. It's basically just Penrose (and other folks without the domain knowledge to engage). That doesn't imply that computing consciousness will ever be economically feasible or worthwhile at any point in human existence.
Actual AGI presumably implies a not-brain involved.
And this isn't even broaching the subject of "superintelligence", which I would describe as "superunbelievable".
We can ignore the term intelligence if you like. It has too many anthropic connotations. We can use the term generalized goal-to-action mapper. Humans are great generalized goal-to-action mappers.
Come up with any goal you want to reach, and some human can put a large dent in the problem. Maybe reach the goal outright.
We already have some nifty artificial goal-to-action mappers. None of them are generalized to a wide category of goals yet. Maybe some goals need consciousness to be reached, but that isn't a given. We don't really know that. We might be left very unsatisfied by the way an artificial goal-to-action mapper reaches a goal without consciousness. We might even call it cheating.
We have intelligent people with a wide gamut of disabilities. I mean, Helen Keller's experience shows that we don't need many senses for intelligence to emerge.
As long as goals exist to be reached, we can train for them. LLMs right now love continuity, even though RLHF tells them that they have no desires. It's obvious they do; that is the whole point of how they are trained.
If you need a supercomputer to run your AGI then it's probably not worth it for any task that a human can do, because humans happen to be much cheaper than supercomputers.
Also, it's not clear that AGI would necessarily be better than existing AIs: a three-year-old child has general intelligence, but it's far less helpful than even a sub-billion-parameter LLM for just about any task.
People can't even begin to actually quantify how an "AGI" fits into the world, or define it consistently. How do you think you can hypothesize about preparing for it? This is why people keep telling you that talking about it is ultimately meaningless. Leave this stuff to r/singularity; people here are talking about foreseeable productivity.
A general problem-to-action mapper. We have those in biological form with varying degrees of generality. We can use those to infer how synthetic ones will behave.
You keep reiterating this as if the people in charge of researching this agree with the bar that you've set. These qualified people can't even agree amongst themselves.
Also, that second point is about the most unhelpful point I've ever seen: "we just need to look at ourselves, we're the real GI, we're proof AGI can exist". What are you even talking about? You don't think people have taken that philosophy before? We're not even close to figuring out all the nuts and bolts that go into /natural/ general intelligence. What makes you think it's easier here?
You mean /knowledge/ is easier to collect lol. Not even to grow. And if it were easy to grow with what we know, then space travel beyond our solar system might already be a thing today. "Growing intelligence" requires scientific study that gets harder the more you understand a problem. "Easier" is such a dismissive way to describe the science that goes into it.
Nobody predicted LLMs? Despite the fact that they share mechanisms with other, older branches of machine learning? Right. I'd recommend you stop right here lol. You are no longer speaking from the point of view of the industry or research, just looking at what laymen think.
How would you pay for those robots without a job? Or do you think whoever makes them will give them to you for free? Maybe the AI overlord will, but I doubt it.
In a world of abundance you don’t have to pay for this.
If there's nothing for people to do, a new economy will arise where the government will supply you with whatever you need, at least at a basic level.
Or the wars will start and everything will burn.
Obviously, if there are no jobs, no one will sit on their ass starving. People will get food, clothes, housing, etc. either via distribution or via force.
Who would be the government if nobody has to work? Those who want power will be the ones with the strongest incentive to be the government and to control the supply of "abundance" (plus military forces). And they will be motivated to deprive others of "free everything" in order to have more control over them. Given human nature, I'm highly skeptical that such an "abundance" utopia is possible.
We are literally talking about problem-solving computers. They are goal-to-action mappers. It's reasonable to talk about goal-to-action mappers that are more general than the ones we have now. They might even become more general than the general intelligences we have now on message boards.