I'm not sure if there's anything interesting here, but I did notice the author was interviewed on the podcast Machine Learning Street Talk about this paper.
In statistics, sample efficiency means you can precisely estimate a specified parameter like the mean with few samples. In AI, it seems to mean that the AI can learn how to do unspecified, very general stuff without much data. It's as if the underlying truth about the world, and how to reach one's goals within it, were just some giant parameter vector that we need to infer more or less efficiently from "sampled" sensory data.
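To make the statistics sense concrete, here's a quick Python/NumPy sketch (mine, not anything from the paper; the distribution and numbers are made up): the standard error of the sample mean shrinks like sigma/sqrt(n), so each tenfold increase in data only buys about a 3x gain in precision on the mean.

    # Illustrative sketch (assumes NumPy; values are arbitrary):
    # "precisely estimate a specified parameter with few samples"
    # cashes out to the standard error of the mean falling as
    # sigma / sqrt(n).
    import numpy as np

    rng = np.random.default_rng(0)
    true_mean, sigma = 5.0, 2.0  # hypothetical "sensory data" distribution

    for n in (10, 100, 1000, 10000):
        samples = rng.normal(loc=true_mean, scale=sigma, size=n)
        print(f"n={n:6d}  estimate={samples.mean():.3f}  "
              f"std. error={sigma / np.sqrt(n):.3f}")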
Picture a machine endowed with human intellect. In its most simplistic form, that is Artificial General Intelligence (AGI).
Artificial human intelligence. Not what I'd call general, but I guess so long as we make it clear that by "general" we don't actually mean general, fine. I'd really expect actual general intelligence to do a lot better than human, in ways we can't understand any more than ants can comprehend us.
My answer: while 99% of the AI community was busy working on Weak AI, that is, developing systems that could perform tasks humans can notionally do because of our Big Brains, a tiny fraction of people promoted Hard AI, that is, AI as a philosophical recreation of Lt. Commander Data.
Hard AI has long had a well-deserved jet black reputation as a flaky field filled with armchair philosophers, hucksters, impresarios, and Loebner followers who don't understand the Turing Test. It eventually got so bad that the entire field decided to rebrand itself as "Artificial General Intelligence". But it's the same duck.
The only difference is the same hucksters are trying to sell the notion that LLMs are or will become AGI through some sort of magic trick or with just one more input.
It's been a moving goalpost, but I think the point where people will be forced to acknowledge it is when fully autonomous agents are outcompeting most humans in most areas.
So long as half of people are employed or in business, these people will insist that it's not AGI yet.
Until AI can fully replace you in your job, it's going to continue to feel like a tool.
Given a useful-enough general purpose body (with multiple appendage options), one of the most significant applications of whatever we end up calling AGI should be finally seeing most of our household chores properly roboticized.
When I can actually give plain-language descriptions of 'simple' manual tasks around the house to a machine the same way I would to, say, a human 4th grader, and not have to spend more time helping it get through the task than it would take me to do it myself, that is when I will feel we have turned the corner.
I still am not at all convinced I will see this within the next few decades I probably have left.
The military would pay 1000x what a household would for the same capability, and they are nowhere near the ability to do that. Which should tell you all you need to know.
I agree with blooalien - that's a great point. To me it doesn't feel quite enough to overcome the baity/provocative effects, but since several commenters have made good points about this, I think we might as well put the original title back.
I've kept "f*ck" in the title since that's in the original and arguably adds some subtlety in this case. Normally we'd replace it with the real word since we don't like bowdlerisms.
I don’t know. They typically read entirely differently to me, in the sense that what I would expect to see after clicking the link is different.
I admit, though, that in this case “What is AGI?” better matches expectation to reality. Before I noticed the domain, “What the f*ck is AGI?” would have led me to expect more of a technical blog post with a playful presentation rather than the review article it actually is.
From what I can see, Artificial General Intelligence is a drug-fueled millenarian cult, and attempts to define it that don't consider this angle will fail.
The limitation of your definition is that any intelligence that is untrained will have a high rate of failure.
So, an intelligence may have evolved in geological time or in laboratory time, but its ability to learn to think and solve problems is what will distinguish it from the high rate of general failure.
Artificial general intelligence is a term of art for charlatans who rely on continued blind investments on the order of billions to continue their quixotic (or just cynical) efforts to anoint LLMs with the Holy Water of tech hype[0], blessed by dense forests of jargon and needless anthropomorphisms, and satiated only by ritual human [labor] sacrifices.
—
0: Tech hype holy water is 99.99% Red Bull
"Please don't post insinuations about astroturfing, shilling, bots, brigading, foreign agents and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data."
I'm a big AI/ML enthusiast (published one paper!) and was always flabbergasted to see scientists go off the typical provable/testable lane and venture into philosophical and emotional territories.
https://www.youtube.com/watch?v=K18Gmp2oXIM&t=3s