> IMHO, that would qualify as an AGI even though it isn't writing essays or producing business plans.
I'm not sure it would, though. The "G" in AGI stands for "General", which a dog obviously can't showcase. The comparison must be done against humans, since the goal is to ultimately have the system perform human tasks.
The definition mentioned by tedsanders seems adequate to me. Most of the terms are fuzzy ("most", "outperform"), but limiting the criterion to economic value narrows it down to a measurable metric. Of course, this could be gamed by building a system that optimizes for financial gain over everything else, but such a system presumably wouldn't be accepted as AGI anyway.
The actual definition is not that important, IMO. AGI, if it happens, won't appear suddenly from a singular event, but as a gradual process until it becomes widely accepted that we have reached it. The impact on society and our lives would be impossible to ignore at that point. The problem with this is that along the way there will be charlatans and grifters shouting from the rooftops that they've already cracked it, but this is nothing new.
I would say... yes. But with the strong caveat that, when used in the context of AGI, the individual/system should be able to demonstrate that intelligence, and the results should be comparable to those of a neurotypical adult human. Both a dog and a toddler can show signs of intelligence when compared to others of their kind, but not when compared to an adult human, which is the criterion for AGI.
This is why I don't think that a system that underperforms the average neurotypical adult human in "most" cognitive tasks would constitute AGI. It could certainly be considered a step in that direction, but not strictly AGI.
But again, I don't think that a strict definition of AGI is helpful or necessary. The impact of a system with such capabilities would be impossible to deny, so a clear definition doesn't really matter.
> I don't think that a system that underperforms the average neurotypical adult human in "most" cognitive tasks would constitute AGI
What makes you say that it underperforms? I ask because the evidence strongly suggests the opposite: AI models currently outperform humans in most such tasks.