> They're trying to make a full Artificial General Intelligence.
> then they can immediately use it to make itself better.
"AGI" is a notoriously ill-defined term. While a lot of people use the "immediately make itself better" framing, many expert definitions of AGI don't assume it will be able to iteratively self-improve at exponentially increasing speed. After all, even the "smartest" humans ever (on whatever dimensions you want to assess) haven't been able to sustain self-improving at even linear rates.
I agree with you that AGI may not be possible at all, or may not be possible for several decades. However, I think it's worth highlighting that there are many scenarios where AI could become dramatically more capable than it currently is, including substantially exceeding the abilities of groups of top human experts on hundreds of dimensions and across broad domains - yet still remain light years short of iteratively self-improving at exponential rates.
Yet I hear a lot of people discussing the first scenario and the second as if they're neighbors on a linear difficulty scale (I'm not saying you necessarily believe that; I think you were just stating the common 'foom' scenario without necessarily endorsing it). Personally, I think the difficulty gap between them may be akin to the difference between interplanetary and interstellar travel. There's a strong chance that last huge leap remains sci-fi.