The job loss depends on the average speed-up, however. If the AI is effective on only 10% of tasks (the basic stuff), then that 3x improvement goes down to something like 1.3x.
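One way to formalize this (my assumption; the parent's "1.3x" presumably uses a different weighting) is an Amdahl's-law calculation: if a fraction f of the work is sped up by a factor s, the overall speed-up is 1 / ((1 − f) + f/s). A minimal sketch:

```python
def overall_speedup(f, s):
    """Amdahl's law: fraction f of the work is accelerated by factor s,
    the remaining (1 - f) runs at the original speed."""
    return 1.0 / ((1.0 - f) + f / s)

# 3x speed-up on only 10% of tasks: overall gain is modest
print(round(overall_speedup(0.1, 3.0), 2))  # ~1.07x

# 3x speed-up on 90% of tasks: most of the 3x is realized
print(round(overall_speedup(0.9, 3.0), 2))  # 2.5x
```

The exact figure depends on how you weight tasks by time, but the qualitative point stands either way: speed-ups confined to a small slice of the work barely move the overall number.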
That's such an economic fallacy that I'd expect the HN crowd to have understood this ages ago.
Compare the average productivity of somebody working in a car factory 80 years ago with somebody today. How many person-hours did it take then and how many does it take today to manufacture a car? Did the number of jobs between then and now shrink by that factor? To the contrary. The car industry had an incredible boom.
An efficiency increase does not imply job loss, because the market size is not static. If cost is reduced, things that weren't viable before suddenly become viable, and the market can explode. In the end you can end up with more jobs. Not always, obviously, but there are more examples of this than you can count.
This is all broadly true, historically. Automating jobs mostly results in creating more jobs elsewhere.
But let's assume you have true, fully general AI. Further assume that it can do human-level cognition for $2/hour, and it's roughly as smart as a Stanford grad.
So once the AI takes your job, it goes on to take your new job, and the job after that, and the job after that. It is smarter and cheaper than the average human, after all.
This scenario goes one of three ways, depending on who controls the AI:
1. We all become fabulously wealthy and no longer need to work at all. (I have trouble visualizing exactly how we get this outcome.)
2. A handful of billionaires and politicians control the AI. They don't need the rest of us.
3. The AI controls itself, in which case most economic benefits and power go to the AI.
The last historical analog of this was the Neanderthals, who were unable (for whatever reason) to compete with humans.
So the most important question is: how close are we actually to this scenario? Is it impossible? A century away? Or something that will happen in the next decade?
> But let's assume you have true, fully general AI.
That's a very strong assumption, and a very narrow setting; it's exactly one of the counterexamples I was allowing for.
AI researchers in the 80s were already telling us that AI was just around the corner, five years away. It didn't happen. I wouldn't hold my breath this time either.
"AI" is a misnomer. LLMs are not "intelligence". They are a lossy compression algorithm of everything that was put into their training set. Pretty good at that, but that's essentially it.