I read pretty much exactly the same predictions in "The Mighty Micro" in the early 80s (intelligent machines, immortality, etc.), which is what influenced me to do a CS degree and go into postgraduate AI research.
Nobody would be more delighted than me if immortality is achieved in 2045 (I'll be 80!), but do I expect it? Not really, nor do I expect effective commercial fusion power, which has had a habit of being a couple of decades away for the last 60 years.
[Edit: Note that I do believe that artificial general intelligence is perfectly possible (we do, after all, have a working example), just that it won't happen any time soon.]
It did take more than a few decades to design, however. I wouldn't really expect a new model to ship inside of forty years. At best, that's like expecting the next version of Windows to ship in the next three hours.
My theory of the singularity is that the concept is so popular these days precisely because (a) we've passed a critical knowledge threshold: A significant number of the best-educated people in the world have become aware that not only is a giant collection of self-replicating simple machines possible, it is old news -- billions-of-years-old news. But, (b), the majority of humanity still does not understand that. It is the very definition of irony, but most people -- even the educated ones -- have great difficulty conceiving of a vast, mobile, sentient colony of trillions of single-celled organisms working together with plan and purpose. Frankly, it's easier to believe in elves. Elves, we can grasp.
That gap between hard science and magical thinking is fertile ground for fantasy.
Singularity fiction bears the same relationship to modern molecular biology that Frankenstein bore to the work of Alessandro Volta.
> It did take more than a few decades to design, however. I wouldn't really expect a new model to ship inside of forty years. At best, that's like expecting the next version of Windows to ship in the next three hours.
That's true only to the extent that the evolution of machine intelligence proceeds via the same processes that led to human intelligence, which is pretty much impossible. If we ever get to AGI, we're going to be dealing with another type of evolution or design, and it's very possible that it will happen on a timescale much faster than that of biological evolution.
IMO the reason the Singularity concept is popular these days is that, barring a massive disaster that both derails Moore's Law and stops us from scaling up parallel processing power, we're the first generation that will own reasonably priced machines with more compute power than the human brain. Which means that it's only a matter of discovering the right algorithms to run.
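For what it's worth, here's a back-of-the-envelope sketch of that hardware claim in Python. Every constant in it is an assumption, not a measurement: estimates of the brain's raw throughput span several orders of magnitude, and the accelerator price, throughput, and doubling period are placeholders.

```python
import math

# Back-of-the-envelope sketch; every constant here is an assumption, not a measurement.
BRAIN_OPS_PER_SEC = 1e16   # rough mid-range guess for the brain's raw throughput; hotly debated
GPU_OPS_PER_SEC = 1e14     # order-of-magnitude figure for one current high-end accelerator
GPU_COST_USD = 30_000      # placeholder hardware price
DOUBLING_YEARS = 2.0       # assumed Moore's-Law-style doubling of compute per dollar

def years_until_brain_parity(budget_usd: float) -> float:
    """Years until `budget_usd` of hardware matches BRAIN_OPS_PER_SEC,
    assuming compute per dollar keeps doubling every DOUBLING_YEARS."""
    ops_today = (budget_usd / GPU_COST_USD) * GPU_OPS_PER_SEC
    if ops_today >= BRAIN_OPS_PER_SEC:
        return 0.0
    doublings_needed = math.log2(BRAIN_OPS_PER_SEC / ops_today)
    return doublings_needed * DOUBLING_YEARS

print(years_until_brain_parity(30_000))     # one accelerator: ~13 years under these assumptions
print(years_until_brain_parity(3_000_000))  # a small cluster: already at parity under these assumptions
```

The point is only that, under assumptions like these, hardware parity lands within a generation; the real open question is the algorithms.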
Whether we're making progress on those algorithms is debatable, but don't mistake the slow progress at cracking "the" AI algorithm for the speed at which the intelligence explosion will take off after that. It took many long, slow years to figure out how to build a nuclear weapon, but once we figured out the trick and started a nuclear reaction, the thing blew up in a fraction of a second. AI is likely to be very similar: it may take us a long time to get there, but once we're there, watch out...
> once we figured out the trick and started a nuclear reaction, the thing blew up in a fraction of a second
Please, never use this metaphor again. This level of magical thinking is just embarrassing for all of us. It's like saying that building a house must be a really fast process, because burning the house down goes really quickly.
Nuclear weapons blow up easily because, for (e.g.) plutonium, "blowing up" is thermodynamically favorable. A brick of plutonium has much less entropy than a giant fireball and an expanding cloud of radioactive fallout. In a thermodynamic sense, the plutonium nuclei want to be fissioned. All you have to do is find a way to coax them out of their metastable state.
A human brain is massively thermodynamically unfavorable. If you put a bunch of proteins in a box the odds that they will self-assemble into Einstein's brain are, literally, astronomically small.
That's why the brain is a miracle and a nuclear reaction is, frankly, pretty commonplace. There are a lot of stars. Stars are easy to explain. Brains are hard.
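Just to give a feel for "astronomically small", here's a toy combinatorial calculation. It is emphatically not a model of how brains or proteins actually form (evolution is not random sampling); the 100-residue target length is an arbitrary choice for illustration.

```python
import math

# Toy combinatorics only, not a model of real protein assembly:
# the chance that a uniformly random chain of the 20 standard amino acids
# happens to match one specific 100-residue target sequence.
AMINO_ACIDS = 20
CHAIN_LENGTH = 100  # arbitrary illustrative length

log10_p = -CHAIN_LENGTH * math.log10(AMINO_ACIDS)
print(f"probability ~ 10^{log10_p:.0f}")  # ~10^-130

# For scale: the observable universe contains roughly 10^80 atoms,
# so this probability is already ~50 orders of magnitude below one-in-every-atom.
```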
> artificial general intelligence is perfectly possible (we do, after all, have a working example)
It depends on what is meant by 'general'. Would we say that humans have 'general' physical capability? Well, we can do quite a few things, yes, but there are plenty of things various animals and machines can do that we cannot, and there are certainly plenty of further things conceivably possible. Why wouldn't informational machines -- intelligence -- be like that?
Have we developed a general physical machine yet? What would that mean?
Human 'intelligence' is just one small example. It is not really special, and not a general measure of anything. So although AI will certainly improve, saying there is a particular threshold that will be passed is sort-of problematic (which does seem to fit the history of AI).