[Regarding power of artificial intelligence] "...If Moore's law continues to hold then the lower bound will be reached sometime between 2004 and 2008, and the upper bound between 2015 and 2024."
I guess his prognostication here depends on super-powerful computing and brain-emulation software. China's Tianhe-2 has already hit 3.3^15 ops; Bostrom was anticipating 10^14 - 10^17 ops to be the runway. Now, I am not sure what the state of brain emulation is at the moment, but it looks like our biggest snag is there. Researchers are having a hard time bubbling up new paradigms for artificial intelligence software. Anyone have any insight into that?
Off by a few orders of magnitude. Tianhe-2 hit 33 pflops, or 3.3*10^16 flops, approximately 1/3 of the upper bound. Brain simulation is a snag, but it isn't our only snag.
Like you said, it's a general algorithm issue. We do not remotely understand the brain well enough to simulate it. We have very little idea of what an intelligent algorithm (other than brain sim) would look like.
Also, all of these estimates are based on flops, and none of them consider bandwidth. We are a few orders of magnitude lower in gigabits/s than we are in flops. I personally think that is where the bottleneck is. 100 billion neurons sharing a 100 gigabit/second pipe could each interact once per second, and then only at the level of a toggle switch. Granted, not all neurons have to interact with one another, but we are significantly behind in bandwidth and structural organization.
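To put numbers on that toggle-switch point, here is a rough back-of-envelope sketch in Python. The figures are simply the ones from the comment above (roughly 10^11 neurons, one shared 100 gigabit/s pipe, 1 bit per neuron state), not measurements of any real system:

```python
# Back-of-envelope for the bandwidth argument above.
# Assumptions: ~10^11 neurons, one shared 100 gigabit/s pipe,
# and 1 bit per neuron "state" update -- nothing measured.

neurons = 100e9             # ~100 billion neurons
pipe_bits_per_s = 100e9     # 100 gigabit/s interconnect

bits_per_neuron_per_s = pipe_bits_per_s / neurons
print(f"{bits_per_neuron_per_s:.1f} bit/s per neuron")   # -> 1.0 bit/s

# One bit per second per neuron: each neuron gets to broadcast a single
# on/off state once per second -- the "toggle switch" above. Real neurons
# fire at up to hundreds of Hz and fan out to thousands of synapses, so
# the shortfall is several orders of magnitude.
```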
Bandwidth is intimately tied to processing capacity. I don't think the bandwidth will be there until 2045-2065, and, like you say, we have serious software/algorithm/understanding deficiencies to resolve before then. I would be very surprised if we get general AI before 2065, if ever. I do not expect it in my lifetime and would be pleasantly surprised if it happened.
Oops, excuse my mistaken quote of the Tianhe flops.
Regarding the bandwidth bottleneck, it's fascinating to see that as one hardware problem is overcome, the next one looms even larger. The same is happening with software: as machine learning and related techniques advance (as contentious as that statement may be to people deep in the industry), the coming hurdles look even more intimidating.
The algorithms that need to be developed to reach the milestones of intelligence are incredibly difficult. What excites me is evolutionary algorithms that may be harnessed to reach those milestones. This may be a brute-force method, and researchers would have to know what to tell the algorithms to select for at first, but with increasing computational power, the cost of running large numbers of these algorithms in parallel could become negligible. If you see this comment dhj, have you considered evolutionary computation in your predictions? I'd be interested in what you think, as your clarification of the bandwidth problem was enlightening to me.
I agree that some form of evolutionary algorithm will be our path to intelligent software (or a component of it). However, as genetic algorithms are currently implemented, I would say the following analogy holds: neural_net:brain::evolutionary_algorithm:evolution ...
In other words, GAs/EAs are a simplistic and minimal scratching of the surface compared to the complexity we see in nature. The problem is twofold: 1) we guide the evolution with specific artificial goals (get a high score, for instance), and 2) the ideal "DNA" of a genetic algorithm is undefined.
In evolution we know post hoc that DNA is at least good enough (if not ideal) for the building blocks. However, we have had very little success with identifying the DNA for genetic algorithms. If we make it raw commands or function sets, we end up with divergence (results get worse or stay the same per iteration rather than better). The most successful GAs are those where the DNA components have been customized to a specific problem domain.
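To make both problems concrete, here is a deliberately minimal genetic algorithm sketch. Everything in it is illustrative rather than taken from any real system: the "DNA" is a bare bit string, and the fitness function is an artificial, programmer-chosen goal (maximize the number of ones) -- exactly the hand-picked representation and hand-picked target described above:

```python
import random

# Minimal, illustrative GA. Both the "DNA" encoding (a bare bit string)
# and the fitness function (count of ones) are hand-picked by the
# programmer -- the two limitations discussed above.

DNA_LEN, POP, GENERATIONS, MUTATION = 32, 50, 100, 0.02

def fitness(dna):                    # artificial goal: maximize the ones
    return sum(dna)

def mutate(dna):                     # flip each bit with small probability
    return [bit ^ (random.random() < MUTATION) for bit in dna]

def crossover(a, b):                 # single-point crossover
    cut = random.randrange(1, DNA_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(DNA_LEN)]
              for _ in range(POP)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]              # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)), "/", DNA_LEN)
```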
Regarding target/goal selection, that is a major field of study in itself within reinforcement learning. What is the best way to identify reward? In nature it is simple -- survival. In the computer it is artificial in some way: "survival" is an attribute or dynamic interaction selected by the programmer.
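As a toy illustration of how artificial the reward really is, here is a simple epsilon-greedy bandit (not from any particular library); the payout probabilities are numbers I made up, and "doing well" means nothing more than maximizing them:

```python
import random

# Toy epsilon-greedy bandit. The reward is entirely artificial: the
# "true" payout probabilities below are chosen by the programmer, and
# the agent "survives" only in the sense of maximizing that number.

true_payout = [0.2, 0.5, 0.8]     # hidden, programmer-chosen reward probabilities
estimates = [0.0, 0.0, 0.0]       # agent's running estimate of each arm
counts = [0, 0, 0]
epsilon = 0.1                     # fraction of the time we explore at random

for _ in range(10_000):
    if random.random() < epsilon:
        arm = random.randrange(3)                   # explore
    else:
        arm = estimates.index(max(estimates))       # exploit current best guess
    reward = 1.0 if random.random() < true_payout[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]   # incremental mean

print("estimated payouts:", [round(e, 2) for e in estimates])
```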
I believe that multiple algorithmic techniques will come together in a final solution (GA, NN, SVM, MCMC, k-means, etc.). So GA is still part of a large and difficult algorithmic challenge rather than a well-defined solution. The algorithmic challenge is definitely not on an exponential improvement curve -- the breakthroughs could happen next year or in 100 years.
The bandwidth issue is the main reason I would put AGI at 2045-2065 (closer to 2065), but with the algorithmic issue I would put it post-2065 (in other words, far enough out that 50 years from now it could still be 50 years out). Regardless of the timeframe, it is a fascinating subject and I do think we will get there eventually, but I wouldn't put the algorithmic level closer than 50 years out until we get a good dog, mouse, or even worm (C. elegans) level of intelligence programmed in software or robots.
Good question. In 2013 they hit 0.5 pflops (0.5*10^15) by putting together 26,496 cores in one of their data centers. So I expect they have scaled proportionally and would be around 1-1.5 pflops. That would put them at #50-80 on top500.org. Bandwidth-wise they are probably at 10-50 gigabit/s, which is where 10G Ethernet is and Infiniband FDR starts -- a lot of systems in that range use those technologies for communications (with custom and higher-bandwidth options in the top 10).
EDIT: As far as a whole data center is concerned, I'm not sure it would be a direct comparison, as bandwidth would not be as high between cabinets. Amazon using their off-the-shelf tech to make a supercomputer is probably a better indication of how they compare. Of course, at 26,496 cores that may be a data center!
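A quick sanity check on the scaling guess: the 2013 figures (0.5 pflops across 26,496 cores) come from the comment above, while the 2-3x growth factor is purely an assumption on my part:

```python
# Sanity check on the scaling guess above. The 2013 figures come from
# the comment; the assumed growth factors are guesses, not data.

cores_2013 = 26_496
pflops_2013 = 0.5
flops_per_core = pflops_2013 * 1e15 / cores_2013    # ~1.9e10 flops/core

for growth in (2, 3):                                # assumed core-count growth
    cores_now = cores_2013 * growth
    print(f"{growth}x cores -> {cores_now:,} cores, "
          f"{flops_per_core * cores_now / 1e15:.2f} pflops")
# 2x -> ~1.0 pflops, 3x -> ~1.5 pflops, matching the 1-1.5 pflops estimate
```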
This is very reminiscent of the hype around DNA sequencing.
It turns out that genetics is vastly more complicated than the old "gene = expression" model. By the time you've added epigenetics, environmental control of expression, proteome vs genome, and all kinds of other details, you get something we're barely beginning to understand even now.
"neurons = intelligence" looks like another version of the same thing. My guess is neural nets will turn out to be useful for certain kinds of preprocessing, just as they are in the brain. But GI is going to have to come from something else entirely.
Right. I mean, it could be that the bottom of the exponential curve looks nothing like the top. We might just be beginning to see the lift off the x-axis in the form of practical AI applications ("soft takeoff"). Who knows what it will be like when these technologies begin to be stacked.
>Researchers are having a hard time bubbling up new paradigms
One of the most promising approaches at the moment seems to be DeepMind trying to reverse engineer the human brain. Demis Hassabis, their main guy, did a PhD in neuroscience with that in mind, and they are currently trying, in a sense, to replicate the hippocampus.
("Deep Learning is about essentially [mimicking the] cortex. But the hippocampus is another critical part of the brain and it’s built very differently, a much older structure. If you knock it out, you don’t have memories. So I was fascinated how this all works together. There’s consolidation [between the cortex and the hippocampus] at times like when you’re sleeping. Memories you’ve recorded during the day get replayed orders of magnitude faster back to the rest of the brain.")
Modern ML is glorified statistics, and our current chips are completely the wrong architecture for doing statistics (since they try to achieve maximum precision and zero stochasticity at all stages).
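One concrete way to see the precision-vs-stochasticity point: stochastic rounding, which has been studied for low-precision ML hardware, deliberately gives up exact individual results in exchange for being unbiased on average -- roughly the opposite trade conventional FPUs make. A tiny sketch, not tied to any real chip:

```python
import random

# Stochastic rounding: round x up or down with probability proportional
# to how close it is, so the *expected* value is exact even though each
# individual result is noisy. Conventional FPUs do the opposite: every
# result is deterministic and as precise as the format allows.

def stochastic_round(x, step=1.0):
    lo = (x // step) * step
    frac = (x - lo) / step
    return lo + step if random.random() < frac else lo

samples = [stochastic_round(2.3) for _ in range(100_000)]
print(sum(samples) / len(samples))   # ~2.3 on average, though each sample is 2.0 or 3.0
```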