All of Kurzweil's predictions are based on extrapolating exponential growth.
That's all very well, but exponential growth in physical systems is usually restricted within limits. In such a system the negative feedback may also be growing exponentially, which means that although it may initially be too small to notice, once the growth passes some boundary the negative feedback becomes relevant and the overall growth is no longer exponential.
Unfortunately it's impossible to tell where we are on the growth graph (although some claim that probability suggests we are closer to the end; see http://en.wikipedia.org/wiki/Doomsday_argument). Kurzweil assumes we are at the beginning of the growth curve. We could be near the end, where the negative feedback is about to take over and growth will slow.
There will be limits. The speed of light could be a hard limit on computing speed, or ultimately heat death could be the hard limit, but there is a limit somewhere. The question is how close we are to the limit, and that is something we are only likely to know when we reach it.
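To make the point concrete, here is a minimal sketch (plain Python, with made-up parameters) comparing pure exponential growth with logistic growth, where the negative feedback scales with the quantity itself; the two curves are practically indistinguishable early on and diverge sharply later:

```python
# Toy comparison: exponential growth vs. logistic growth (illustrative numbers).
# Early on the two are nearly identical; the feedback term only matters
# once x approaches the carrying capacity K.

r, K = 0.5, 1000.0        # assumed growth rate and carrying capacity
x_exp, x_log = 1.0, 1.0   # same starting point for both curves
dt = 0.1

for step in range(1201):
    if step % 200 == 0:
        print(f"t={step * dt:5.1f}  exponential={x_exp:14.1f}  logistic={x_log:8.1f}")
    x_exp += r * x_exp * dt                      # dx/dt = r*x
    x_log += r * x_log * (1 - x_log / K) * dt    # dx/dt = r*x*(1 - x/K)
```

On the early data points alone you cannot tell which curve you are on, which is exactly the problem with extrapolating.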
Building an intelligent machine is still possible within this limit. There is an existence proof: the human brain.
But you are right that we could be further away from that than we think, and also that the limit on a machine's intelligence might not be that far ahead of human intelligence. (Personally, though, I think it should be theoretically and practically possible to build much more intelligent machines at some point.)
I agree with you entirely; I'm sure strong AI will be reached. Even if we dropped down to linear growth, I think strong AI isn't that far away. I just think exponential growth can't continue indefinitely as Kurzweil predicts.
But seriously, I am not sure it can be said to be near or far. Think of it by analogy with physical capability. Have we developed 'strong' Artificial Physical Ability? Do we define and measure our physical machines in comparison to human physical abilities? Was the aim of inventing machines just to make artificial humans? No, we develop every kind of physical machine, and in fact we mostly aim to make the kinds of things that are not like what humans do.
So why shouldn't informational machines be the same? Is what humans do with information the only thing possible to do, and the only thing we might want to do? No. We won't be making AI to be like humans much, but for a great range of other, non-human-like, applications. Using human intelligence as a single simple measure just will not work or mean anything much.
We think of 'strong AI' as being an ultimate image of the future, but really it diverts us from imagining the much greater range of possibilities the future really holds.
I think his proposition is roughly that a single technology grows on an S-curve, but as each technology slows another picks up the baton and runs with it, and the totality (the sum of S-curves) has historically looked exponential, and promises to continue.
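As a rough illustration of that reading (a toy model with invented numbers, not Kurzweil's own data), a sum of staggered S-curves, each saturating at a higher ceiling than the last, can look roughly exponential over the whole span:

```python
import math

# Toy model: each "technology" is a logistic S-curve; successive ones start
# later and saturate at a 10x higher ceiling. Each curve flattens out, but
# the sum keeps growing roughly tenfold per interval.

def s_curve(t, start, ceiling, steepness=1.0):
    return ceiling / (1.0 + math.exp(-steepness * (t - start)))

techs = [(10 * i, 10 ** i) for i in range(1, 7)]   # (start time, ceiling)

for t in range(0, 70, 10):
    total = sum(s_curve(t, start, ceiling) for start, ceiling in techs)
    print(f"t={t:3d}  total={total:14.1f}")
```

Whether the real rate of technology introduction actually sustains that pattern is, of course, the point under dispute below.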
Ahh, now that is an interesting angle, but that would suggest that the rate of introduction of new technologies is linear and that every new technology must have a period of exponential growth initially.
(Or I suppose... that the rate of introduction of technologies with initially exponential growth rates is linear. You could ignore those without an exponential growth rate, provided that a constant number did have one.)
I think basically at any one time there are a lot of competing new-born technologies, and it's only in retrospect that we can see which of them will go exponential through a feedback cycle of improvement and growing adoption.
It's not so much that technology always goes exponential. It's that in retrospect, we notice the ones that did.
This line: "Kurzweil calls it the law of accelerating returns: technological progress happens exponentially, not linearly."
He doesn't qualify his law with 'exponentially up to a certain point' or 'currently happening'. He claims it is a law: that all technological progress happens exponentially now and always will.
Perhaps I am misreading him, but I interpret his predictions as claims that exponential growth will always continue.
Anyway, that aside: you can't use an exponential curve to make predictions about the future if you accept that the curve will end at some point. If you accept that the curve will change, and you can't know the point of change, then you can't use it to make a prediction.
Non-dualist priors haven't cracked it either. The nature of consciousness is as mysterious as it ever was. We have cracked a wide range of the soft problems, but the hard problem remains.
The "hard problem" of consciousness only exists if you start out with a dualist prior. Otherwise it is mysterious in the same way as the non-symmetry of matter and antimatter is mysterious -- it is not explained.
Lightning may look perfectly suitable for scientific investigation now, but it was as much a "hard problem" in other times.
The hard problem does not depend on our scientific understanding of the material world. Comparing the current situation in philosophy to the situation in physics 400 years ago is a false analogy, which doesn't take the fundamental difference between science and philosophy into consideration. Philosophy is about how we humans conceive the world, while physics attempts to describe a world separate from our perception. As the failure of the object-subject duality has shown, that is impossible. There is no 'real', 'external', 'absolute', 'underlying' world to describe, because talking about it doesn't make any sense. We aren't brains in an 'absolute reality'. If you keep thinking about it in that way, you fundamentally misunderstand the key philosophical issues surrounding the hard problem.
I mean, I was going to say 'mere' in scare quotes, to emphasise that the use of 'mere' wasn't belittling, as the scope of machines working on established physical principles is clearly pretty huge.
He doesn't assume all that much. In his book The Singularity Is Near Kurzweil looks at the physical limits of computing, to determine how long we can go before Moore's Law comes to an end. He concludes that we've got another fifty years, or possibly seventy if certain technologies prove feasible. As I recall he doesn't count on quantum computing much, but does think we'll manage reversible computing to solve heat dissipation issues.
That's just computation, but computation drives much of our other technology, and will even more so once computers get smarter than people. Kurzweil estimates the timeframe for that based on a range of estimates of the computational capacity of the brain.
I think that with the advent of quantum computers a limit in computing power is still far away. Nonetheless, what has brute computing power enabled us to do so far? There have been a few prestigious projects, e.g. Deep Blue, SETI, CERN, molecular folding, etc., which take advantage of this power. But most research projects profit little from an increase in computing power. I think new software and algorithms play a bigger role in trying to enable machines to solve the more "human" tasks.
Increases in computing power affect the whole chain. You're looking at the high end, where the limit of computing power is pushed, but you also need to remember that computing power for a given cost increases across the whole spectrum of computing devices.
My phone can listen to what I say, translate it into French, and speak it back to me, albeit leveraging external compute power to do most of the heavy lifting. But it wouldn't be possible to provide that external compute power at scale 20 years ago.
I'm also sure that countless small innovations in bioinformatics add up collectively to significant changes over time, and those individual innovations are powered by the more broadly available increases in computing power.
DNA sequencing, and everything we do with it, would be pretty painful without the computing power we have.
We've also gotten the ability to do much better fluid dynamics modeling, astrophysics simulations (whether you believe them or not), climate models (likewise). I've made use of the processing power we have now to do a bunch of "experimentation" in representation theory, though that's not obvious from the resulting writeup, which is more or less algebraic proof.
In general, for a lot of research areas where we think we have a decent model of some of the things that are going on, more computing power means more ability to use a computer to look for things to actually try in the lab (and thus refine the model), as well as more ability to collect and handle data.
And then there's the mundane bits, like being able to find existing work more easily, being able to typeset and disseminate your papers more easily and so forth.
I wasn't aware quantum computers had actually been invented yet. They are still theoretical devices.
Quantum computation has been explored, but as yet we don't have a computer capable of executing the quantum algorithms.
That aside, all I'm saying is that exponential growth has a limit. That growth could be measured in MIPS or in algorithm performance; it doesn't really matter, but I don't think that growth will continue exponentially.
"The question is how close are we to the limit and that is something we are only likely to know when we reach it."
You've posed the question and then immediately explained why it's fruitless to ask.
Even if you are right that there must be limits, if you have no idea when his models are likely to break down, then your skepticism is no more solid than his prediction.
Sure, I think Kurzweil does a decent job in framing a future within which there's plenty of room for innovation with no 'limits' to exponential growth in sight.
This whole issue of limitations is a separate prediction in and of itself, and it's not his.
My scepticism is of the implicit claim that you can make predictions by extrapolating exponential growth curves. You can't; you can only make predictions up to the point where the exponential growth breaks down, and as you can't predict that point before it happens, you can't make predictions.
I'm not saying he is wrong about the next 100 years, the singularity or AI; I'm saying he may be wrong about the short term, and he certainly can't be right indefinitely.
"I'm saying may be wrong about the short term and he certainly can't be right indefinitely."
Sure he can, since you're the one who added the claim that it will continue indefinitely. What he says instead is that progress will continue until well beyond the point where we can predict what the resulting society will look like, due to the claimed fact that it will, for instance, include things like true AIs and brain uploading. I do not recall him talking about where the progress will stop, probably because it would be meaningless to us anyhow. If an AI from 2200 came to us now and tried to explain the latest cutting-edge trends in research into the ultimate limits of cognition, we wouldn't get past the first paragraph.
Don't worry, you're hardly the only one to dismiss his claims without actually stopping for a moment and figuring out what he's actually claiming. I'm not exactly a strong Singularitarian myself, but a lot of people really need to stop reading other people's summaries of what he says (very, very few are accurate enough to let you evaluate what he is saying) and read past his first couple of paragraphs before flipping the bozo bit. He may not be right, but he's definitely not an idiot.
OK, so you're saying that Kurzweil's prediction is simply that exponential growth will continue until beyond the singularity?
If that is the case I don't really see how there is any logic in his claims. I understand that he has extrapolated an exponential curve, and I can see the potential of future tech if that exponential growth were to continue, but I don't really see what basis he has to claim that the exponential curve will not end tomorrow. I'm not saying that it will end tomorrow, I'm just saying that you can't base a prediction on the idea that exponential growth will continue when you have absolutely zero data about when its likely end is.
"Ok, so your saying that Kurzweil's prediction is simply that exponential growth will continue until beyond the singularity?"
Why don't you stop waiting for me to tell you and spend some honest time with the ideas? Of course you don't see how there is any logic to his claims, you haven't seen his claims at all.
Or, alternatively, realize you don't know what they are and decide not to worry about it. This is fine too. No joke. There are all kinds of times when I take this option. It's not bad to not know somebody's opinions, or to criticize them when you do know them; the problem is criticizing them when you don't actually know them.
I remain unconvinced that a transplanted consciousness would really be the same person as opposed to a new, identical person. What's worse, it might be impossible to tell the difference since even the new consciousness wouldn't be able to know.
…
But it'd really suck to be the last generation before some significant increase in lifespan (say, up to 200) is reached.
Luckily, Quantum Mechanics has that covered: the notion of swapping 2 identical particles doesn't make physical sense (that statement has testable consequences). This notion of "no swap" can of course be extended to larger things, like a whole human body.
Therefore, a sufficiently perfect copy of you _is_ you. For instance, if I freeze your body, copy it, then unfreeze the copy, it will not even end your consciousness and replace it with a new "identical" one. It simply does not make physical sense.
Now, uploading may be a bit different (you wouldn't run on biological neurons any more), but it still looks like it should work. First, if the upload is of sufficient quality to permit a later download (I expect it to be), then when you're back in your physical body, QM says that your consciousness didn't end, but merely experienced something while in a computer. I therefore expect my consciousness not to end, even if I'm never downloaded back. Further, I strongly suspect that consciousness is nothing more than a not-yet-understood mathematical property of running software.
For further reading, I strongly recommend the sequence on quantum physics by Eliezer Yudkowsky, on LessWrong. The introduction about philosophical zombies is also quite interesting. It's long, but you can absorb it in chunks.
Sir Roger Penrose has written at length on the topic of consciousness, and his views are very far from yours or Eliezer Yudkowsky's, and you can't deny that he probably also knows a thing or two about Quantum Mechanics.
Which is not to say Penrose is right, but clearly, the matter can't be as clear cut as you make it sound. Personally I'd say it's not a good sign if you find yourself invoking Quantum Mechanics to support your opinions on subjects like the nature of consciousness, the existence of God, etc.
I use QM in my argument because it is a direct cause of my believing that cut & paste transportation is in principle possible. Before I knew about that, I was in fact quite worried: philosophical arguments (like the Generalized Anti-Zombie Principle) make sense, but I trust physics more.
I don't know of the views of Penrose. Do you recommend a link?
Penrose believes that the nature of consciousness is connected to a physical phenomenon which happens in our brains but not in computers; therefore he does not believe a computer can possess consciousness, no matter what software you put on it. He's written several books on the subject as well as given a number of talks which are available on YouTube, but I don't have one link which I'd recommend above others. Sorry.
I don't think the properties of identical particles in Quantum Mechanics bring much if anything to the discussion at hand. Even if we lived in a classical universe, in which every atom would be unique and numbered, we could come up with thought experiments, in which, say, (a) a mad scientist makes an exact copy of your body including brain, (b) then kills you and feeds your ground body to your unsuspecting copy as hamburgers, and (c) all the atoms from your digested body make it to the same spots in your copy's body that they used to occupy in your own. (That last part might seem like a stretch, but really, is it more of a stretch than postulating that you and your copy would be identical in a sense you invoke in QM?) The end result from the world's point of view is exactly the same as if nothing happened to you.
And all that has not even that much to do with the question of what happens when you upload your brain to the computer, let it run for a while, and then recreate a physical brain with the updated state from the computer, since then the final state is emphatically not identical.
§2: Your argument actually doesn't feel like such a stretch. QM is just the final nail in the coffin that made me actually comfortable with cut & paste transportation.
§3: We could actually get the same effect as the mad scientist. Freeze my body, upload, lock me in a virtual cell for a few subjective hours, download by rearranging my neurons (or even my whole body) according to my virtual trip, reboot. There, you get the same final state: same atoms, and the same memories compared to what they would be if you locked me in a physical cell instead.
Those 'testable consequences' have only been verified to within experimental accuracy. However small the deviation, when scaling up, the consequences of small deviations soon start overshadowing the original guiding principles: the unsolvable three-body problem, the flow of sandpiles, turbulence in water. Chaos has been far from solved. Your argument will hold when you provide an actual, experimentally verifiable reduction of macroscopic laws to microscopic principles. Until then, it is all handwaving and make-believe.
In short, your argument hinges on reductionist assertions that are easily debatable. They may even turn out to be impossible to prove correct. The hard problem of consciousness has not been solved and you should not pretend that it does not exist.
> Those 'testable consequences' have only been verified to within experimental accuracy.
No, they have been verified up to perfect accuracy. That's because when two configurations are actually the same, you add the amplitudes first and then take the squared modulus; if they are even a tiny bit different (not the same), you take the squared modulus of each amplitude and then add them. The two cases lead to very different results.
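To put that in symbols (my own gloss on the standard formalism, not a detail of the specific experiments being discussed): for genuinely identical configurations you sum the amplitudes before taking the squared modulus, and for distinguishable ones you sum the probabilities:

```latex
P_{\text{identical}} = \lvert \psi_1 + \psi_2 \rvert^2
                     = \lvert \psi_1 \rvert^2 + \lvert \psi_2 \rvert^2
                       + 2\,\operatorname{Re}\!\bigl(\psi_1^{*}\psi_2\bigr),
\qquad
P_{\text{distinguishable}} = \lvert \psi_1 \rvert^2 + \lvert \psi_2 \rvert^2 .
```

The cross term is an interference effect of the same order as the probabilities themselves, which is why the two cases differ qualitatively rather than by some small experimental margin.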
> In short, your argument hinges on reductionist assertions that are easily debatable.
It does hinge on reductionist assumptions (there's only one territory out there, only maps are multi-level, and ontologically basic mental things don't exist). I doubt those are easily debatable, however. Up until now those assumptions have worked, and I see no reason why they should break down some day. I am fully aware that the problem of consciousness has not been solved, but since we discovered that the mind is made of neurons, I think it is reasonable to believe it is solvable (though intractable at the moment).
No, they have been verified up to perfect accuracy.
There is no such thing in experimental physics. You are glossing over a number of experimental details surrounding these 'experimental measures'. This experiment does not succeed all the time. The measures are taken a million times and the results averaged. You don't know the results of the switch of two individual particles. Hell, you can't even know for sure you actually switched them. That's a nice conundrum.
And your claim that this notion can 'of course' be extended to larger things is begging the question.
I'm afraid you just don't understand the QM experiments in question. For there to be a secret difference between the two particles requires observed reality to be a lie.
I think I understand these experiments pretty well, having executed experiments of their kind myself. The mathematics tells us the particles are indistinguishable. The physics tells us that the mathematics describes the observations pretty accurately. But they remain observations with an experimental error and they leave room for the especially interesting options of
1) small deviations that are amplified when you increase the scale of the problem and
2) small deviations that simply occur only 1 in a billion times, for whatever reason (the assumptions of homogeneity, isotropy, time-invariance, and so forth are dangerous assumptions).
No observation allows you to conclude anything about what gives rise to those observations. You certainly cannot conclude it obeys the same mathematical relations used to predict the observations.
We can't solve a trivial three-body problem, and the deviations in numerical approximations are problematic for some purposes. As we get more ambitious, those deviations will become smaller, but may remain too large for the goal. We don't know whether the law of gravity contains an exact exponent of 2. The Voyager spacecraft seem to suggest there may be more involved, and nobody has offered a decent suggestion in the past decades. We can't predict the flow of sand rolling off a dune, and we may never be able to do that, because the complexity of the problem may turn out to exceed the theoretically possible computational power of the universe.
As for the second sentence: it's not observed reality that lies; it is our overinterpretation of the observations that does. We extrapolate beyond what is reasonable. There are too many trivial puzzles unsolved, or even proven unsolvable. How can you possibly trust or accept a description with those defects to be the be-all and end-all description of our universe? I have every reason to believe that my universe defies complete description, modelling, simulation. How about yours?
> No observation allows you to conclude anything about what gives rise to those observations.
No, but they sure should have a damn powerful influence over your estimated probabilities for your previous hypotheses. Many of those observations are tests, after all.
Now, though I don't wield QM with my own strength, I can tell that the basics have little to do with chaotic systems, and that most of our intuitions are better thrown out the window. Really, go read that sequence, at least until you can parse "complex amplitude distribution over a configuration space". It's accessible, it's established science, and I trust Eliezer reported it accurately.
From glancing through the materials, I gather there is much in that sequence I already know. The problem is that I disagree with the conclusions that are drawn, which is entirely possible, because there are many opportunities for disagreement in these sketches of reality.
For instance, the whole sequence about MWI doesn't succeed in making a point with me. I understand it perfectly well: it's an interpretation that speaks to the imagination and I fully agree it's the interpretation that makes the most sense, but there actually is no experimental evidence whatsoever that distinguishes MWI as a better interpretation than many others out there. MWI is just another narrative that attempts to make a mathematical framework yield to our understanding. What makes sense is the actual criterion being used here and, as you said earlier, we should leave our intuitions behind.
No, the actual criterion being used here is what is simplest according to Kolmogorov complexity. In other words, Occam's razor. Nothing to do with human intuitions. If you have Occam's priors, then MWI is far more probable than OWI, and the fact that physicists thought about it later simply doesn't count. (Well, it counts in the social process that is Science, but Science is different from Occam's priors + probability theory).
Anyway, MWI is irrelevant with respect to cut & paste transportation. Just remember that perfect equality in QM is different in kind from almost perfect equality, meaning that even with imperfect instruments, the results you obtain with perfect equality are wildly different from the results you obtain otherwise. Way past the margin for error of the instruments we have.
Yes, the theory says that there is a way to test something perfectly, with imperfect instruments. By some miracle, the theory is the Kolmogorov-simplest one we currently know that matches the experimental results. By another miracle, the theory is (as far as I know) uncontroversial up to MWI vs OWI.
Another thing the theory says is that the notion of identity should be thrown out the window. That applies to small factors in configuration space (particles) as well as large ones (paintings, human bodies). It doesn't say we should treat small factors differently than large ones. So basically, if you manage to make a copy accurate up to thermal noise, you got yourself a second original. And if the "original" original were destroyed, well, what's left is the "copy" original, which actually is the original, period (because identity doesn't count).
I'd be surprised to learn that this argument is controversial among physicists.
I'm not sure what the best place to continue such subdiscussions is, but I think we should put an end to it here. Let me just conclude with this:
I'd be surprised to learn that this argument is controversial among physicists.
The fact that an argument in philosophy is uncontroversial among physicists means exactly nothing, because they are generally too philosophically unsophisticated to respect the post-Popperian criticisms of what their jobs entail and what it is that 'science' produces.
I don't know, but it doesn't matter because this morning it was me who woke up. It could matter to the me who went to sleep last night but he's no longer here…
I found David Brin's novel Kiln People to be a fun enough read and an interesting take on thoughts related to copying human consciousness. He uses an entirely different technical mechanism to get the copying, thus allowing a different angle in the thinking.
We could make a complete physical copy of a brain with all state information (neural connections, blood & solute concentrations, etc.), along with an exact copy of the body of the person with that brain.
Result: The copy wouldn't "feel" what's happening to the original. For example, if I fly the copy around the world and burn their hand, we wouldn't expect the other to feel it.
Likewise, we may be able to copy the brain in a computer simulation but your initial self wouldn't "feel" what happens to your computer self. I think you're right in saying the copy-consciousness wouldn't know it wasn't "real".
Something like hooking your brain to the computer (like The Matrix) seems like it'd probably be the only way to make you experience consciousness - but that's of course linked to our death. Unless perhaps we could slowly replace neurons with computer-based copies over time so that the effect would be slow enough that you could adapt. Then you could perhaps over-clock the brain when all of the cells are the computer ones? Just a thought...
If even the new consciousness isn't able to know, then what does "same" mean and what part of that definition aren't you satisfying? More acutely, why should you care about satisfying it?
This is an interesting question because as-is, I don't think anyone can really understand what it means to be someone else. For this purpose, I can say my consciousness is something that exists currently and has memories from before, but there is no way to verify that the memories were recorded by the same consciousness. There could have been a different person that no longer exists.
Perhaps a good alternate angle is to consider the “copy” (rather than “replacement”) proposition someone else mentioned: let's say a perfect copy of you could be made (i.e. the entirety of a consciousness is somehow material). Are the two consciousnesses linked somehow? Or are they completely separate despite thinking exactly alike, having the same memories &c.? If they are separate, what happens if the original is killed? Then, the next logical step is to ask how that is different from a consciousness transplant.
It's wild stuff. Or seems like it to a thus-far (presumed) single consciousness.
If you make an exact copy of me at time t, then until the original and the copy part ways and have distinct thoughts and experiences, they're both just me_t, one no more so than the other. At t+1, me_t doesn't exist any more; this is true regardless of how many copies of me_t once existed.
It means you think you just spent $500m on your 'upload' when in reality you just committed some elaborate suicide, launching some other (very similar) chap into immortality.
This isn't responsive. You've only answered my question to the extent of defining "same" as the opposite of "other", which is not a lot of progress. So again, what makes this "other" chap "other", and why should anyone care about satisfying that criterion?
Since the process isn't necessarily destructive to the 'original', both entities could subsequently exist simultaneously and lead separate lives. They're now two separate individuals, with an identical memory up to a certain moment. The original will never experience what it is like to be the reproduction after that point, and the reproduction will never know what it is like to be the original after they were branched. It doesn't make the reproduction any less 'legitimate' as a consciousness, but they're definitely not the same.
It'd be another thing entirely if you could join the 'threads' back together, though. Imagine forking yourself into 12 different entities, each living a separate life for 100 years, then reintegrating.
> But it'd really suck to be the last generation before some significant increase in lifespan (say, up to 200) is reached.
If you read the Mars trilogy, the subject of excessive long life spans is discussed (at least from a sci-fi fictional perspective). The characters do indeed live out lives that exceed 200 years.
One of the questions that then arises is: can the human brain (or, as is being hypothesized here, a synthetic brain) handle 200 years' worth of memories?
I can remember a couple of things from when I was 3. I am now over 50. As the years pass, some of those older memories become slightly fuzzy. I do wonder about the capacity to deal with 200 years' worth.
It's really quite a horrible concept - not knowing that you aren't yourself. But if you hot-swap one 'unit' of the brain at a time, so that you gradually go from 100% brainware to 100% software - perhaps that would help.
That question always gets me. I tend to think that it doesn’t matter but I’m really not sure.
It seems to me that for consciousness to be not permanent, additional systems or mechanisms are required. There would need to be some sort of external thing (external meaning something that is not scanned) which keeps track of our consciousness. How else would non-continuous consciousness work?
Progress hyperaccelerates, and every hour brings a century's worth of scientific breakthroughs. We ditch Darwin and take charge of our own evolution. The human genome becomes just so much code to be bug-tested and optimized and, if necessary, rewritten. Indefinite life extension becomes a reality; people die only if they choose to. Death loses its sting once and for all. Kurzweil hopes to bring his dead father back to life.
I am a theist because I believe that human beings - flawed, fallible, finite, and mortal - derive ultimate meaning and hope from believing in something that is bigger than themselves. Now, some will say that this is a delusion and that it is vital to maintaining an intelligent outlook on life that one stay with what is observable, verifiable, and controllable as a way both of explaining and living life. All of which is fine, as my point here is not to enter into non-hacker-related topics. But, that said, when I read statements like those quoted above from this article, I can't help but think that they reflect what is merely the scientific equivalent of needing to believe in something that is bigger than ourselves as part of retaining hope in this life - taking the form, in Singularity thinking, of something akin to attaining human perfectibility via an exponentially expanding knowledge base that presumably will be applied by humanity toward good and not toward evil. Obviously, this view is grounded in the science of what computers have done and potentially can do in the future, but the final step in the analysis - that immortality will be achieved and that (it would seem) all major human problems will be solved through this superior intelligence - strikes me as being more about faith than about science.
I don't think the part about immortality and the solution to all human problems has anything to do with faith or science. It's about motivation. These are things humans want. The question then is how best to achieve a future in which those goals are fulfilled. Transhumanists think about how to reach those goals via technology - by the technical manipulation of the strictly material world. Sure, you can say that it takes some amount of faith to believe that these goals are achievable via technology before it has happened, but I think it's closer to having a vision (in the same sense that Apple had a vision for what tablet computing could be like). But that is very, very different from believing that one actually attains the goal by having faith.
> I am a theist because I believe that [we] derive ultimate meaning and hope from believing in something that is bigger than [us].
Wait a minute, you admit that you believe in something because it feels good? It sounds like you want to believe in God, but actually don't really. I'd like to test that, so please forgive the following troll.
God doesn't exist, and those who believe it does are wrong (yes, my belief is that strong).
Now, is your belief so strong that you feel the urge to respond something like "no you're wrong, God does exist"? I don't ask for evidence (the internet has plenty), just a yes or a no, followed by your estimated probability that God exists if you wish.
It just seems like an incorrect reappropriation of “theism”, really.
I could make a similar argument, though without a religious undertone, about a belief in a greater-than-whole (or struggle for one, if that sounds better).
I agree. This is a dangerous sort of argument, to believe something - particularly something so core to human behavior - simply because it is convenient and satisfying.
In fact, this type of justification is at the root of many of the terrible things that have happened in the world - from slavery to genocide.
Are you a terrible person? Maybe not, but only because it doesn't feel right.
I read pretty much exactly the same predictions in "The Mighty Micro" in the early 80s (intelligent machines, immortality, etc.), which pretty much influenced me to do a CS degree and go into postgraduate AI research.
Nobody would be more delighted than me if immortality is achieved in 2045 (I'll be 80!) but do I expect it? Not really, nor do I expect effective commercial fusion power either (which also has a habit of being a couple of decades away and has been for the last 60 years).
[Edit: Note that I do believe that artificial general intelligence is perfectly possible (we do, after all, have a working example) just that it won't happen any time soon.]
It did take more than a few decades to design, however. I wouldn't really expect a new model to ship inside of forty years. At best, that's like expecting the next version of Windows to ship in the next three hours.
My theory of the singularity is that the concept is so popular these days precisely because (a) we've passed a critical knowledge threshold: A significant number of the best-educated people in the world have become aware that not only is a giant collection of self-replicating simple machines possible, it is old news -- billions-of-years-old news. But, (b), the majority of humanity still does not understand that. It is the very definition of irony, but most people -- even the educated ones -- have great difficulty conceiving of a vast, mobile, sentient colony of trillions of single-celled organisms working together with plan and purpose. Frankly, it's easier to believe in elves. Elves, we can grasp.
That gap between hard science and magical thinking is fertile ground for fantasy.
Singularity fiction bears the same relationship to modern molecular biology that Frankenstein bore to the work of Alessandro Volta.
It did take more than a few decades to design, however. I wouldn't really expect a new model to ship inside of forty years. At best, that's like expecting the next version of Windows to ship in the next three hours.
That's true only to the extent that the evolution of machine intelligence proceeds via the same processes that led to human intelligence. Which is pretty much impossible, so if we ever get to AGI, we're going to be dealing with another type of evolution or design, and it's very possible that it will happen on a timescale much faster than that of biological evolution.
IMO the reason the Singularity concept is popular these days is that we're the first generation that is, barring a massive disaster that both throws Moore's Law off and prevents us from increasing parallel processing power, going to be in possession of reasonably priced machines that have more compute power than the human brain. Which means that it's only a matter of discovering the right algorithms to run.
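For what it's worth, the back-of-the-envelope version of that claim looks like this (a sketch with assumed numbers: a commonly cited, and disputed, estimate of ~10^16 operations per second for the brain, an assumed current machine at ~10^13 ops/s for the same price, and a two-year doubling time; none of these figures come from the thread):

```python
import math

# Back-of-the-envelope: years until a fixed-price machine matches an assumed
# estimate of the brain's raw processing rate. All numbers are illustrative
# assumptions, not measurements.

brain_ops_per_sec   = 1e16   # one commonly cited (and contested) estimate
machine_ops_per_sec = 1e13   # assumed machine at the target price today
doubling_years      = 2.0    # assumed Moore's-law-style doubling period

doublings = math.log2(brain_ops_per_sec / machine_ops_per_sec)
print(f"~{doublings:.1f} doublings needed, i.e. roughly "
      f"{doublings * doubling_years:.0f} years at the assumed rate")
```

Under those assumptions the raw hardware gap closes within a couple of decades; whether an ops-per-second figure for the brain is even meaningful is a separate argument.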
Whether we're making progress on that is debatable, but don't mistake the slow progress at cracking "the" AI algorithm for the speed at which the intelligence explosion will take off after that. It took many long, slow years to figure out how to build a nuclear weapon, but once we figured out the trick and started a nuclear reaction, the thing blew up in a fraction of a second. AI is likely to be very similar: it may take us a long time to get there, but once we're there, watch out....
once we figured out the trick and started a nuclear reaction, the thing blew up in a fraction of a second
Please, never use this metaphor again. This level of magical thinking is just embarrassing for all of us. It's like saying that building a house must be a really fast process, because burning the house down goes really quickly.
Nuclear weapons blow up easily because, for (e.g.) plutonium, "blowing up" is thermodynamically favorable. A brick of plutonium has much less entropy than a giant fireball and an expanding cloud of radioactive fallout. In a thermodynamic sense, the plutonium nuclei want to be fissioned. All you have to do is find a way to coax them out of their metastable state.
A human brain is massively thermodynamically unfavorable. If you put a bunch of proteins in a box the odds that they will self-assemble into Einstein's brain are, literally, astronomically small.
That's why the brain is a miracle and a nuclear reaction is, frankly, pretty commonplace. There are a lot of stars. Stars are easy to explain. Brains are hard.
> artificial general intelligence is perfectly possible (we do, after all, have a working example)
It depends what is meant by 'general'. Would we say that humans have 'general' physical capability? Well, we can do quite a few things, yes, but there are plenty of things various animals and machines can do that we cannot, and there are certainly plenty of further things conceivably possible. Why would not informational machines -- intelligence -- be like that?
Have we developed a general physical machine yet? What would that mean?
Human 'intelligence' is just one small example. It is not really special, and not a general measure of anything. So although AI will certainly improve, saying there is a particular threshold that will be passed is sort-of problematic (which does seem to fit the history of AI).
Sometimes I think we are driven by our animal instincts, and that intelligence is just 'processing power' that modulates or assists those instincts. I wonder, would a system with unlimited processing power even 'want' to develop and expand if it didn't have any 'urges?' Or would there be a chance that it would happen upon the creation of its own arbitrary 'instincts?'
This is actually very insightful. Our brains are NOT a computer, we are NOT just thinking machines.
An 'intelligent' computer may be no more comprehensible to us than an intelligent amoeba, intelligent tree or intelligent stellar gas cluster.
Sure we could learn to communicate with it by math, or clicks or something. But can we ever communicate at a meaningful level without ANY common ground? Will it 'want' what we want? Will it even know the meaning of that?
TV shows may have it right - a machine intelligence may be (probably will be) staggeringly unconcerned with human desires.
Sure you can point to machines designed to simulate human activity (create music/speak/do logic). These are like puppets that look like a machine intelligence, or clever videos of what a machine intelligence might act like.
An intelligent machine won't be a simulation, it will be some massive construct of neural nodes complex enough to spark into thought. And it will think what it will think.
For instance it might think "what a massively boring place, sitting here in the dark with no inputs and nobody to talk to. I think I'll stop".
Thanks... I just think there is some old-school anthropomorphism at work here. They're putting a human face on an incomprehensible force, just as the ancients did when creating a god for a natural force that was perhaps equally incomprehensible to them.
You'd better hope it is. If hardware is what limits the rate of improvement of AI, you get slow growth. If knowledge of how to write the software (perhaps by copying algorithms from the brain) limits it, then you get sudden improvements and serious potential for a runaway singularity.
I'm pretty sure the "knowledge of how to write the software" is the most complex part of the problem by far.
However, one outside chance of a limit may be that the brain is doing something that is fundamentally different to the kinds of operations carried out by a normal computer - which is essentially the argument in The Emperor's New Mind. When I read that book at the height of my own AI enthusiasm I thought it was pretty silly. However, after reading Anathem (of all things) it made me wonder if perhaps Penrose may have had a point.
I think that the limitations we face vary depending on what model you are trying to solve.
So; are you trying to create software that "emulates" human intelligence (i.e. AI)? In which case, yes, software is the major limitation.
Or are you trying to create an artificial (and independently functioning) model of the human brain? In which case you have two limits: hardware speed, but also a huge lack of knowledge about the "secrets" of our brains :)
Assuming there's no trivial way of mapping our neural networks to hardware chips running binary code, writing the emulator might prove beyond the human mind.
Can our thought processes be abstracted into blocks of a few hundred thousand lines of high level language we might actually be capable of writing?
Perhaps you don't need to. Perhaps you only need to emulate the substrate (neural network, blabla) and then copy an instance of a running brain to it. That may be a lot simpler than understanding the actual processes.
Freeze, dice, slice, scan with an electron microscope, interpolate into a 3D model, analyze into a map of connections, construct the equivalent with software neurons, simulate the sense inputs, throw the on switch.
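Only the last couple of steps in that list can be sketched in code today. Purely as an illustration of the "software neurons" step (a toy leaky integrate-and-fire network driven by an invented connection map, nowhere near the fidelity an actual emulation would need):

```python
import random

# Toy "software neurons": leaky integrate-and-fire units wired by a random
# connection map standing in for one scanned from a real brain. This only
# illustrates the simulation step, not an actual emulation.

N, THRESHOLD, LEAK, WEIGHT = 100, 1.0, 0.9, 0.3
connections = {i: random.sample(range(N), 5) for i in range(N)}  # fake "scan"
potential = [0.0] * N

for t in range(50):
    inputs = [random.uniform(0.0, 0.2) for _ in range(N)]   # simulated sense input
    spikes = []
    for i in range(N):
        potential[i] = potential[i] * LEAK + inputs[i]       # leak, then integrate
        if potential[i] >= THRESHOLD:
            spikes.append(i)
            potential[i] = 0.0                               # fire and reset
    for i in spikes:                                         # propagate along the map
        for j in connections[i]:
            potential[j] += WEIGHT
    print(f"t={t:2d}  spikes={len(spikes)}")
```

The hard parts, of course, are the scanning and the reconstruction of the connection map, not the simulation loop.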
That's a very interesting paper and appears to confirm that computational power is the least of all the problems - whilst even in 2005 it was possible to run a simulation based on 10^11 random neurons, the scanning technology we have available at present isn't yet adequate.
I tend to think #2 is the approach that is most likely to lead to a "real" general intelligence - reverse engineer what we know works, replicate the essential "secrets" (whatever they are) and scale up and out.
I just don't see much progress on #1 - and people have been trying this approach for 50 years.
Maybe one way to overcome this would be to model the body at the molecular/atomic level and "run" someone's DNA. It's not impossible to imagine that a supercomputer in 30 years, starting with a model of an embryonic cell, could emulate the growth of the human body. It wouldn't even have to happen in real time. From the point of view of the individual, time would feel normal.
This, of course, has massive ethical and practical implications. It wouldn't be fair to do this without simulating external stimulus (e.g. photons hitting the back of the eye) or human-to-human interaction. You wouldn't be able to ask the individual beforehand, so it probably would be considered completely unethical... that doesn't mean someone won't do it eventually, though.
Even assuming current processing capability growth rates continue, it's unlikely I'll live to see a complete modeling of the human brain (an emulation approach to AI). There's always the possibility someone will discover a secret sauce that permits self-aware artificial intelligence.
I have absolutely no expertise in this, but I have a feeling that for machines to better us, they need to be engineered at least as well as the human brain. And considering that Watson is a bleeding-edge computer that needs a machine as big as a room and lots of power, whereas the human brain fits in a shoebox and can run on a glass of milk and a tuna sandwich, there is quite a bit of ground to cover before we hit the singularity!
How much power does it take to create a glass of milk and a tuna sandwich? To be sure, this Watson is unwieldy, but the gap might be closer than you think.
Also, Ken Jennings is 37, and you can bet he spent 20 years being trained by experts in human learning. So the question I have to ask is, could one train a human child to do this in the same time that Watson has been around? I suspect it's not possible, at least not with consistent results. It looks like Watson is only 5-10 years old, depending on your reckoning.
Please don't get me wrong, no doubt Watson is an amazing accomplishment, but my point was that it almost seems cocky to talk about creating a level of intelligence on par with humans, considering nature took millions and millions of years to do it. Mankind has made some decent scientific progress only in the last 200 years or so, and we have not been able to create even a living organism as simple as a virus yet. Again, I have no background in these topics, but it just sounds to me that we are a little off when we talk about creating the Singularity in the near future!
The thing with nature is that its "design process", if one can call it that, is really dumb. It tries random stuff and then basically hill-climbs. Significant redesigns are very improbable as a result. There's no direction to the optimization search.
I would certainly hope that we can do better than nature in this regard.
Plus, the way I understood it, Watson has tons of algorithms for finding the answer, and it runs all of those on the input in a massively parallel system with a processing speed much higher than the human brain's. If Ken Jennings is using a much, much smaller and slower device and is still almost on par, I think the engineering of Watson is almost trivial compared to the brain, considering Ken's brain is also tracking 20 million other parameters of his body and regulating all of that at the same time.
Comparing the processing ability of the human brain with that of Watson is not meaningful. A computer also regulates a ton of parameters like CPU temperature, voltage, and so on. And every single unit has its own logic circuits for regulating internal stability.
What's meaningful is comparing energy and time required for creating a human or a computer capable of doing this, as well as maintenance costs.
And assuming Moore's law continues unabated, a Watson-scale machine will be competitive with a human within five to ten years. Given that it's already functional, I would bet money that it will be cost-effective compared to Ken Jennings by 2045, whether through advances in computing or energy. I wouldn't bet it will be sentient, but just that we can build one and set it up to answer questions cheaper than we can raise someone with as good a head for trivia as Ken.
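The arithmetic behind that bet is simple enough to write down; since I have no real cost figure for a Watson-scale machine, treat the starting ratio as a free parameter (all numbers hypothetical):

```python
import math

# Years until a machine of fixed capability costs no more than the human
# alternative, if machine cost halves every `halving_years` years.
# The initial cost ratio is a hypothetical input, not a real Watson figure.

def years_to_cost_parity(initial_cost_ratio, halving_years=2.0):
    return math.log2(initial_cost_ratio) * halving_years

for ratio in (10, 100, 1000):
    print(f"{ratio:5d}x more expensive today -> "
          f"~{years_to_cost_parity(ratio):.0f} years to cost parity")
```

Even at a thousandfold cost disadvantage today, a two-year halving time reaches parity in about twenty years, which is roughly the shape of the bet above.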
I think we put far too much human bias on what 'intelligence' means. I don't suspect that we will have Star Trek's 'Data', or a rise of the machines.
My guess is that something wholly unexpected will result. A different consciousness will probably interact with the universe in an unrecognizable manner, probably more defined by its unique needs and limitations than by anything we can relate to.
I'd wager that new AI will quickly begin to ignore us, to the extent that they are able.
> W.B. Yeats describes mankind's fleshly predicament as a soul fastened to a dying animal.
This is an old, tired, dualist Judeo-Christian view. And it's totally false; there isn't any thing called a "soul" that could be separated from the body. It's part of it, a secretion of sorts of the whole brain and body.
Even a real brain kept in a bottle wouldn't properly behave like a real human, IMO. This is all quite ridiculous, really.
It's just a great summary of the concerns I'm living with on a day-to-day basis. Everything is there for a non-geek to understand the big picture. AI, robotics, the Singularity, and biotics will lead us to the biggest disruption in mankind's history. One subject I like that is not talked about: the ability of those superminds to work together lightning-fast to form an even more powerful machine or robot.
The article states that "computers are getting faster faster". I personally wouldn't be so sure; have they not heard about the end of Moore's law?
A more fundamental issue for me is that computers aren't able to think in place of people. If you want to work on hard problems, you have to learn all the relevant information beforehand, thus setting a hard limit on how much progress can be made, provided we do not find ways to learn faster or to make computers think for us.
Add to this that, for me, many domains of knowledge nowadays are presented in unnecessarily complex ways. We are building (and this is especially true in computer science) a pile of complexity on top of a pile of complexity. I believe that one day the piled-up complexity will lead us to a standstill and that we will have to seriously simplify some systems.
Out of curiosity, what, exactly, are Kurzweil's qualifications to be making predictions like this? I see that he's done a fair share of building music synthesizers. Neat! But I don't see how he made the leap from qualified to make a synthesizer to qualified to make broad predictions about computing and biotechnological trends over the next three decades....