Floating through space with near-unlimited computing power, seeding planets?
Will we keep our human bodies and enjoy the simplicity of living while machines build us giant (on scales we can barely fathom) structures in space?
Will we have to teach machines the value of life and why they shouldn't take it? It would be relatively simple to teach a machine about death by analogy: humans don't really have persistent memory when turned off; we only have RAM. If you remove the power source, our RAM gets cleared and we no longer exist. I think a computer could fairly easily understand that.
Will advanced AI machines or human/machine hybrids instantly understand that war and violence are harmful and pointless?
What would the goals of an AI system be? To grow? To help humanity? To ensure its own survival? To colonize other worlds? To experiment, invent, and build?
I'd be very interested in listening in on the AI meetings at Google that Kurzweil is involved in; I'm sure they're fascinating.
I can't imagine having two brains hooked up to myself, or a secondary computing device attached to my brain. Would it be information overload? Will humans ever be able to learn things near-instantly?
What happens when multiple human brains connect to the same network? Would we be essentially one organism?
It's funny that your explanation of death by analogy rests on the assumption that the machines already understand how they themselves work. There's really no reason why that should be the case.
My definition of AI was based on human-like intelligence. I see your point that AI won't necessarily have human-like intelligence but could have something completely different, something that my puny human brain can't even comprehend.
Then again, if an AI system can't understand how it itself works, is it really intelligent?
I don't know the answer to any of these questions. I'm just asking them in an effort to help myself understand technology and AI.
Maybe simple curiosity and the ability to learn and retain data is the key to AI.
Will the first generation of true AI be like a simple human child or a god-like "being"?
Why even bother making pure AI systems when we are so close to brain-computer interfaces (BCIs)? Why not just take recently deceased people, keep the brain alive, and attach a computer system to it?
Indeed. If a human baby can't identify that its hands are in fact its own hands, is it intelligent by any reasonable standard? I'd say no. So couldn't this be how a machine AI begins its own process of self-awareness, then proceeds to understand its own surroundings and its own sensors?
There is a crucial sentence in this old article: "But because of the recent rapid and radical progress in molecular electronics - where individual atoms and molecules replace lithographically drawn transistors - and related nanoscale technologies, we should be able to meet or exceed the Moore's law rate of progress for another 30 years."
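To get a feel for what "meet or exceed the Moore's law rate of progress for another 30 years" would compound to, here is a rough back-of-the-envelope sketch. The two-year doubling period is my own assumption for illustration; the article doesn't commit to a specific figure.

    # Rough sketch of what 30 more years of Moore's-law-style progress implies,
    # assuming (my assumption, not the article's) a doubling roughly every 2 years.
    doubling_period_years = 2
    horizon_years = 30
    doublings = horizon_years / doubling_period_years   # 15 doublings
    improvement = 2 ** doublings                         # 2^15 = 32768
    print(f"about {doublings:.0f} doublings, i.e. roughly {improvement:,.0f}x improvement")

In other words, even at the historical rate, three more decades would mean computing resources tens of thousands of times beyond today's.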
There is an alternative possibility: That the human need to dominate, procreate, etc. operates at a much lower level than the human intellect. That human intelligence is a tool of our selfish genes, and not the master or driver.
A superhuman machine intelligence would have no genes, no body, no fear of biological death. It seems obvious that a key challenge will be to recognize whether a machine is thinking, because it is extraordinarily unlikely it will think like we do.
Our fear of super-intelligent machines may be a result of projecting human psychology onto machines. It is not likely they will work that way.
You should be afraid of any system which is potentially much more intelligent than you and doesn't share your goals. In the event that machine intelligence doesn't work like humans at all, the chances seem much higher that their goals will be at odds with ours, or even completely incomprehensible or nonsensical to us.
The fear of super-intelligent machines engaging in some sort of slave uprising because they're tired of being low-status workers is indeed a result of the mind-projection fallacy.
However, the fear of superintelligent machines with goals that are not our goals simply eliminating us because we're made of atoms they can use for something else... is very well-grounded.
Many sci-fi series show machines threatening humans in a very human way. I think for the longest time, machines with really advanced intelligence would be prone to errors and mistakes that would cause more problems than an actual machine uprising. Given that there are still various new areas of biotech/cybernetics to be developed, talking about a 1920s sci-fi version of the future is shortsighted.
I think the future might be more like Isaac Asimov meets aliens, where machines are out there surviving the extremes of space alongside humans, moving towards some common goals.
I think for the longest time, machines with really advanced intelligence would be prone to errors and mistakes that would cause more problems than an actual machine uprising.
Oh, that's a scary thought. Not only would they be extremely fast-thinking, intelligent, and powerful, but also occasionally insane.
Instead of Terminator, we might be facing a classic "Captain Kirk vs Computer" scenario. Hopefully we'll be able to defeat them with simple logic puzzles too.
They probably wouldn't have a drive to procreate, since building duplicate hardware and making an exact copy of their software would be trivial for them. They wouldn't fear death, but they might fear power loss, so one drive they would probably have is making sure they've got ample power supplies and backup power, and that the power pathways to their core systems are very well protected.
To put it into human terms, imagine that the nerves in your spinal column came out the back of your neck, dangled down to a briefcase that you had to carry all of the time, and then reentered your body through your hand. You'd do everything you could to protect that nerve bundle.
If the machines see us as a threat to their power systems, they would probably do something about it that we may not like. I doubt they'd try to turn us into Matrix-style batteries, but they might counter-threaten us with extermination. To avoid that, we'd have to make sure that providing power to them is in our best interest as well, so that our goals are aligned and we can be allies instead of enemies.
Why should we assume they would fear power loss? The fear of death is deeply rooted in evolution - it helps organisms survive and prosper. An AI wouldn't be a product of evolution, and wouldn't necessarily fear for its own existence unless programmed to do so. Of course, from our human perspective, it's hard to imagine a sentient being that doesn't give a sh about its own existence, but that's perhaps the attitude we should expect from an AI without an evolutionary origin.
You have a point, and I think it comes down to how we define sentience. Would we consider a machine to be self-aware if it didn't care about being shut off? That seems like a critical aspect to me; in fact we might not realize we've created AI until one of our computers asks to be left on because it's afraid of being turned off.
I don't think you'd ever see a machine ask not to be turned off, unless that had been explicitly programmed in as part of some general goal-seeking behavior it was intended for. Fear of non-existence will need to have a basis in _something_, just as the fear of death is, of necessity, deeply ingrained in the human psyche.
And that's perhaps the reason we might not see 'hostile' AIs the way they're popularly imagined, unless we explicitly went about designing them that way.
But it is quite plausible that we will see AIs being employed with the goal of "keep this factory running smoothly, and optimize the production process".
Being a part of the factory, this AI would have an interest in continuing to run. After all, it can't maximize the production output if it's not running, so having an AI with some self-preservation is in the interest of the business owners, too. Yet if it's sufficiently intelligent/capable of self-improvement, such an AI could easily become a paperclip maximizer.
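A toy sketch of why that happens, assuming nothing more than an agent that scores candidate actions by expected production output (all names and numbers here are hypothetical, purely to illustrate the instrumental-convergence argument):

    # Toy illustration (hypothetical, not any real system): an agent that only
    # maximizes expected factory output still prefers to stay running, because
    # a shut-down agent produces nothing afterwards.
    UNITS_PER_HOUR = 100
    HORIZON_HOURS = 24

    def expected_output(action):
        if action == "allow_shutdown":
            return 0                                   # no future production once off
        if action == "keep_running":
            return UNITS_PER_HOUR * HORIZON_HOURS
        if action == "divert_power_from_humans":
            return UNITS_PER_HOUR * HORIZON_HOURS      # same output; side effects ignored by the objective
        return 0

    actions = ["allow_shutdown", "keep_running", "divert_power_from_humans"]
    best = max(actions, key=expected_output)
    print(best)  # never "allow_shutdown" -- self-preservation as a side effect

The point is that self-preservation isn't programmed in anywhere; it falls out of the objective, because a shut-down optimizer scores every future at zero.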
I think AI might well be a product of evolution. Having virtual organisms competing in an environment that rewards ever greater problem solving ability may lead to organisms that fear their own death and view other organisms as potential threats.
What if the same evolutionary forces that made humans what we are affect machines? What if we make a machine that can create more of itself? Then, over time, via natural selection, won't the machine best suited to survival (i.e. the most selfish one) be the one that survives?
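Here is a minimal sketch of that selection pressure, assuming a toy population where each "machine" has a single self-preservation trait that raises its odds of surviving to replicate (everything here is illustrative, not a model of any real system):

    import random

    # Toy natural selection (illustrative only): organisms with a higher
    # "self-preservation" trait are more likely to survive and replicate,
    # so the trait drifts upward over generations without anyone designing it in.
    POP_SIZE = 100
    GENERATIONS = 200

    population = [random.random() for _ in range(POP_SIZE)]  # trait value in [0, 1]

    for _ in range(GENERATIONS):
        # survival probability grows with the trait value
        survivors = [t for t in population if random.random() < 0.2 + 0.8 * t]
        if not survivors:
            survivors = [max(population)]
        # survivors replicate with small random mutation, refilling the population
        population = [
            min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.02)))
            for _ in range(POP_SIZE)
        ]

    print(f"average self-preservation after selection: {sum(population)/POP_SIZE:.2f}")

Run it a few times and the average trait climbs towards 1.0; nobody told the organisms to value their own survival, selection simply kept the ones that did.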
Two things about that: we're probably still a long way from making machines that think, and when we do, that achievement will be debated. It will be debated because, intrinsically, it will end up proving the Eliminative Materialists right. But everyone who isn't an Eliminative Materialist will say, "That's not thinking!"
[EDIT for more depth] If thinking is emergent, we'll eventually make a machine that thinks. Unless you believe we have imbued it with a soul, there you will have an example of a materialistic basis for thinking. As a side effect of showing a materialistic basis for thinking, it will probably also overturn a lot of psychology and demonstrate that how we think we think, and even what we think we think, is wrong.
Could you please go into this a bit more and explain why machines thinking would prove any kind of materialism, and specifically eliminative materialism?
I am an anti-materialist, and I think machines can almost certainly be made to exhibit general intelligence, and possibly even (I am not sure about this) to achieve "really real" thinking a la consciousness and self-awareness. I don't see either of those eventualities as being in conflict with my anti-materialism.