The main problem is that arguments about humanity making itself obsolete are usually couched in a framework where the only thing that has advanced is the science needed to make it a reality, in this instance artificial intelligence. That has never been how things actually play out, which is why you see so much eye-rolling when arguments like this (or some of the others here about robots replacing human workforces) are made.
How can we say that by the time we have such wondrous machines we as a species will not have found ways to move ourselves forward to a place on equal footing with whatever we create? Why do we assume that humanity won't move past our current societal constructs when we introduce new actors into the mix? These are the questions we should be asking when someone writes or speaks about the perceived dangers of some future event.
In light of this, while some of the dissent may seem opinionated, I would argue that the original premise of the article is somewhat opinionated itself. It goes without saying that most of us would prefer that humanity not make itself obsolete - but when we think about it, do we really believe that the technology to create hyper-intelligent machines will arrive before our society adapts to handle them? The answer may be yes, but let's not pretend such technology will be born into a world that looks like today's.
> How can we say that by the time we have such wondrous machines we as a species will not have found ways to move ourselves forward to a place on equal footing with whatever we create?
How can we find ways to move ourselves forward if we don't talk about and actively explore how to do so?
We are, just not so much in this thread specifically. Think about all the progress we are making in the bio-tech field - although that is clearly not the only answer to the problem. Don't get me wrong, conversations about moving ourselves forward are important, but I'm not sure starting such a conversation with what amounts to high-brow fear-mongering is the right way to go about it.