If you assume that your conscious experience somehow arises from your neurons, and we generate a set of those neurons that is genetically identical and cannot be distinguished in any way (except location in physical space, which doesn't appear to change neuron function), then we have to assume we are creating, or will create, beings who are having conscious experience on some level.
On an ethical level, I think we need to understand whether we are creating thinking, feeling creatures who will be doomed to suffer as data slaves before we normalize this. If I removed your brain from your body, stripped away all sense pleasures, drowned you in darkness and isolation, and the only input and output you had were binary signals for some abstract data problem, you would experience profound, silent suffering in an eternal private hell.
This truly would be the invention of the Matrix, only not for the benefit of an army of tyrannical robots or AI, but for the use of humans themselves -- a modern-day slavery of the mental kind.
> On an ethical level, I think we need to understand whether we are creating thinking, feeling creatures who will be doomed to suffer as data slaves before we normalize this
Though I agree with you, I think that's a bit alarmist. We're a very long way from this issue being anything other than a thought experiment. The article states there is a 64-neuron chip that does not yet have customers and is not yet in production.
An elephant brain has ~3 x 10^11 neurons, roughly a factor of 5 x 10^9 more than the chip. Even assuming a doubling period as short as Moore's Law, we're looking at ~50 years before this might be a problem we'd want to deal with.
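A rough back-of-the-envelope check of that estimate (the neuron counts and the ~18-month doubling period are just the assumptions from the comment above, not measurements):

```python
import math

# Scaling estimate: how long until a 64-neuron chip reaches elephant-brain scale,
# assuming a Moore's-Law-like doubling period. All numbers are assumptions.
chip_neurons = 64            # neurons on the current chip
elephant_neurons = 3e11      # ~3 x 10^11 neurons in an elephant brain
doubling_period_years = 1.5  # hypothetical ~18-month doubling period

doublings = math.log2(elephant_neurons / chip_neurons)  # ~32 doublings
years = doublings * doubling_period_years               # ~48 years

print(f"~{doublings:.0f} doublings, roughly {years:.0f} years")
```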
I'd say we just wait and see how things play out for a few more doubling cycles before we start pulling our hair out.
Agreed, but I don't think biological neurons are the crux of the issue. I can't find the exact quote, but I believe in the book Echopraxia the author discusses consciousness as a form of conflict resolution over predictions about the self -- for example, holding a hot pan despite the pain, knowing that letting go means going hungry. Or, similarly, the classic gom jabbar test from Dune as a measure of Paul's "humanity". But we can imitate these processes with machine learning even today; several projects built on the OpenAI Gym have approached this. At what point do we believe these agents are conscious, and at what point do we shut them down?
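To make the hot-pan example concrete, here's a toy sketch of that kind of conflict resolution (not any specific OpenAI Gym project; the rewards, horizon, and discount factor are invented for illustration):

```python
# Toy "conflict resolution" between competing predictions about the self:
# keep holding a hot pan (pain now, food later) vs. drop it (no pain, hunger later).
# All numbers below are made up for this sketch.

PAIN_PER_STEP = -1.0     # cost of holding the hot pan each step
MEAL_REWARD = 20.0       # reward for getting the meal to the table
HUNGER_PENALTY = -15.0   # cost of going hungry after dropping the pan
STEPS_TO_TABLE = 5       # steps of pain before the meal is secured
GAMMA = 0.95             # discount factor on future reward

def value_of_holding():
    """Discounted return for enduring the pain until the meal is served."""
    pain = sum(PAIN_PER_STEP * GAMMA**t for t in range(STEPS_TO_TABLE))
    return pain + MEAL_REWARD * GAMMA**STEPS_TO_TABLE

def value_of_dropping():
    """Discounted return for dropping the pan immediately."""
    return HUNGER_PENALTY * GAMMA

hold, drop = value_of_holding(), value_of_dropping()
action = "hold the pan" if hold > drop else "drop the pan"
print(f"hold: {hold:.2f}, drop: {drop:.2f} -> agent chooses to {action}")
```

The agent "resolves" the conflict by a simple value comparison; whether anything like experience accompanies that comparison is exactly the open question.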
This builds on the theory of "predictive processing". There are a few key people in the field: Karl Friston, Andy Clark, and a few others -- lots of rabbit holes to go down.
An artificial brain being fed a stream of bits will not necessarily feel like it's in an empty room processing an abstract data problem.
If we can create an AI with different goals and reward mechanisms, there is the potential to create agents that experience bliss while doing data-processing tasks.
Of course, how we tell the difference between a miserable agent and a joyous agent is still an open question...
“I leave Sisyphus at the foot of the mountain. One always finds one's burden again. But Sisyphus teaches the higher fidelity that negates the gods and raises rocks. He too concludes that all is well. This universe henceforth without a master seems to him neither sterile nor futile. Each atom of that stone, each mineral flake of that night-filled mountain, in itself, forms a world. The struggle itself toward the heights is enough to fill a man's heart. One must imagine Sisyphus happy.”
Emotions are just chemical responses, no? What if those chemicals aren't even present in the system? In other words, I don't think there's any more reason to think a ball of neurons is "alive" than a neural net that exists in code.
Maybe the conscious experience of emotions is the neural response to the chemicals? In other words, the chemicals are just one way to provide an input to the ball of neurons. If the chemicals aren't there but some other input mechanism is, it could generate an experience of suffering.
Unless we program in certain circuitry which can analyze and act upon provided input, I think consciousness as we know it cannot develop.
Emotions heavily mediate our perceptions and contribute to the manifestation of the ego, and sensory input defines the world we inhabit. A consciousness devoid of these two things would likely have a poor sense of subjectivity.
>If you assume that your conscious experience somehow arises from your neurons, and we generate a set of those neurons that is genetically identical and cannot be distinguished in any way (except location in physical space, which doesn't appear to change neuron function), then we have to assume we are creating, or will create, beings who are having conscious experience on some level.
I think that's a bad assumption. Neurons are only one part of the physical brain; there are a lot of neurotransmitters and other biochemical processes at work.
That is not to downplay the ethical concerns at all. I agree completely with you there.