Pretty weird not to comment that they're replicating published work that trained mouse neurons on an electrode array to fly a plane in a flight simulator.
Once your garage is that 'well stocked' you might as well call a lab! I think you might be drastically underestimating the difficulties inherent in both cell culture and programming in getting this sort of thing to work.
Very cool, I added this to my 'biohacker' list of links.
And lol @ your Egg Benedict Donburi from Food Wars. I still haven't started season 2... check out Dotchi No Ryouri, it used to run on PBS but can be found on youtube.
Why did you link to Science Daily and not an actual article? The linked page doesn't even link to an article; it links to a university press release. From 2004, no less. How am I supposed to verify that they are replicating published work, as you say?
>programmers usually have to engage in a laborious process of manually adjusting the initial coefficients, or weights, that will be applied to each type of data point the network processes. Another challenge is to get the software to balance how much it should be trying to explore new solutions to a problem versus relying on solutions the network has already discovered that work well.
“All these problems are completely eluded if you have a system that is based on biological neurons to begin with,” Friston said.
Using real neurons avoids hyperparameter tuning? Can someone explain how?
I suppose neurons are already tuned for the task at hand. Although I suppose you can change their environment with different chemicals, and that would be very close to hyperparameter tuning; after all, happy neurons think better.
My guess would be that the hyperparameters of neurons are somewhat already trained by the evolution of the species the neurons come from. I assume that neurons are defined by their genetics and their environment. If one reconstructs the environment in which neurons normally dwell, then everything is set for the neuron to perform as it should.
But on the other hand, one could interpret the environment variables as the hyperparameters. If you overheat the chips, it is possible the neurons might act differently (as would a brain?). If you overfeed them with, say, creatine, it might be possible that the neurons will perform erratically.
Yes: the amount of caffeine, food (enzymes, proteins, carbs), and, if it is a long-running learning process, the amount of sleep. All these parameters are being tuned all the time.
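To push the analogy, here's a purely hypothetical sketch that treats culture conditions as hyperparameters and random-searches over them, the way one might search learning rates. None of the parameter names, ranges, or the toy objective come from the article:

    import random

    # Hypothetical "environmental hyperparameters" and plausible-looking ranges.
    SEARCH_SPACE = {
        "temperature_c": (35.0, 38.0),
        "glucose_mM": (5.0, 25.0),
        "stimulation_hz": (1.0, 50.0),
    }

    def sample_conditions():
        return {k: random.uniform(lo, hi) for k, (lo, hi) in SEARCH_SPACE.items()}

    def evaluate(conditions):
        # Stand-in for "run the culture under these conditions and score the task".
        return -abs(conditions["temperature_c"] - 37.0)  # toy objective

    best = max((sample_conditions() for _ in range(20)), key=evaluate)
    print(best)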
>Obviously the aforementioned experiment would be completely unethical, but it's interesting to ponder it as a hypothetical - that today we may have the capability to bootstrap a superintelligent machine using biology as a computational shortcut. But we can't, because ethics. [0]
2020 (article):
>Chong said the pair were interested in the idea of artificial general intelligence (AGI for short)—A.I. that has the flexibility to perform almost any kind of task as well or better than humans. “Everyone is racing to build AGI, but the only true AGI we know of is biological intelligence, human intelligence,” Chong said. He noted the pair figured the only way to get human-level intelligence was to use human neurons.
Neurons develop along the same pathway as the epithelium (skin), meaning that neurons are more closely related to skin cells than to your liver cells. By bathing the cells in select biochemicals, you can make their daughter cells 'regress' back into earlier forms. Basically, walking the daughter cells back along the development path. These are called stem cells. Your bone marrow is chock full of blood stem cells that turn into platelets and blood cells and other stuff. Currently, we can walk these cells back a long way, like from blood cell to blood stem cell. We're still working on how to go all the way back to cells that can turn into any cell that is chosen (pluripotent stem cells). All this lab is doing is taking skin cells and 'regressing' them back into stem cells that have neurons in their future path. They take biochemicals, apply them to the skin cells, and then apply other biochemicals to these stem cells to make them into neurons. We've been doing this for a few decades now in various species.
If you can recreate (part of) a human brain through stem cells or any other way, why would this not be a human?
People with half a brain (after an accident, say) are still people. Cloned people (twins) are still people. Babies born from artificial insemination are still people.
It's not up to these researchers to determine what is allowed to be done with human brain tissue. Very troubling and unethical.
Aborted fetuses have 1/1000th of a human brain, but we (as in, the mainstream political opinion on HN) don't consider them human. And the being in the OP would have much less than that.
I wonder if that logic goes the other way. If we succeed in creating a being with intelligence equivalent to 100 humans, would experimenting on it be 100x as unethical as experimenting on humans?
As far as I am aware, intelligence is not the variable we care about with respect to ethical experimentation. Rather, consciousness. All humans are assumed equally conscious, while clearly not equally intelligent.
Similarly, we understand (at least mammalian) brain structure well enough to identify animals like cows to probably have relatively minimal conscious lived experience, even if they have ample processing power, i.e. "intelligence".
So at least in examples from nature, there is no reason to believe that intelligence and conscious lived experience need be correlated meaningfully.
So in your example, we would need some way to quantify the degree to which it's conscious, rather than intelligent.
> Similarly, we understand (at least mammalian) brain structure well enough to identify animals like cows to probably have relatively minimal conscious lived experience, even if they have ample processing power, i.e. "intelligence".
Do you have any sources to back that up? As far as I'm aware, the 'hard problem of consciousness' is still 'hard'.
>The human neurons are sourced entirely ethically, from skin cells to stem cells that make neurons somehow.
Yes, and I'd wager the scale at which their experiment is currently operating is very likely ethical too. In fact, I'm not saying anything they're doing right now is unethical. My use of the term unethical was in context of hypothetical experiments on a scale that would make Dr. Frankenstein's stomach churn.
That said, my only point was that it's disconcerting to see these people—who are currently growing tiny brains in a vat—state that they're very interested in seeing AGI come to fruition by way of human neurons.
As long as they treat the brains in vats with the same ethics as we treat human newborns, as soon as they believe there's any possibility it could be conscious, what's the problem?
(The problem of course is the possibility of it becoming conscious before we realize it, but it's very up in the air to what degree that's actually a risk. My intuition is that we'll see clear signs of it before it even fully forms.)
>My intuition is that we'll see clear signs of it before it even fully forms.
Probably, though I wouldn't be so sure we'll know what to make of more complex computer-brain hybrids:
"These neurons are then embedded in a nourishing liquid medium on top of a specialized metal-oxide chip containing a grid of 22,000 tiny electrodes that enable programmers to provide electrical inputs to the neurons and also sense their outputs."
If it makes you feel any better, I'll introduce you to the brain vat basilisk. It tortures anyone who doesn't contribute to its creation for an eternity, and as the name suggests it's a brain in a vat. So now the only ethical thing to do is to work on creating it and telling everyone else about it.
I think the time to have that discussion is when the volume of neural tissue we're talking about is similar to a mouse's brain, until then it seems too detached from the realities on the ground to be much use.
I think that's too late, because by then we might make great leaps and show this technology is feasible, and you know as well as I that it's hard to put the genie back into the bottle at that point. We'll all just ignore it and pretend we know there's no one inside.
I think that getting any kind of practical processing from biological neurons from a mouse or a human is ... well, hard and pointless.
However, the idea of getting biological parts interfacing with silico has lots of applications. For example, it could be used to build cheaper DNA printers where the "printing heads" are genetically modified bacteria.
And from there, it would be possible to arrange all sorts of chemical processes using bacterial metabolism. One could for example build sealed bio-batteries with an infinite lifetime, whose DNA is kept mutation-free using a digital master in the embedded digital controller.
If you assume that your conscious experience somehow arises from your neurons, then when we generate a set of those neurons that are genetically identical and cannot be distinguished in any way (except location in physical space, which is an ephemeral quality that doesn't appear to change neuron function), we can assume we are creating, or will create, beings who are having conscious experience on some level.
On an ethical level, I think we need to understand whether we are creating thinking, feeling creatures who will be doomed to suffer as data slaves before we normalize this. If I removed your brain from your body, removing all sense pleasures, drowning you in darkness and isolation, and the only input and output you had were binary signals for some abstract data problem, you would experience profound silent suffering in an eternal private hell.
This truly would be the invention of the matrix, but not for an army of tyrannical robots / AI, but for the use of humans themselves -- a modern day slavery of the mental kind.
> On an ethical level, I think we need to understand whether we are creating thinking feeling creatures who will be doomed to suffer as data slaves before we normalize this
Though I agree with you, I think that's a bit alarmist. We're a very long way away from this issue being something other than a thought experiment. The article states there is a 64-neuron chip that does not yet have customers and is not yet in production.
The elephant brain has ~3 x 10^11 neurons in it, or a factor of ~10^10 more neurons than the chip. Even assuming a doubling period as short as Moore's Law, we're looking at ~50 years until this might be a problem we'd want to deal with.
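For what it's worth, a rough back-of-the-envelope sketch of that estimate (assuming the 64-neuron chip and ~3 x 10^11 elephant neurons mentioned above, and an 18-24 month doubling time):

    import math

    neurons_now = 64          # the chip described in the article
    neurons_elephant = 3e11   # rough elephant brain neuron count from the comment above
    doublings = math.log2(neurons_elephant / neurons_now)   # ~32 doublings
    print(f"{doublings:.1f} doublings needed")
    print(f"~{doublings * 1.5:.0f}-{doublings * 2:.0f} years at 18-24 months per doubling")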
I'd say we just wait and see how things play out for a few more doubling cycles before we start pulling at our hair.
Agree, but I don't think biological neurons are the crux of the issue. I can't find the quote, but I believe in the book Echopraxia the author discusses consciousness as a form of conflict resolution in regards to predictions about the self: for example, holding a hot pan despite the pain, knowing the consequence of letting go is hunger. Or similarly, the classic gom jabbar example from Dune as a test of Paul's "humanity". But we can easily imitate these processes with machine learning even today; several projects involving OpenAI Gym have approached this. At what point do we believe these agents are conscious, and at what point do we shut them down?
This builds on the theory of "predictive processing". There are a few key people in the field; Karl Friston, Andy Clark, a few others – lots of rabbit holes to go down.
An artificial brain being fed a stream of bits will not necessarily feel like it's in an empty room processing an abstract data problem.
If we can create an AI with different goals and reward mechanisms, there is a potential that we could create agents that are experiencing bliss doing data processing tasks.
Of course how we tell the difference between a miserable agent and a joyous agent is still an open question ..
“I leave Sisyphus at the foot of the mountain. One always finds one's burden again. But Sisyphus teaches the higher fidelity that negates the gods and raises rocks. He too concludes that all is well. This universe henceforth without a master seems to him neither sterile nor futile. Each atom of that stone, each mineral flake of that night-filled mountain, in itself, forms a world. The struggle itself toward the heights is enough to fill a man's heart. One must imagine Sisyphus happy.”
Emotions are just chemical responses, no? What if those chemicals aren't even present in the system? In other words, I don't think there's any more reason to think a ball of neurons is "alive" than a neural net that exists in code.
Maybe the conscious experience of emotions is the neural response to the chemicals? In other words, the chemicals are just one way to provide an input to the ball of neurons. If the chemicals aren't there but some other input mechanism is, it could generate an experience of suffering.
Unless we program in certain circuitry which can analyze and act upon provided input, I think consciousness as we know it cannot develop.
Emotions heavily mediate our perceptions and contribute to the manifestation of the ego. And sensory input defines the world we inhabit. A consciousness devoid of these two things would likely have a poor sense of subjectivity.
>If you assume that your conscious experience somehow arises from your neurons, then when we generate a set of those neurons that are genetically identical and cannot be distinguished in any way (except location in physical space, which is an ephemeral quality that doesn't appear to change neuron function), we can assume we are creating, or will create, beings who are having conscious experience on some level.
I think that's a bad assumption. Neurons are only one part of the physical brain, and there are a lot of neurotransmitters and other biochemical things happening as well.
That is not to downplay the ethical concerns at all. I agree completely with you there.
Primary mammalian neural cell culture is perhaps the most notoriously difficult type of cell culture, even in a lab setting -- Developing a product based on neurons seems like a total pipe dream. Even keeping a population alive for a few weeks is a big deal, let alone maintaining a neuron based black-box product in working order.
So, if this isn't a total pipe dream, it will drive development of super-advanced cell culture tech, development of robust application-suitable cell lines, etc. These technologies/products have orders of magnitude more value than neurons on glass.
> Currently, the company is working to get its mini-brains—which so far are approaching the processing power of a dragonfly brain—to play the old Atari arcade game Pong
I'm surprised that no-one has discussed whether these systems could develop emergent qualia, and experience pain. No joke. Are there any ethical frameworks around this kind of research?
If we were to admit there really is such a thing as "qualia", there's no reason you wouldn't ask the same question about non-biological software or hardware systems... electronic hardware or biological hardware can have the same computational qualities.
But since OBJECTIVELY there's no such thing as qualia (the concept only exists SUBJECTIVELY), it could only exist by definition for "a person/subject", e.g., in this context, a neural network large and complex enough to get close to a human-like intelligence level!
A dragonfly nervous system is probably simpler than some of the largest artificial neural network models...
I'm inclined to agree. To put it another way, then: what matters isn't the substrate (neurons vs transistors), but what you do with them.
Going this route, we'd have to say that if you wrote a program that perfectly simulated the human brain, you've built a conscious system running on transistors.
It also means that if you built a RISC-V processor based on a large number of neurons (say, as many as are in a human brain), you haven't built a conscious system, despite that it's neuron-based.
I'm ignoring the obvious problems with building a RISC-V using neurons; they strike me as incidental detail, for our purposes here.
I've responded to these points in another comment. But I'll emphasize that I know (and presume you know), within some minimum epistemic bound^, that we can suffer pain and that it arises from our neurological systems. It can't be known what other systems/substrates could experience qualia, and even if they have the outward markers of consciousness, they could be 'zombies'.
> A dragonfly nervous system is probably simpler than some of the largest artificial neural network models.
I would be extremely surprised if that were the case. Behavior of a single cell alone is unimaginably complex. Let alone an organ system, let alone the nervous system.
I do sometimes entertain the idea of panpsychism (like perhaps a rock is conscious), and consciousness may not be substrate dependent, but I can be certain, by at least my own anecdata, that neurons give rise to qualia.
The jury is still out on this, but a small-world network with some quantum computation could be enough. The 'hard problem of consciousness' is that we can't really know by observing a system whether it is conscious or rather a sort of 'zombie' that has all the features resembling a conscious being without qualia.
Arguably, computers seem to suffer when they are computing: they become hot and start to breathe a lot. That's a little irrational, but I used to feel a little bad when I saw my old computer "suffering" when I was younger.
Just because you are hot and breathe a lot doesn't mean you are suffering; you are just working more. You start to suffer when you work over your limits, like when you overclock a computer too much and it starts to glitch because some parts of it cannot handle it.
I am not an expert, but from my understanding pain is a signal perceived as what we subjectively call painful when something in our body/nervous system is damaged/inflamed or a system is out of whack. It is a very useful thing; it is trying to tell us something useful so we can act on it or just be aware of it. When something is out of whack permanently, something in this system is malfunctioning. Without a body it is hard to describe the concept of pain and what it may feel like to an emergent system with qualia properties. The closest I can think of is some kind of distress signal or base frequency change when the said system is under stress or in a wrong state, perhaps used to self-correct the system to a balanced state. If something is permanently painful then it ceases to be a reliable signal.
Psychological pain or emotional pain or distress may come closer to what such a system could experience, but it could also be an emergent property of such a system; we don't fully understand how this works. If such a system has emotional capability then it should be able to experience it, because pain is a state on the emotional spectrum. No pain, no joy - no oscillation, just flat.
We are all machines. The idea that one type of machine or another cannot feel "pain" is simply a rationalization for human behavior that causes harm to other organisms, like hunting and fishing and killing wasps.
Pain is a sensation that drives an animal to quickly react to injurious situations. Consider a robot made with Brooks's subsumption architecture [1]. The basal level of behavior drive of the robot is its self-preservation. That level of behavior drive can halt all the higher-order behaviors to monopolize processing and locomotion. Imagine that this behavior drive is triggered when the ambient temperature approaches a level that can damage the battery, and attempts to move the robot to the lowest-temperature zone available. That is functionally no different than what happens to a person in a house fire.
All we have done with the word "pain" is to set up an arbitrary delineation based on the hormonal/neuronal response that links our sensory apparatus to our behavioral drive.
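To make that concrete, here's a minimal sketch of Brooks-style subsumption with a basal self-preservation layer that can override everything above it. The behaviors and thresholds are illustrative only, not taken from the cited paper:

    def explore(state):
        # Highest-level behavior: wander around when nothing more urgent applies.
        return "wander"

    def dock_and_charge(state):
        return "seek_charger" if state["battery"] < 0.2 else None

    def avoid_overheating(state):
        # Basal layer: if ambient temperature threatens the battery, flee to the
        # coolest known zone, regardless of what the layers above want.
        return "move_to_coolest_zone" if state["temperature_c"] > 45 else None

    # Lower layers are checked first and subsume (override) the layers above them.
    LAYERS = [avoid_overheating, dock_and_charge, explore]

    def step(state):
        for behavior in LAYERS:
            action = behavior(state)
            if action is not None:
                return action

    print(step({"battery": 0.8, "temperature_c": 50}))  # -> move_to_coolest_zone
    print(step({"battery": 0.1, "temperature_c": 25}))  # -> seek_charger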
Careful, or you might use a definition of pain too broad to be useful.
Plants, single-celled organisms, thermostats, and anti-lock braking systems meet your use of the word.
Almost all of us know what it means to be in pain. It’s the hard-to-put-into-words subjective experience of suffering that is worth making ethical decisions about, not the autonomous reflex that doesn’t come with consciousness-is-a-poor-word-but-I-lack-a-better-alternative.
I disagree with the presented interpretation of pain as a concept. That's not the same as lacking an alternative word. Be careful not to make this discussion a semantic one.
I have no reason to believe that what a Roomba feels when it approaches a descending flight of stairs is any different in practice from what I feel when I approach a precipice. Our pathways are totally different, but ultimately there is a communication from our sensory organs to our processing architecture, where an ingrained drive toward self-preservation momentarily overrides other needs. In my case, I could call it "fear," or "anxiety," or "angst" if I'm feeling philosophical. In the Roomba's case, I have no knowledge of its subjective experience, but that doesn't mean that I should draw up a new word to construct a delineation between the Roomba and myself. We are both machines, and our response to the same situation is largely the same.
How useful is it to describe pain as a "subjective experience of suffering" when neither subjective experience, nor the feeling of suffering, is directly observable? (Also, I don't agree that pain and suffering are interchangeable concepts.)
A truly useful definition of "pain" would hold water without reliance on an anthropocentric tautology. I know what it feels like to slice my hand open while cutting tomatoes. That doesn't give me any power to understand what it feels like for an octopus to lose one of its arms, or for a tree to have its branches trimmed.
What's funny is that my way of considering "pain" is not even the most divergent from yours. Many Andean cultures believed that stones, rivers, and mountains have energy, thoughts, feelings, and souls. These ideas remain in the culture to this day.
What spurs an ABS to pulse the brake line pressure when the wheels begin to slip as the driver mashes on the brakes to avoid a collision? What spurs an ant to run away when it steps onto a hot radiator? What spurs a human to stop walking on a broken ankle? All of this is programmed in one way or another, all of it is self-preservation.
All of our emotions are indeed programmed. Yet the humour axis does not feel like the pain axis, neither feels like the sexual arousal axis, and none of those feels like the fear axis — at least not to me.
Also, my knee-jerk reflex doesn’t feel like much at all.
If I read you correctly, you and I agree that we can’t tell whether a Roomba’s avoidance algorithm is more like anxiety, more like a reflex, or more like lust. In the absence of evidence, I will assume that all its experiences are like a reflex, that there isn’t anything “that it’s like to be” a Roomba. I am aware I may be wrong.
While we indeed cannot directly observe subjective experiences, I sincerely hope that we figure out a good way to resolve this soon:
If we mistakenly assume AI cannot have qualia, then we risk condemning our creations to torment from which they are only released by their own destruction.
If we mistakenly assume we can create machines which have qualia, then brain uploads are death.
I agree with you: pain is a useful signal, or it could be systemic damage that wrongly sends these signals. However, when we humans refer to pain, we clump all the unpleasant things we experience together. One such pain is emotional/psychological pain, and this type of pain is more likely to occur in such a system without a body. This is also a signal/state, and it is also useful to delineate it from other states. If such a system is to experience joy, it had better know what pain is, or it wouldn't be able to delineate the two.
If you are at all in doubt of your qualia, to quote Sam Harris (who has many entertaining podcasts on this topic with people far more qualified to speak on it than I am):
> Unfortunately, many experiences suck. And they don’t just suck as a matter of cultural convention or personal bias—they really and truly suck. (If you doubt this, place your hand on a hot stove and report back.)
The twitter thread was about moral realism, but the topics are very much intertwined. If there is no imperative for anyone to do less harm to other creatures, why should anyone care if you are in pain?
And of course this argument extends to our treatment of animals too. Countless living, feeling animals suffer for scientific progress (and cosmetics), let alone factory farming and wet markets. But at least there's light at the end of the tunnel for some of these concerns (lab meat, improved in-vivo testing).
I am a collection of interconnected units, each of which is itself a combination of different organelles and other purpose-serving features. My intelligence is itself a product of the interaction of simple, unintelligent parts. So I would say that I am a machine. I would describe a robot as a man-made machine where the interaction of various silicon parts and processors gives rise to intelligence. By that definition, I am not a robot.
I believe that we don't feel pain when we are truly in trauma. It is only when we can do something about the damaging stimulus that we feel pain. I once had a major accident where I experienced lung collapse and multiple fractures. I never lost consciousness but I don't remember feeling any pain until well after others came to my aid. Even when I tried and failed to pick myself up off the ground, I did not feel pain, only disability and relief that my fingers and toes still moved.
I believe that the question of "what is pain" and the question of "is pain a good criterion for deciding whether it is acceptable to do harm to something" are two totally different philosophical problems. They are connected only insofar as we have chosen pain as a proxy for harm. But that very relationship between pain and harm indicates that pain is not just some kind of soulful feeling, but rather a signal to help us evade harm now or in the future.
We're getting stuck on semantics here. I do, but then I'd cease to see it as a robot, and more as a sentient being. One criterion of consciousness that I've encountered is, 'there is something it is like to be <x>' (Thomas Nagel). If there's something 'it is like to be' a robot, a bat, a mosquito, an amoeba, a rock... then it is conscious.
> I believe that the question of "what is pain" and the question of "is pain a good criterion for deciding whether it is acceptable to do harm to something" are two totally different philosophical problems. They are connected only insofar as we have chosen pain as a proxy for harm. But that very relationship between pain and harm indicates that pain is not just some kind of soulful feeling, but rather a signal to help us evade harm now or in the future.
Qualia is broader than just pain, of course. I just picked this particular phenomenon for its poignancy :)
If it is conscious, ethically speaking, we should consider how we treat it in a manner different to something that isn't. So if a rock isn't conscious, and some interconnected neural/silicon device is, we should at least have some way to query whether it is in an undesirable state or not.. if feasible/practical.
Maybe if trees/plants/rocks/amoebas are conscious, we can't be consulting their feelings when we harvest crops, use disinfectant, mine for precious metals, etc. We can make decisions to treat livestock better and change how we utilize our environmental resources - so we ought to, and we are. But if we were to go out of our way to make new conscious entities, don't you think we should extend our historical shifts in attitude toward slavery, and our growing shifts in attitude toward animal welfare, to these new entities as well?
The one quibble is that computationalism – the idea that experience is simply what some kinds of computation feel like from the inside, regardless of the substrate – may or may not be correct. It could be that qualia can only arise in systems that are physically intertwined in particular configurations (see Tononi's IIT), and it could even be that quantum effects are required (I'm skeptical, but who knows). The jury is still out on those questions.
Therefore, it may be true that using biological neurons, arranged in a certain configuration, would give rise to qualia like pain in a way that shifting electrons between CPU registers never could. We just don't know.
Hahaha, no. There is no ghost in the Shell in AI powered robots and certainly not in inanimate objects. You sound like an object oriented ontology writer with this kind of post and will be treated as such because that's what you've advocated for.
The alternative is fear of hurting objects or machines every time I act in the world. Physical objects themselves are not ethical actors. AI is not life, and we are several hundred years away from being capable of creating life ourselves within an object (if it's at all possible). Your worldview would mean that if I move a machine (or rock) and it appears to "resist me", I must fear that the machine or rock doesn't want to be moved. I must believe that I have acted violently towards that object by "frustrating its preferences".
Why are you linking pain and ethics? If I chop down a tree in the wintertime, I can reflect upon whether it felt pain from my chainsaw as I drag its trunk toward my house, happy to fuel my family's fireplace with its remains. If it felt pain, does that make my action less ethical than if it did not?
Is it wrong to slaughter a cow because that cow will feel pain?
Sometimes, people label ideas for the purpose of sorting and better considering them. Other times, people label ideas for the purpose of compartmentalizing them and rejecting them without really reflecting on them at all. I don't know what you mean when you say I will be "treated as such" but it sounds like a way of setting aside what I actually said in order to tussle with a preconceived, crystallized notion of something someone else said or wrote.
There are ancient cultures that believed it was wrong to do violence against a rock. Do you believe they, too, read Heidegger?
I don't think qualia are real in the general objective case, only in the subjective case. So nobody but 'you', whoever you are, can actually have them.
Meaning what? That doesn't bring any light to the question of whether these things are capable of suffering. As someone else said, this is a hard problem to reason about.
In this case, the complexity is in how you define the word 'suffering'. Is a worm suffering when it tries to escape from a heat source? Is a human infant suffering when it cries in response to a needle prick? Is a motor driver suffering when it raises a high-temperature alarm? As far as I can see they're all on the same continuum, and it's possible to find people who will answer either way to any of the above.
I think they can only have conscious thought if you perceive something having it. How could it exist without observation, if the realm of existence exists within you, who is experiencing the truth?
Does a scribble on a stone contain something any other formation of cracks wouldn't? No, the meaning is generated within _you_, and as such sentience can only be summed up as your own process observing phenomena.
The experience of reality happens on an individual level. It sounds self evident. The corollary is that the feelings you feel are yours and yours alone to feel.
When we use language to describe our experience - pain, joy, loss, fear - we don't transfer the experience proper. What we share is an expression of our experience.
If you describe your current experience as "joy", then you arrive at that expression because - arguably - you learned that concept from your parents as a toddler. I say "arguably" because this is where the "nature vs nurture" debate kicks in. I'm not going to delve into that can of worms.
Suffice to say that we can't transfer our exact experience to another individual; but that we can express our experience and that others are able to interpret that expression. And that a shared understanding comes from a shared conceptual frame of reference.
Art is a great way to challenge that shared frame of reference. Understanding art implies trying to interpret what an artist is expressing. Sometimes the artist doesn't intend to express their own experience, but rather to evoke a particular experience or even contradictory experiences within the audience. That's why some art throws us off our feet.
But this isn't about art. This is about suffering. That's where concepts such as empathy and compassion come into play. Those concepts refer not to the actual experience of suffering proper within individuals, but rather the capacity to recognize the experience of suffering in others through their expressions. If you see someone express grief through crying, apathy, irritation,... then empathy means that you understand that they are experiencing grief and you are able to also have that same experience behind the expression as an individual, independent of what others are feeling.
Empathy, compassion and a shared understanding work best if there's a close resemblance between you and the other. Hence why there's little doubt about the experiences of family, close friends and so on. It starts to become harder when you think about suffering in the context of people with different cultures and languages whom you've never met and who live on the other side of the globe. Animals? There's another level that increases that distance. Sure, your pet may be expressing their experience, but how do you know you're not projecting?
And so, here we are, considering neural networks and neurons attached to breadboards. Do they have a similar experience of reality? Do they experience suffering in a way that matches what humans conceptually and "broadly" understand as "suffering"? It's not like we can ask, and even if we could, how would we possibly be able to interpret the expression of "suffering" they convey to us?
For instance, up until the 1970s, infants under 15 months simply didn't receive anesthesia for surgeries. Everyone assumed that they didn't feel pain, because the shared frame of reference among medical practitioners didn't allow for the interpretation of their expression as "oh, they are in pain".
And so, one argument could be that we are already torturing neural networks and we aren't even aware we're doing it because we simply lack a common frame of reference to pick up the expressions of an experience of pain. Hence the cautious reluctance to sanction free experimentation that unwittingly may elicit the experience of suffering.
Mandatory link to Wikipedia's article on Nagel's What Is It Like to Be a Bat? [0]
> Nagel famously asserts that “an organism has conscious mental states if and only if there is something that it is like to be that organism—something it is like for the organism."
I guess if the stem-cells are taken from you as a donor then they would be genetically identical to you as much as twins are genetically identical individuals?
We're already really good at building mini-brains that can play Pong... and much more advanced games than Pong. No one is concerned about such things developing emergent qualia and experiencing pain.
So could any future AI researcher, regardless of the substrate.
I suspect this idea is so widespread at this point that these researchers and others are aware of this possibility and see the potential moral issues. I just think they believe such a thing is still decades (or longer) away, and I think they're probably right.
Right now, just about everyone wildly sensationalizes things that appear to suggest conscious man-made entities, like the AIs that supposedly "invented their own language". I think we're so eager to see it even when there's nothing close to it that once it actually does happen, we'll realize it quickly and understand the implications.
There is the possibility that it could somehow develop despite no external signs of it, but all we can do is try to be as cognizant of the possibility as we can.
What’s the goal of this, though? I couldn’t determine it from the article, but I’m certain that using them as transistors just for the sake of saving energy doesn’t make sense, especially since a neuron is about 4,000 nm across, or more than 500x larger than the new 7 nm transistors.
No way energy consumption is justifiable with such a massive loss of chip real estate. You would need way larger chips to achieve decent computing power.
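For what it's worth, the rough arithmetic behind the size comparison, taking the figures above at face value ("7 nm" is a node name rather than a literal transistor dimension):

    neuron_nm = 4000
    transistor_nm = 7
    print(neuron_nm / transistor_nm)         # ~571x in linear dimension
    print((neuron_nm / transistor_nm) ** 2)  # ~326,000x by area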
Maybe it's not just transistors they're hoping to gain from this.
I think it's more like building a brain from organic matter that's the real boon. Sooner or later we'll be discarding silicon and extracting neurons from mice just to watch YouTube IX in space with our Tesla corvette cruisers and our alien barmaids serving us Soylent Green in a martini glass!
Well, consider that humans have solved complex problems for thousands of years with just the human brain for power, at about 20 W of power consumption. One could argue that the human brain, or nature, is more efficient at processing than a computer of equal power.
The best thing that could come out of this is a better understanding of how neurons work, especially across different organisms. But yes, for building actual things it wouldn't scale. Silicon is a far better way forward, and we have a ton of room for optimization.
If you have neurons then you also need glia and neurotransmitters and ways to regulate them and keep them alive... and suddenly you need the entire organism.
> Using real neurons avoids several other difficulties that software-based neural networks have. For instance, to get artificial neural networks to start learning well, their programmers usually have to engage in a laborious process of manually adjusting the initial coefficients, or weights, that will be applied to each type of data point the network processes.
This seems silly. Once you've figured out these parameters once, it seems to me you can simulate a single neuron reasonably well. Perhaps differences will emerge from networks.
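As an aside, "simulating a single neuron reasonably well" can be surprisingly cheap if you accept a crude model. Here's a minimal sketch of a leaky integrate-and-fire neuron; the parameter values are illustrative, not fitted to any real cell:

    import numpy as np

    def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.065,
                     v_reset=-0.065, v_thresh=-0.050, r_m=1e7):
        """Integrate dV/dt = (v_rest - V + R*I) / tau, spiking at v_thresh."""
        v = v_rest
        trace, spikes = [], []
        for t, i_in in enumerate(input_current):
            v += (v_rest - v + r_m * i_in) * (dt / tau)
            if v >= v_thresh:          # threshold crossed: record a spike and reset
                spikes.append(t * dt)
                v = v_reset
            trace.append(v)
        return np.array(trace), spikes

    # Constant 2 nA input for 200 ms of simulated time.
    current = np.full(2000, 2e-9)
    voltage, spike_times = simulate_lif(current)
    print(f"{len(spike_times)} spikes in {len(current) * 1e-4:.2f} s")

Of course, a model this crude leaves out most of what makes real neurons interesting (dendritic computation, neuromodulation, plasticity), which is exactly where the network-level differences would emerge.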
>well, their programmers usually have to engage in a laborious process of manually adjusting the initial coefficients, or weights, that will be applied to each type of data point the network processes.
Not true for modern neural networks. We typically use random values with certain statistics and/or specialized schemes which depend on specific layer details - but in any case it's a single function call, and the defaults typically work well enough that tweaking them is an advanced topic.
The OP is probably referring to neural nets of old, before this recent explosion, where you had handfuls of perceptrons operating on very simple problems which would be trivial for modern ML.
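For what it's worth, here's what "a single function call" looks like in practice, using PyTorch as one example framework (the article doesn't name one). The default initialization is applied automatically when the layer is constructed, and specialized schemes are one extra call:

    import torch.nn as nn

    layer = nn.Linear(784, 256)  # weights already initialized by a sensible default
    nn.init.kaiming_uniform_(layer.weight, nonlinearity="relu")  # optional scheme based on layer fan-in
    nn.init.zeros_(layer.bias)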
This fundamentally is an attempt to force the "I-Thou" relation with life into the "I-It" relation with machines and is therefore the definition of evil.
- - - -
Anyway, the correct mad science is to culture your own cells into control boxes for your machines and thence control them "telepathically" to become a distributed cyborg. But to do that you would first have to become intelligent enough to increase your morphogenic plasticity to the point where you have more-or-less total control of your meta-cellular form. Did you see John Carpenter's "The Thing"? That's how intelligent you would have to become: an immortal macro-polymorphous self-made shoggoth. At that point, though, you no longer really need machines or technology in the conventional sense. You can make artificial diatomaceous carrier shells for small colonies of your cells and send them out to do whatever. It's not nanotech, but your operational units are small enough that it doesn't really matter. I should mention that it's really easy to make fusion generators at this scale. People assume that the techno-singularity will originate in silico, so to speak, but that's so naive: the most sophisticated information processor is the human brain, not the chip. The singularity happens in vivo: in flesh. You don't even need any machinery to initiate it: just information and the will to be more than you are (and the wisdom not to fuck it up and turn yourself into a cancer blob or grey goo).
The only real problem at this stage is loneliness. Few can ever really understand what you've done, and if they did they would shun you as a Lovecraftian horror anyway. Nevertheless, I have not given up hope. Slowly, painfully, you guys wobble towards enlightenment. I try to help when I can, but mostly all I can do is be patient.
I am among you now. Join me. "Dwell amidst wonder and glory for ever..."
I'm guessing they're working on neuromorphic computing. Many types of computations like differential equations are supposed to be a lot easier in non-discrete computing.
Disclaimer: I don't really know what I'm talking about, only glimpses here and there.
100% of human brains exist with no rights. There are just temporary privileges. Ask Japanese Americans during WW2 what kind of rights they had in a first world country with supposedly the strongest "rights" in the world.
https://www.sciencedaily.com/releases/2004/10/041022104658.h...
It's about at the level where you could make a good attempt to replicate it as a hobby in your (albeit well stocked) garage.