>> “People don’t really think of insects as feeling any kind of pain,” said Associate Professor Neely. “But it’s already been shown in lots of different invertebrate animals that they can sense and avoid dangerous stimuli that we perceive as painful. In non-humans, we call this sense ‘nociception’, the sense that detects potentially harmful stimuli like heat, cold, or physical injury, but for simplicity we can refer to what insects experience as ‘pain’.”
I don't understand the article's insistence on placing quotes around the word pain when referring to what insects feel. These are animals that can sense their surroundings and react to stimuli. How else would they be convinced to avoid dangerous situations, than by an unpleasant sensation?
In any case, the simplest hypothesis is that all animals can feel pain. The null hypothesis should be the opposite. And it has to be a complex hypothesis that explains why some animals can feel pain while others don't (which is tricky).
> I don't understand the article's insistence on placing quotes around the word pain when referring to what insects feel. These are animals that can sense their surroundings and react to stimuli. How else would they be convinced to avoid dangerous situations, than by an unpleasant sensation?
If you build a robot which senses dangerous stimuli and avoids them, does it feel pain?
Equating reaction to stimuli with pain is too simplistic.
You feel pain, right? Well, an insect is a lot more like you than a robot... the mechanism by which it decides what to seek and what to avoid is based on an infrastructure that's very similar to your own, even if much simpler. And the experience of pain appears to be something very fundamental in the wiring of living things, its avoidance being very directly connected to evolutionary success. So it is absolutely logical to assume this mechanism is fundamentally equivalent in humans and insects. The behavior is the same, the wiring is the same, why not use the same word?
For now, most robots' decision-making apparatus isn't anything like yours. So it really doesn't make much sense to use the same vocabulary. But that could change if we use neural-network computing in a way that more directly simulates the functioning of living organisms. Maybe building an ant-robot that's really a functional copy of a biological ant isn't that far off, and in that case, sure, it might feel pain.
This seems like the most rational stance to me now.
I had a pastor once attempt to teach me that all animals are basically robots, and that this justifies treating them however we want. In his mind, we're the only ones actually 'awake'. I suspect a whole lot of people think like this on some level in order to justify all the ongoing barbaric treatment they willingly endorse or carry out.
Hearing that shook me up for a bit, but in the long run it accelerated the process of me abandoning religion.
If you mean that insects are more similar to humans than they are to robots, I'm actually not convinced.
When we are deciding what kind of mental mechanism to attribute to insects to explain their behaviour, there are two desiderata we need to satisfy:
- Our hypothesis has to be strong enough to explain the behaviour
- It has to be plausible given insect hardware
YeGoblynQueenne has suggested our starting point should be the hypothesis that "all animals can feel pain." This is certainly strong enough to explain the behaviour, but it's implausible given the size of an insect's brain.
A fruit fly's brain has O(10^5) neurons; ours have O(10^10). Modern artificial neural network architectures typically fall somewhere in the middle. Now, certainly both human brains and insect brains have sophisticated architectural features that we haven't figured out yet. But given that robots, ANNs, etc can exhibit the same sorts of behaviours as insects given similar hardware constraints, I don't think we need to attribute pain (or any sort of mental life) to insects in order to explain their behaviour.
It would be cool if a neuroscientist could weigh in on this.
> 1. Most neural network architectures have fewer "neurons" than 10^5. Maybe the word you are looking for is "parameters"
That's a fair point, I shouldn't have said "typically." But some of the larger models probably have that many linear filters.
> 2. A neuron in the brain and a neuron in a neural network are totally different things.
There are certainly disanalogies between biological neurons and neurons in a vanilla feed-forward network, but A) there are analogies as well and B) a lot of interesting work is being done to make deep learning models compatible with STDP (spike-timing-dependent plasticity).
At any rate, I think it's a reasonable claim that an insect brain has representational power closer to a SOTA ANN than it does to a human brain (though I welcome anyone here who knows about biologically plausible deep learning and/or insect brains to prove me wrong)
I entirely disagree. The onus of proof is on you to explain how a highly idealized ANN has the same expressivity as a fly's wetware.
Just because neural networks are tech's Zeitgeist doesn't make them the perfect explanation for all physical phenomena. Fifty years ago there were people equating thought and consciousness with artificial intelligence programs; and 200 years before that, it was the watchmaker's clockwork that held that regard.
Yes, as a man of science I subscribe to reductionism. Brains are made of neurons which are made of molecules which are made of atoms and so on, all governed by the laws of physics. But there's no reason to believe ANNs have the required intrinsic complexity to behave like ganglia.
> The onus of proof is on you to explain how a highly idealized ANN has the same expressivity as a fly's wetware.
I've made an argument - it's essentially a functionalist one. The intelligent things insects do - object detection, maze navigation - are all things that ANNs are really good at.
To put things in perspective: the reason that I don't think that ANNs are anywhere near as expressive as the human brain is that there are countless behaviours that humans perform that ANNs simply can't - generalizing to novel viewpoints in vision, for example. (Or if you want to go whole hog, natural language understanding.)
AFAICT, the same is not true of insects. To dissuade me, you'd have to specify an insect behaviour that ANNs are fundamentally unequipped to perform. I'm not an entomologist so I'm totally open to the possibility that there is one. I also imagine there is significant neural diversity among insects - presumably some bugs are smarter than others, and maybe some of the bigger-brained ones can do stuff that would necessitate an explanation that invokes consciousness. But you have to tell me what it is.
I'm wondering now... suppose you could replace just an ant's brain with an artificial, neuron-for-neuron copy, such that the ant would continue, on the outside, to behave in an identical way in identical situations. What if you went one step further, and replaced the hardware neural network with a virtual one running on a general-purpose, ant-brain-sized CPU? Or what if you had one third of the original ant's brain intact, one third replaced with artificial neurons, and the last third virtualized? Would you end up with three separate, yet closely interacting, consciousnesses?
For that matter, if you take a human being, and cut the fibers connecting the two hemispheres, you end up with two separate minds, as demonstrated in experiments. Presumably you end up with two separate consciousnesses too.
If you replaced the neurons in your head one by one (say, 1% per day, over 100 days) with tiny machines functionally equivalent to neurons, what would be the effect from your point of view? To the outside, you would remain the same.
This line of inquiry is generally referenced as the Ship of Theseus argument. The underlying philosophical question about identity holds even for inanimate objects whose parts are replaced.
But this argument has also been extensively reapplied to brains, bodies, and minds. The Chinese Room thought experiment is one common reference for this, in which a system produces fluent Chinese responses purely by following rules, framed in a way that casts doubt on whether there is any understanding of Chinese.
Rationally, there is no difference whatsoever between your ability to assess how another human feels pain and how the robot feels pain (or any other "life" form in between). In all cases, you can only observe behavioral responses to stimuli and draw conclusions based on that. Therefore, everyone (else) "feels" "pain" equally, as far as you can tell.
There are two differences to speak of, but they still don't change the above statement IMO:
1. You have knowledge of the experience of pain within yourself. You then extrapolate that other people probably have the same internal experiences because they look like you and their external behavior matches your external behavior. But of course, that is just a thought and cannot be proven. Maybe some people feel pain differently to you or not at all (like the idea of "is my blue the same as your blue")
2. The robot was programmed but the nervous system wasn't. Therefore one might say: we know the robot doesn't feel anything, but at least I know that I do. If you don't believe in a divine element of being, then a brain is itself merely a computer in the broad sense of being 100% governed by the same laws of physics as a CPU (and everything else). So any experience or behavioral response in you or anybody/anything else has a physical causal link to the stimuli (with a quantum RNG in the mix).
Therefore I'd say that, rationally, every feeling of pain is equally valid, even if it's a program. But we have decided, for emotional and practical reasons, to give more validity to the pain of some creatures than others on an almost continuous scale.
I don't see why we have to assume that a robot would feel pain. We can explain its behaviour easily without recourse to pain: we've programmed it to avoid certain situations that we deem dangerous.
On the other hand, I don't see why we have to assume that an animal does not feel pain. Have we designed them? No. Don't we, who are also animals, feel pain? And isn't it reasonable to assume that pain is one mechanism by which we learn to avoid dangerous situations? Yes.
So, there are no good reasons to assume animals don't feel pain. The fact that robots don't need to feel pain doesn't mean animals must not feel it either.
What people on this thread are trying to say is that things like cars and robots are created and seemingly understood by us humans. We took inanimate objects and pieced them together using principles we've learned from observation of these inanimate objects.
Insects on the other hand are what we've scientifically classified as biological things: creatures that share some of the same biological qualities we have. Unfortunately, humans don't understand enough of their own biology to even recognize how similar another living thing is.
What I'm saying is that if pain is a perceptual result, we have no way of saying insects cannot perceive it the same way we do. All we know is that they have brains, like most other biological creatures, and that the base assumption should be that they do feel pain.
Take any other creature like a dog. Why would a dog feel pain? Their brains may be more complex than something like an ant's, but who's to say the rise in brain complexity has brought about the emergence of pain? Humans know too little about the world, but a lot of people insist on taking the opinion most convenient to them. If they don't have to think about the fact that they inflict pain on insects, they can keep killing them in a variety of ways without the slightest moral apprehension.
I think it extends to saying that people who aren't me feel pain, another thing I have no evidence of. Being a black American, at points there has actually been a scientific consensus that I myself don't feel pain. Just because I react similarly to you when you feel pain doesn't mean that you're not anthropomorphizing.
I don't understand what you mean by "perceptual result".
Animals avoid dangerous behaviours because they cause them pain. Robots, including self-driving cars, avoid dangerous behaviours because they are programmed to do so. Why is the existence of robots programmed to avoid dangerous behaviour indicative of whether animals feel pain?
>Animals avoid dangerous behaviours because they cause them pain. Robots, including self-driving cars, avoid dangerous behaviours because they are programmed to do so.
We don't know a priori whether these are in fact two distinct sets or just two descriptions of the same set. That is, nature may have "programmed" the insect to avoid dangerous stimuli without any conscious awareness of it, similar to how my hand starts to move from a hot stove before I consciously feel the pain. Or, our programming of accident avoidance might in fact endow a conscious experience of pain for all we know.
If I understand correctly, you're suggesting that animals could react to stimuli that we would consider painful by following the same behaviours that they would if they felt pain, not because they feel pain but because they have been programmed to follow those behaviours in response to those stimuli?
I find this unlikely. The set of stimulus-response behaviours necessary to avoid every possible danger that might arise in an animal's environment would be immense. It seems much more economical to "program" behaviours that avoid any painful stimulus (e.g. by setting some "pain threshold" and programming the animal to avoid anything that causes a sensation of pain above that threshold). This is particularly so for insects that have a limited number of neurons in which to store their stimulus-specific programs.
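The economy point is easy to sketch in code. Here's a minimal Python toy (all stimulus names, intensities, and the threshold value are invented for illustration; no claim about how real nervous systems implement this) contrasting a per-stimulus rule table with a single generic threshold:

```python
# A toy sketch of the "economy" argument above. Stimulus names, intensities,
# and the threshold are made up; this models nothing about real insects.

# Option A: one hard-coded rule per dangerous stimulus.
# The table must grow with every new hazard the animal could ever meet.
RULES = {
    "open_flame": "retreat",
    "crushing_pressure": "retreat",
    "acid": "retreat",
    # ...one entry per possible danger, potentially enormous
}

def respond_by_rules(stimulus: str) -> str:
    return RULES.get(stimulus, "ignore")  # unknown hazards are missed

# Option B: map every stimulus onto one generic signal and compare it
# against a single threshold. Novel hazards are handled for free as long
# as they register as intense.
PAIN_THRESHOLD = 0.5

def respond_by_threshold(intensity: float) -> str:
    return "retreat" if intensity > PAIN_THRESHOLD else "ignore"

print(respond_by_rules("open_flame"))    # retreat
print(respond_by_rules("novel_hazard"))  # ignore -- no rule was stored
print(respond_by_threshold(0.9))         # retreat, even for a novel hazard
print(respond_by_threshold(0.2))         # ignore -- below the threshold
```

The point is only about storage and generalization: the threshold version needs one comparison instead of an ever-growing table. Whether that generic signal is consciously felt is, of course, the question under discussion.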
>by following the same behaviours that they would if they felt pain
This is taking the claim I made too far. My point was that reacting to noxious stimuli does not a priori require the conscious experience of pain (which includes a suffering component). When it comes to analyzing sets of possible behaviors, that provides more evidence with which to decide between nociception and pain. I do think that most "animals" (excluding insects) probably experience pain due to, as you said, the varieties of possible behaviors being too large to be merely reflex based. But it's not obvious to me that this holds for insects.
>this leaves open the question of how noxious stimuli (thanks) lead to avoidance behaviours, if they are not unpleasant.
It triggers networks that are designed to cause avoidance behaviors, for example the withdrawal reflex[1]. Reflex networks do not require any conscious experience of the stimulus that triggers the specific action.
>why most animals but not insects? What is different about insects?
The size of the set of possible unique behaviors, complexity of brain, existence of higher order brain processes.
They would have to experience some negative stimulus to avoid it. Disagreement with the term pain does not change the fact that learned avoidance requires detection of a negative stimulus.
Further, while we separate negative stimulus into say thirst and pain, the fact you can torture people with either means the difference is ethically more semantic than meaningful.
There is a distinction in biology between sensory perception of noxious stimuli (nociception) and the experience of suffering in response to noxious stimuli. The distinction is in higher order brain processing. An organism does not need to experience suffering to react to noxious stimuli: https://en.wikipedia.org/wiki/Withdrawal_reflex
The withdrawal reflex also doesn't preclude learning. We see learning in all sorts of reflex arcs that get over-triggered and thus become downregulated. So the fact that the fly "learned" to be more sensitive to noxious stimuli does not rule out a reflex mechanism at play in its behavior.
A downregulated withdrawal reflex is the opposite of avoidance behavior. It's effectively increasing an organism's tolerance for putting its limb in fire.
So, no what you’re describing does not mean learning to avoid stimulus. Learned avoidance behavior really requires a negative perception not just a neutral reflexive one.
I'm not sure why you're giving my comments such an obtuse reading. My example was to show how learning can happen in a reflex network, thus undermining your claim that learning indicates conscious experience.
The issue here is regarding the capabilities of a reflex network. I have established that learning can happen in such networks, as shown by common examples. You have not established that learning avoidance behaviors requires conscious networks.
Because you don’t seem to understand the difference between learning avoidance and reducing sensitivity. (If you notice my first comment I specifically referred to learned avoidance not simply any kind of learning.)
Reflexive behavior is a predefined response to a stimulus over some predefined limit, and thus on its own it's reactive, not predictive. In a purely neutral context there is no reason to increase the response based on the external stimulus.
Avoidance requires the ability to predict before a negative response occurs. Without the negative association there is no impetus to increase avoidance.
Aka, "don't put your hand in fire" is inherently different from "how quickly you remove your hand from fire".
PS: Reducing sensitivity with exposure is a rather different thing, as it helps deal with edge cases. At a meta level, strong constant uncontrollable spasms are extremely unlikely to be an ideal response to a given stimulus.
I see the distinction you're going for, but ultimately it doesn't work.
>Avoidance requires the ability to predict before a negative response occurs
The article discusses the mechanism by which the "avoidance behavior" they describe is learned. It mentions that the downregulation of inhibitory neurons causes heightened sensitivity to stimuli. But downregulation of inhibitory neurons is the same species of process involved in desensitization of reflex arcs, i.e. making a certain set of neurons require a higher threshold for activation. Your assumption that "avoidance behavior" requires something beyond reflex activation doesn't follow.
And let's be clear, all learning is predictive. A reflex being upregulated or downregulated is a mechanism of prediction by which an organism's response is modified to more closely correlate with the environment. What you seem to be going for when you use "prediction" is an organism's mental model of the environment that is detailed enough to associate certain states with negative valence, and behavior planning to avoid entering into such states. In organisms with that level of processing, I agree that an experience of pain is required. But it doesn't follow that any behavior that can be described as predictive requires such mental models.
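To make the narrower claim concrete, here is a minimal sketch (arbitrary constants, not a model of the fly circuitry in the article) of a reflex arc whose sensitivity is up- or down-regulated by exposure. Everything it "learns" is a single threshold; there is no mental model and no valence variable anywhere:

```python
# Toy reflex arc with an adaptive activation threshold. Constants are
# arbitrary; this only illustrates that threshold up/down-regulation is a
# form of learning that needs no mental model of the environment.

class ReflexArc:
    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold

    def react(self, stimulus: float) -> bool:
        fired = stimulus > self.threshold
        if fired:
            # Sensitization: repeated strong input lowers the threshold,
            # so the arc triggers more easily next time (loosely analogous
            # to downregulating inhibitory input).
            self.threshold = max(0.1, self.threshold - 0.05)
        else:
            # Recovery: without strong input the threshold drifts back up,
            # reducing sensitivity again.
            self.threshold = min(0.9, self.threshold + 0.01)
        return fired


arc = ReflexArc()
for _ in range(5):
    arc.react(0.6)              # repeated noxious stimulation
print(round(arc.threshold, 2))  # 0.25 -- heightened sensitivity, no "experience"
```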
I think you are making an arbitrary distinction between levels of response.
“More closely correlate with the environment”: that's learning.
A tree that grows toward light will physically reflect information it has learned about the environment. It's encoded in its physical form rather than in neurons, but it's still encoding information. In that case lack of sunlight is clearly not what we would associate with pain, and I would not say it has ethical implications, but it is a negative stimulus from a tree's perspective.
Anyway, avoidance is assumed to already happen as a separate system. “it’s already been shown in lots of different invertebrate animals that they can sense and avoid dangerous stimuli that we perceive as painful.” Fruit flies have 250,000 neurons; they can encode quite a bit in their mental model. This is simply changing the thresholds before learning occurs, or updating the system that updates their mental model.
As to the encoding, I think physical damage as a negative stimulus is kind of obvious.
>I think you are making an arbitrary distinction between levels of response.
I don't think the distinction between behavior carried out by reflex networks and behavior carried out by planning involving mental models is arbitrary. We have no reason to think reflex networks involve experience whereas we do have reason to think behavior involving mental models does. So it seems like this distinction is the critical distinction in terms of determining if some organism experiences pain.
>Anyway, avoidance is assumed to already happen as a separate system.
You're projecting more onto the term "avoidance" than is warranted, and you haven't defended your reading of the term. I take something like the withdrawal reflex as an example of an avoidance behavior. Clearly you don't, but since your argument rests on your different understanding of the term, and your assumption that the authors of the article intend your reading of it, it's the key disagreement and deserves more attention.
Continuing the quote from the article you started "...In non-humans, we call this sense ‘nociception’, the sense that detects potentially harmful stimuli like heat, cold, or physical injury". So it doesn't seem like the authors are intending to reference predictive behavior, but simply protective behavior in response to noxious sensory perceptions, what you called reactive behavior.
Learned avoidance is in the literature ex: “A Drosophila larva essentially lives to eat. If one odour is repeatedly paired with a sugar substrate, and another is not, it will start to preferentially approach the first odour. If the pairing is with a quinine or high salt substrate, it will start to avoid the odour.” https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3427554/#!po=27...
Which is why I am saying learned avoidance behavior is simply not the topic of this research.
PS: That also includes some examples of complex behaviors.
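For what it's worth, the odour/outcome pairing described in that quote is the kind of thing a simple associative (delta-rule) update captures: the stored valence of an odour drifts toward the outcome it predicts, and the sign of that valence drives approach or avoidance. A minimal sketch, with a made-up learning rate and outcome values rather than anything taken from the paper:

```python
# Minimal associative-learning sketch in the spirit of the quoted larva
# experiments. The learning rate and outcome values are invented.

LEARNING_RATE = 0.3

def train(valence: float, outcome: float, trials: int) -> float:
    """Nudge the stored valence of an odour toward the outcome it predicts."""
    for _ in range(trials):
        valence += LEARNING_RATE * (outcome - valence)
    return valence

def behaviour(valence: float) -> str:
    return "approach" if valence > 0 else ("avoid" if valence < 0 else "neutral")

odour_a = train(0.0, outcome=+1.0, trials=10)   # repeatedly paired with sugar
odour_b = train(0.0, outcome=-1.0, trials=10)   # repeatedly paired with quinine
print(behaviour(odour_a))  # approach
print(behaviour(odour_b))  # avoid
```

Whether an update like that is accompanied by anything felt is exactly what's being argued; the sketch only shows that learned avoidance, as a behaviour, has a small mechanical description.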
>> Equating reaction to stimuli with pain is too simplistic.
Just to clarify this- my comment says that animals react to stimuli and that pain is a stimulus that they use to avoid dangerous situations (or behaviours). Not that reaction to stimuli entails the ability to feel pain.
So I'm asking: if (some) animals don't feel pain, then how do they know to avoid dangerous behaviours? What is the mechanism that keeps them from, say, jumping into the fire?
> if (some) animals don't feel pain, then how do they know to avoid dangerous behaviours? What is the mechanism that keeps them from, say, jumping into the fire?
This is exactly why the robot analogy originally raised by Hendrikto is so powerful. Robots/neural nets/etc illustrate that it is possible for an agent to avoid dangerous behaviours without being motivated by feelings.
I agree that this doesn't prove that insects don't have feelings - it shows that an agent doesn't need to be conscious to exhibit these sorts of behaviours.
Where I disagree is with your claim that it has no bearing at all on the argument. It does bear on the argument, because it presents an alternative hypothesis that doesn't involve attributing mental life to insects. Very (very) roughly speaking the alternative is something like "an insect brain is a neural network, which, while not created by a conscious designer, was chiseled by evolution to respond to stimuli in such a way that dangerous behaviours are avoided - in much the same way that an Atari-playing artificial neural network trained by reinforcement learning avoids dangerous behaviours that will end the game - and just like the ANN playing Atari, no consciousness is required - just the right model parameters. And similarly, no conscious designer ever explicitly told the neural network what to do. The programmer just set up the right incentives, in the same way that nature set up the right incentives for the insect."
I can see the arguments on both sides, but I think the "insects are kinda like robots" hypothesis is more plausible because an insect brain's number of model parameters is closer to a deep Q-learning network's than to a human brain's.
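To make the "right incentives, no feelings required" picture concrete, here is a minimal tabular Q-learning sketch (tabular rather than deep, for brevity; the world, rewards, and hyperparameters are all invented). The agent learns to head away from a "fire" cell and toward a "food" cell purely from scalar rewards; no pain variable appears anywhere:

```python
import random

# Tiny 1-D world: cells 0..4. Cell 0 is "fire" (terminal, reward -10),
# cell 4 is "food" (terminal, reward +10). Actions: 0 = left, 1 = right.
N_STATES, FIRE, FOOD = 5, 0, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action]

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    if nxt == FIRE:
        return nxt, -10.0, True
    if nxt == FOOD:
        return nxt, +10.0, True
    return nxt, 0.0, False

random.seed(0)
for _ in range(500):                 # training episodes
    state, done = 2, False           # start in the middle cell
    while not done:
        if random.random() < EPSILON:            # occasional exploration
            action = random.randrange(2)
        else:                                    # otherwise act greedily
            action = 0 if Q[state][0] > Q[state][1] else 1
        nxt, reward, done = step(state, action)
        target = reward + (0.0 if done else GAMMA * max(Q[nxt]))
        Q[state][action] += ALPHA * (target - Q[state][action])
        state = nxt

# The greedy policy from every non-terminal cell now points away from the
# fire and toward the food, yet nothing here models pain or experience.
print([("left" if q[0] > q[1] else "right") for q in Q[1:FOOD]])
```

Again, this doesn't show insects are like this agent; it only shows that avoidance shaped by incentives is mechanically possible without any felt state.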
> I agree that this doesn't prove that insects don't have feelings - it shows that an agent doesn't need to be conscious to exhibit these sorts of behaviours.
This conclusion requires an understanding of what endows something with consciousness. (I won't use the word agent, because the relationship between consciousness and agency isn't clear. Agency may require consciousness, but consciousness needn't necessarily require agency.)
We do not know what part of the human brain is responsible for human consciousness. Anything which we create may also, as a byproduct or an emergent property, create consciousness. Perhaps the only thing consciousness needs is an ability to sense and respond to stimuli.
That's like saying if a doorbell is made of atoms, we've made atoms a useless term. A doorbell either has consciousness or it does not have consciousness. (I might agree that if a doorbell has agency we might be making agency a useless term.)
Consciousness as we understand it is defined in an ineffable way. We have individual experiences which we call 'consciousness' which roughly coincides with our subjective experience. As it stands, the term is fairly useless. Where we use it, we generally make the assumption that it applies or doesn't apply; almost exclusively in a self-serving way.
If a doorbell is conscious and we can prove it, we will necessarily have a much greater understanding of consciousness and will have refined the term consciousness, making it more useful.
"atoms" isn't a useless term, but describing real-world observable objects as "atomic" is useless.
In the same way we can talk about a spectrum and study of consciousness, but if the definition is to sense and respond to stimulus then the yes/no of "conscious" is always yes and therefore useless.
I'm not sure what you mean by "prove it". A doorbell very clearly responds to a stimulus. ...if you mean the definition of the word, no word's definition has ever been "proven". People assign definitions to words.
You're conflating the definition with the correlated properties.
> if the definition is to sense and respond to stimulus then the yes/no of "conscious" is always yes and therefore useless.
This isn't the definition. This is a hypothetical cause. The cause might be anything; we don't know (yet?) what objective indicators there might be of subjective experience.
Consider seeing. In order for a thing (be it robot, animal, or human) to see, it must possess some type of sensory organ and a method to interpret the signals from that organ. So, to test for the ability to see, we can look for those things. We can be pretty sure that fish can see, because they possess eyes, brains, and respond to visual stimuli. That doesn't mean the properties has_sight_organ and has_brain are seeing.
In the same way that physical matter is all made of atoms (loosely; let's not be pedantic), we might find that all things possess consciousness. We might not. But in neither case is the term made useless. Sure, if we find that to be the case, then describing a thing as conscious or not becomes similar to describing an object as atomic. But then we're just moving goalposts.
> This conclusion requires an understanding of what endows something with consciousness
Either you mean A) a "full" understanding or B) a partial understanding. If you mean A then I disagree. We can talk sensibly about markets even though we don't fully understand them. If you mean B, then I think we have it. I'll be the first to admit that it's pretty shoddy, but it's not unusable. I love Dylan16807's point:
> If a doorbell is conscious then we've made "conscious" into a useless term.
Most creatures that are injured seem to react physically and mentally like we do when we feel pain. From an evolutionary perspective, they also all probably do for the same reasons. So, my default assumption is that animals feel pain.
I dunno, do trees feel pain? They react to stimuli. Do you feel pain when you're unconscious? People react to stimuli when they're asleep, in a coma, braindead...
> In any case, the simplest hypothesis is that all animals can feel pain. The null hypothesis should be the opposite.
Following the interpretation that "The null hypothesis is generally assumed to be true until evidence indicates otherwise," I'd argue for H₀ being that all living beings feel pain until evidence indicates otherwise. That approach wouldn't be very economical though, which is why we do the opposite.
> How else would they be convinced to avoid dangerous situations, than by an unpleasant sensation?
Moderate heat and cold, certain textures, a feeling of resistance against my movements... I react to many kinds of potentially-harmful stimuli that aren't pain. Unpleasantness is not a synonym for pain.
Mainstream psychology stays in an eternal mystical stage that speculates on possible inner states (and their self-reporting) rather than relying on observation. We can't really call it pain unless we can be absolutely sure that it's exactly what I feel when I say I'm in pain. We will ignore that, under that definition, I can't be sure that my twin sibling feels pain. We will paper that over with an innuendo that things that look the most like me are most likely to feel pain as I do, and we will extend the definition of "me" to all potential scientists (humans) as a concession to abstraction and out of professional courtesy.