You feel pain, right? Well, an insect is a lot more like you than a robot... the mechanism by which it decides what to seek and what to avoid is based on an infrastructure that's very similar to your own, even if much simpler. And the experience of pain appears to be something very fundamental in the wiring of living things, its avoidance being very directly connected to evolutionary success. So it is absolutely logical to assume this mechanism is fundamentally equivalent in humans and insects. The behavior is the same, the wiring is the same, so why not use the same word?

For now, most robots' decision-making apparatus isn't anything like yours, so it really doesn't make much sense to use the same vocabulary. But that could change if we use neural-network computing in a way that more directly simulates the functioning of living organisms. Maybe building an ant-robot that's really a functional copy of a biological ant isn't that far off, and in that case, sure, it might feel pain.

I don't see anything simplistic about this.




This seems like the most rational stance to me now.

I had a pastor once attempt to teach me that all animals are basically robots, and that this justifies treating them however we want. In his mind, we're the only ones actually 'awake'. I suspect a whole lot of people think like this on some level in order to justify all the ongoing barbaric treatment they willingly endorse or carry out.

Hearing that shook me up for a bit, but in the long run it accelerated the process of me abandoning religion.


> an insect is a lot more like you than a robot

If you mean that insects are more similar to humans than they are to robots, I'm actually not convinced.

When we are deciding what kind of mental mechanism to attribute to insects to explain their behaviour, there are two desiderata we need to satisfy:

- Our hypothesis has to be strong enough to explain the behaviour

- It has to be plausible given insect hardware

YeGoblynQueenne has suggested our starting point should be the hypothesis that "all animals can feel pain." This is certainly strong enough to explain the behaviour, but it's implausible given the size of an insect's brain.

A fruit fly's brain has O(10^5) neurons; ours have O(10^10). Modern artificial neural network architectures typically fall somewhere in the middle. Now, certainly both human brains and insect brains have sophisticated architectural features that we haven't figured out yet. But given that robots, ANNs, etc. can exhibit the same sorts of behaviours as insects under similar hardware constraints, I don't think we need to attribute pain (or any sort of mental life) to insects in order to explain their behaviour.

It would be cool if a neuroscientist could weigh in on this.


1. Most neural network architectures have fewer than 10^5 "neurons." Maybe the word you are looking for is "parameters."

2. A neuron in the brain and a neuron in a neural network are totally different things.


> 1. Most neural network architectures have fewer than 10^5 "neurons." Maybe the word you are looking for is "parameters."

That's a fair point; I shouldn't have said "typically." But some of the larger models probably have that many linear filters.
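For concreteness, here's a back-of-envelope sketch (plain Python; the layer sizes are made up purely for illustration) of why "neurons" and "parameters" come apart so sharply in a fully-connected network:

    # Units ("neurons") vs parameters in a small, hypothetical MLP.
    layer_sizes = [784, 512, 256, 10]   # input, two hidden layers, output

    units = sum(layer_sizes[1:])                  # non-input units
    params = sum(n_in * n_out + n_out             # weights + biases per layer
                 for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

    print("units:     ", units)    # 778
    print("parameters:", params)   # 535818

Even this toy network has fewer than 10^3 units but over 5 x 10^5 parameters, so comparing an ANN's "size" to a brain's neuron count depends heavily on which of the two numbers you pick.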

> 2. A neuron in the brain and a neuron in a neural network are totally different things.

There are certainly disanalogies between biological neurons and neurons in a vanilla feed-forward network, but A) there are analogies as well and B) a lot of interesting work is being done to make deep learning models STDP-compatible.
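For readers unfamiliar with STDP (spike-timing-dependent plasticity), here's a minimal sketch of the classic pair-based update rule; the amplitudes and time constants are illustrative values, not taken from any particular paper:

    import math

    # Pair-based STDP: the weight change depends on the relative timing of a
    # presynaptic and a postsynaptic spike. Constants are illustrative only.
    A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
    TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants in ms

    def stdp_delta_w(t_pre, t_post):
        """Weight change for a single pre/post spike pair (times in ms)."""
        dt = t_post - t_pre
        if dt > 0:   # pre fires before post -> potentiation (LTP)
            return A_PLUS * math.exp(-dt / TAU_PLUS)
        else:        # post fires before (or with) pre -> depression (LTD)
            return -A_MINUS * math.exp(dt / TAU_MINUS)

    print(stdp_delta_w(10.0, 15.0))   # positive: pre preceded post by 5 ms
    print(stdp_delta_w(15.0, 10.0))   # negative: post preceded pre by 5 ms

The point of "STDP-compatible" deep learning is roughly to replace global backpropagation with local, timing-based updates like this one, which are closer to what biological synapses are thought to do.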

At any rate, I think it's a reasonable claim that an insect brain has representational power closer to a SOTA ANN than it does to a human brain (though I welcome anyone here who knows about biologically plausible deep learning and/or insect brains to prove me wrong).


I entirely disagree. The onus of proof is on you to explain how a highly idealized ANN has the same expressivity as a fly's wetware.

Just because neural networks are tech's Zeitgeist doesn't make them the perfect explanation for all physical phenomena. Fifty years ago, people were equating thought and consciousness with artificial intelligence programs; 200 years before that, it was the watchmaker's clockwork that held that role.

Yes, as a man of science I subscribe to reductionism. Brains are made of neurons which are made of molecules which are made of atoms and so on, all governed by the laws of physics. But there's no reason to believe ANNs have the required intrinsic complexity to behave like ganglia.


> The onus of proof is on you to explain how a highly idealized ANN has the same expressivity as a fly's wetware.

I've made an argument - it's essentially a functionalist one. The intelligent things insects do - object detection, maze navigation - are all things that ANNs are really good at.

To put things in perspective: the reason that I don't think that ANNs are anywhere near as expressive as the human brain is that there are countless behaviours that humans perform that ANNs simply can't - generalizing to novel viewpoints in vision, for example. (Or if you want to go whole hog, natural language understanding.)

AFAICT, the same is not true of insects. To dissuade me, you'd have to specify an insect behaviour that ANNs are fundamentally unequipped to perform. I'm not an entomologist, so I'm totally open to the possibility that there is one. I also imagine there is significant neural diversity among insects - presumably some bugs are smarter than others, and maybe some of the bigger-brained ones can do stuff that would necessitate an explanation that invokes consciousness. But you have to tell me what it is.


I'm wondering now... if you could replace just an ant's brain with an artificial, neuron-for-neuron copy, such that the ant would continue, from the outside, to behave in an identical way in identical situations, what if you went one step further and replaced the hardware neural network with a virtual one running on a general-purpose, ant-brain-sized CPU? Or what if you kept one third of the original ant's brain intact, replaced one third with artificial neurons, and virtualized the last third? Would you end up with three separate, yet closely interacting consciousnesses?

For that matter, if you take a human being and cut the fibers connecting the two hemispheres, you end up with two separate minds, as split-brain experiments have demonstrated. Presumably you end up with two separate consciousnesses too.

If you replaced the neurons in your head one by one (say, 1% per day, over 100 days) with tiny machines functionally equivalent to neurons, what would be the effect from your point of view? To the outside, you would remain the same.


This line of inquiry is generally referenced as the Ship of Theseus argument. The underlying philosophical question about identity holds even for inanimate objects whose parts are replaced.

But this argument has also been extensively reapplied to brains, bodies, and minds. The Chinese Room thought experiment is one common reference here: a purely rule-following system produces appropriate responses to Chinese texts, framed in a way that casts doubt on whether there is any understanding of Chinese.



