Hacker News
Neuroscientists Wirelessly Control the Brain of a Scampering Lab Mouse (ieee.org)
69 points by aaronyy on Nov 29, 2016 | 38 comments



If you look more than 50 years into the past, to any civilization on earth, there are a number of behaviors and norms that were totally acceptable to the people of the time that modern man/woman would consider morally repugnant. Not allowing gays to marry, not allowing people of different races to marry, not allowing women to vote, slavery, ritual sacrifice, etc. It follows then, that there are likely things we view as morally OK that future generations of humans will condemn us for. It's a healthy exercise to imagine what those things might be.

Reading about this research makes me think that this sort of animal experimentation will certainly be one of those things.


I don't think they'll condemn us for it, just that they'll do it a better way.

People suffer and die from things that animal research can help us study, and I don't think that killing a dozen or a hundred or a thousand rats per lifetime we save is too many. (A human lives about 30 times as long as a rat, and I'd say a human year is worth 30 times a rat year. A little self-centered, but probably not genuinely that awful.)

Something like 600,000 people in the US die each year of cancer. If you figure on average they'd get ten more years, that's about 75,000 human lifetimes a year lost.

So I'd be okay with a literal rat holocaust: 75,000,000 rats a year on the altar of science for as long as we can get useful knowledge about cancer from them.
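A quick sketch of that back-of-the-envelope arithmetic (the ~80-year lifespan and the 1,000-rats-per-lifetime rate are assumptions supplied here, not figures from the article):

```python
# Back-of-the-envelope check of the comment's arithmetic.
cancer_deaths_per_year = 600_000   # approximate annual US cancer deaths
years_lost_per_death = 10          # assumed extra years each patient would have had
human_lifespan_years = 80          # assumed round figure for one "lifetime"

person_years_lost = cancer_deaths_per_year * years_lost_per_death
lifetimes_lost_per_year = person_years_lost / human_lifespan_years
print(lifetimes_lost_per_year)     # 75000.0

rats_per_lifetime_saved = 1_000    # the upper bound the comment deems acceptable
rats_per_year = lifetimes_lost_per_year * rats_per_lifetime_saved
print(rats_per_year)               # 75000000.0
```

That is where the 75,000 lifetimes and 75,000,000 rats figures come from.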

In actuality, I expect we're killing less than 1 rat per human lifetime lost each year -- and I'm so absolutely okay with that cost to make progress against scourges like cancer that it's legitimately hard for me to understand the other view.

Do you really believe that future humans will condemn us for killing less than 1 rat per human lost to try and stem the loss of human potential?

Or is your position that it's just not sufficiently useful to do things about mental disorders (which cause lifetimes of suffering and disability, instead of immediate death)?


> Do you really believe that future humans will condemn us for killing less than 1 rat per human lost to try and stem the loss of human potential?

I think it's possible that they could condemn us for forcing rats to suffer and die for our own benefit. I don't think it's as likely as future humans looking down upon us for the awful way we treat animals we eat and maybe eating meat at all, but I can see it happening.

It seems like there's a tradeoff between ethics and the pace of progress. After all, if we only cared about progress then we'd experiment on humans. On the other hand, medicine would probably stall without any experimentation on live animals, and so we choose to do it on rats.


If our descendants are truly more enlightened than we are, I doubt that they would condemn us for our comparatively barbaric behavior, in the same way that we don't judge predators for playing with their prey. Just as playing with the prey improves their hunting ability, so too do we learn from our mistakes, eventually. The Romans are not universally condemned for causing the extinction of the Tencteri, Usipetes, and Carthaginians. Similarly, we do not condemn our ancestors for the likely genocide of the Neanderthals, Denisovans, and all the other hominids.

Judging people from the past through the scanner of modern morality is counter-productive to understanding history. The historian Herbert Butterfield put it best: "real historical understanding is not achieved by the subordination of the past to the present, but rather by our making the past our present and attempting to see life with the eyes of another century than our own."

In the end, society is what makes us civilized. We are still the same species we were 70,000 years ago. If society fell and we somehow experienced the same material conditions as they did, we would behave more or less the same. When we have wars and conflicts that are so intense that civil society breaks down, every one of us is more than capable of committing the most horrific atrocities. The 20th century is littered with examples of that. If you need any more evidence, I'll leave you with this description of what happened to a 13 yr old peaceful protestor in Syria

Video posted online shows his battered, purple face. His skin is scrawled with cuts, gashes, deep burns and bullet wounds that would probably have injured but not killed. His jaw and kneecaps are shattered, according to an unidentified narrator, and his penis chopped off.


Speciesism, that's what Peter Singer calls it (the philosopher, not the roboticist). We tend to draw an artificial line between human and nonhuman.

The article describes how it can be used selectively in a hybrid setup. I hope they can come up with applications that are new, rather than doing the same experiments over and over again. I often see results and am like duh... It's like running code, changing a few lines, and running it again to see if anything changes.

And now an extremely rough statement. If there are so many people dying anyway, we should definitely experiment more with humans. That will likely make all our experiments more humane.

Last, I hope robots won't see us as lab animals. To understand their own brain we are likely to be their best test subject though.


> If there are so many people dying anyway, we should definitely experiment more with humans. That will likely make all our experiments more humane.

I'm not sure I follow: are you suggesting we raise creatures that take longer, are smarter, and are harder to study just because they're us?

That seems to be increasing the total harm just so it happens to humans rather than rats, ignoring that the humans we do it to didn't have any more choice than the rat. I find that far more speciesist than recognizing legitimate differences in mental powers and complexity between humans and rats.

That being said, we do experiment on humans too, including some pretty radical things on already dying ones.

The problem comes in that if we're going to create a broken mind just to study it, we should a) use a simple mind we can actually perform a non-confounded experiment on and b) use the minimal mind possible to minimize suffering.

So rats rather than 3 year old humans.


I'm assuming that if we would treat humans in the same way we treat rats, we would likely raise standards.

Also, rats are able to choose, by the way. We just don't give them a choice.


Are lab rats treated poorly? My (limited) experience indicates that scientists are generally as humane as they can be, and the rat lifestyle isn't any worse than, say, a pet rat's. Maybe I'm an outlier.

My point is that if your experiment requires a GMO organism, the organism can never have decided to participate, since it would require choice pre-conception.

If GMOing a rat into a forced experiment is bad, isn't doing that to a human much worse?

It's still worth the cost using humans, but harm reduction would suggest using rats instead.


The question is not about choice w.r.t. a GMO. It's about the underlying moral values behind a seemingly rhetorical question "isn't doing that to a human much worse?"

https://en.wikipedia.org/wiki/Speciesism

Why is it so much better to force non-humans to partake in our experiments? Just swap rats and humans in your sentences and the bias is very obvious.


We also draw an artificial line between living and non-living. All lines are artificial.


Due to the intelligence difference it doesn't seem reasonable to even equate one human year to one rat year.

There are no ethical complaints with trashing an artificial neural network.

It could be argued that it is ethical to kill a bivalve because their ganglia are composed of about 100 nerves. Their "mind" mostly consists of direct signaling from input to response.

But what about 1,000 neurons? 100,000? 100,000,000? Humans have 100,000,000,000. There likely isn't some perfect dividing point between a conscious animal that should be saved and a dumb animal that lacks consciousness. The negative utility of killing an animal probably increases continuously as the number of neurons increases.


The brain has no pain receptors. Mice probably have no ability to reflect on the fact that they are being experimented upon. So it is probably your own icky feeling about the experiment that bothers you, not a real problem.


I like your style. Your remorseless capacity to rationalize the destruction of another organism's experience makes you a suitable candidate for an experiment I'm designing.

In this experiment, you'll be required to watch hours of cell phone videos in which Central American drug cartels torture their rivals in dungeon-like warehouses.

Afterward, we'll ask you to rationalize, in a fatalistic manner, whether or not people should feel "icky" about second-hand footage and general evidence of events that have already transpired, which they can do nothing about. We'll also have you form similar opinions about the fact that comparable events are likely transpiring as we speak, although by nature we lack the capacity to anticipate where and when they will occur next.

Finally, after having you participate in the rationalization of said events, ex post facto, we'll permit our researchers to place you in a blinded version of the famous Milgram experiment, involving deception and empathy with regard to the suffering of others, and gauge how well you perform compared to the control group.

Hopefully you won't feel too icky afterward.


Wouldn't you want to do a before-and-after Milgram to get context? Otherwise it's hard to know whether I was lacking empathy beforehand compared to the control group.

Happy to play though, what's the pay like?


> In this experiment

What hypothesis is this experiment testing? Are there controls?


And probably eating animals. Once we have lab grown meat it'll be easy to take the moral high ground on that.


What sucks is that everything around and in us fights for survival by eating whatever living things it can. So even if we manage to avoid animals, plants will still send distress signals once we collect them, nuts will still contain powerful allergens, viruses will still eat us from inside, etc. Simply put, that's how life works; to change it we would need to go beyond the Universe and reprogram it from outside.


I thought the point of avoiding the consumption of animals was so we wouldn't have to murder things which have similar concepts of fear and pain as we do. Why would plants sending distress signals or nuts with allergens be an issue?


"to change it we would need to go beyond Universe and reprogram it from outside." - that was quite beautiful.


Original sin was when the first living creature decided to eat another living creature instead of getting energy directly from the source like the sun or a thermal vent.


I couldn't disagree more. Not because human lives are worth more than rats. They aren't. The only thing that gives human lives value is the fact that each one of us wants to preserve our own. The best way of ensuring this is entering into a collective agreement against homicide. That is the source of this 'moral' position. Rats, not being able to kill us, do not benefit from this agreement.


Would these advanced future generations exist without this sort of research?


True. By changing nature in this way, we might produce something with side effects beyond our control, and what's more dangerous is detecting them early, because we are playing with the huge and complex circuitry of nature. We probably assume we know enough, but I suspect we know less than one percent.


There is also the RoboRoach if you want a taste of this firsthand, with a cockroach: https://backyardbrains.com/products/roboroach

At the neuroscience conference they also showed a DIY optogenetic fruit fly kit, but it's a pity the channelrhodopsin transgenic fly is not available outside a neuroscience lab.


Wow. That's a neat kit.


I can't help but feel sorry for the mice; I can't imagine what it would be like to not have control of your limbs. I understand the purpose is the opposite, however.


I would not be surprised to find out that the brain rationalized it after the fact, and that the mice, from their point of view, always thought they wanted to do that.


This is the real question here. Unless the mice can explain themselves to us, we really have no way to know. But I'm not so sure about the ethics of conducting such experiments on humans.


This question is answered already -- we have enough studies and reports of people who suffer from delusions, spasms, etc. to have detailed information on outliers.

What's really going to make you lose your equilibrium is that for normal human beings, the impulse to move a limb (like a hand) precedes the conscious instruction to move it. See the Libet experiments (and the can of rebuttals and counter-rebuttals they threw up).


Yeah, what is the internal dialogue like, such as it is in a mouse?

"Huh! Okay, yeah, I guess I'm going this way ... Uh, I forgot something at the nest. NO! No, I'm going to ... turn around, yeah! Turn around and around and around."

Surely fear response could be measured in terms of heart rate changes or skin conductance.

It's creepy, humans are a bunch of creeps.


I would imagine it as a spasm. You have no control over it and you don't try to rationalize it; you just accept it as something that happens to your body.


Impressive; however, nature itself is even more impressive: http://www.scholarpedia.org/article/Neuroethology_of_Parasit...


I understand this is valid science but at the same time I'm extremely creeped out. Seems to me like all these findings follow from some physics and chemistry and the mice didn't need to be implanted with these devices. Don't really see the point of making them go in circles either.


Which findings, specifically?

Because I feel like the article makes a pretty good case for the benefits of studying optogenetics. These three paragraphs in particular succinctly outline that case:

Neuroscientists study these patterns of electricity, but they’ve been limited by the imprecision of their tools. Much progress in biology depends on observation, which means scientists need tools to both meddle with an organism’s natural bodily systems and watch what happens. Typical neuroscience techniques rely on electricity, using electrodes on the scalp or implanted inside the brain to stimulate and record from groups of neurons. These electrodes are relatively large and crude, though, and can’t target very specific cells, such as the neurons in the hippocampus that encode distinct memories.

This limitation bothers me. From an engineer’s point of view, the study of living creatures can seem messy. When I’m tinkering with an integrated circuit, I can swap out one transistor and check to see if the chip still works. If it doesn’t, I can be sure the new transistor is responsible for the glitch. In biological systems, it’s far harder to isolate a variable of interest.

With optogenetic technology, we can turn neurons on and off as if they were transistors in a circuit. Geneticists have various ways (which we won’t go into here) to deposit the necessary genes into very specific clusters of cells. With our light-up devices, we can then switch on a particular set of neurons. The neurons react to light within milliseconds, making the result of our tinkering fairly obvious.


The parts you mention are among those findings. It seems to me all this could have been done in vitro. No mutilation of mouse brains necessary to see if a cell with some genes reacts to a certain wavelength of light.


You are right that whether the gene changes how cells react can be tested in vitro, and that's how it was done originally. But the point of these studies is to figure out how certain parts of the brain work. Since the brain produces behavior, we can best understand how it works by changing brain activity and then seeing how that changes behavior. That can't be done in a dish, at least not now, even though we often use modeling whenever possible to make sure we're doing the right experiments.

The goal was definitely not to control mice or make them suffer, and I find it a little regrettable that the professor did not emphasize that. The fact that making neurons in motor cortex fire causes movement is proof of concept, but no neuroscientist wants to sit around making mice turn in circles all day. The goal here is to better understand how neurons firing produce behavior. For example, if we can crack the code for movement, we can e.g. help patients better recover function after a stroke. The techniques we've had for a hundred years have not gotten us all the way to an answer on that yet.

Think of it this way: do you want to debug your program with only print statements, or would you rather be able to put in a breakpoint?


I think it's more intended as a proof of concept to allow researchers to target specific neurons, and a live animal would help demonstrate that. If I'm understanding correctly, they would be able to target the optogenetic cells to very specific parts of the brain.


That's not the purpose of this experiment, as the capability to target specific neuron types has been well demonstrated in the past. It's approaching a toolbox technique at this point much like regular electrophysiology and electrical stimulation.

From the IEEE article (I did not read the original research paper), the goals of this experiment are lost in the veil of journalism; it boils down to the following items:

1. Implantable chronic optogenetics (fairly novel)

2. Wireless RF power for said implant (novel context, not necessarily novel technique)

3. Use the optogenetics to do "something". This is the worst part of the article and is the meat of the neuroscience -- which neuron types are targeted (genetically), with which opsin, and histological evidence of where the opsins are located are all important features. The article seems to take at face value that making mice pause and run in circles is interesting, but scientifically speaking, if you can't say anything about why they run in circles, it's not a particularly good or useful study, as it doesn't actually give people any information.



