Scientists Can Read a Bird’s Brain and Predict Its Next Song (technologyreview.com)
89 points by aq3cn on Oct 11, 2017 | 29 comments



Here's a link to the actual manuscript, instead of a strange summary that somehow drags Elon Musk into this:

https://www.biorxiv.org/content/early/2017/09/27/193987


The male zebra finch, like many other songbirds, purposely attempts to sing the same song over and over without any variation, including in pitch and volume. This is because singing is primarily a courtship ritual, and females are attracted to high song stereotypy in males. A random song can be selected out of the hundreds sung daily and 80 to 90 percent of the other songs will be pretty much exactly the same.

While the research is of course impressive, it's a bit premature to extrapolate it to humans, or any other animal.


I'd love to see this tried on Mockingbirds, which mimic other birds' songs but also a huge variety of other sounds. The ones in my neighborhood even do car alarms, and rapidly cycle through a whole range of noises at the top of their little lungs. The repertoire seems to vary a lot from one bird to the next, but I was never even able to pick out a pattern for the one guy whose venue of choice every night, all spring long this year, was a bush right outside my bedroom window.


Ah, that does put it into perspective a bit.

A sibling comment mentions trying it on mockingbirds; how about blackbirds, whose song never repeats (although it does show structure and they occasionally mimic)?


"We implanted 16/32 site Si-probes in male, adult zebra finches and recorded simultaneously their song and neural activity in HVC; then we used these data to train a long-short-term memory network (LSTM 5) to translate neural activity directly onto song. The goal of the network is to predict the spectral components of the song at a time bin ti, given the values of neural activity features over previous time bins"


I wonder how much better the network performs than a network based on: previous songs, time of day, motility, temperature.
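
To make that comparison concrete, the "no neural data" baseline is easy to write down: since the song is so stereotyped, predicting the average of previous renditions should already capture most of each new one. Everything in this sketch (sizes, noise level) is made up purely for illustration.

    # Hypothetical context-free baseline: predict the mean spectrogram of previous songs.
    import numpy as np

    rng = np.random.default_rng(0)
    N_RENDITIONS, N_BINS, N_FREQ = 100, 200, 64

    # Stand-in data: one "template" song plus small per-rendition jitter.
    template = rng.standard_normal((N_BINS, N_FREQ))
    songs = template + 0.1 * rng.standard_normal((N_RENDITIONS, N_BINS, N_FREQ))

    train, test = songs[:80], songs[80:]
    baseline = train.mean(axis=0)                  # "previous songs" predictor

    mse = np.mean((test - baseline) ** 2)
    print(f"context-free baseline MSE: {mse:.4f}")

A neural decoder is only interesting to the extent that it beats something like this on the parts of the song that actually vary (timing jitter, occasional variant syllables).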


Extrapolating this to humans is scary (given the mental privacy we are used to), but perhaps inevitable. In some sense, the security of our thoughts is only through obscurity, and that may make them intrinsically insecure. Untangling that obscurity may be some years away, but it's not beyond the realms of possibility. I wonder if the brain has evolved some internal methods of encryption - I would imagine not, as I cannot think of a solid evolutionary advantage for brain-based encryption.

A very interesting topic, and I can't quite decide if a mind-machine interface would end up having a net positive or negative effect on our society...


It's not inevitable unless people believe it is. Seriously, I don't know when people collectively lost the will to actually try to dictate the rules of our society.


Agreed. I think it's because, since the scientific revolution, we got the idea that science and technological progress will gradually cure all of humanity's problems. It's accepted as fact by many that capitalism + science will solve every problem under the sun without any intervention on our part, even though it's obvious the market does not always serve society, and that many societal problems are far outside the realm of markets.


When has legislation worked to prevent technology? Just look at North Korea and see how well that's working.


Just because you can't dam a river doesn't mean you can't divert it.

More explicitly: there are many cases of regulation shaping the course that technology takes. It's a strawman (of the variety people are trying to call out here) to say that we're either helpless in the face of technology or we must halt its progress. There's a huge middle ground of regulating and guiding the process.


This research is less reading thoughts and more predicting actions. It's unlikely to lead to following your internal monologue but it could predict actions involving speech and movement.

The most likely first application of this sort of research is as a sort of neural prosthetic for people with speech and/or movement disorders.


I'm not so sure; I'd imagine that a human mind is a massively parallel process, and singling out individual trains of thought or decisions will be hard. It's probably fairly easy to tell what a large wildcat is thinking about when it's time for a meal, but an average person? How often has someone advised you to 'sleep on' a difficult problem? How often have you discovered a solution to something after spending a while doing totally unrelated things?

It will probably take a lot of doing to decrypt that mess reliably across a broad population. And think of the impact that culture and language will have! Surely people brought up in different systems of belief will think in different ways, their development having been shaped by different memetics and core ideals.

Maybe that's the sort of problem that deep learning and AI could help sift through. But I'm not sure that I'd want to train a nascent AI on the human subconscious. Didn't Poe address this sort of thing? "The Imp Of The Perverse?" Hey, totally unrelated, does anyone have HBO's number? I think I got a pitch.

Oh, and you can't mention Mind-Machine Interfaces without the quote from Alpha Centauri :)

"I think, and my thoughts cross the barrier into the synapses of the machine - just as the good doctor intended. But what I cannot shake, and what hints at things to come, is that thoughts cross back. In my dreams the sensibility of the machine invades the periphery of my consciousness. Dark. Rigid. Cold. Alien. Evolution is at work here, but just what is evolving remains to be seen."


One major roadblock to something like this is the precision (and invasiveness) of the probes. You can get very imprecise probes that are not invasive, and you will get an average reading from a large number of neurons. This may be enough to make predictions but is likely much too noisy to actually “read” any thoughts. Alternatively, you can get very precise probes that are connected to single neurons (or at least a very small number), but it’s invasive (you literally have a wire connected to the nerve tissue). You can now very accurately read individual values, but you probably need a lot of these to actually determine what’s going on, since a single neuron’s state isn’t enough data. Additionally, so far such invasive probes are always eventually rejected by the body.

It is, of course, a spectrum. You can get a middle ground between the two extremes, but it’s still a trade-off between invasiveness (and the problems and concerns that come with that, such as rejection), precision (how many neurons’ worth of data is being averaged), and the need to probe many neurons to build an accurate model.
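
As a toy illustration of that trade-off (my own sketch, not from any paper): if a coarse, non-invasive probe effectively reads the average of N independent neurons, its correlation with any single neuron falls off roughly as 1/sqrt(N), which is why averaged signals can support coarse predictions but not single-neuron "reading".

    # Toy model: a coarse probe sees the average of N independent neurons; the more
    # neurons it averages, the less it says about any one of them.
    import numpy as np

    rng = np.random.default_rng(0)
    T = 5000  # time samples

    for n_neurons in (1, 10, 100, 1000):
        neurons = rng.standard_normal((n_neurons, T))      # independent activity traces
        probe_reading = neurons.mean(axis=0)               # what the coarse probe records
        r = np.corrcoef(probe_reading, neurons[0])[0, 1]   # information about neuron 0
        print(f"{n_neurons:5d} neurons averaged -> correlation with one neuron: {r:.2f}")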


Eh, the near-term applications aren't too scary.

They're recording from HVC, an area of the bird brain that does motor planning. I think it'd be fair to say that HVC sits between the bird's intention to sing and the actual singing actions.

It's really impressive that they can decode enough to reproduce the song, but you could also access the song by... letting the bird sing it a fraction of a second later :-)


Yes, but I think the scary implication is being able to detect the song and then change it before it happens.


> (given the mental privacy we are used to)

I sometimes evaluate the impact of the lack of privacy in personal thought in light of the possibility of demons/aliens/whatever a more-advanced species should be labeled, which are typically classified as non-omniscient (versus most definitions of God). The requirement of locality (non-omnipresence) seems to be the limiting factor, but this limitation is only effective in coordination with limits on long-range/high-speed capabilities, which could be augmented via unknown technologies. I think it is reasonable to assume that if such beings exist, human thought would not be kept private from them.

Nearly all technology interactions are recorded by various 3rd party companies; it seems the various collection and aggregation techniques employed by those companies and government agencies (across companies) already offer a few humans in a privileged position a sufficiently capable approximation to reading minds.


Is it inevitable? What if the bird knew it was being monitored? Impossible probably, but a human could definitely think about that, and even try to foil detection. I’m not convinced that metacognition doesn’t add a non-trivial degree of difficulty to the whole endeavor.


Monitoring a brain seems reasonable - understanding how a brain will respond to a hypothetical, or predicting it, seems insanely hard. Each stimulus can be processed in relation to others that were recently processed (like reading a book and using the words you read when speaking). I can't fathom how we could do more than track brain signals and potentially ask "how would this brain respond to this stimulus right now?". There are just unlimited possibilities that are ever changing.

Though we all have to hope we uncover the source of truth in the brain so we can live forever embodied in some other medium :)


I can already imagine Spotify using this to predict what song I'd like to hear next.


I would very much hope that we have an encryption system in our brain. If this technology somehow succeeds in reading human thoughts, comparatively little good can come out of it. A later evolution of the brain would be to develop an encryption system.


Maybe it's time to start encrypting our thoughts as it's being applied all over the internet ;p


Next step: Editing the playlist


Then make the bird sing the complete soundtrack of Doom.


Can't wait for the day when BMI + VRD is advanced enough to not need a phone anymore. It would be like having a computer inside you.


That's only exciting if it's secure, private, and completely under your control.

If not, it's terrifying.


Given the frequency and severity of both security incidents and privacy-related issues in recent history, I would be pretty confident in betting that no technology is ever fully “secure, private and completely under your control”.

At the very least, you know that the fancy new brainPhone will load tons of third party analytics JavaScript...


Wouldn't that make you obsolete?


"help me"



