Extracting a control signal with a few degrees of freedom from an EEG signal is doable right now. A reasonably motivated undergrad with a few hundred bucks of OpenBCI gear should be able to get “MindPong” up and running pretty quickly. But…it’ll be a little janky: the signal quality won’t be great, especially if you’re out in the real world trying to do normal stuff at the same time. The content of that signal is also limited, in part due to the skull and scalp that sit between your electrodes and the brain. These also make it difficult, but surprisingly not impossible, to perturb the brain with electric current (tDCS, tACS), magnets (TMS), or other techniques.
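To make the “few degrees of freedom” point concrete, here’s a rough sketch of the classic approach: motor imagery suppresses mu-band (8–12 Hz) power over the opposite motor cortex, and comparing power across hemispheres gives a crude 1-DOF control value. The sampling rate, channel pairing, and threshold are illustrative assumptions, not tied to any particular OpenBCI setup.

```python
import numpy as np
from scipy.signal import welch

FS = 250  # Hz; illustrative sampling rate

def mu_band_power(eeg_window):
    """Average 8-12 Hz (mu band) power for one channel's window of samples."""
    freqs, psd = welch(eeg_window, fs=FS, nperseg=min(len(eeg_window), FS))
    band = (freqs >= 8) & (freqs <= 12)
    return psd[band].mean()

def paddle_command(c3_window, c4_window, threshold=1.0):
    """Crude 1-DOF control: compare mu power over left vs. right motor cortex.

    Imagining right-hand movement suppresses mu over C3 (left hemisphere),
    and vice versa; the log-ratio gives a signed control value. The channel
    names and threshold here are hypothetical.
    """
    ratio = np.log(mu_band_power(c4_window) / mu_band_power(c3_window))
    if ratio > threshold:
        return "up"
    elif ratio < -threshold:
        return "down"
    return "hold"
```

A real system would add artifact rejection, per-user calibration, and smoothing over time, which is a big part of why the lab demo is jankier in the wild.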
What about putting something inside the brain itself? Information is likely much more accessible without the skull/scalp filtering, but we don’t completely understand how information is represented in the brain, how to modify those representations, or even really how to get the raw data out: most neural implants have a pretty short lifetime before they’re ruined by the immune system, mechanical strain, etc. We’re not totally ignorant: there’s been some amazing progress decoding motor and speech intentions and, after a 20+ year hiatus, cool new electrode technology (Paradromics, Neuralink, etc.), but there’s a lot left to be discovered and invented.
To sum up, we have some fairly crude stuff working now, but it’s looking for a killer app, especially in humans. Building something more like what you see in the movies requires work on many different fronts, ranging from materials science (to build the electrodes) to neuro/ML (to understand what those electrodes see).
> To sum up, we have some fairly crude stuff working now, but it’s looking for a killer app
I think it's quite a bit earlier than that. Even with motor imagery, the most reliable signal (your brain is electrically very active when visualizing motor tasks), your accuracy rate is still crazy low. And it doesn't generalize across people, since our brains are all pretty different. For some people, EEG just doesn't work at all.
That’s fair. Most of the existing stuff is pretty janky, in that it works for some people in some circumstances sometimes.
I’d say it’s a bit like mid-90s speech recognition: it works well enough to be intriguing, but the hassle and inaccuracy make people favor alternatives. I’d take an eye tracker over an EEG speller, for example, if I had ALS.
Some of this is due to hard technical problems, like building better electrodes or squeezing more information out of the signals, but I think there’s a lot of low-hanging fruit too. For example, many spellers don’t include language models, which, as someone originally trained in speech recognition, absolutely blows my mind.
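To illustrate the language-model point: the speller’s noisy per-letter evidence can be fused with a character-level prior Bayes-style, so a linguistically implausible letter needs much stronger EEG evidence to win. The bigram table and likelihood values below are made up for illustration.

```python
# Toy bigram prior P(next | prev) over a tiny alphabet (made-up numbers).
BIGRAM = {
    "t": {"h": 0.5, "t": 0.1, "q": 0.05, "e": 0.35},
    "q": {"u": 0.8, "h": 0.05, "t": 0.05, "e": 0.1},
}

def decode_letter(likelihoods, prev_letter):
    """Pick argmax P(letter | EEG) ∝ P(EEG | letter) * P(letter | prev)."""
    prior = BIGRAM[prev_letter]
    scores = {c: likelihoods.get(c, 1e-9) * prior.get(c, 1e-9)
              for c in prior}
    return max(scores, key=scores.get)

# The raw EEG evidence weakly favors "t", but after "q" the language
# model pulls the decision to the far more plausible "u".
noisy = {"t": 0.3, "u": 0.25, "h": 0.25, "e": 0.2}
print(decode_letter(noisy, "q"))  # prints "u"
```

A real speller would use a stronger model (n-grams over words, or a neural LM) and decode whole sequences rather than single letters, but the win comes from exactly this kind of fusion.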
That makes sense. Granted, even if they did use language models, most spellers can only be used for short periods of time because of eye strain. Is that low-hanging fruit or just a small improvement on a still-unviable approach? And if you're using motor imagery spellers...you're adding a model to like 2-4 bits of input. The efficiency there is horrible.
I'm with you on "I'd prefer an eye tracker." Modern advances in eye tracking (saccade-based stuff!) are pretty sick, and make the technology usable for long periods of time. You have higher bandwidth input than motor imagery, and don't torture the eye like P300.
I feel like the "killer app" is accessibility. There are so many disabilities (quadriplegia, SMA, amputations, stroke, ALS, etc.) where having a BCI that lets you reliably control a wheelchair would be a godsend.
I always see futurology/inspiration porn articles about impressive demos of this stuff, when is it hitting the market? It doesn't have to be the next iPhone, it just has to let people control their mobility or communicate.
Accessibility gear needs to be robust, but a lot of the BCIs, especially the noninvasive ones, tend to be a bit janky. As a result, you can zip around in VR once the experienced lab tech sets you up, but maybe don’t have the DOF to control a real wheelchair, or a system that disabled people can set up themselves. Some of this probably just needs good systems engineering, but there are some legit technical challenges too.
I think this will start to change soon: there’s a lot of money flowing into neurotech and hopefully, some of it will end up with people who are more serious than hype-y. (If nothing else, I’d like a job :-))
maybe it doesn't have to be perfect, as long as there's some safeguards? like if you had some sort of collision radar or a way for the user to emergency stop (e.g. maybe a blink pattern), it might work well enough to give back some freedom.
I hope it changes too. I have a progressive disability and the lack of innovation/competition in assistive devices is starting to make me nervous as I start to need more.