I don't understand this comment. Does an artificial heart need an onboard GPU to do the required real-time signal processing? Does an arthroscopic robot localize the head of the device with < 10 um accuracy? I realize I'm coming off as a Neuralink fanboy, but I think it's ridiculous to compare next-generation brain interface technology to the existing Medtronic/St Jude/Stryker space. The complexity is the reason why none of those companies would touch this with a 10-foot pole. Medtronic paid $200M to acquire a company that made a 32-channel DBS electrode and then killed the product because it was too hard to manufacture. Neuralink is trying to build a 10,000-channel device.
Do you have any resources supporting the claim that more advanced technological devices require an order of magnitude more animals to experiment on?
How does this justify sloppy stuff like the wrong vertebra, injecting random gels, not euthanizing when undue suffering occurs, and incorrect experimental parameters (e.g. wrong chip sizes)?
*Developing* a brain machine interface technology requires many animals because the key unknowns have to do with acquiring, processing, and manipulating brain signals (so no in vitro model). Once the technology is working, and a particular disease/disorder/disability has been identified, the final number of animals required for the FDA to give safety approval for human tests is probably similar to any other medical device.
> How does this justify sloppy stuff like wrong vertebra, injecting random gels
It doesn't, though my suspicion from reading the article is that these accounts are all one-sided. Sealing a craniotomy (a hole in the skull) is complicated. While it is well known that vetbond on the cortical surface is a no-no, it's actually quite acceptable on the skull surface; my guess is that they thought the hole was sealed, but there was a crack that the vetbond went through. If you're developing a new way of making hundreds of holes in the skull, I don't think it's surprising that it might not work the way you model it in vitro. This is precisely why we do in vivo development in animal models. The wrong vertebra seems likely to be driven by Elon actively trying to hire engineers who don't know anything about neuroscience and then thinking he can train them fast. I found that idea (which was in last year's Neuralink presentation) incredibly annoying - typical Silicon Valley hubris.
> Developing a brain machine interface technology requires many animals because the key unknowns have to do with acquiring, processing, and manipulating brain signals (so no in vitro model). Once the technology is working, and a particular disease/disorder/disability has been identified, the final number of animals required for the FDA to give safety approval for human tests is probably similar to any other medical device.
Can you point me to any books, articles, etc. that make the direct comparison that more complex devices mean an order of magnitude more animal trials? I don't know your personal expertise in the matter, so I don't know how to weigh the knowledge you're telling me (and I don't know either, which is why I'm asking). For example, there's someone in the thread who claims to have worked on complex devices and describes a different experience, and I want to weigh these accounts relative to their contexts.
TL;DR - not sure why I'm getting fired up about this. Thanks for your interest; here's a long-form answer.
To be clear - I think referring to the sum of Neuralink's in vivo work as "trials" is a misnomer. They are doing basic R&D. A "trial" is when you have a device or drug that you believe will be functional to fix something. It's ironic, because the very thing that frustrates me to no end about how Elon Musk presents Neuralink to the public is biting him here. They are still a huge WORLD away from doing "trials". At this point, they're probably trying to get an IDE ("investigational device exemption") to begin to _test_ their devices in humans, which, in the case of brain-machine interfaces, is still a huge step from having a clinical trial to treat something (and potentially get FDA approval).
Qualifications - I have about 20 years of experience in neural interfaces on the academic side (including in regulation of animal welfare). I've even given a talk at Neuralink about my research.

The Neuralink situation is really annoying from an academic perspective - they are doing really amazing tech development. What they presented last week in pigs and monkeys appears to be beyond what anyone can currently do reliably in the lab (of course it isn't peer reviewed). They have really good scientists who put up with the ridiculousness of working for Elon and are excited to advance knowledge and potentially impact health.

The world of brain medicine has been stuck for three decades in a zone where we know (from neuroscience experiments in animals) that things like the symptoms of Parkinson's disease are complex, multidimensional, active processes whose treatment could be improved with high-bandwidth brain interfaces. But without those interfaces to use to develop the technology in people, there's no way to actually get to clinical trials. I think our hope with Neuralink is that Elon might fund them until they get past that point, and then there would be a virtuous cycle of data leading to discovery leading to treatment - but even at the ridiculous pace he's been driving them at, this is still a decadal problem. DARPA has invested $100Ms in this and still barely cracked it. And unlike the other areas he's worked in, there is no market for high-bandwidth brain-machine interfaces, and it's a real worry that he'll lose interest when he finally realizes how hard the problem is from a market perspective.
I'm not a fanboy; I think there are definitely issues at Neuralink, and a cowboy attitude about animal care may be one of them. One of my acquaintances who no longer works there is a vet, so I wouldn't be surprised. ALL THAT SAID, the way that rodent experiments ("1500 ANIMALS!!!") and regulatory strategy decisions ("WRONG SIZE DEVICES IN PIGS") are lumped together with a few actual tragic errors (vetbond leaking into some craniotomies and pigs implanted at the wrong spinal level), without acknowledgement that they have successfully implanted flexible probes in other pigs and monkeys for longer than has ever been done before (keeping in mind that animals are often euthanized when their data isn't good anymore), gives me the sense that this is a hit piece, and that the people piling on either have strong opinions about animal research in general or don't really understand what the company is doing.
Hey, thanks for the context; I have a better idea of the nuances involved here now. I do, however, want to apologize if this has caused you emotional stress, and I hope you get the space you need to process it.
I have an intuitive and therefore undereducated sense that anything to do with Musk gets hyper-escalated, and that it's a detriment to everybody.
Exactly - in my experience, the more complex the device, the fewer animals you need. Simple things like drugs and molecules need the highest number of animals for testing. So it's the exact opposite of what the above comment says.
You don't need to do an animal study to unit test your GPU code.
And yes, I also work on surgical robotics, where we have the latest GPU hardware to do the latest "AI", and again, we don't need to do animal studies to test that code. You do animal studies only once everything is seemingly perfect on the bench.
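To make the point concrete, here's a minimal sketch of what "bench testing" of signal-processing code looks like in practice - a unit test run entirely on synthetic data, with no animal involved. The `detect_spikes` routine and its threshold are hypothetical illustrations, not anyone's actual pipeline:

```python
import numpy as np

def detect_spikes(signal, threshold):
    """Return sample indices where the signal first crosses the threshold (rising edge)."""
    above = signal > threshold
    # A spike onset is a sample above threshold whose previous sample was not.
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

def test_detect_spikes_on_synthetic_data():
    # Synthetic 1-second, 30 kHz trace: Gaussian noise baseline with two injected "spikes".
    rng = np.random.default_rng(0)
    trace = rng.normal(0.0, 1.0, size=30_000)
    trace[10_000] += 50.0
    trace[20_000] += 50.0
    detected = detect_spikes(trace, threshold=25.0)
    assert list(detected) == [10_000, 20_000]

if __name__ == "__main__":
    test_detect_spikes_on_synthetic_data()
    print("bench test passed")
```

Tests like this cover the software side exhaustively; the in vivo work is for the things you can't simulate on a bench, like tissue response and long-term signal quality.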
I've worked for 10+ companies that have been acquired and killed by the Medtronics and J&Js of the world. That's the rule, not the exception.
They should start with a simpler device, get it on market, then go for the crazy stuff.
Prediction from someone who has worked on this stuff for 20 years - Neuralink will never have an on-market device.