Hacker News
New AI System Predicts Seizures (ieee.org)
71 points by riffraff on Nov 16, 2019 | 32 comments



Time will tell, but as a machine learning engineer, when you see results this good, it's more probable that a mistake was made. They could be reporting the training error of an overfit model, or data leakage could be occurring due to an improper train-test split of the data.
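For illustration, here's a minimal sketch (window counts invented) of the kind of leakage a naive random split causes when several EEG windows come from the same patient, versus holding out whole patients as the study's 14/8 split does:

```python
import random

# Hypothetical data: 22 patients, 100 EEG windows each. A naive random
# split of windows can put windows from the same patient (even the same
# recording) on both sides, leaking patient-specific signal into "test".
windows = [{"patient": p, "window": w} for p in range(22) for w in range(100)]

# Leaky split: shuffle individual windows.
random.seed(0)
shuffled = windows[:]
random.shuffle(shuffled)
leaky_train, leaky_test = shuffled[:1800], shuffled[1800:]
overlap = {w["patient"] for w in leaky_train} & {w["patient"] for w in leaky_test}

# Proper split: hold out whole patients.
train_ids, test_ids = set(range(14)), set(range(14, 22))
clean_train = [w for w in windows if w["patient"] in train_ids]
clean_test = [w for w in windows if w["patient"] in test_ids]

print(len(overlap))  # patients appearing on both sides of the leaky split (nearly all)
print(len({w["patient"] for w in clean_train} & {w["patient"] for w in clean_test}))  # 0
```

With the leaky split, test accuracy partly measures how well the model memorized individual patients, not how well it predicts seizures in new ones.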

Also, it is definitely appropriate to use the term AI in this case. AI is not a technical term so it's really in the eye of the beholder, but I think it's safe to say that ML is a subset of AI. Perhaps people are conflating AI with AGI?


It's a 14-person training set with an 8-person test set, so my guess is that it can pretty accurately predict seizures in the small group of people it was trained on. Whether the model could be generalized for a useful broad deployment is unclear. It still requires many electrodes attached to the scalp, so there is still a ways to go before it can be integrated into a watch, for example.


They acknowledge this in the article: the system will have to be fine-tuned via transfer learning for every patient. Which IMO is ideal, but you will need training/validation data, which in this case sounds like it'd be extremely expensive. Furthermore, the system could automatically get better over time _for that patient_, if properly designed and fed clean training samples.


I assume it’ll end up in an implant like RNS


Yeah, watches can't even detect whether you are sleeping or not. The products on the market are mainly accelerometer-based and aren't really reliable if, e.g., you are awake but don't move.


They're too good and also bad at the same time. Most laypeople don't realize that "99.6%" accuracy means a 1-in-250 chance of making an error. If you run an inference every second, that's 14.4 errors per hour. Now granted, some of those errors are false negatives, but I'm not sure which is worse in this case. Depending on which action is expected after receiving an alarm, this could render the device completely useless.
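As a quick sanity check of that arithmetic (one inference per second is the parent's assumption; the paper's real inference cadence may differ):

```python
# Error budget implied by "99.6% accuracy", assuming one inference
# per second (a hypothetical rate, not stated in the article).
accuracy = 0.996
error_rate = 1 - accuracy              # 0.004, i.e. 1 in 250
inferences_per_hour = 60 * 60          # one per second
errors_per_hour = error_rate * inferences_per_hour

print(round(1 / error_rate))           # 250
print(round(errors_per_hour, 1))       # 14.4
```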


I think you meant AI is a subset of ML. Agree AGI is a red herring here. Mostly this is a marketing problem, but the pattern rec., ML, "AI", etc. branding has been shifting around for decades.

This set is far too small to say anything strong about the results, but the problem is interesting anyway.


ML is a subset of AI therefore even though the system is using ML you can call it AI.

Similarly sculpture is a subset of visual arts therefore when sculpting you may claim to be creating a work of art.


I think I disagree in two regards. One, that unless we stretch the meaning of AI to extremes, there is plenty of ML that is not AI. I guess that presupposes we sort out the stats vs. ML issues, but that's what you get with all these fuzzy terminologies floating around. So it isn't really useful to think of it as a subset, in my opinion.

Secondly, while I know of a small amount of serious non-ML AI and AGI work being done (mostly historically), it has almost nothing to do with the common parlance today, which is nearly entirely ML. Is this what you mean when you talk about non-ML AI, or is there something I'm missing?

For what it's worth, I'm happy with thinking of them as overlapping, although I do think the AI terminology is almost useless at the moment, and ML is slightly better defined.


I was not referring to the inconsistent way in which these terms are used to sell products and obtain funding for startups, but to their academic definitions.

While most recent successes in the field of AI have been brought about by advances in the subfield of Machine Learning, even today's most advanced AI systems have components that are not Machine Learning (e.g. AlphaGo still requires tree search techniques from "classic AI" to work).

In the definitions below, AI refers to any "intelligent agent", whereas ML refers to the subset of techniques that achieve this through learning/experience/data.

ML: Tom M. Mitchell provided a widely quoted, more formal definition of the algorithms studied in the machine learning field: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E." [0]

AI: "Computer science defines AI research as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals." [1]

[0] https://en.wikipedia.org/wiki/Machine_learning#Overview [1] https://en.wikipedia.org/wiki/Artificial_intelligence#Defini...


OK, we are bogged down in semantics, but I see where you are coming from. I don't buy the proper-subset argument for the reasons above (i.e. I don't buy the broadening of AI to fit that idea of "intelligent agent", as it includes too many things that don't really fit, imo).

Unless something has changed radically since I stopped paying as much attention, there is no actual agreement on these terminologies, at least broadly, in academic circles.

I certainly agree many current systems with a core ML component include other techniques from lots of areas including what you call "classic AI" as well as optimization, etc., but the ML is still the fundamental part of nearly everything recent I've seen. As pretty much every successful system of this type is a hybrid in the sense you mean, I don't find differentiating them from some putative "pure ML" approach very interesting.

There was some good work in very different approaches in the 70s through 80s, but that seems to have tapered off in the 90s really. I'm not very current though and would love to hear of newer interesting things in that vein.


The sample size is 22 patients.

Also, I've noticed the definition of the word "AI" has grown to encompass pretty much any type of software that does something with data.


> Also, I've noticed the definition of the word "AI" has grown to encompass pretty much any type of software that does something with data.

Afaik AGI research is still a separate thing, so it's not really that misleading to use "AI" for applications like this where machine learning is involved, resulting in a very specialized "artificial intelligence" that can spot otherwise hidden patterns.


Also accuracy is not a good measure for evaluating detection of rare events such as seizures.
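A toy example (all counts invented) of why accuracy misleads on rare events: a "model" that never predicts a seizure still scores high.

```python
# 1000 EEG windows, only 4 of which contain a seizure; the degenerate
# classifier always predicts "no seizure". Counts are made up.
tp, fp = 0, 0      # true positives, false positives
fn, tn = 4, 996    # false negatives, true negatives

accuracy = (tp + tn) / (tp + fp + fn + tn)
sensitivity = tp / (tp + fn)

print(accuracy)     # 0.996, looks impressive
print(sensitivity)  # 0.0, yet it misses every single seizure
```

This is why seizure-detection papers typically report sensitivity plus false positives per hour instead of raw accuracy.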

(https://arxiv.org/pdf/1812.01388.pdf)


I think it's ok to call this AI, but I agree that it's a little misleading. I think a better way to describe much of AI these days is pattern recognition. Incredible pattern recognition, to be clear, but pattern recognition.

Agreed about the sample size. It's not clear from the article whether the high accuracy was based on testing against the training set, too. I assume not, as that would be exceptionally sloppy. But it's not clear.


What, do you think that's too low? Sounds pretty normal for this type of study.

Of course, that does mean the results don't necessarily generalize to the whole population, which is why you do eventually need much larger sample sizes when you're working toward FDA approval.


It seems a little premature to proclaim "Near-Perfect Accuracy" on such a small sample size. It sounds to me like they may have overfit their model.


> don't necessarily generalize to the whole population

In my experience it is far more likely to mean "doesn't generalize well, and moving to a reasonable training set will reveal flaws in the model."


> Also, I've noticed the definition of the word "AI" has grown to encompass pretty much any type of software that does something with data.

That's definitely the case in conferences like O'Reilly Strata. They seem to be 100% focused on AI/ML, which seems really limiting. I'm starting to skip attendance until things get a bit more balanced.


Good catch, I don’t think they mention that in the main article (at least in reader view).

In this case I guess the “AI” label is relatively well applied (compared to the scope of ALL things I’ve seen it haphazardly slapped on), but yeah, still - people need to stop calling any kind of data analysis “AI”.


"A false positive rate of 0.004 per hour" is a sly way of saying "a false positive roughly every 10 days."
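Assuming the rate is per hour of continuous monitoring, the conversion is:

```python
# Convert the reported false-positive rate into an intuitive interval.
fp_per_hour = 0.004
hours_per_fp = 1 / fp_per_hour      # 250 hours between false positives
days_per_fp = hours_per_fp / 24     # about 10.4 days

print(round(days_per_fp, 1))        # 10.4
```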


That's about 3 false positives in a month; depending on how many seizures a patient suffers, this might actually not be that bad. Afaik a false positive doesn't have any direct consequences except "be careful and keep medication ready"?


How does this become a treatment, assuming the results actually hold up? Is it possible to prevent seizures with a timely dose of a powerful epilepsy medication? Are there portable EEG rigs that can produce sufficiently powerful readings?


I assume it’d be used in an implant like the RNS, which is used to treat (mostly) focal seizures. It’s a small device implanted in your skull to record localized EEG and provide neurostimulation to quell burgeoning seizures. Currently, it’s periodically adjusted and individualized by neurologists based on the EEG it records. Maybe this could speed up or automate that process.


At least it lets you get into a safe place and position. Perhaps you can even get someone to watch over and be ready to call 911 if it comes to that.

Edit: Perhaps this hints at some medication that is not currently viable but could be administered if the seizure was predicted: "Notably, seizures are controllable with medication in up to 70 percent of these patients."


Correct - a friend had a seizure on the stairs and was lucky to get away with just a cut lip...


It's possible to make anti-epileptic drugs work rapidly, some in 30 minutes, but it requires an injection.


It's interesting that service animals like dogs or monkeys can sometimes be trained to anticipate seizures. And in many of these cases it's not clear what the signal is: a change in activity, odor, etc.


Work proceeds on natural intelligence system for predicting seizures: https://www.nature.com/articles/s41598-019-40721-4


Actual paper, linked from the article:

https://ieeexplore.ieee.org/document/8765420

"Efficient Epileptic Seizure Prediction Based on Deep Learning"


Some questions not addressed by the article:

* what is the input to the model? It sounds like it's EEG, but how practical is it to collect those data on a day-to-day basis?

* how far ahead of time are seizures predicted?

In other words -- how practical is this research?


"predict the occurrence of seizures up to one hour before onset"

"We are currently working on the design of an efficient hardware [device] that deploys this algorithm, considering many issues like system size, power consumption, and latency to be suitable for practical application in a comfortable way to the patient"



