
And behind the scenes it was probably the equivalent of flipping a coin between red and white.


It was totally rule-based. More complex systems had a little more probabilistic stuff via Bayes and "certainty factors", but not this one.
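Certainty factors were combined with a simple algebra rather than real probability. A minimal sketch of MYCIN-style combination (the formula is the standard one; the example values are made up):

    def combine_cf(cf1, cf2):
        """Combine two MYCIN-style certainty factors, each in [-1, 1]."""
        if cf1 >= 0 and cf2 >= 0:
            return cf1 + cf2 * (1 - cf1)
        if cf1 < 0 and cf2 < 0:
            return cf1 + cf2 * (1 + cf1)
        return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

    # Two rules each lending moderate support to the same hypothesis:
    print(combine_cf(0.6, 0.4))  # 0.76 -- stronger than either rule alone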

I worked on another one for this company, called Vibration Advisor, which diagnosed odd noises in GM cars.


Being rules-based isn't necessarily a bad thing, or disingenuous. I develop healthcare AI products (ML/DL researcher), and we actually aim to translate our models into a rules-based engine: find a strong signal, interpret/understand the model well enough to embed that signal in the rules engine, look for a new signal in our models, rinse and repeat. We end up deploying a mix of rules-based and true ML-based models into production, though it may not be immediately obvious to the end user which type of model they're using.
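A minimal sketch of that distillation step, with made-up feature names and a toy dataset (the real pipeline is obviously far more involved):

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    X = rng.integers(0, 10, size=(500, 2))       # [age_bucket, admissions]
    y = (X[:, 1] >= 3).astype(int)               # toy "strong signal"

    # A shallow tree as an interpretable stand-in for the real model:
    clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
    print(export_text(clf, feature_names=["age_bucket", "admissions"]))
    # The printed thresholds transcribe directly into if/else rules
    # in the production rules engine.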


I didn't mean it as disingenuous - that's precisely the value that was sold, and if you could do the proper "knowledge engineering", it worked well. It's just interesting to me, having seen the previous turn of the AI hype wheel, how much is being repeated.

Another interesting thing was the transition from special-purpose hardware - Lisp machines - to C code on commodity platforms. A contrast with today's ML, which is moving in the other direction.


That's fair. Google's recent paper on predicting patient deaths is another good example of this: logistic regression with good feature engineering performed just as well as their deep learning models, and the logistic regression has the added benefit of being significantly more interpretable and, as a result, actionable.
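The interpretability point is easy to see in code. A toy sketch (the feature names are hypothetical, not from the paper):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 3))               # engineered features
    y = (0.8 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(size=1000) > 0).astype(int)

    model = LogisticRegression().fit(X, y)
    for name, coef in zip(["lactate", "age", "bp_drop"], model.coef_[0]):
        print(f"{name}: {coef:+.2f}")  # sign and magnitude read off directly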

It'll be interesting to see when specialized ML-focused silicon becomes readily available. Right now I find ML libraries that can run on blended architectures (any combination of CPUs and GPUs) much more exciting/impactful than TPUs. The ability to deploy on just about any cluster a customer may have available is huge.
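PyTorch is one example of that kind of library; the same model code runs unchanged whether the customer's cluster has GPUs or not:

    import torch

    # Use whatever accelerator is present; fall back to CPU-only.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = torch.nn.Linear(16, 1).to(device)
    x = torch.randn(8, 16, device=device)
    print(model(x).shape)  # torch.Size([8, 1]) on either backend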


In the near future customers won't have clusters; cloud providers will offer elastic, adaptive compute sharing.


From my experience (I currently work with several Fortune 100 health insurers/benefits managers, and have previously worked for another large insurer, a major academic medical center, and a large pharma company), healthcare organizations tend to be rather cloud-averse (most of our contracts very explicitly forbid us from using any form of third-party cloud computing). So while I agree that much of the heavy lifting will shift to the cloud (or already has), I expect health analytics to continue to favor on-premises solutions (GPUs are still pretty rare compared to CPU-based clusters, but are slowly becoming more common).


As someone in the field, what do you think about the idea of a fully automated "doctor"?

Are we close to it being technically feasible, leaving aside regulation and the interpersonal qualities doctors bring to the table?


Depends on the definition of "doctor".

The likes of INTERNIST, CADUCEUS, and MYCIN had been around, and demonstrably accurate, from the late '70s through the mid-'80s. MYCIN arguably even sparked the first AI boom. But there were ethical issues with computer-aided diagnosis that I'm not sure have been solved/overcome.

Perhaps the current startup generation can get past them with Zuckerberg, Kalanick and Holmes as role models. :)


It's funny how much complex "AI" really comes down to if and switch statements. "Utility AI" is popular for videogame AI right now - it's essentially weighted switch statements.
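A minimal sketch of the idea (the actions, weights, and agent state are all made up): score every action as a weighted sum of "considerations", then take the argmax.

    def choose_action(agent):
        scores = {
            "attack": 0.7 * agent["enemy_visible"] + 0.3 * agent["health"],
            "flee":   0.9 * (1 - agent["health"]),
            "patrol": 0.2,
        }
        # The "weighted switch": whichever case scores highest wins.
        return max(scores, key=scores.get)

    print(choose_action({"enemy_visible": 1.0, "health": 0.1}))  # "flee"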


Diagnosing vibrations is all the rage right now; it's just been rebranded as "predictive maintenance". The Industrial Internet of Things crowd is all hyped up about it.
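The core signal-processing move hasn't changed much either. A toy sketch (all values invented): FFT a vibration trace and watch for energy showing up at a known fault frequency.

    import numpy as np

    fs = 1000                                    # sample rate (Hz)
    t = np.arange(0, 1, 1 / fs)
    signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    print(f"dominant frequency: {freqs[np.argmax(spectrum)]:.0f} Hz")  # 50 Hz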



