Completely agree that the response is a trope at this point.
I've seen endless comment-section finger-wagging in response to science articles, offering such wisdom as "correlation isn't causation", that XYZ "isn't a panacea", that research into treatments to mitigate or avoid cancer "doesn't cure cancer", that 95% statistical confidence falls short of absolute proof, and any number of extraneous add-ons that make caricatures of the research and then bat those caricatures down.
It reminds me of this quip from the Onion, back in 2009:
"All that in just six years? Wow, that's so amazing. If you can't tell, I'm being sarcastic. And I'm being sarcastic because I don't understand the significance of the study."
The QZ article is narrowly correct but broadly misleading. It almost willfully ignores the momentum and direction.
In reality, radiologists will not be summarily replaced one day. They will get more and more productive as tools extend their reach. This can occur even as the number of radiologists increases.
Here's a recent example where Hinton was right in concept: recent AI work for lung cancer detection improved radiologists' performance in the reader study behind an FDA 510(k) clearance.
20 readers reviewed all 232 cases using both second-reader and concurrent first-reader workflows. After the reads under both workflows, five expert radiologists reviewed all consolidated marks. The reference standard was based on reader majority (three out of five), followed by expert adjudication as needed. As a result of the study's truthing process, 143 cases were identified as containing at least one true nodule and 89 as containing no true nodules. All endpoints of the analyses were satisfactorily met. These analyses demonstrated that all readers showed a significant improvement in the detection of pulmonary nodules (solid, part-solid, and ground glass) with both reading workflows.
(I am proud to have worked with others on versions of the above, but do not speak for them or the approval, etc)
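For readers curious what a truthing process like that looks like mechanically, here is a minimal sketch; the function name, the per-case True/False/None labels, and the adjudication callback are all my invention, not the study's actual protocol (which operates on per-lesion marks, not a single per-case label).

    # Toy sketch of a majority-vote reference standard (invented names
    # and structure; a simplification of the process described above).
    def reference_standard(expert_reads, adjudicate):
        """expert_reads: five expert labels per case, each True (true
        nodule), False (no nodule), or None (uncertain).
        adjudicate: callback invoked when no 3-of-5 majority exists."""
        yes = sum(1 for r in expert_reads if r is True)
        no = sum(1 for r in expert_reads if r is False)
        if yes >= 3:
            return True   # majority of experts marked a true nodule
        if no >= 3:
            return False  # majority saw no nodule
        return adjudicate(expert_reads)

    # Example: 3 of 5 experts agree, so no adjudication is needed.
    print(reference_standard([True, True, True, False, None],
                             adjudicate=lambda reads: None))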
The AI revolution in medicine is here. That is not in dispute by most clinicians in training now, nor, from all signs, by the FDA. Not everyone is making use of it yet, and not all of it is perfect (as with radiologists; just try to get a clean training set). But the idea that machine learning/AI is overpromising is like criticizing Steve Jobs in 2008 for overpromising the iPhone because it hasn't totally changed your life yet. Ok.
This is how it needs to be approached: AI systems and rule-based systems that work together with clinicians to enhance their decision-making ability instead of replacing them.
There were limited-scope CADe results showing improvements over average readers 20 years ago, and people were calling it a 'revolution' then, too. I'm not sure anything has really shifted; the real problems in making clinical impact remain hard.
Siemens Healthineers Imaging Intelligence | https://www.siemens-healthineers.com | Malvern, PA (Greater Philadelphia) | INTERNS | Onsite
Our R&D group delivers medical image/text tools (e.g. deep learning, NLP) for medical data analysis. We are well recognized for delivering cutting-edge intelligent solutions for Siemens 3D workstations and medical imaging scanners. Our group also has a strong publication record in top-tier journals and conferences, and several Siemens "inventor of the year" award recipients.
We offer well-paid internships lasting >= 3 months, with independent moonshot projects.
Responsibilities:
· Contribute to research projects developing intelligent solutions for medical imaging and text analytics
· Conduct fast prototyping and feasibility studies for exploratory clinical research
· Support the productization of research prototypes
We look for:
· Strong research capability in computer vision, machine learning, text analytics, and medical image analysis, proven by publications in journals/conferences
· Research experience in image/text analytics using large-scale, weakly supervised / unsupervised learning algorithms
· Research experience in medical image/text analysis across modalities (CT, MRI, PET, medical reports, etc.)
If you find this kind of work interesting, our AI group at Siemens Healthineers is hiring interns to carry out projects like this. We typically target machine learning or medical imaging PhD students, but are open to a variety of backgrounds. Please feel free to reach out via email.
I read the original study. I have a few thoughts. My partner is a physician and I am an AI researcher working in medicine. I think a lot about doctors as machine learning models, and about RCT results as loss terms in a complicated objective function.
What is the best learning rate for updating physicians (our models) from the results of RCTs (part of our loss)?
The authors reviewed all articles in three journals published (generally) between 2003 and 2017. They didn't, afaict (please point me to it if they did), review the time-to-correction, if any correction has been made. It takes some time before the results of an RCT end up in established practice; I'm actually surprised it's so small in many cases.
It's not like there's a database where the results of every RCT are immediately updated and the physician model is retrained overnight on the new data.
Even if there were, imagine if the learning rate (so to speak) were so high that every discipline immediately changed its published best practices on the basis of a single RCT?
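To make the analogy concrete, here is a toy sketch (every number invented, nothing clinical): treat the profession's belief about a treatment effect as a parameter nudged toward each new RCT's estimate by a learning rate.

    def update_practice(belief, rct_estimate, lr):
        """One 'gradient step' of practice toward a single trial's estimate."""
        return belief + lr * (rct_estimate - belief)

    trials = [0.9, -0.2, 0.1, 0.05]  # noisy effect estimates from single RCTs

    for lr in (1.0, 0.2):
        belief = 0.0  # current best-practice effect estimate
        for est in trials:
            belief = update_practice(belief, est, lr)
        print(f"lr={lr}: final belief {belief:+.2f}")
    # lr=1.0 means practice whipsaws to whichever trial came last (+0.05);
    # lr=0.2 averages the evidence but converges slowly -- the
    # over-reaction vs. slow-adoption tradeoff described above.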
Here's a cautionary paragraph from one of the excellent reversal studies they use:
Several limitations of the study warrant discussion. First, because we enrolled only 26% of eligible patients, our findings must be generalized cautiously. The most frequent reason that patients declined enrollment was a strong preference for one treatment or the other. Since patients' preferences may be associated with treatment outcome, our trial may be vulnerable to selection bias. Participating surgeons may not have referred potentially eligible patients because they were uncomfortable randomly assigning these patients to treatment; this form of selective enrollment may also create bias. [26] Second, because the trial was conducted in academic referral centers, the findings should be generalized carefully to community settings. Third, we did not formally assess the fidelity of the physical therapists or surgeons to the standard intervention protocols. Finally, our study was not blinded, since our investigative group did not consider a sham comparison group feasible. [0]
I'm less concerned about RCT-to-best-practice time than about best-practice-to-typical-physician-practice time. There is a cascaded model hanging off the 'complex RCT loss': from discipline-level published practice down to the individual physician treating patients. Compressing the time from RCT to individual physician is fraught with difficulties, but could be improved.
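Extending the toy sketch above (again, all numbers invented), the cascade might look like two chained updates, each with its own lag:

    # Evidence propagates RCT -> published guidelines -> individual
    # physician; each stage tracks the one above it with its own rate.
    evidence, guideline, physician = 1.0, 0.0, 0.0
    lr_guideline, lr_physician = 0.3, 0.3

    for year in range(1, 6):
        guideline += lr_guideline * (evidence - guideline)
        physician += lr_physician * (guideline - physician)
        print(f"year {year}: guideline {guideline:.2f}, physician {physician:.2f}")
    # The physician stage lags the guideline stage, so RCT-to-bedside
    # delay compounds; compressing either stage shortens the total.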
Finally, the RCT is the gold standard, but it's not perfect, and it doesn't always translate clearly to the individual physician's model of practice. Many best practices weren't established from RCTs either.
And an inconclusive result from an RCT is not the same thing as proof that there's no difference in outcomes, though a proper statistician can chime in there.
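A toy illustration of that last point, with made-up numbers (not from any study discussed above): a small trial can return a non-significant p-value while its confidence interval still spans clinically meaningful effects.

    import math
    from scipy import stats

    # Hypothetical two-arm trial: 50 patients per arm, 60% vs 50% response.
    n, p1, p2 = 50, 0.60, 0.50
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    z = diff / se
    p_value = 2 * (1 - stats.norm.cdf(abs(z)))
    lo, hi = diff - 1.96 * se, diff + 1.96 * se
    print(f"p = {p_value:.2f}, 95% CI = ({lo:+.2f}, {hi:+.2f})")
    # p ~ 0.31 is "inconclusive", yet the CI runs from roughly -0.09 to
    # +0.29: consistent with modest harm or a large benefit, and nowhere
    # near a demonstration of "no difference".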
we're building the standard model for bio. We're doing for biology what mathematics did for physics.
papers this month:
genomics: https://arxiv.org/abs/2509.25573
protein-language: https://arxiv.org/abs/2509.22853
longitudinal ehr: https://arxiv.org/abs/2509.25591
hello world: https://standardmodelbio.substack.com/p/introducing-standard...
we're humble and ambitious - we want to be the quiet backbone of biomedical ai but we want none of the glory of the final applications.
kevin@standardmodel.bio