I think I could have been happy with the physician lifestyle, but I knew I wouldn't make good life decisions while getting through residency. I am glad to have survived (hence the career change).
I've spent two decades as an electrician, and as the story goes: it is what it is... but also what you make of it. Incredible good fortune keeps me from having to work full-time, but now that I've entered my forties I know I want to get into an air-conditioned office before too long.
Perhaps I'll first move to a country where healthcare isn't tied to full-time employment. Best of luck to you in your Interpretations.
A radiologist practicing in the USA can live anywhere in the world, but they must be trained in the USA, licensed in the state(s) where they read from, credentialed at the hospitals or facilities they read from, and carry medical malpractice insurance.
The point being, hospitals can't throw cheap Indian or Filipino radiologists at the worklist.
Not sure about other countries, but RAs do outsource work to RRAs and then sign the final product. This is relatively new, as Medicare/Medicaid didn't recognize RRAs as mid-level practitioners until 2019. I suspect the answer is yes for ARRT-certified RRAs, but it will be state-specific (similar to the answer above about RAs).
I agree with you, but here is where things get tricky:
Every day I see something on a scan that I've never seen before. And, possibly, no one has ever seen before. There is tremendous variation in human anatomy and pathology.
So what do I do? I use general intelligence. I talk to the patient. I talk to the referring doctor. I compare with other studies, across modalities and time.
Another way of saying this that might not trigger your alarm bells is that even a perfect image analyzer is not enough to replace a radiologist. The job consists of much more than just analyzing images.
I would say generally speaking that people who assume AI will replace somebody else's job believe that these jobs are merely mechanical and there is no high-level reasoning involved that would basically require AGI (when that comes about nobody is safe). So the model of the AI radiologist assumes the only job of a radiologist is to classify images, which is pretty vulnerable to near-future disruption.
I imagine, given the training involved, the job involves more than just looking at pictures? This is what I would like to see explained.
The analogy would be the "95% of code is written by AI" stat that gets trotted out, replacing code with image evaluation. Yes AI will write the code but someone has to tell the AI what to write which is the tricky part.
>> jobs are merely mechanical and there is no high-level reasoning involved
This is a very binary way of thinking about it. More usual is that components of many professions are mechanical and can be automated, while other components are not mechanical and thus harder to automate. Regardless, if some percentage of the mechanical work goes away, it is unlikely that human workers just work less. Instead, they will work just as much, and the overall demand for workers is reduced by that percentage.
Between 1985 and 2025 we went from programming in 8086 assembly language to high-level languages like Python, TypeScript, and Go. These automate a lot of the drudgery of programming in asm, so why has the overall demand for programmers not diminished (in fact, it has increased massively)?
Can you describe a driving scenario where the correct action couldn't be determined "mechanically"? Are you thinking of something like the trolley problem?
That is such a contrived phenomenon that it has taken decades of lobbying and destruction of political accountability to create the conditions where a person considered sane would touch that idea instead of immediately skipping over to driverless trains.
An incredibly wasteful gimmick; I don't get why the Americans are still struggling away at it now that the Chinese seem to have already done it.
I don't know where you've gotten the idea that cars are something you can "skip over" on the way to trains. Public transit is great, and the US should do it better, but it doesn't obsolete every other form of transit. The vast majority of people in every country who can afford access to cars use them regularly.
If (as acknowledged in the article) AI automates at least part of the work of radiologists (e.g. a tool that "saves her 15 to 30 minutes each time she examines a kidney image"), don't you fear that demand for radiologists will decline? Even if some are still needed, surely if a hospital needs X reports per day and Y radiologists are now sufficient to provide them rather than the current Z (Y < Z), that is something people considering your career should take into account?
On the other hand, how much of your confidence in not being replaced stems from AI not being able to do the work, and how much from legal/societal issues (a human needing to be legally responsible for the diagnoses)? Honestly the description in the article of what a radiologist does "Radiologists do far more than study images. They advise other doctors and surgeons, talk to patients, write reports and analyze medical records. After identifying a suspect cluster of tissue in an organ, they interpret what it might mean for an individual patient with a particular medical history, tapping years of experience" doesn't strike me as anything impossible for AI within a few years, now that models are multimodal and they can work with increasing amounts of text (e.g. medical histories).
No. There is no area of medicine where a boost in productivity will cause doctors to have idle time. Wait times may decrease, throughput may increase, diagnostic accuracy may improve, even costs may decrease (press X to doubt), but no, there will never be a case where we need fewer radiologists.
Wonderful insight that I'd never considered. Talk to almost anyone in America and they'll tell you about a health issue that they or their family are deferring due to lack of access. Waiting months or years to simply talk to a specialist, let alone find one that can help, is sadly the norm. Patients rightly feel it's a waste of their time so won't even seek treatment.
Remove that barrier to access and we won't see a shiny new streamlined medical system but rather a flood of new patients requiring even more bureaucracy to manage.
100% why I bothered to stick w/ OneMedical after the Amazon acquisition. I tried to get a "New Dr/New Patient" appointment with any local doctor I could find that took my insurance, and the closest I could get was 3 weeks away.
I'm not worried about medical folks having "time to clean" any time soon.
To be clear, we also need more medical professionals in general -- they're not keeping up with the population and it's making us all less healthy. Three to six months, or more, in the SF bay area for some critical appointments is really unacceptable, but there's not really an option given supply and demand.
I'm sure this will all get better with captain brainworms at the helm.
>> No. There is no area of medicine where a boost in productivity will cause doctors to have idle time. Wait times may decrease, throughput may increase, diagnostic accuracy may improve, even costs may decrease (press X to doubt), but no, there will never be a case where we need fewer radiologists.
I don't think this is how market participants think about it. If costs decrease, some group of radiologists will drop out of the market. We may not "need" fewer radiologists, but we're signaling that we need fewer of them by not paying them as much as before.
Much like I still "need" a photographer: short of weddings, I'm not willing to pay as much as before. I may well hire a photographer for a portrait, but it would have to be priced competitively with a selfie.
You're making the common mistake of thinking that healthcare is some sort of normal market subject to simple supply-and-demand economics. The reality is that supply is heavily constrained by CMS funding for residency slots, prices are (mostly) fixed by a few large payers, and patient demand is effectively infinite. There is a serious shortage of radiologists already, and it's getting worse as the population grows older and sicker. If AI tools make radiologists more productive, then more imaging studies will be ordered.
>> supply is heavily constrained by CMS funding for residency slots
I keep hearing this argument. Then I look at an insurance Explanation of Benefits statement. A radiologist might generate $1-2k/day in billings. If you are in a balance-billing state, whatever insurance doesn't pay gets forward-billed to the patient. On a standard 252-day work year, that is $250-500k/yr in billings. The average resident salary is $70k; let's assume $100k with benefits.
Of course there is plenty of overhead, but from the math I'm seeing, the average radiology resident is a $150-400k net revenue center. Is the overhead really greater than $150-400k/yr per resident?
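For what it's worth, the arithmetic spells out like this (all figures are my rough estimates from the EoBs I've seen, not verified data):

```python
# Back-of-the-envelope resident revenue math, using rough estimates.
daily_billings_low, daily_billings_high = 1_000, 2_000  # $/day in radiologist billings
work_days = 252                                         # standard work year
resident_cost = 100_000                                 # salary + benefits estimate

annual_low = daily_billings_low * work_days             # annual billings, low end
annual_high = daily_billings_high * work_days           # annual billings, high end
net_low = annual_low - resident_cost                    # net revenue per resident, low end
net_high = annual_high - resident_cost                  # net revenue per resident, high end

print(f"billings: ${annual_low:,}-${annual_high:,}/yr")  # billings: $252,000-$504,000/yr
print(f"net:      ${net_low:,}-${net_high:,}/yr")        # net:      $152,000-$404,000/yr
```

So even at the low end, billings alone would seem to cover the resident several times over.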
What am I missing? Why would a profit center need "CMS funding"? From what my doctor friends tell me, the real bottleneck is the unwillingness of the AMA and ABR to open up more radiology residency spots (an artificial supply constraint), with "CMS funding" a boogeyman and red herring.
Contrary to what your doctor friends might have told you, the AMA has no power to restrict the number of residency slots. They are a private membership organization with no regulatory or accreditation authority. At one time they did lobby Congress to restrict graduate medical education funding but have since reversed that position.
A minority of funding does come from sources other than CMS. Teaching hospitals are largely free to add more residency slots if they want to. The fact that most hospitals don't do this indicates that GME programs are largely unprofitable.
>> You cannot bill for a resident's work, without an attending signing off on it.
Isn't that the case in almost every industry? I was a management consultant and our partner signed off on the work before billing clients. Architects bill clients, but only when the lead architect signs off on the plans. Same for civil engineers. Same for magazines, where the editor signs off on most major pieces.
Faster CPUs, better screens, helpful IDEs, heck, even gen-AI itself did not reduce the need for software engineers, let alone decrease costs. As mentioned in another comment, the Jevons paradox implies that in certain industries, increased productivity may actually lead to more consumption (thereby propping up demand and cost), counterintuitive as that may be.
The only industries I can think of so far that have seen the opposite effect are translation and stock photography. Maybe also proofreading, but is that gen AI, or did spellcheckers already kill that branch?
> Unless I am mistaken, the work of radiologists is more defined.
When I had cancer I had to go to an interventional radiologist. Never heard of it before. But they use X-ray in real time (fluoroscopy?) to guide surgery. Pretty neat.
I've authored and contributed to several open source projects over the years, and I'm currently doing a deep dive in CAD/CAM after buying a CNC machine.
I help my practice where I can, and have written a little utility to make generating reports easier, but I would have to quit my job to take on the absolutely enormous task of radiology computer-aided diagnosis. And I need my job!
> I would have to quit my job to take on the absolutely enormous task of radiology computer-aided diagnosis
I think about this sort of thing every so often. The difference one person can make by creating a decent piece of software might be one of the best ways of getting a return on investment (in human hours, if not in money).
>> The only industries I can think of so far that have seen the opposite effect are translation and stock photography. Maybe also proofreading, but is that gen AI, or did spellcheckers already kill that branch?
If the cost of preventative scans goes down, demand will rise. Medical demand is incredibly constrained by price. People skip all kinds of tests they need because they can't afford them. The radiologists will have more work to do, not less.
There is a national shortage of radiologists in the US, with many hospital systems facing a backlog of unread cases measured in the thousands. And as the baby boomers start to retire, it's only going to get worse. We aren't training enough new radiologists, which is a different discussion.
As to your question on where my confidence stems from, there are both legal reasons and 'not being able to do the work' reasons.
Legal is easy, the most powerful lobby in most states are trial attorneys. They simply won't allow a situation where liability cannot be attached to medical practice. Somebody is getting sued.
As to what I do day to day, I don't think I'm just matching patterns. I believe what I do takes general intelligence. Therefore, when AI can do my job, it can do everyone else's job as well.
> We aren't training enough new radiologists, which is a different discussion.
About that, I think the AMA is ultimately going to be a victim of its own success. It achieved its goal of creating a shortage of medical professionals and enriching the existing ones. I don't think any of their careers are in danger.
However, long term, I think magic (in the form of sufficiently advanced technology) is going to become cost effective at the prices that the health care market is operating at. First the medical professionals will become wholly dependent on it, then everyone will ask why we need to pay these people eye-watering sums of money to ask the computers questions when we can do that ourselves, for free.
The trial lawyer angle doesn't seem accurate. Did trial lawyers prevent pregnancy tests from rolling out? COVID tests? Or any other automatic diagnostic, as long as it was reasonably accurate?
Not as far as I know. Once an automated diagnostic is reasonably accurate, it replaces humans doing the work manually. The same would be true of anything else that can be automatically detected.
No comment on whether radiology is close to that yet, although I don't think a few-million-param neural network would tell us much one way or another.
Are you aware of any states in the US that have made it harder to sue doctors for malpractice?
My point, which I made poorly, is this: there's a reason doctors who went to medical school in India and trained as radiologists in India can't read US cases remotely for a fraction of the cost of US-trained and licensed radiologists.
It's not because the systems to read remotely don't exist.
It's not because they're poorly trained or bad doctors.
No, it's because the AMA lobbies to protect American doctors' jobs, and refuses to license them to practice in the US. Of course you can still sue for medical malpractice regardless of citizenship. Trial lawyers have nothing to do with it, American doctors who don't want competition are entirely to blame.
A big wrinkle in AI evangelism is that proponents don’t understand the concept of human judgment as a “learned” skill - it takes practice and AI models / systems do not suffer consequences the way humans do. They have no emotions and therefore can not “understand” the implications of their decisions.
For context, generative AI music is basically unlistenable. I’ve yet to come across a convincing song, let alone 30 seconds worth of viable material. The AI tools can help musicians in their workflow, but they have no concept of human emotion or expression and it shows. Interpreting a radiology problem is more like an art form than a jigsaw puzzle, otherwise it would’ve been automated long ago (like a simple blood test). Like you note, the legal system in the US prides itself on “accountability” (said tongue in cheek) and AI suffers no consequences.
Just look how well AI worked in the United Healthcare deployment involving medical care and money. Hint: stock is still falling.
It's not really my genre, so my judgment is perhaps clouded. Also, I find the dumb lyrics entertaining and they were probably written by a human (though obviously an AI could be prompted to do just as well). I am a fan of unique character in vocals and I love that it pronounces "A-R-A" as "ah-ahr-ah", but the little bridge at 1:40 does nothing for me.
> A big wrinkle in AI evangelism is that proponents don’t understand the concept of human judgment as a “learned” skill
Which is ironic, given how much variation in output quality there is based on the judgment of the person using the LLM (work scope, domain, prompt quality, etc.).
You say "Legal is easy, the most powerful lobby in most states are trial attorneys."
The most powerful lobby in this case is the ABR which carefully constricts coveted residency spots in Radiology to create an artificial scarcity and keep up incomes. It is the opposite of, say, technology, where we have no gatekeeper and supply increases.
The ABR will say that Medicare doesn't fund enough residency spots, but all you need to do is look at an EoB and see that a week of residency billings covers the entire cost of the resident.
If a teaching hospital with an existing radiology residency program wanted to add one more spot, does the ABR have any power to stop them? If Medicare offered more funding to a teaching hospital to add more spots would the hospital turn it down?
I worked on an autocontouring model, but we could not reach high enough accuracy for it to be adopted commercially. The algorithm would work for some organs but would totally freak out on others. And if the anatomy was out of the norm, it would not work at all. This was 5 years ago; I see Siemens [0] has a similar tool. I remember shadowing a dosimetrist contouring all the organs-at-risk (OARs), and it took about 3-4 hours to contour one CT image of the thoracic region. Do you know how much better the autocontouring tools have become?
Maybe in the West. However, more practical countries like China, with a huge population and a clear incentive to provide healthcare to that population at reduced cost, will have incentives to balance accuracy and usefulness in a better way.
My personal opinion is that a lot of medical professionals are simply gatekeeping at this point in time, using legal definitions to keep moving the goalposts.
However this is a theme that will keep on repeating in all domains and I do feel that gradual change is better than sudden, disruptive change.
AI in healthcare is going to add so many layers of indirection for malpractice lawsuits. You'll spend years and lots of $$$ trying to figure out who the defendant would ultimately be, only for it to end up being a LLC that unfortunately just filed for bankruptcy.
The worry isn't that you'll find an AI sitting in the chair a radiologist used to sit in. It's that the entire field of radiology gets reduced to a button click in software.
The other doctors will still be there for you to sue.
So the question is, "what if people bought an X-ray machine (affordably available on Amazon) and started using it without training in radiological safety?"
Here in New Zealand you get a licence after purchasing the equipment, and require the machine spec, date of manufacture and serial number to get the licence.
It does require a radiologist name on the paperwork as they are the one with the radiation licence. However it is possible to get one if not a radiologist (dentists do, and radiographers have).
Being licensed to use the equipment is the hard bit, as insurance companies require accreditation which is hard to get.
Isn't that 90% of what getting a scan is right now? You'll still need the "shop" to provide the equipment and a tech with the training to know what/where to scan, but you might get the results a bit faster. Are the radiologists the chokepoint now, or is it the techs?
That's the way it already works in many cases, just like with outpatient surgery clinics and other outpatient specialist practices. There is a critical difference, though, because radiology also has sub-specialities and someone focused on orthopedics probably isn't the one you'd want reading your cardiology images, nor would you want your ophthalmologic radiologist trying to diagnose a brain CT.
I initially read this as ‘Medical Imaging tech who is an entrepreneur.’ I now think you’re a radiologist?
Any particular interests?
Fixing the shiite RIS/PACS world and the hell of HL7 would make me happy. I'm an MR tech, and just finished trying to make a scan description 'MRI Cervical Spine + Right Brachial Plexus'
Are AI models able to detect abnormalities that even an experienced radiologist can't see, i.e. something that would look normal to a human eye but the AI correctly flags for investigation? Or are all AI detections 'obvious' to human eyes and simply a confirmation? I suspect the latter, since the model was trained on human-annotated images.
For example, let's say I'm looking at a chest x-ray. There is a pneumonia at the left lung base and I am clever enough to notice it. 'Aha', I think, congratulating myself at making the diagnosis and figuring out why the patient is short of breath.
But, in this example, I stop looking closely at the X-ray after noticing the pneumonia, so I miss a pneumothorax at the right lung apex.
I have made a mistake radiologists call 'satisfaction of search'.
My 'search' for the patient's problem was 'satisfied' by finding the pneumonia, and because I am human and therefore fundamentally flawed, I stopped looking for a second clinically relevant diagnosis.
An AI module that detects a pneumothorax is not prone to this type of error. So it sees something I did not. But it doesn't see something that I can't see. I just didn't look.
> I have made a mistake radiologists call 'satisfaction of search'.
Ah, now I have a name for it.
When I've chased a bug and fixed a problem I found that would cause the observed problem behavior, but haven't yet proven the behavior is corrected, I'm always careful to specify that "I fixed a problem, but I don't know if I fixed the problem". Seems similar: found and fixed a bug that could explain the issue, but that doesn't mean there's not another one that, independently, would also cause the same observed problem.
I've been going to RSNA for over 25 years. In all that time, the best I've seen from any model presented to me was the "smack the radiologist on the head and say, 'you dummy, you should have seen that!'" model.
That is, the models spot pathologies that 99.9999% of rads would spot anyway if not overworked, tired, or in a hurry. But, addressing the implication of your question, the value is actually in spotting a pathology that 99.9999% of rads would never spot. In all my years developing medical imaging startups and software, I've never seen it happen.
I'm sure it's a matter of training data, but I don't know if it's a surmountable problem. How do you get enough training data for the machine to learn and reliably catch those exceptions?
One Harvard study trained an AI that could reliably determine someone's race from a chest X-ray. AIs can be trained to see things we can't.
The difficulty is likely in making a good training dataset of labeled images with pathologies radiologists couldn't see. I imagine in some cases (like cancer), you may happen to have an earlier CT scan or X-ray from the patient where the pathology is not quite yet detectable to the human eye.
I suspect that radiologists could identify race from a plain chest x-ray if they were given the patient’s race and asked to start noticing the difference. They just aren’t doing it because, if it’s important, you can just look at the patient.
There are a lot of things in medicine that aren’t in literature, but are well-known among certain practitioners. I’m an anesthesiologist and practice in an area with a large African-American population. About 10-15% (rough guess) of people of West African descent will have a ridiculously strong salivary response to certain drugs (muscarinic agonists). As in, after one dose their mouths will be full of saliva in seconds. We don’t have East Africans for comparison, so I can’t say it’s a pan-Bantu thing, but I have seen it in a Nigerian who lived here. Not in the literature, but we all know it. I had a (EDIT: non-anesthesia) colleague ask me about a hypersecretory response from such a drug. I said, oh, was he black? Yes, how did I know? Because we give those drugs all the time and have eyes. It’s very rare to see in European-descended populations.
It's possible humans could learn to do this, but I'm skeptical they could do it this well. According to the article, human experts couldn't tell race from the chest X-rays, and the researchers couldn't figure out how the AI was detecting it. They fed it corrupted images to figure out what information it was relying on. It was robust against both low-pass and high-pass filters.
That's the reason rads never train to determine race from a chest x-ray.
BTW, models don't need to train that either. Because if it's important, it's recorded, along with a picture, in the guy's medical record.
I'd just like to gently suggest that determining someone's race from an X-Ray instead of, say, their photograph, is maybe not how we should be burning training cycles if we want to push medical imaging forward. Human radiologists had that figured out ages ago.
You're being snide about the Harvard/MIT researchers being idiots doing useless research because they don't realize radiologists can just look at the patient's face, but that's obviously not what happened. They were trying to see if AI could introduce racial bias. They're not releasing an AI to tell radiologists the race of their patients.
According to the article, human experts could not tell race from chest X-rays, while the AI could do so reliably. Further, it could still determine race when given an X-ray image passed through a low-pass or high-pass filter, showing that it's not relying solely on fine detail or on large features to do so.
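That filter experiment is straightforward to sketch. Here's a minimal numpy version of the idea: split an image into low-frequency and high-frequency halves, then run the same classifier on each to see whether its signal survives. The study's actual model and filter parameters aren't specified here, and `toy_predict` is just a stand-in for the real network:

```python
import numpy as np

def frequency_filter(img, cutoff, keep="low"):
    """Keep only spatial frequencies below (low-pass) or above (high-pass) a radius cutoff."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.hypot(yy, xx)  # distance from the DC component at the center
    mask = radius <= cutoff if keep == "low" else radius > cutoff
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

def toy_predict(img):
    """Placeholder classifier; the real study used a trained CNN."""
    return float(img.mean() > 0.5)

rng = np.random.default_rng(0)
img = rng.random((64, 64))  # stand-in for a chest X-ray

low = frequency_filter(img, cutoff=8, keep="low")    # blurry: coarse structure only
high = frequency_filter(img, cutoff=8, keep="high")  # texture and edges only

# The two masks partition the spectrum, so the halves sum back to the original.
# Running the model on `low` and `high` separately tests whether its signal
# survives the loss of fine detail or of large-scale structure, respectively.
```

If a model keeps predicting accurately on both halves, its signal is spread across the spectrum rather than living in one band, which is roughly what the researchers reported.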
2. because it isn't a competitive market (basically, the American Board of Radiology controls standards of practice and can slow down technologies seen as competitive with human doctors)?
3. or perhaps 1 doesn't happen because outsiders know the market is guarded by the market itself?
What's accurate in this article? It's very vague; it can be tl;dr'd into "we won't go anywhere, although AI does more and more of our work".
> Radiologists do far more than study images. They advise other doctors and surgeons, talk to patients, write reports and analyze medical records. After identifying a suspect cluster of tissue in an organ, they interpret what it might mean for an individual patient with a particular medical history, tapping years of experience.
AI will do that more efficiently, and probably already does. "Tapping years of experience" is just data in the training set.
> A.I. can also automatically identify images showing the highest probability of an abnormal growth, essentially telling the radiologist, “Look here first.” Another program scans images for blood clots in the heart or lungs, even when the medical focus may be elsewhere.
> “A.I. is everywhere in our workflow now,” Dr. Baffour said.
> “Five years from now, it will be malpractice not to use A.I.,” he said. “But it will be humans and A.I. working together.”
Maybe you'll be able to happily retire because of inertia, but overall it looks like an elevator operator's job.
>> Therefore, when I lose my job to AI, so does everyone else.
Not quite right? Some fields are licensed, regulated, and have appointments -- and others are not. AI is most keenly focused on fields w/o licensure barriers
On the one hand, you’re totally right. The job takes general intelligence.
On the other hand, a lot of jobs take general intelligence. You’re right about that too.
It’s difficult to guess the specifics of your life, but: maybe you’ve engaged a real estate agent. Some people use no real estate agent. Some have a robo agent. No AI involved. Maybe you have written a will. Some people go online and spend $500 on templates from Trust & Will, others spend $3,000 on a lawyer to fill in the templates for them, some don’t do any of that at all. Even in medicine, you know, a pharma rep has to go and convince someone to add their thing to the guidelines, and you can look back at the time between the study and adoption as, well people were intelligent and there was demand, but doctors were not doing so and so thing due to lack of essentially sales. I mean you don’t have to be generally intelligent to know that flossing is good for you, and yet: so few people floss! That would maybe not put tons of dentists out of business. But people are routinely doing (or not doing) professional services stuff not for any good (or bad) reason at all.
Clearly the thing going on in the professional services economy isn’t about general intelligence - there’s already lots of stuff that is NOT happening long before AI changes the game. It’s all cultural.
If you’ve gotten this far without knowing what I am talking about… listen, who knows what’s going to happen? Clearly a lot of behavior is not for any good reason.
How do you know where the ball is going to go for culture? Personally I think it’s a kind of arrogant position: “I’m a member of the guild, and from my POV, if my profession is replaced, so is everyone else’s.” Arrogance is not an attractive culture, it’s an adversarial one! And you could say inertia, and yet: look who’s running the HHS! There are kids right now, that I know in my real life, who look like you or me, who went to fancy Ivy League school, and they are vaccine skeptical. What about inertia and general intelligence then? So I’ll just say, you know, putting yourself out here on this forum, being all like, “I will AMA, I am the voice,” and then to be so arrogant: you are your own evidence for why maybe it won’t last 10 years.
I jumped into this thread to share my thoughts, and my thoughts alone, because I'm not sure there are a lot of radiologists on HN. I certainly don't speak for all radiologists.
But I would submit to you that rapid, radical changes to the practice of medicine are rare, if not impossible.
I don't think it will go away as long as we have third party paying for the costs and AMA controlling competition.
If I had to pay $500 or whatever to get a scan, and instead I could get my data, send it to a model and only follow up if it came back bad, I would. But now someone else pays and there are laws and regulations that prevent people from controlling their data, or at least make it difficult. Kind of weird I have a file on me that I have never seen.
Nope. You have an absolute legal right to obtain copies of your medical images and other data. Who paid for it is irrelevant. The provider can charge you a nominal fee for copying the files but they can't keep it from you.
Respectfully, it doesn't matter what you expect or think. What matters is this:
- If the law allows AI to replace you.
- If the hospital/company thinks [AI cost + AI-caused lawsuits] will be less expensive than [your salary + lawsuits caused by you].
I'm almost in the same situation as you are. I have 22 years left until retirement and I'm thinking I should change my career before I'm too old to do it.
The original author of the paper about the technological singularity [1] defines it as simply the point where predictions break down.
If AI gets to the point where it is truly replacing radiologists and programmers wholesale, it is difficult to tell anyone what to do about it today, because that's essentially on the other side of the singularity from here. Who knows what the answer will be?
(Ironically, the author of that paper, being also a science fiction author, is also responsible for morphing the singularity into "the rapture for nerds" in his own sci-fi writing. But I find the original paper's definition to have more utility in the current world.)
I don't think robotics is progressing at nearly the same pace as AI so for a while there will still be a bunch of manual labor for us to fight over. :-)
Still, I have enjoyed my time in radiology. It is insanely challenging to do well, and has been very rewarding.