
    “No computer, no AI can replace a human touch,” said Amy Grewal, a
    registered nurse. “It cannot hold your loved one’s hand. You cannot
    teach a computer how to have empathy.”
Sadly, this is a terrible tactic. The people they are protesting do not see a lack of empathy as a problem; they probably even see it as a feature, since empathy is a bug from their point of view.

They'd be much better off talking about how obviously bad the care being given is, and how easily patients will be able to sue for negligence. And I've heard, you know, not that I'd ever do this, but I've heard some other nurses are even starting to tip off patients that this is something they should sue over, and giving them pointers on what to ask for during discovery. Certainly not something I'd ever do, and I don't personally know any nurses who do this. I've just heard rumors. If you get my drift.

I'm not celebrating it, just calling it like it is.




One of Peter Lee's arguments in his book on AI in medicine[1] is that generative AIs (GPT-4) actually excel at expressing empathy. He gives a pretty compelling example in which GPT-4 empathizes very convincingly with a young girl who is having a medical issue. Empathy is part of the training set.

[1] https://www.amazon.com/AI-Revolution-Medicine-GPT-4-Beyond


This strikes me as something that will fade over time, though. We will eventually learn to recognize fake empathy. Once upon a time, when a corporation said "Your business is important to us and we're trying to get a support person on the line for you as quickly as possible," it was believable, and there was a good chance the customer believed it. Now, of course, we all have a pretty good idea it isn't true.

An AI cannot empathize. We don't even really want it to; who wants to build an AI that "really" experiences losing a limb or losing a daughter? Not anyone I want actually building AIs. So this isn't even about whether they're "really conscious" or any of those somewhat tedious debates; even if they are human-level AIs already, they literally can't empathize. See the recent article in which Meta's overly helpful AI volunteered an answer about how New York's public schools treated its (nonexistent) disabled child. Even if the text had been completely accurate, the AI had no standing to emit it.


Does anyone care what goes on in the mind of a person making a healthcare LLM product? I think we all know what they’re thinking: ride the hype train; get paid.

I think the nurse here is trying to appeal to human beings who are, more or less, afraid of dying alone.



