
One of Peter Lee's arguments in his AI-in-medicine book[1] is that generative AIs (GPT-4, specifically) actually excel at empathy. He gives a pretty compelling example where GPT-4 empathizes very well with a young girl who is having a medical issue. Empathy is part of the training set.

[1] https://www.amazon.com/AI-Revolution-Medicine-GPT-4-Beyond

This strikes me as something that will fade over time, though. We will eventually learn to recognize fake empathy, just as once upon a time, when a corporation said "Your business is important to us and we're trying to get a support person on the line for you as quickly as possible," there was a good chance the customer believed it. Now, of course, we've all got a pretty good idea it's not true.

An AI cannot empathize. We don't even really want it to; who wants to build an AI that "really" experiences losing a limb or losing a daughter? Not anyone I want actually building AIs. So this isn't even about whether they're "really conscious" or any of those somewhat tedious debates; even if they are human-level AI already, they literally can't empathize. See the recent article where Meta's overly helpful AI answered as if it had a disabled child in New York's public schools. Even if the text were completely accurate, the AI had no standing to emit it.
