11 November 2025
The empathy protocol: what happens when AI delivers better bedside manner than doctors
I came across a very interesting article in The New York Times. It is paywalled, so I will summarise it as best I can. In the article, emergency room doctor Jonathan Reisman describes how his assumptions about job security in medicine were challenged by large language models like ChatGPT. During medical school, Reisman learnt that delivering bad news to patients follows a structured protocol with specific guidelines: avoid medical jargon, pause after sharing the diagnosis, use phrases like "I wish I had better news" rather than "I'm sorry," and assess what the patient already knows. Despite initially resisting the idea that compassion could be choreographed, he found that following this memorised script made difficult conversations feel more natural and human.
However, in one study, ChatGPT's responses to patient questions were rated as more empathetic and of higher quality than those written by doctors. Reisman then suggests that whether empathy comes from a human following a protocol or AI generating language based on patterns, what matters is the effective communication of care rather than the internal experience of the communicator.
This raises a fascinating question about the nature of empathy itself. Reisman's observation that doctors follow scripted protocols for compassion suggests that much of what we perceive as human empathy is already, in some sense, performative and learnt rather than purely spontaneous. If both humans and AI are essentially executing patterns (one through training and memory, the other through statistical prediction) then perhaps the distinction isn't as clear-cut as we'd like to believe.
However, I think there's something Reisman's argument overlooks: the capacity for genuine adaptability and moral reasoning in difficult situations. A doctor following a protocol can still deviate from it when something unexpected happens, draw on years of varied human experience, or sit in uncomfortable silence when that's what a patient truly needs. AI, for all its impressive pattern-matching, lacks the embodied experience of mortality, loss, and suffering that informs a doctor's presence at a patient's bedside. The real test isn't whether AI can match scripted empathy in controlled studies, but whether it can navigate the messy, unpredictable moments where protocols fail and human judgement becomes essential.
Perhaps what we should be asking isn't whether AI can replace human empathy, but whether our increasing reliance on protocols has already diminished the very humanity we're trying to preserve.