Can AI Surrogates Help Make Life-or-Death Medical Decisions? Experts Weigh In on Ethics, Accuracy, and Risks

As artificial intelligence (AI) becomes more sophisticated, healthcare researchers are exploring its potential role in helping doctors and families make life-or-death decisions for incapacitated patients. Among the most provocative developments in this space is the concept of AI surrogates—digital models trained to reflect a patient’s values, preferences, and likely choices during medical crises when they cannot speak for themselves.

Although no hospital currently uses AI surrogates in real-world scenarios, research in the field is progressing. Some experts believe it’s only a matter of time before AI systems are integrated into end-of-life decision-making. However, others raise significant ethical, cultural, and clinical concerns, warning that AI might one day be trusted too heavily, especially when human surrogates are unavailable.

This detailed analysis explores where the technology stands today, who’s developing it, and what experts from the medical and bioethics communities are saying about its potential risks and benefits.


The Current State: No AI Surrogates in Hospitals—Yet

Dr. Muhammad Aurangzeb Ahmad, a resident fellow at UW Medicine, the University of Washington’s health system, is among the first researchers actively developing AI surrogates for potential use in hospitals. His work, based at Harborview Medical Center in Seattle, aims to build predictive models that could estimate a patient’s likely medical preferences in situations like cardiac arrest or traumatic injury.

Ahmad is still in the conceptual and testing phase. His current models analyze retrospective patient data such as injury severity, demographics, medical history, and prior care choices. Importantly, no patients have yet interacted with the AI system, and any future use would require rigorous approval through institutional review boards (IRBs).

“There’s still a long way to go,” Ahmad told Ars Technica. “This is very brand new, so very few people are working on it.”

Ahmad envisions a future where patients engage with AI systems over the course of their lives, refining their digital surrogates to better represent their evolving values and health-related beliefs.


What Would an AI Surrogate Do?

In critical care scenarios, when a patient can’t communicate, healthcare teams must often make rapid, life-altering decisions. Typically, they rely on human surrogates—family members or legal representatives—who ideally understand the patient's wishes. In the absence of such surrogates or clear advance directives, the decision becomes even more complex.

Ahmad hopes that AI surrogates could reduce emotional stress and improve accuracy in such moments. The goal is not to replace human decision-makers but to offer an additional, data-driven perspective—especially when patient preferences are unclear.

Still, this technology raises key questions: Can AI truly reflect an individual’s deeply personal values? Can it account for changing preferences or the complex social context in which end-of-life decisions are made?


The Ethical Dilemma: Predicting Preferences or Simulating Autonomy?

One of the most persistent concerns from ethicists and physicians is that AI cannot fully replicate the human experience. Dr. Emily Moin, an ICU physician in Pennsylvania, warns that relying too heavily on AI surrogates could backfire—especially when dealing with patients who have no clear preferences or whose views may evolve over time.

“These decisions are dynamically constructed and context-dependent,” Moin said. “If you’re assessing the performance of the model by asking someone after they’ve recovered what they would have said before, that’s not going to provide you with an accurate representation.”

Moin is also concerned that AI predictions could be treated as more authoritative than human insight, especially in fast-paced hospital environments. She argues that conversations with loved ones are a critical part of ethical care, and AI might inadvertently discourage those discussions.

“Imagine a doctor rounds on 24 critically ill patients in one day, and the family is reluctant to talk. All parties might default to whatever the AI suggests,” she said. “That’s dangerous.”


Could AI Surrogates Help When There Are No Human Surrogates?

One of the intended benefits of AI surrogates is assisting in cases where no human surrogate is available—unrepresented patients such as homeless individuals, immigrants without nearby family, or young people with no prior directives.

However, this is also the demographic most vulnerable to harm from errors or bias in AI systems.

“You’ll never be able to know the so-called ground truth for those patients,” Moin noted. “So you can’t assess whether the AI is right or wrong. That’s a huge risk.”


The Research So Far: Promising Accuracy, But Limited Context

A recent study led by Dr. Georg Starke in Switzerland tested three AI models trained on survey data from individuals over 50. These models predicted CPR preferences with up to 70% accuracy—better than many human surrogates.

Still, Starke emphasizes that accuracy alone doesn’t equate to ethical viability.

“Human surrogates remain essential for context,” he said. “Decisions can’t just be about demographic pattern-matching. They must reflect a person’s moral reasoning and lived experiences.”

Ahmad agrees and is working on a pre-print paper that outlines different fairness frameworks in AI. He stresses that individual fairness requires factoring in patients’ spiritual beliefs, cultural identities, and ethical values—not just surface-level data.

“Two patients may look identical on paper but be morally and culturally worlds apart,” he said.


Doctors Raise Red Flags: AI Can’t Replace the Human Touch

Many clinicians see AI surrogates as potentially redundant.

Dr. Teva Brender, a hospitalist at a veterans’ medical center in San Francisco, argues that a good doctor already does what an AI surrogate might be programmed to do: talk to the family, ask about the patient’s life, and make decisions collaboratively.

“Do you really need an AI to ask, ‘Who was this person? What brought them joy?’” Brender asked. “I’m not so sure.”

Moreover, he and others are deeply skeptical about how AI-generated decisions might be perceived. A “black box” algorithm that suggests withholding resuscitation, for example, could confuse or alarm families—especially if the logic behind the decision is unclear.


Future Risks: Manipulation, Bias, and Over-Reliance

Ahmad is wary of future systems that mimic patient voices or emotions too closely. A chatbot speaking in a deceased patient's voice, for instance, could cross ethical lines and manipulate decision-makers emotionally.

There’s also the risk of bias—something virtually all experts agree hasn’t been sufficiently studied in this area. Demographic biases, cultural blind spots, or limited datasets could all influence how AI models make decisions.

To reduce these risks, researchers suggest that AI systems should never make final decisions, only offer supporting data. Outputs should be explainable, and any serious conflicts should automatically trigger ethics committee reviews.


Conclusion: Not a Replacement, But a Decision Aid

The consensus among physicians and ethicists is clear: AI surrogates may become valuable decision aids, but they must not replace human judgment. They can assist with, but never absolve us of, the weighty responsibility of end-of-life decisions.

“AI will not absolve us,” Ahmad writes in his upcoming paper. “The fairest AI surrogate is one that invites conversation, admits doubt, and leaves room for care.”

Whether AI surrogates will be accepted by hospitals—and by society—remains to be seen. For now, the technology is still years away from real-world deployment. But the debate it has sparked is already reshaping how we think about autonomy, ethics, and trust in the future of healthcare.
