Stanford University School of Medicine
The Robert G. Fenley Writing Awards: Solicited Articles
Silver
Drawing on his dual experience as a physician and a computer scientist, author Jonathan Chen tested an AI chatbot with ethical dilemmas and challenging medical scenarios. In his essay for Stanford Medicine magazine, Chen described one of these: a role-play in which a patient's wife must decide whether to insert a feeding tube in her husband, who has advanced dementia. He recounted how he tested the chatbot's ability to navigate layers of medical advice, emotional support and ethical considerations.
The results were surprisingly positive, verging on disconcerting. The chatbot provided responses that were not only technically accurate but also demonstrated an understanding of emotional and ethical complexities. It offered balanced, compassionate advice that considered the patient's dignity, the family's emotions, and the ethical implications of medical interventions. According to Chen: “Once I got past the initial discomfort of realizing the chatbot was likely providing better counseling than I did in real life, I considered optimistically how this experiment highlights the potential for AI to complement the work of medical professionals, not by replacing human interaction, but by enriching it.”
What was the most impactful part of your entry?
What makes this entry particularly distinctive is the creative approach taken to evaluate AI by testing it against the nuanced and emotionally charged realm of medical ethics and patient counseling. The work doesn’t just illustrate AI's surprising capability to handle complex medical knowledge; it also showcases AI's potential to enrich the most human skills of communication and empathy in medicine, areas traditionally thought to be exclusive domains of human relationships.
The significant impact of this exploration lies in its demonstration of how AI can enhance the clinician-patient relationship, offering a low-stakes training ground for medical professionals to refine their communication skills in high-stakes scenarios. This represents a judicious use of AI resources, extending beyond administrative tasks to directly benefit patient care and support.
Furthermore, the essay serves as a model for others by challenging the conventional skepticism surrounding AI in sensitive areas. It illustrates a successful strategy in which technology is leveraged to complement, not replace, the human elements of medicine. This innovative use of AI underscores a creative idea with the potential to revolutionize health care practice, and it stands as a beacon of how empathy and technology can coalesce to improve the quality of medical care and decision-making.
What challenge did you overcome?
The writer went into this exercise expecting to “break” the AI chatbot system, having seen many prior examples of chatbots that could be tricked into making racist, misogynistic or otherwise erroneous and dangerous comments. It was thus quite surprising to find this one handling such nuances with grace. He found it challenging to his own identity and sense of self-worth when the chatbot continued to provide counseling better than he felt he had offered in real life.
In Chen’s words: “I of course believe there is a unique and special value to human relationships and communication. These and similar examples challenged me to consider that we, as humans, may not have as much of a monopoly on empathy and personal connection as we might like to imagine.”
Contact:
Alison Peterson
medawards@stanford.edu