Editor’s note: The opinions expressed by the author do not necessarily reflect the opinions of the AAMC or its members.
"Fox Games", a gripping piece of visual art by renowned American photographer Sandy Skoglund, has been used in our medical humanities curriculum for many years. In the image, dozens of bright red foxes are leaping around a restaurant that is portrayed largely in muted gray colors. A man and a woman are seated at a table in the background speaking to what appears to be a waiter, all seemingly unaware of the scene unfolding around them.

Credit: Fox Games, © Sandy Skoglund 1989
Students often stare avidly at the piece, discussing whether they are seeing one fox at various points in its journey or multiple foxes jumping about the restaurant tables. The emotional reaction to the piece varies — from students feeling that the foxes represent danger and destruction, to others feeling a sense of joy and playful expression. Students wonder about the humans in the background, who seem oblivious to the action in the foreground.
At a recent talk I gave in California, one physician said the artwork reminded him of people who were resistant to new ideas and technologies, like artificial intelligence (AI), even as its use is rising rapidly around them. Indeed, the use of visual art to reflect on artificial intelligence is at the core of the AI curriculum at the University of Miami Miller School of Medicine.
Using visual art to teach students about AI may not seem like an intuitive pairing, but in education the visual arts have long served as a vehicle for reflection and the development of such critical skills as observation, perspective-taking, and tolerance for ambiguity. Visual arts-based pedagogies like Visual Thinking Strategies (VTS) are uniquely suited to highlight the questions students and many of us have about what makes us fundamentally human.
Through VTS, we invite learners to engage deeply with art by asking open-ended questions such as, “What’s going on in this picture?” or “What do you see that makes you say that?” This practice sharpens observational acuity, encourages multiple perspectives, and fosters dialogue. The tool offers students a means to reflect on what cannot be codified: empathy, ethics, judgment, and the human condition. (This recently published AMEE Guide explains the implementation of VTS in health professions curricula.)
At our medical school, first-year students complete a core module that teaches them about the fundamentals of AI in health care, followed by a four-hour workshop at our university’s art museum. Working with our computer science faculty, students learn how to prompt large language models (LLMs) to describe and analyze works of art. Students then analyze the outputs and discuss the accuracy of the answers, as well as the potential for bias within the LLM training data.
The students also experience AI-generated scenarios in virtual reality environments (such as an operating room) by wearing VR headsets, a technology that may see extensive use in future health care training. Finally, students engage in visual art analysis of their own, applying the VTS protocol to several pieces of art.
Impacts on students
At the conclusion of the workshop, students reflect on their experiences of using both AI and their uniquely human skills in analyzing artwork. Students discuss their views on the future benefits and limitations of the technology. Questions emerge such as: “Can AI truly understand human suffering?”
One of our students, Aidan Kunju, explains: “In our visual arts curriculum, our observations are both objective, like ‘the painting depicts flowers,’ and subjective, like ‘the flowers embody hope.’
“As medical students, we learn that medicine also lies at this junction of subjective and objective,” Kunju adds. “I think this subjective variability is what sets humans apart from machines — understanding the psychosocial context of a patient’s history, knowing what questions to ask, and how and when to ask them.”
Clinical decisions are shaped not just by data, but by histories, cultural contexts, and the nuanced act of listening. Humanities-based sessions such as these help students grasp this interplay. They teach future physicians that while AI can analyze images or generate summaries, it cannot replicate the affective labor of care: presence, warmth, and intuition.
These exercises also spark philosophical reflection. Another of our students, Isha Harshe, commented on a portion of the session in which a painting was uploaded into ChatGPT, the VTS methodology was applied to the AI's analysis, and the result was then compared with a human analysis:
“It was so interesting to me that a large language model could even ‘think’ about and analyze a painting and offer an interpretation. It made me wonder, ‘Where exactly does the boundary between AI and human intellect lie, especially in the field of medicine?’”
The experience inspired Harshe to explore whether LLMs could be useful in the realm of clinical ethics. She created ethically complex scenarios and input them into different LLMs to assess the adequacy of the responses. She observes:
“While the large language models provided reasonable considerations for some scenarios, they did not always demonstrate the flexibility of thought that a human ethicist did when confronted with some of the more complex situations.”
The museum activity and that particular exercise highlighted for her the importance of developing and sharpening critical thinking skills and clinical judgment in medical training.
Harshe notes that “while AI is powerful in revolutionizing the science of medicine, it will remain a tool for us to commandeer when navigating the art of medicine, which is so uniquely human.”
Other medical schools are also employing visual arts in their AI curricula. Adam Rodman, MD, director of AI programs for Harvard Medical School, leads a task force exploring the integration of AI into the school's curricula. Rodman explains, "One of the activities I had my residents do was to develop and adapt an art and medicine curriculum using a language model, using it to explore the themes in the artwork."
Roshini Pinto-Powell, MD, associate dean of admissions at the Geisel School of Medicine at Dartmouth, and codirector of On Doctoring (a multiyear curriculum that covers the skills, knowledge, and attitudes needed to practice clinical medicine), recently compared her medical students’ observations of Andrew Wyeth’s painting Christina’s World — which depicts a young woman lying in a field facing a farmhouse — with those provided by ChatGPT. She felt the LLM missed key findings noted by her students, such as apparent muscle atrophy and a positioning of the body indicative of muscle weakness.
“Using these types of exercises was helpful for students to reflect on AI and the value of the students’ own innate human instincts and observations,” Pinto-Powell notes.
Medical educators have an opportunity, and a responsibility, to ensure that our students are not just competent users of AI but also wise stewards of its potential. By grounding AI education in the humanities and visual arts, we can nurture a generation of physicians who value human insight as indispensable and who see technology not as a replacement for human connection but as a tool to deepen it.