America’s medical schools must push back against rising intolerance of opposing views on campus by making sure their students are exposed to a wide diversity of viewpoints about controversial issues, even if some students find those views offensive, a panel of experts in free speech declared at the opening plenary session of Learn Serve Lead 2023: The AAMC Annual Meeting.
“If you can’t have vigorous discussions at universities and colleges, where else can you have them?” Jacob Mchangama, executive director of The Future of Free Speech Project at Vanderbilt University, told more than 4,600 leaders in academic medicine gathered in Seattle on Saturday, Nov. 4. “If you enroll in college and you have the same views when you graduate, what’s the point of going to college?”
AAMC President and CEO David J. Skorton, MD, opened the session, entitled “Under Attack: How Did Free Speech Become So Complicated, and What’s Next?,” by lamenting the increasing, often hostile efforts throughout the country to delegitimize and shut down opposing views on sensitive cultural and political issues.
“Where and when we are able to talk freely is increasingly unclear,” Skorton said. “We are healers, and yet today’s environment can feel anything but healing.”
But college campuses, including medical campuses, should be one place where all views are welcome, the panelists agreed.
“The culture of inquiry and conversation can only be preserved if we value differences,” said Michael Roth, PhD, president of Wesleyan University, one of the three panelists. Roth said universities must “build the resilience of our students and our faculty to deal with ideas they find offensive, in fact find disturbing. Because they [the opposing views] may be right.”
Banning certain viewpoints actually harms students by failing to equip them with the experience and skills they need to grapple with and respond to a diversity of opinions, and to remain open to adjusting their own views based on what they learn, Roth said.
Panelist Amna Khalid, DPhil, associate professor of history at Carleton College in Northfield, Minnesota, warned that once teachers, administrators, and lawmakers start banning what they consider extreme viewpoints from campuses, there’s no telling where the bans will stop.
“If we go down the route of banning particular kinds of speech, we’ll see that it’s a downward slope and we’ll lose sense of what is appropriate or not,” Khalid said. “These are interpretive questions.”
Several weeks ago, Mchangama sat down with AAMCNews to share his thoughts on free speech and the First Amendment, the role of social media companies in spreading misinformation and divisive viewpoints, “elite panic” and what he sees as a global “free speech recession,” and what can be done to protect free speech both on campuses and more broadly.
What, exactly, is free speech?
It might be good to start with the origins of free speech, which lie in the Athenian democracy of 2,500 years ago, where there were two overlapping concepts of free speech. One was equality of speech, which was the right of every freeborn male citizen to speak and vote directly in the Athenian democracy, in the assembly. So no matter whether you were uneducated or poor, you had, in principle, the same right as wealthier citizens to speak your mind.
But they also had a broader concept called parrhesia or uninhibited speech, which was a commitment to broad-mindedness and tolerance of dissent.
Today, in most open, modern democracies, free speech has developed into a legal, constitutional, and internationally recognized right of the individual to be protected against the government [for speaking out]. In the United States, the First Amendment is probably the most speech-protective legal instrument in the history of humankind.
As a society, we rightfully disdain hate speech — and yet, hate speech is protected under the First Amendment. Why is it important to protect speech that many people find offensive?
I’m in favor of the U.S. approach, so I don't believe that the government should be able to punish hate speech unless it is intended to and likely to cause violence or serious harm. Every European democracy prohibits hate speech — in fact, there is EU legislation that requires members of the European Union to prohibit hate speech. But the definitions of hate speech vary quite dramatically between states … one of the many problems with hate speech bans is that the definitions are very subjective. … Today, with social media, hate speech has become a big issue again, and those who do the most removal of hate speech are private social media companies acting according to their own terms of service. They remove billions and billions of [instances of] hate speech every year.
Is that a good thing? Should social media companies be able to censor information on their platforms?
If you want to take the perfectly legalistic view of it, these are private companies. They have a First Amendment right themselves to do what they want on their platforms. So removing content that they feel is not in line with whatever they want is not a problem. That was a reasonable assumption when you had a much more decentralized internet, but today you have platforms that have billions of users and that have become crucial for public debate around the world. Their content moderation practices have real, practical consequences for what kind of speech can be distributed around the world.
That’s why I think it makes sense to have more distributed, decentralized content moderation standards, where you take as many of these decisions away from centralized platforms that can be pressured by governments, and [put them] into the hands of users who can then make meaningful decisions about what kind of content they want to be confronted with.
In the meantime, we all are confronted with online information that threatens people and institutions. This isn’t benign speech; it’s had real-world consequences, including the deaths of thousands of people who believed the misinformation about COVID-19 vaccines, for instance. How do you reconcile the need to protect people’s right to say what they want with the impact of their words on other people?
First of all, when you look at COVID misinformation, I think there are studies that show that it's actually a relatively small number of people who are responsible for the vast majority of it. What we also see is that those who are likely to consume and share this are people who are already skeptical and lack trust in institutions. … The temptation then becomes for institutions and governments to say, Oh, we have to limit that kind of speech because it will be catastrophic, but I think that is likely to make people even more distrustful, especially when you're confronted with COVID, something completely new, that you're trying to understand in real time.

The process of science, as impressive as it is, is one of trial and error, and there was lots of confusing messaging from various health institutions. If one day you insist that, let's say, face masks don't work and you lean on social media companies to remove content to the contrary, and then you come back and say, Oh, actually now we have the opposite opinion, you've undermined your own position.

It would have been much better if the line of communication from authorities had been, Listen, we’re confronted with a new disease. We have put all our resources, our best researchers, into this. We're making incredible progress at a speed that was unimaginable for previous generations, but we're likely to make mistakes, and what we think is the best available science today might change in two months. That shows humility. And it also acknowledges that you're likely to get things wrong rather than taking one position and then having to tie yourself in knots with your messaging further down the road.
It’s interesting that the United States, which has more protections for freedom of speech than other democracies, actually did worse in terms of getting its people vaccinated and protected. So, is it just because Americans are distrustful of government in general or were the bad actors who were spreading misinformation more able to reach the American people?
That’s a very difficult question to give a convincing reply to. I think one of the problems is that there's been a collapse of trust in this country, in the United States, and also the fact that COVID very quickly became polarized and tribalized, according to culture war narratives, which probably played a significant role. Would it have helped if the federal government had been able to shut down misinformation through law? I don't have a perfect answer to that. I just think the likelihood of that creating further trust rather than distrust among people who are already deeply skeptical [is low]. … The real issue here is, what are the underlying factors that make people more susceptible to disinformation, to engage in it, to share it. What can we do to make people more likely to think twice before accepting it? Free speech and access to information are part of the solution.
Earlier this year, a respected Mayo Clinic physician almost lost his job for questioning the National Institutes of Health’s COVID-19 policy and for saying that testosterone boosts athletic performance. How important is it for academic institutions to foster (rather than squash) divergent viewpoints?
The Foundation for Individual Rights and Expression (FIRE), where I'm a senior fellow, has a Scholars Under Fire database where they show a huge uptick in the number of scholars who have been sanctioned, or faced attempted sanctions, since 2000. The data suggests that they are more worried about the consequences of speech than under the second Red Scare [the perceived threat of U.S. communists during the Cold War], which is pretty remarkable. That suggests to me that this is a real problem and that cancel culture is real. It's also a culture war phenomenon. But it's not something that is invented out of thin air. It has a real basis. COVID is a hot topic, transgender [health] seems to be a huge issue and one of the most thorny ones to navigate. … It’s the responsibility of the medical establishment to have the best available knowledge, and you can only arrive at that through debate and what you might call the process of “open science,” where no one ever gets to establish the capital T truth or settle the debate once and for all.
In your book, you write about “elite panic,” about the temptation by elite individuals and institutions to censor divergent viewpoints. We’re certainly seeing this in our own time and it’s leading to what you call a “free speech recession.”
Elite panic is this recurring phenomenon throughout the history of free speech, where whenever the public sphere is expanded, either through new communications technology or to segments of the population that were previously marginalized, the traditional gatekeepers, the elites who control access to information, tend to fret about the dangers of allowing the unwashed mob — who are too fickle, too unsophisticated, too unlearned — unmediated access to information. They need information to be filtered through the responsible gatekeepers, and it may be even more dangerous to allow them to speak without adult supervision. That's a phenomenon that we see again and again. And we're seeing it play out now on social media. … [Elite panic is] one contributing factor to the free speech recession. Another is that democracies have shied away from protecting free speech and are much more likely now to view free speech as a danger rather than an unmitigated good. And so they don't put in the same effort at protecting free speech, whether at home or abroad, as they did, say, in the ’80s and early ’90s, when free speech was crucial to defeating communism.
But I think there's some sense that unfettered free speech is threatening our democratic institutions.
That’s part of the elite panic. We’re still trying to make sense of the digital world. Most institutions and cultures developed in the analog world. We have problems keeping up with the speed of information. We have trouble keeping up with the number of opinions you see out there that go against your basic values … opinions that are more extreme, because those opinions would not have bubbled to the surface the way that they can now.
So it's likely to make people concerned, even though some of the research we’ve done shows that hate speech and disinformation, while substantial in absolute numbers, make up only a small share of the total posts on social media. We have a built-in negativity bias. Rather than focusing on all the wonderful opportunities that social media provides and the equal conversations that people have, we tend to focus on the dark side, and I think that AI is likely to increase that concern.
How do you see it being resolved?
First of all, tinkering with the model. So maybe we will have models that are less focused on engagement and outrage. That could be one way.
Another thing is for generations who have grown up with social media to develop a more detached attitude than those of us who have been thrust into it, without having experienced it before.
As I mentioned, more decentralized models might also be a way forward, and then learning to harness the good sides and amplify them, is also something that could contribute.
Are you an advocate for absolute free speech?
No, I don’t think that any serious person is in favor of absolute free speech. … Where I may be more absolutist is when it comes to viewpoints. I don't believe there's any viewpoint in and of itself that should be prohibited.