How Social Media Can Approach Free Speech During A Polarizing Time

SCOTT SIMON, HOST:

After President Trump was permanently suspended from Twitter, right-wing social media platform Parler was booted off of Apple's App Store and Amazon's web-hosting services, effectively putting them both offline. Parler is suing Amazon. We should note that both Apple and Amazon are NPR underwriters.

These decisions followed last week's deadly attack on the U.S. Capitol launched by President Trump's extreme supporters and were applauded by many. But many are also concerned that the actions might set a precedent for censoring free speech on social media.

Daphne Keller, who is the Platform Regulation director at the Stanford Cyber Policy Center, joins us now. Professor, thanks so much for being with us.

DAPHNE KELLER: Thanks for having me.

SIMON: First big question - do you consider these bans a violation of the First Amendment?

KELLER: No, and I can't imagine a court considering them violations of the First Amendment either. There have been more than 30 lawsuits in the U.S. by users saying they have a right to speak on platforms and demanding that platforms reinstate them. And the platforms always win those cases. One big reason is because the First Amendment defines your rights against the government; it doesn't define rights against a private company.

SIMON: Yeah.

KELLER: Another big reason is because the platforms themselves have their own First Amendment rights to set editorial policy and decide what to take down.

SIMON: Yeah. But at the same time, do you have some concerns about these platforms seeming to regulate speech? And I note that Jack Dorsey, the CEO of Twitter, said this week that he thought the ban on President Trump was right, but he's concerned that it might set a precedent.

KELLER: You know, I think everyone has that concern. Everyone who pays attention is worried about the idea that a very small number of private companies exercise gatekeeper power over some of the most important forums for public discussion these days.

SIMON: And we should note these are some of the same platforms that have complied with tyrannies in some countries that suppress speech, haven't they?

KELLER: In some cases, yes. There's globalization that goes on where American platforms at one point were sort of net exporters of First Amendment values, and now they tend to be net importers of European speech rules. But there's a risk that they become importers of Chinese speech rules or Turkish speech rules.

SIMON: How can social media companies and web-hosting services moderate or regulate racist and violent messages when a lot of the people who post them speak in a coded language?

KELLER: Well, it's hard for a number of reasons. One reason is that we are very far from consensus among the American public about what speech platforms should be taking down. But then beyond that is the point that you raise in your question - well, once they do set a policy, how can they tell when a new meme or a euphemism is in circulation? And part of the answer is that they employ people who are experts and who follow the research on this. And so hopefully, the big platforms that can afford to employ those people are relatively up to date.

SIMON: What about the concerns some people have that - I'll cite an old example - that someone like Lenny Bruce wouldn't be able to be on these social media platforms now?

KELLER: That is absolutely a concern. There's also a risk of the harm from taking down the wrong things falling disproportionately on certain groups - in particular, people of color, people who are not native English speakers. There are studies showing that when platforms rely on automation to figure out whose speech is hateful, they falsely penalize speakers of African American English more than everyone else. So it's not just a speech issue. There's an equality issue.

SIMON: Yeah.

KELLER: There's also an economic issue with proposing rules that the giant platforms can afford to comply with but their smaller competitors cannot.

SIMON: Let me cast back one last time to what Jack Dorsey of Twitter suggested this week. And to paraphrase him, he said, we took the step only reluctantly because we really do seek to be a worldwide platform for a free exchange of ideas, even ones that some people consider to be offensive. We have to be careful about this because, for some national liberation groups, this is the only way they can communicate with each other against a dictatorship.

KELLER: I think it's something we all should pay attention to as we press for more takedowns of bad content - the risk that that will become a mechanism for silencing important public discourse. But also, I think the platforms don't want to have to make decisions that will make half of the country very angry with them. If there were a shift that would let them outsource that decision and point the finger at somebody else, as Facebook is doing with the Facebook Oversight Board, I think they would be very happy to find it.

SIMON: Are there some dangers, as we become increasingly reliant on these platforms to carry national dialogue, that instead of people talking to each other and exchanging different ideas, we're going to have everybody crawl under their favorite platform and exchange ideas only there?

KELLER: Well, that is a risk, particularly as we drive hateful speakers off of mainstream platforms, where other people can respond to them and disagree with them, into smaller and more marginalized, you know, echo chambers, where they're going to hear only views that agree with theirs or views that are more radicalizing. That is one of the costs.

SIMON: Daphne Keller, Platform Regulation director at the Stanford Cyber Policy Center, thanks so much for being with us.

KELLER: Thank you.

(SOUNDBITE OF MUSIC)

Transcript provided by NPR, Copyright NPR.