March 4, 2019

How should the world’s largest social media companies respond to a pernicious online climate, including hate speech and false content posted by users? For some, the answer is clear: take the fake and offensive content down. But for others, censorship – even by a private company – is dangerous in a time when digital platforms have become the new public square and many Americans cite Facebook and Twitter as their primary news sources. Rather than embracing European hate speech laws or developing platform-specific community standards that are sometimes seen as partisan, they argue, social media companies should voluntarily adopt the First Amendment and block content only if it violates American law. Should First Amendment doctrine govern free speech online? Or are new, more internationally focused speech policies better equipped to handle the modern challenges of regulating content and speech in the digital era?

  • 00:00:00

    Congress shall make no law abridging the freedom of speech.  So says the U.S. Constitution, but what about Facebook or Twitter or any social media platform?  Those companies face no constraints on setting limits on what people can say on their platforms. It is, after all, their ballgame; they own the space.  But should they be limiting speech when it is so offensive and morally threatening that it crosses the line into what we call hate speech? Same for information that is false, pretending to be true, what some call fake news.  Would doing so turn these companies into censorship factories where definitions of what’s hateful could take us down a slippery slope in which free expression is put at serious risk? Or perhaps should these same companies take a page from the First Amendment and encourage speech to run as far as it wants to?  Well, we think all this has the makings of a debate, so let’s have it. Yes or no to this statement: Constitutional Free Speech Principles Can Save Social Media Companies From Themselves.

  • 00:01:05

    I’m John Donvan; I stand between two teams of two, experts in this topic, who will argue for and against this resolution. As always, our debate will go in three rounds, and then our audience here at the National Constitution Center in Philadelphia will choose the winner, and if all goes well, civil discourse will also win.  

    One more reminder to our audience to cast your pre-debate vote, and remember, it’s the difference between the first vote and the second vote that determines our winners. Our resolution one more time: Constitutional Free Speech Principles Can Save Social Media Companies From Themselves. Let’s meet the team arguing for the resolution. Please first welcome David French.


    [applause]


    John Donvan:
David, welcome to Intelligence Squared.  You are a senior writer for the National Review; you’re an attorney — sometimes you say you’re a recovering attorney — you’re a free-speech advocate; you’re a veteran of Operation Iraqi Freedom.  

  • 00:02:00

    You have a New York Times bestselling book out, and another called “The Great American Divorce” coming out this year. After your long career in law, and your love of the First Amendment, and the fact that we are at the National Constitution Center, a shrine to the Constitution, does being here give you chills?


    David French:
The debate gives me chills because it reminds me of two dates: 1788, the year the Constitution was ratified, and 1798, the year the founding generation passed the Alien and Sedition Acts, indicating that debates about free speech have been alive as long as our Constitution has been alive.

    John Donvan:

    And are still with us, as we're going to be debating tonight.  Ladies and gentlemen, again, David French.

    [applause]

    And you have a partner arguing this with you.  Please welcome, ladies and gentlemen, Corynne McSherry.

    [applause]

    John Donvan:

    Corynne, thanks so much for joining us at IQ2.  You are the legal director at the Electronic Frontier Foundation.  That, if you don’t know, is a nonprofit that defends civil liberties in the digital world.  

  • 00:03:00

    Corynne, you have said that some of your favorite cases involved defending political expression.  Do you have a particular example you can share?

    Corynne McSherry:

    Well, you know, it’s hard to choose so I’ll just tell you about a recent case that was particularly fun.  

    And that involved a situation where a group of activists took to the streets of Washington, D.C., and Philadelphia and New York, handing out spoof copies of the Washington Post in which the headline announced that Trump had at last resigned.  It was lots of fun. They got lots of attention. The Washington Post also paid attention and was not happy, so the activists received a legal threat. They called us. We intervened, and we explained to The Washington Post what it should already have known, which is that the spoof was protected by the First Amendment.  They realized it, they backed down. We called it a win.

    [laughter]

    John Donvan:

    I can see what kind of debater you are already.  Again, ladies and gentlemen, the team arguing for the resolution.

    [applause]

    And we have a team arguing against this resolution.  Please, first welcome, ladies and gentlemen, Nate Persily.

  • 00:04:02

    [applause]

    Nate, it’s great to have you at Intelligence Squared, first time here.  And you are a professor at Stanford Law. You are the director of the Stanford Project on Democracy and the Internet.  You’re now working on a new research project that is likely to come up tonight. You are leading the charge to make Facebook data available for election research.  Tell us a little about what that’s about.

    Nathaniel Persily:

    So, along with a Harvard professor named Gary King, I co-chair something called Social Science One.  And this is an attempt to make Facebook data available for the world's scientific community in a safe, privacy-protected way to ensure that we can figure out the answers as to how social media is affecting democracy around the world.

    John Donvan:

    Oh, sounds very interesting.  And we’ll be hearing more about this after the election I’m guessing.  Okay. Ladies and Gentlemen, Nate Persily. Thank you.

    [applause]

    And your partner, Marietje Schaake.  You have traveled some way to be with us.

    It’s great to have you here at Intelligence Squared.  You are a Dutch politician.

  • 00:05:00

    You are a member of the European Parliament.  You are also the founder of the European Parliament intergroup on the digital agenda for Europe.  In 2017, Politico named you one of the 28 most influential Europeans, calling you “the ultimate digital MEP.”  We don’t know what MEP means. What is that?

    Marietje Schaake:

    Oh, it means –

    John Donvan:

    Oh, member of European Parliament, sorry.

    Marietje Schaake:

    Yes, yes, yes.

    John Donvan:

    So what I'm really asking you is, what does the "ultimate digital MEP" mean?  It's "digital" that I should be focusing on.

    Marietje Schaake:

    Yeah, yeah.  So, for the past 10 years that I've had the pleasure of serving in the European Parliament, I have tried to bring the world of technology and politics closer together, because I think it is more important with every day that passes.  Technology is everywhere. Digitization impacts all aspects of life, and it's very, very important that politicians are able to make the right decisions. So I suppose because I'm one of the few that's focused on this, [unintelligible] that's a curious title.

    John Donvan:

    You've got — very important.  You're right, it is affecting all aspects of life, which is why we're debating it.  Again, please welcome the team arguing against the resolution.

  • 00:06:01

    [applause]

    And so, onto the debate.  We start with round one. Round one will be opening statements by each debater in turn.  These statements will be six minutes each. The resolution is, “Constitutional free speech principles can save social media companies from themselves.”  Up first to argue for the motion, Corynne McSherry, legal director at the Electronic Frontier Foundation. Ladies and gentlemen, Corynne McSherry.

    [applause]

    Corynne McSherry:

    Well, thank you.  Thank you for having me.  I'm really excited to be here and to talk about this topic.  I feel like this is what I live and breathe these days, because the reality of defending speech online is talking about content moderation and trying to figure out what to do about it.  Because I think one thing we should surely be able to agree on is that it desperately needs saving. The content moderation system, what we sometimes call platform or private censorship, is fundamentally broken.  Twitter, Facebook, Pinterest, Medium, everybody's trying to manage all the content on their platforms and make sure nobody ever says anything bad, and it's not working.

  • 00:07:07

    In fact, it’s fundamentally broken.

    Now, let me start by acknowledging something.  The internet offers extraordinary tools for us to connect, to organize, to educate, to access information.  It's an amazing, amazing thing. But the reality is, online speech can also be awful and ugly, and it can cause real-world harm.  So I want to get out of the gate by acknowledging that that is true.

    The question is, what's the best way to address that problem?  And the reality is that the current system of content moderation is not the answer.  Let me count the ways. First of all, the fact of the matter is social media platforms are just really bad at deciding what speech should stay up and what should come down.  So, I'll just give you a few examples. We've seen provisions on hate speech used to shut down conversations between women of color about online harassment. We've seen rules against harassment used to shut down the accounts of activists in Egypt and the United States and around the world.

  • 00:08:04

    We’ve seen a ban on nudity used to take down graphic artworks, including — the Philadelphia Museum of Art had its account flagged because it posted a so-called suggestive painting of a woman eating an ice-cream cone. That was taken down by Facebook. Queer and transgender youth hoping to connect with new communities are having difficulty doing so because of Facebook’s “real names” policy, which prevents them from engaging online anonymously.

    Rules on violent content have forced offline documentation of police brutality, human trafficking, human rights abuses, and so on. In fact, it's so bad that there were two articles just last week detailing the difficulty that Facebook and other platforms are having trying to figure out a sane and coherent policy for online speech. They just can't do it, and it may be because it's impossible.

  • 00:09:00

    Another thing that's happening, related to that, is that this content moderation is [unintelligible] to the moderators.  They end up having PTSD-like symptoms when they have to review all this awful content and make decisions about it. And then governments are getting into the mix, so they are inserting themselves into takedown decisions. And in Latin America just recently, campaigns against so-called fake news and misinformation are being used as an excuse to silence critics. So, we have a real problem here, and the final thing that makes me particularly frustrated is, if the goal was to stop hate and counter extremism, it's not working.

    So, for example, Facebook’s “real names” policy that prevents people from engaging anonymously didn’t stop Russia from gaming the system. Counterterrorism expert [unintelligible] said, “Censorship is never an effective means of achieving security.

    Shuttering websites and suppressing content would be as unhelpful as smashing printing presses.” So, it strikes me as just a little bit crazy that so many people are pushing for the companies to double down on what is clearly a failing system.

  • 00:10:06

    We need a better approach, and we can start by turning to our core constitutional principles.  If we did that, if we looked to the First Amendment as a guidepost, we might do a few things. Perhaps we could get to clearer definitions of what content should and should not be restricted, and on what terms, while looking to, for example, decades of defamation law, where judges have wrestled with precisely this issue. Companies could apply a kind of strict scrutiny to their policies. Is doing [unintelligible] accomplish a compelling public interest here? Is this the best way to do it, the most narrowly tailored way to do it? Our default could be that speech stays up rather than comes down, following our long tradition of no prior restraints on speech in the United States.

    We would have stronger protections for anonymity, which would mean that ordinary users, activists, and organizers could find each other and activate communities with less fear of retaliation. And above all, we could have due process protections.

  • 00:11:10

    In the First Amendment context, the burden of proving that speech is unprotected or shouldn’t be protected rests with the censor, and the censor has to, if it takes content down, go to court, get a review quickly. We should have the same situation with respect to social media, so we could have notice to users before the content is taken down, an opportunity to appeal. And again, the burden is on the censor to explain why the speech should be taken down, and in the meantime, it stays up because that is our default belief in this country.

    I’m going to close with a quote from Learned Hand, which I think should resonate particularly strongly here. Judge Hand said in 1943, another difficult time in our history — he said, “The First Amendment presupposes that right conclusions are more likely to be gathered out of a multitude of tongues than through any kind of authoritative selection.

  • 00:12:05

    And to many this is, and always will be, folly, but we have staked upon it our all.” Social media platforms allow the multitudes to speak in ways they never could before, but the owners of those platforms are engaging in precisely the kind of authoritative selection that Learned Hand warned against, with predictable results. I suspect that Judge Hand would support this motion, and so should you.

    [applause]

    John Donvan:

    Thank you, Corynne McSherry.  That resolution again, "Constitutional free speech principles can save social media companies from themselves."  Here is our first debater speaking against the resolution, Nate Persily, professor at Stanford Law. Ladies and gentlemen, Nate Persily.

    [applause]

    Nathaniel Persily:

    So when I think about what might save social media companies from themselves, I think a little bit maybe about antitrust law, maybe about greater privacy protection.  

  • 00:13:03

    I rarely think that what we need is actually more hate speech, more pornography, more bots, more trolls, more foreign interference in elections. And that actually is what the First Amendment would require if these social media companies were to apply it.

    Now, you can love the First Amendment like I do.  

    And I do in fact get chills when I walk by Independence Hall and come into this meeting room.  But the First Amendment is to restrict government. It’s not actually to restrict private companies like Facebook.  And, actually, if you believe in the First Amendment, you will actually believe that Facebook and other social media companies can apply different rules to the speech that’s on their platform.

    Now, my arguments against this resolution are just a few short points.  The first is that it’s actually naive to think that you can apply the First Amendment in the context of a social media platform.  The second is that it’s actually undesirable. The third is that it’s actually illegal. And the fourth is that it’s actually hypocritical.  Besides that, I agree with this resolution, okay?

  • 00:14:05

    [laughter]

    So, the first is that it's naive.  What these social media companies do, the most important power that they exercise while we focus on these takedowns and the other types of restrictions that Corynne mentioned, is that they organize information, right?  They tell you what goes at the top of your news feed and what goes at the bottom, right? These are inherently content-based decisions: they decide that some type of content is going to be served to you first and some later.

    Any factor that's in the algorithm that is based on content violates the First Amendment, okay?  And these decisions that they are making about, for example, whether they're going to put engaging content at the top, whether they're going to prioritize disinformation or non-disinformation, whether they are going to put hate speech at the top of your news feed: all of these decisions would be unconstitutional if they were made by the government.

  • 00:15:00

    These products that they are delivering to you, as powerful as they are, and as much as we should have oversight and government regulation of these platforms, are not the place for the First Amendment.

    The second argument is that it’s actually undesirable.  So, let’s just take a tour through the First Amendment for a second and First Amendment case law recently to give you a sense of what it would mean if the social media companies were to apply it.

    So, as Corynne mentioned, yes, there's the possibility that then you would end up having more nudity in your news feed. It's much more than that. Virtually all pornography is protected by the First Amendment, right?  Because Facebook's not going to know it when it sees it, right? And so, do they actually have an obligation? Does Instagram, which has 13-year-olds on its platform, have an obligation to respect the same constitutional restrictions on pornography that the government does?

    Similarly, with hate speech, it's perfectly well and good — I have a real problem with federal or state or even university hate speech bans.  I think they're overbroad. But does Facebook actually have to decide that just because Nazis are allowed to march in Skokie that they can march across your news feed, right?

  • 00:16:06

    These are private companies that have different ideas and different values and different priorities as to what should be coming at you in your social media feeds.

    In addition, under Citizens United v. FEC, a case familiar to many of you, right, does Facebook actually have to allow uncontrolled, unlimited corporate political ad spending on their platform?  You might think that that is actually protected by the First Amendment, but it doesn't mean that Facebook actually has to allow it on its platform.

    And you can go on and on.  Violent video games, right?  No prior restraints, which would prevent all algorithmic curation.  Or, as Corynne mentioned, the issue of anonymity, right? Can Facebook require that people actually use their real names and that they be sort of open and notorious in their speech? Maybe not.  Maybe it's desirable that they keep anonymity. And there are plenty of places on the internet where you can be anonymous. But, if Facebook is going to try to get at foreign interference in elections, hate speech, and other kinds of unaccountable speech, it's going to have to enforce a real names policy, as ineffective as it may be in the interim, right, in order to get at this critical problem of anonymity online.

  • 00:17:12

    The third point is that it's actually illegal for Facebook to do this.  We tend to look at Facebook through the American lens here, right?  I mean, how could anybody be against applying American constitutional principles to American audiences?  The truth is, Facebook is an international platform.

    And so, while Holocaust denial is perfectly protected under the U.S. Constitution, right, and you can say all kinds of things that would be abhorrent, you know what? Germany might have a different view on this.  Myanmar, in the condition of civil strife and racial hatred that is really a tinderbox — they have different rules when it comes to hate speech, and for an American multinational corporation to then decide that there is one standard, a U.S. constitutional standard, that is going to apply around the world is a real problem.  The last point is that it's actually hypocritical. Like I said, I want to wrap myself in the First Amendment, okay?

  • 00:18:06

    Because if you believe in the First Amendment, you actually believe that different social media companies can come up with different rules as to what kind of content should appear on their platform. If you want anarchy, if you want all the potential hate speech, pornography, and the like, there are plenty of places on the internet for you.  Okay? You can go to Gab if you're worried about censorship of conservatives; you could go to Reddit and create your own subreddit for a particular issue. You could go to, you know, closed email lists and bulletin boards. But also, if you're a social media company that's a little bit worried about what might happen to the community under those circumstances, that maybe it would have an effect, whether it's on elections, or whether it's on the users themselves, then you could make the decision that "you know what?

    There are going to be some rules that we don’t apply to government but would be particularly fitted in this case.” And for that reason, please vote that constitutional free [unintelligible] — yeah, I had a great closing there — constitutional free speech principles cannot save social media companies from themselves.

  • 00:19:05

    [applause]


    John Donvan:
Thank you, Nate Persily.  A reminder of what’s going on.  We’re halfway through the opening round of this Intelligence Squared U.S. debate.  I’m John Donvan. We have four debaters, two teams of two, fighting it out over this resolution: Constitutional Free Speech Principles Can Save Social Media Companies From Themselves.  You heard the first two opening statements, and now on to the third. Debating for the resolution, David French, senior writer at the National Review. Ladies and gentlemen, David French. 


    [applause]

    David French:
So, thank you very much for having me.  I was going to go all good-cop and uplifting until your statement, but I’m going to have to rebut some — go a little bad-cop and rebut it.  And here’s what I’m — I’m used to being a bad cop. I’m used to being a civil rights/civil liberties litigator. I used to brag that — and I think it was true when I used to brag about it — I sued more universities on free speech grounds than any living lawyer.  And so, I’m very used to arguing about how censors mess things up, and if there’s one thing that we know, and one thing that we have seen over 200-plus years of attempted censorship in the United States of America, it’s that censors constantly and consistently mess things up and exacerbate divisions.  

  • 00:20:16

    But I’m not going to dwell on that, because, you know, one of the things that we often end up doing is sort of talking about “how do we control the negative aspects of free speech?” What do we do with what’s bad about free speech? I want to talk about what’s good about free speech for a moment. So, last March there was an interesting poll that the New York Times reported on.

    It was a poll of college students, and it said to college students, "If you had to choose between inclusion and free speech, which would you choose? Inclusion or free speech?"

    And a majority of college students chose inclusion. Now, first I was mad about this poll. I was like, “That’s not the choice.” That is absolutely not the choice. It’s the choice that we tend to think is being made here on social media, that we need to censor people to protect, for example, disadvantaged or historically marginalized communities, that there is some sort of battle between inclusion and free speech, when the reality is exactly the opposite.  

  • 00:21:16

    What history has shown us is that free speech facilitates inclusion, free speech facilitates justice. As Frederick Douglass said, “Free speech is the great moral renovator of society.”

    And you don’t have to look anywhere besides the history of the United States of America to see this truth. We often talk about what’s bad about free speech; we often talk about the bad things that have occurred.

    But you know what? We have not lived in a truly free free-speech environment in this country for very long. It wasn't until the 1920s that the First Amendment of the United States Constitution actually applied to the actions of state and local governments.  A lot of us forget that for much of our history the First Amendment didn't apply in the states. States suppressed, for example, abolitionist speech before the Civil War.

  • 00:22:06

    What has happened in the United States of America? Ask yourself, since free speech has been applied at every level of American government, since the First Amendment has been applied at every level of the American government since the 1920s, are we more free? Are we less free? Is the United States a more inclusive society, or is it a less inclusive society?

    I think if you looked at the state of civil liberties in this country in the 1920s, and you compared it to 2019, there is absolutely no comparison.  Frederick Douglass's words were accurate and prophetic.  Free speech is a great moral renovator. Now, that does not mean that free speech doesn't carry with it some negative effects and negative consequences and negative actions.  There is such a thing as bad speech.

    But, you know, we tend to make fun of college students these days.  We call them snowflakes because they don’t like to hear bad speech.

  • 00:23:03

    But you know what?  College students have to hear a lot more than the users of social media. Why is this? There's something we haven't talked about yet.  One thing that every user of social media has the ability to do is block or mute. And that creates no First Amendment problem at all.

    When I was in law school, there were people who would boo and hiss me when I talked.  They didn't like what I had to say. Think how much better they would have felt if they'd been able to mute me live.  You know, it would have gone something like this: "I disagree the establishment clause –"

    [laughter]

    I mean, they would be relieved of any obligation to ever hear from me again.  You know, that’s a feature that we have in every major social media platform. If you don’t like to see speech, you don’t have to see it anymore.

    So what we’re talking about first — and when we’re talking about social media censorship, we’re often not talking about protecting your own eyes, because you can protect your own eyes in the same way that you can choose not to watch Game of Thrones and miss out on the greatest show in the history of television.

  • 00:24:08

    [laughter]

    You can make that choice.  I can choose not to follow people on Twitter.  Or more satisfyingly, I can choose to mute them so they're just screaming into the void, not knowing I never see it.  I have the ability to curate and manage my own feed. And so when we're talking about sanctioning social media censorship, what we're talking about is enabling a panel, a board somewhere, some distance from me, of often incompetent composition, as my debating partner has indicated, to decide not just what I am going to see, because I can decide that, consistent with the First Amendment, but what everyone else will see, on the basis of criteria that are broad and vague. And the very act of arguing about this facilitates the division that is right now tearing this country apart.

  • 00:25:00

    The answer traditionally in American history to bad speech is better speech.  In social media, the answer to bad speech can be better speech, or it can just be blocking.  

    But what we should not do in these new platforms that span the globe, that dominate much of American political discourse, is to unlearn the lesson that we have learned since the incorporation of the Bill of Rights in the 1920s to every state and local branch of government, and that is this:  Free speech, for all of its problems, is the great moral renovator of our society, and we still have renovation to do. So I would urge you to vote yes on this resolution.

    John Donvan:

    Thank you, David French.

    [applause]

    And that resolution again, “Constitutional free speech principles can save social media companies from themselves.”  Here to make her opening statement against this resolution, Marietje Schaake, Dutch politician and member of the European Parliament.  Ladies and gentlemen, Marietje Schaake.

  • 00:26:01

    [applause]

    Marietje Schaake:

    As a student of American studies, it's really quite special to be here in the National Constitution Center.  And part of what I just heard reminded me a little bit of those history lessons that I took at the University of Amsterdam when I was trying to understand America, including the First Amendment and the American Constitution.  And I can assure you that the First Amendment is the envy of many people in the world. But will it save social media companies from themselves? That is the question this evening. And to that, the resounding answer is no.

    Because if the First Amendment could save social media companies from themselves, why hasn’t it? And why are these companies in so much trouble?

    Because last time I checked, but you know this better than I do as Americans, the First Amendment does apply in this country.  But privacy violations, illegal collecting and selling of data, the livestreaming of a gang rape all happen.

  • 00:27:04

    And also, if the First Amendment could save Facebook, Google, YouTube, Twitter from themselves, why would Mark Zuckerberg and other CEOs have to mislead so much about their company practices?

    Secondly, I don't think billion-dollar companies need to be saved.  I think people do. Free speech protection in this country goes very far.  Here it is. Tweet: Hitler was right. Question: What about the Holocaust?

    Answer: it was made up.  Question: what race is the most evil to you?  Answer: Mexican and black. Question: do you support genocide?  Answer: I do indeed. A person is free to say this under the First Amendment, but the artificial intelligence bot called Tay that Microsoft developed was the one running the Twitter account I was just quoting, giving those answers to people's questions, and many, many more.

  • 00:28:12

    And 10 to 15 percent of accounts on Twitter are bots, and on other platforms and in other countries, these numbers are even higher. Now, rights apply to people, to persons, and I will spare you my thoughts about Citizens United, but clearly a bot does not and should not enjoy the same protection of rights as people's speech does.  But the problem is that on social media platforms like Facebook, Twitter, or YouTube, you and I cannot tell the difference.

    Thirdly, the First Amendment is not the only law that applies to people and the protection of their rights online. The right to privacy, data protection standards, intellectual property protection, children's rights, public health — they all require safeguards, too.

  • 00:29:00

    And thankfully, tobacco companies would not be protected by the First Amendment if they were to go out and advertise that smoking is the best thing you can do for your health. So, why are people and bots alike on social media suggesting that vaccinations will harm children, that methamphetamines are excellent for teenagers if they want to lose weight, or that a better life awaits after taking a suicide pill or after blowing oneself up, taking others along?  And lastly, there is not enough oversight over the algorithms that govern social media companies' business models, or over the data that they collect and sell. Without this oversight, social media companies can make all kinds of lofty claims about how they're going to filter out fake news, or how they're going to avoid the posting of copyright-violating messages, but whether and how this filtering-out of harmful content is done while respecting the First Amendment, and we've just heard how important that is, is entirely unknown, because we do not know how these algorithms are working.

  • 00:30:11

    So, concluding, while I believe that the First Amendment is crucially important, it is not at all enough to save social media companies from themselves, from doing harm to children, to people, to societies, and to the world. Tech companies do not distinguish between the expression of a person and that of a bot, and they actively want to avoid oversight. I say this to you as a serving member of the European Parliament who has been on the receiving end of lobby effort after lobby effort. These tech companies are allergic to regulation. Now, for the social media companies to save themselves, I believe they have to put people over profit.  Oh, and let me just stress this before I really end, especially being the only non-American on the panel this evening.

  • 00:31:04

    It is very important to remember that social media companies reach people all over the world, so that looking at American law for American people only is always going to fall short.

    So, I ask you to vote against the motion. 


    [applause]

    John Donvan:
Thank you, Marietje Schaake.  And that concludes Round One of this Intelligence Squared U.S. debate, where our resolution is Constitutional Free Speech Principles Can Save Social Media Companies From Themselves.  Now we move on to Round Two, and Round Two is where the debaters address one another directly and take questions from me and from you, members of our large audience here at the National Constitution Center in Philadelphia.  The resolution: Constitutional Free Speech Principles Can Save Social Media Companies From Themselves; the two debaters arguing for this resolution, Corynne McSherry and David French, are making the assertion that at present the system being used by these companies is fundamentally broken.

    Their arguments in support of using — being inspired by First Amendment principles are both pragmatic and philosophical.

  • 00:32:06

    At the philosophical end, they argue that free speech, even though they can see there's an awful lot of awful stuff happening out on social media, in the long run tends to benefit society; it is the great moral renovator, they call it.  They also say that efforts being made already by these companies to limit speech are proving that it can't really be done in an effective way, that there are ridiculous and wrong-headed unintended consequences, and they cited a number of them.

    And at the bottom line, they also say that the First Amendment guarantees a system where the burden is on the censor to prove that the speech is not — is not valid and should be taken down or otherwise limited.

    The team arguing against the resolution, Marietje Schaake and Nate Persily, are arguing that, first of all, they are very pro First Amendment.  They are adamant about that, referring to it as "the envy of the world," but not as a template for guiding these companies in forming their own policies.

  • 00:33:05

    They say that doing so is naive, undesirable, and actually illegal, citing the example of countries like Germany, which have their own rules on this.  In Germany, Holocaust denial is actually illegal. In the United States, under the First Amendment, it is not. How would that work? They talked about the fact that a number of social media accounts, 10 to 15 percent in some places, are actually bots.

    Do bots have rights of free speech or not? And basically, they say the First Amendment on its own is not enough to save these companies from themselves, and it’s misguided to take American principles and try to apply them globally around the world.

    I want to go to the team arguing against the resolution.  You are saying that constitutional free speech principles will not save social media companies from themselves.  I notice that your opponents used the C word a number of times, censor and censorship. It's a pejorative word, I think, in this context.  They're calling your — they're saying that you're basically arguing for censorship. Nate Persily, are you pro censorship?

  • 00:34:06

    Nathaniel Persily:

    No, I’m not pro censorship.

    John Donvan:

    So what’s the difference?

    Nathaniel Persily:

    But the problem, again, is that you have no choice here, right.  What the social media companies are doing is organizing information for you, right?  They make a — they have to make some decision as to what goes at the top and what goes at the bottom.  That, in and of itself, right, if it’s done by the government, is going to violate the First Amendment.

    So you can call it censorship, right, but it's essentially what the platform is, which is that it's curating and delivering information to you, right?  And so it — there's certainly the case that there is a parade of horribles on the other side, right, that, you know, you could — and Corynne mentioned many of them, that they could be overinclusive in their content moderation.

    But the slope is slippery in both directions, okay?  And so all of us are going to dig our heels in on some part of that slope.  We’re not going to say that the — you know, Facebook has to allow everything onto the platform, right?  

  • 00:35:00

    The question is, is it the First Amendment? And I’ve given you the whole sort of array of First Amendment cases that would constrain Facebook, whether it’s Citizens United, whether it’s the ban on — or the prevention of a ban on anonymity and the like.  All of those things are then going to apply to Facebook, and they really wouldn’t be able to regulate the platform.

    John Donvan:

    Corynne McSherry, you did use the censorship word, and your opponents are saying — I think the response was, you can call it that if you want, but it's not really the point here.  So can you take that — take on that response?

    Corynne McSherry:

    Sure, absolutely.  So I think what we call it is private censorship.  I mean, traditionally, when I'm thinking about censorship, it's censorship by governments.  And this is different from that. But I would posit that, in some ways, given how important social media platforms are right now, the decisions that are being made by a few corporations in Silicon Valley are having as much effect on online speech and the future of online speech as anything any government is doing at the moment, although I would flag that, by the way, governments often participate in these takedown decisions.  Governments take advantage of community standards policies to force content offline in ways that they could never do officially, but they do it via a community standards flag.

  • 00:36:09

    John Donvan:

    What’s an example of that?

    Corynne McSherry:

    Well, so one thing that's happening is that, as I understand it, the government of France, or it may be Germany, is embedding someone at Facebook to help them make decisions about what content should stay up and what should come down.  We also get a lot of sort of quiet reports from companies where they don't want to admit publicly that this is what's happening, but it nevertheless is what is happening.

    John Donvan:

    Let me ask Marietje Schaake: is it problematic that government officials in France are taking part in the conversations at Facebook about what to keep up or take down?

    Marietje Schaake:

    Well, I think there should be an independent assessment of whether a takedown is appropriate or not based on the law.  But what we see now is indeed that these private companies also, for their own business decisions, are taking down content for a variety of reasons.  So while the First Amendment, at least in this country — and, you know, you mentioned France, but let’s — I’m going to try to stay focused on the area where these companies are incorporated and where the First Amendment applies.  

  • 00:37:04

    They are themselves not respecting it. So how is the First Amendment going to save these companies from themselves if they've had all the opportunity in the world to actually live by the Constitution and let the law lead? They haven't.  So I just don't think it's going to save social media companies from themselves.

    John Donvan:

    You said — you said earlier that the First Amendment is not enough.  What else is —

    Marietje Schaake:

    That’s right.

    John Donvan:

    What is needed then?  What is your — what is your vision for the solution?

    Marietje Schaake:

    Well, I believe one of the main challenges that we face both in Europe and in the United States is that currently, these social media companies, the platforms specifically, enjoy an exemption from liability. In our law this is part of the e-Commerce Directive; in the U.S. it is Section 230, for the lawyers in the room.  But it basically means that they are claiming under this law that they are a neutral platform.  They're simply just connecting a –

    
John Donvan:
Like the phone company would be.

  • 00:38:00

    
Marietje Schaake:
Like the phone company, or like eBay, you know.  Does eBay know that a stolen car is being sold, for example?  No, it would say, "We're not liable for that. We're simply connecting the seller and the buyer."  So, this exemption has gone, I think, very, very far, and it is in tension with all kinds of business decisions, like the ranking that Professor Persily talks about, but also the takedowns that these companies do.  So, it is way too narrow to look at the problem that social media companies present only through the First Amendment lens. That is the problem. And then there's also the right to privacy, which has long been seen in the United States as a European sensitivity, and I think we are right to be sensitive about it, looking at our history.

    But I see a big awakening in the United States that privacy should be worth something, and so we have to look at the broad spectrum to understand what the threats are that the social media companies –

    
John Donvan:
Okay, well, let me bring in David French.

    
David French:
Let me lodge a really strong objection to the characterization of the world that the First Amendment creates.  Okay?

  • 00:39:02

    I feel like we're creating a bit of a strawman that says, "The First Amendment world is anything goes at any time, no matter how awful and horrible."  But we have 200 years of jurisprudence in the United States of America that has established that there are multiple situations where there are categories of conduct and behavior that are not protected by the First Amendment, such as targeted harassment, such as libel and slander and defamation.  We have viewpoint-neutral decency rules that govern the airwaves that are constitutional to prevent children from seeing pornography on broadcast television. So, when we're talking about the First Amendment, we're not talking about a strawman First Amendment here. We're talking about the world we understand and experience, a world that protects people from harassment, that protects people from invasion of privacy, that protects people from libel, slander, and defamation.  The cardinal virtue of the First Amendment is not "anything goes no matter how much of a nightmare it is."

  • 00:40:04

    It is viewpoint neutrality. It is that the government is not discriminating amongst viewpoints to determine which viewpoints are going to be privileged and which viewpoints are going to be suppressed.

    That’s the cardinal rule of the First Amendment. There are time, place, and manner restrictions; there are anti-harassment regulations. But viewpoint neutrality is a cardinal rule, and that’s what the social media companies struggle with so darn much.  It took a village of people at Facebook, according to a recent Vanity Fair article, to decide whether feminists could argue, “Men are scum.” Okay? I’ll tell you, I don’t care. If they want to say men are scum, I might either engage, or I might block them. But the idea that we’re going to trust to Silicon Valley executives the determination of how feminists make their case is — I believe that is what is repugnant to the First Amendment.

    
John Donvan:
Men are scum?  I’m wondering if we’re going to debate that at some point [inaudible].

    
[laughter]

    
David French:
Resolved.  

  • 00:41:02

    
John Donvan:
Nate Persily, I want to pick up from the point that your opponent has made.

    
Marietje Schaake:
[unintelligible].

    
John Donvan:
Sorry?

    
Marietje Schaake:
[unintelligible].

    
[laughter]

    
John Donvan:
You want to be in that debate?  Is that what you’re saying? 


    Marietje Schaake:
[unintelligible]. 


    [laughter]


    John Donvan:
You're signing up [unintelligible].  Nate, I want to pick up a point that David is elaborating on, the way that they're describing what they mean by the First Amendment, and he's basically making the point that the First Amendment just doesn't mean anything goes, because you can — under the First Amendment you can still sue somebody for libel; you can jail somebody for inciting violence; you can jail somebody for harassing — for harassment, et cetera.  So, it's not anything goes. But the core point, they're saying, is this notion of viewpoint neutrality, that the First Amendment can't tell anybody that their views — their political views, their religious views or ideological views — are — don't have a place. Can you respond to that? Because, in fact, is Facebook not in the position, as it tries to govern its space, to be telling people certain views aren't — don't have a place there?

    Nathaniel Persily:
So, you know, the strategy that I think they’re taking is to narrow the First Amendment to basically 10 percent of First Amendment [inaudible] –


    David French:
Not to the actual First Amendment.

  • 00:42:05

    Nathaniel Persily:
Well, no, take the — let’s just talk about anonymity, because that part, I think, really kind of crystallizes the issue, right?  This is an area where the government has to respect anonymous speech. In fact, the Federalist Papers, right? Written not too far from here, and Publius as the author.  Right? You know, proud tradition of anonymous speech. It is right for the government to respect anonymity, but it’s anonymity which gives us the bot problem that Marietje talked about.  It gives us the foreign interference in elections problem. It gives us the unaccountable hate speech problem online. Okay?

    Now, again, you can clothe yourselves, as I'm trying to do, in the First Amendment and say that this should be the restriction on government, and in fact, that social media companies, if they want to let everything go, they can, or, just to adopt the First Amendment, they could.  But the point is different — what the First Amendment provides, right, is that each one of these companies, if they want to decide that, hey, we're going to have a different type of environment on our platform, we're not going to allow foreign interference in elections because we're worried about what the impact is going to be.  We're not going to allow sort of sexualizing of children and the like.

  • 00:43:07

    By the way, this is really — and if I could, just to give you a real concrete example that I just learned about, Reddit.  

    Reddit, which is one of the most libertarian, anything-goes kinds of platforms, has adopted a rule not about child pornography, it’s about, you know what?  On our platform, you can’t sexualize children because what they found was that there were these communities that were developing on Reddit where there were perfectly sort of legal photos, but then the way people were talking about it, right, was really, really scary, right?

    And to be honest, if a platform wants to allow that kind of speech, they have the right to.  But they also have the right to make a decision that that's not appropriate on the platform.

    John Donvan:

    I want to put a question to Corynne.  We've been talking about offensive speech for the most part.  But there's another thing that's out there, and that's this thing called fake news.  That became very, very prevalent and problematic in 2016, and actually before that. And it's — I mean real fake news — I can't believe I'm saying, "Real fake news."

    [laughter]

  • 00:44:00

    Stories that are put out there whose authors know that the information is false, not the news that you don't like, but the stuff that's false.  How does the First Amendment issue work into that? We're not talk — you know, does that content neutrality come into play there?

    Corynne McSherry:

    Well, I think that the best way to approach — there are actually a lot of people who are trying to do research right now to figure out what's the best way to get at the fake news problem.  It is not obvious to me that asking Silicon Valley to solve it for us is the right path forward, any more than asking Silicon Valley to solve the many, many other problems that we have.

    Just, if you let me digress just for a second, I notice that Pinterest is now taking down content that’s part of the sort of anti-vaccination movement.  It seems to me that if you’re worried about parents not vaccinating their children, maybe you shouldn’t look to Pinterest to fix that problem for you. Maybe there are more effective ways of getting at that issue.  

    And I feel the same way about a lot of the concerns that we have around protecting our elections.

  • 00:45:01

    If we're worried about protecting democracy and protecting elections, maybe we should be thinking about things like, I don't know, gerrymandering or any number of millions of other things that are affecting our elections, and perhaps focus a little bit less on this one symptom that we think maybe we can get at by calling Mark Zuckerberg on the carpet and asking him to do more and double down on what is already a completely failed system.

    John Donvan:

    I’d like to go to the audience for some questions now.  Okay, right down front row here.

    Male Speaker:

    David Harrison.  It seems to me the problem is, one, the huge mass of information that you have with social media.  Newspapers or radio, the traditional forms of information, had a limited pool that they could pull from.

    Certainly, there were commercial and maybe other standards to the news, but there was, one, responsibility, and, two, there was little enough that they could pick and choose, because that's all they could publish.

  • 00:46:02

    Here, I don’t understand, really, how Zuckerberg or any of the major platforms can actually —

    John Donvan:

    Yeah.

    Male Speaker:

    — pick and choose.  So I’d like you to talk about that because it is so vast.

    John Donvan:

    Let me bring that question to the side —

    Male Speaker:

    So it becomes — it becomes almost an impossible — either way, it’s impossible.

    John Donvan:

    Let me bring that question to the side arguing against the resolution, because it also picks up on something Corynne McSherry said at the beginning, which is that it's kind of impossible to do.  There's just so much out there that you couldn't possibly keep up with everything that needed it, and make the judgment calls, being able to read, first of all, intent and outcomes and consequences.  So take that on, Marietje. How — if these companies are going to make these choices, how do they do it, and do it well?

    Marietje Schaake:

    Well, we need much more scrutiny on what the companies are doing already.

    John Donvan:

    But let’s say you have that.  How does it work? I mean, is there — is there an army of 100,000 people checking?  Is it algorithms? How does it actually happen?

  • 00:47:03

    Marietje Schaake:

    Call me a pacifist, but I don't like to think about armies.

    John Donvan:

    Legions.

    Marietje Schaake:

    I think —

    John Donvan:

    — [unintelligible] military also.

    Male Speaker:

    A mob, a mob [unintelligible].

    Marietje Schaake:

    So, but what are the main selectors of what information comes first?  This was the gentleman’s question as well. And what information shoots to the top or goes down to the bottom is determined by algorithms.  These algorithms are extremely influential. I’m the daughter and the sister of a doctor, and I recently saw a sign that I thought was very clear in explaining this, which was in a doctor’s office, saying, “Please don’t confuse your Google search with my medical degree.”


    [laughter]


    Marietje Schaake:
And unfortunately, the consequences of these searches are actually not that funny.  There are people who think that by drinking olive oil or doing some kind of rain dance they will be cured of cancer, even though they are very, very ill.

    So, I believe that the way in which information is ranked and the deeper impact on our democracy, on our rights — constitutional rights, universal human rights — needs to be scrutinized much more, so the oversight over the algorithms, in order to even know what the decisions are that they’re making –

  • 00:48:15

    John Donvan:
Okay. 


    Marietje Schaake:
– and whether the intended and unintended consequences are not causing unacceptable harm. 


    John Donvan:
So, part of the answer, then, is that algorithms need to be [unintelligible] –


    Marietje Schaake:
[unintelligible]. 


    John Donvan:
– and watched over. 


    Marietje Schaake:
Yes. 


    John Donvan:
And someone needs to be doing the oversight.  Okay, [unintelligible] –


    Marietje Schaake:
But there [unintelligible]. 


    John Donvan:
– [unintelligible] back to your point. 


    Marietje Schaake:
Like the regulator.   


    John Donvan:
Corynne, [unintelligible]?


    Corynne McSherry:
I think more transparency would be fantastic.  I could not agree more. But I don’t think it changes the outcome of this.  What I do think — what I would say is that rather than — in answer to your concern, one of the things that was great about the internet is that we didn’t have so many gatekeepers, and that was actually very important and continues to be really important that you don’t have just a few gatekeepers making decisions about what you do and do not see.  But picking up on what David said, maybe part of the answer is to ask the social media companies that do have more money than God to take some of that, and rather than investing in an army or a mob or whatever of content moderators that are then going to be psychologically damaged, take that energy and invest it in user tools, in empowering the users to control their own internet experience rather than taking the power unto themselves.

  • 00:49:24

    Male Speaker:
So, my question is for the pro side.

    So, you talked about effects of social media on elections, and you talked about [unintelligible] — I think the term you used is the million other things — gerrymandering, what-have-you.  So, it seems that you disagree with your opposition in regard to the effects that social media have on the democratic process. It seems that they're trying to maximize it, and you're trying to minimize it.  So, I guess [unintelligible] ask, to what extent do you think social media has an impact on the democratic process and the way people cast their ballots?


    John Donvan:
Can I ask to rephrase your question to make it more on point, and ask each side to take this on?

    Does their vision of the way things want to — should go actually enhance or diminish democracy?

  • 00:50:08

    So, would you say that the First Amendment approach you’re suggesting would be an enhancement to democracy, and why?  


    David French:
I think it’s an enhancement to democracy for –


    John Donvan:
Are you good with my rephrasing your question that way?  Thank you. Thumbs-up, thank you.


    David French:
I think it’s an enhancement to democracy for a couple of reasons.  Now, I will say, and I will acknowledge, that social media is a divided place.  America is a divided place. It was getting very divided before social media, and it’s continuing to be divided.  But the fundamental reality is that censorship itself is extraordinarily divisive. Censorship itself generates intense disagreement.  Censorship itself can also even generate violence. The problem is when we talk about the negative effects of free speech — there has been bad political free speech forever.  I mean, we act as if bad political speech is just now creating problems.

  • 00:51:09

    I mean, they made a musical about a vice president shooting a former secretary of the Treasury in a duel.

    I mean, this is something that’s been going on for a long time, and the answer is to empower the user, this [unintelligible].  It is not to turn to Silicon Valley and say, “Save me from fake news.” It is not to turn to Silicon Valley and say, “Save my delicate eyes,” when I can press a button and save my eyes myself. This is one of the fundamental problems. When we turn to Silicon Valley and say, “Save me,” what we almost always mean is “punish them.”  That’s what we tend to mean. 


    John Donvan:
All right — 


    David French:
Looking to somebody else to fight our political battle for us, and that is not Silicon Valley’s place. 


    John Donvan:
So, I — rather than responding directly to what he just said, I want you to respond to the question.  Nate, the question is, the vision, as you lay it out, where these companies would be making more of these choices that your opponents find offensive — would that be an enhancement or diminishment of democracy?

  • 00:52:05

    Nathaniel Persily:
Again, they make these choices no matter what. 


    John Donvan:
But what’s the answer to the question?


    Nathaniel Persily:
Well, the answer is that — 


    John Donvan:
It’s irrelevant, are you saying?

    Nathaniel Persily:
No, but the First — they cannot respect the First Amendment in the way that they’re proposing here because they are making content-based decisions.

    John Donvan:

    I think the nature of the question is: if we’re in a world where Facebook is saying you can’t be a Holocaust denier on our site or you’re off, you can’t be offensive against certain minority groups or you’re off, is that an enhancement to the democracy or a diminishing [unintelligible]?

    Nathaniel Persily:

    Well, it’s important to understand that there’s more than just the American democracy at stake here, right?  There are different democracies around the world that have different values. And so, you know, my personal view is, look, I do take a more libertarian approach to what I want to see on my feed.  But you know what? I go into those dark corners of the internet and I see what people are saying. And you have that right. All this user power that he’s talking about, you have that right right now.

    John Donvan:

    Okay, if —

    Male Speaker:

    [unintelligible] Facebook.

    John Donvan:

    If the law tells Facebook that they have to shut down a particular speaker the government doesn’t like, does Facebook go along with that?  And is that an enhancement to democracy?

  • 00:53:08

    Nathaniel Persily:

    Sometimes yes, sometimes no.  And that is an extremely difficult decision for them, right?  So Holocaust denial, right, they will take down. Actually, in Egypt, they wouldn’t.  And this is — you know, they have really difficult choices that they have to make in different situations.

    Marietje Schaake:

    Well —

    John Donvan:

    Marietje.

    Marietje Schaake:

    I think the example of Egypt is a very important one, because this is not only about what social media companies are doing and not doing.  This is about the rights that protect people. And people in countries where the rule of law does not apply are not protected. Yes, they are subject to all kinds of information that’s being shared.  Now, I believe very much that sharing more information and having more freedoms is good for democracy, but we should not confuse those rights with the behavior of those companies.

    And I feel like that is not clear enough in the arguments that the other team is making.

    I would like to see how social media companies would then actually apply the First Amendment, beyond the point that people might be empowered, because I personally don’t think we can put all the responsibility on individuals to understand the terms of use of all these tech platforms, terms that would take hours, if not days, to read, for nonlawyers, 16-year-old children, and my 80-year-old neighbor alike.  I mean, do you understand what you’re saying yes to?

  • 00:54:22

    John Donvan:

    So your opponents are arguing that in certain countries, what you’re talking about just can’t fly, because they have a different tolerance for free speech.  What do you do about that?

    David French:

    Well, you know, look, the default position of an American company, a social media platform: the founding purpose of these companies is to provide platforms for people’s expression.

    Marietje Schaake:

    No, it’s to sell ads.

    David French:

    This is part of the founding purpose of many of these organizations.  That should be their default position. Now, if a law in another country prohibits them from operating the platform that way, then they have a choice to make.  

  • 00:55:02

    Do they comply with the law, or do they pull out of that country? And we’re talking about a lot of these laws, guys, as if all the laws out there are laws against Holocaust denial.  No. No.

    If you’re operating in China, if you’re operating in so many other countries, the laws are suppressing dissent in a truly authoritarian and brutal way.  And so some of these companies need to make hard choices and say, “That’s just not compatible with the purpose of this platform,” and pull out.  

    And so we are continually casting censorship — I feel like my worthy opponents here are casting censorship as this beneficial and well-intentioned move.  

    Internationally, censorship is not the same thing as a bunch of administrators at Brown or Dartmouth trying to figure out a way to make their universities more welcoming.  Internationally, censorship is, more often than not, a bunch of authoritarian thugs trying to make sure that they retain a decades- or generations-long hold on power.

  • 00:56:07

    So let’s not sugar-coat what this is.  And I believe Silicon Valley, if it’s going to advocate for, and be a part of, trying to foster and cultivate a healthy democracy, needs to think long and hard before it operates in some of these countries.

    Marietje Schaake:

    Well —

    John Donvan:

    Very quickly.

    Marietje Schaake:

    But just quickly, because they already do, we need to think more about what freedom after speech really means in these countries.  What about being incentivized to share all kinds of political opinions on a platform like Facebook, like young people in Syria, Egypt, Turkey do?  And then they’re on the record, great, because their rights are not protected in those countries. So it is an illusion to think that this is about the companies.  It is about the laws in these other countries.

    And I truly think — and this is — this is really beyond sort of the resolution that we’re discussing — that American companies and possibly also the American people do not appreciate sufficiently what the impact is of these company models in countries where human rights are not protected and people can be dragged to prisons, tortured, or killed.

  • 00:57:11

    Corynne McSherry:

    But either way, should they be making the decision?  

    Do you want Mark Zuckerberg to decide for a Syrian journalist whether she is going to take the risk of putting that video online?  I don’t want Mark Zuckerberg deciding that; I want the journalist deciding it for herself.  Of course, she can take that risk. 


    John Donvan:
We’re going to take one more question.


    Male Speaker:
Thank you.  Something struck me that one of the speakers just said, and that was the purpose of the existence of these companies. 


    John Donvan:
Is to [unintelligible].  I heard that, too. I heard that, too. 


    Male Speaker:
These companies are not there to spread democracy.  They’re not there to spread truth; they’re not there to spread justice. The purpose of these companies is to collect data on users, both upfront and surreptitiously; to sell advertising; to sell products; and to take that data and sell it in other places.  Have these companies not made that clear? They are not [unintelligible] –

  • 00:58:10

    John Donvan:
I’m going to break in to tie the question to our resolution, which is: this gentleman is saying baloney when you say these companies are there in any way to advance democracy and free speech and interconnectedness.  They are there to make money. 


    Corynne McSherry:
So, I want to speak directly to that point, because I think that speaks to the irony of this whole proposition.  We — let’s take it as a given that we don’t trust Facebook or any of these other companies very much. Right? Maybe about as far as we can throw them?  Why? Why in the world, then, do we nonetheless trust them to make good content moderation decisions? Why would we put those things together? Of course, we don’t trust them either way, which is why we have to ask them to adhere to a higher standard and demand it. 


    John Donvan:
Ten-second response from the other side?  Should we trust the company to do it? It sounds like you don’t necessarily –


    Nathaniel Persily:
No, you know, we’re here to bury Facebook, not to praise it. 


    [laughter]

  • 00:59:00

    Nathaniel Persily:
I think — you don’t have to — 


    John Donvan:
I have to say that concludes Round Two of this Intelligence Squared U.S. debate — 


    [applause]

    John Donvan:
– where our resolution is Constitutional Free Speech Principles Can Save Social Media Companies From Themselves.  Now we move on to Round Three, and Round Three will be brief closing statements from each debater in turn. Once again, the resolution: Constitutional Free Speech Principles Can Save Social Media Companies From Themselves.  Making her closing statement, Corynne McSherry, legal director at the Electronic Frontier Foundation. 


    Corynne McSherry:
Thank you.  My job at the Electronic Frontier Foundation is to fight for your digital rights, and there’s a reason I do that job.  And that’s because I’m still a little bit of an idealist about the internet. Now, I know that a free and open internet — well, it’s never been fully free, and it’s never been fully open, so that’s an ongoing project.  But still the internet represents an extraordinary, extraordinary idea that anyone with a computing device can connect to anyone else around the world to speak, to be heard, and to learn.

  • 01:00:09

    Private censorship is fundamentally incompatible with that idea.  But if the social media platforms are going to insist on putting themselves in the role of judge and jury, drawing lines between what speech is okay and what speech is not, then it seems to me that, at the very least, they could be looking to the work of judges who have struggled for years to figure out exactly how to draw those lines for themselves and for the nation.  Again, what we are saying here is they should look to First Amendment principles to guide their conduct.

    My opponents have said, “I wish that they would apply the First Amendment. They never have.”  Well, exactly. Maybe they should start. Maybe they should start. Maybe they should make that choice.

    Just a few months ago, in the Packingham case, the Supreme Court observed that social media can provide the most powerful mechanism available to a private citizen to make his or her voice heard.  

  • 01:01:07

    That’s extraordinary.

    Now, we’re here in the Constitution Center, a very special place.  It seems to me that in this place, in this center devoted to Constitutional principles, it would be very strange for us to decide that powerful corporations shouldn’t respect those principles when they are deciding whose voices will be heard and what those voices can say.  I urge you to support the motion. Thank you.

    John Donvan:

    Thank you, Corynne McSherry.

    [applause]

    And that motion again: “Constitutional free speech principles can save social media companies from themselves.”  Here making his closing against the resolution, Nate Persily, professor at Stanford Law.

    Nathaniel Persily:

    So, an old law professor once said that when the facts are on your side, you pound the facts.  When the law is on your side, you pound the law. When neither is on your side, you pound the table, right?

  • 01:02:04

    [laughter]

    I kind of think that our opponents are pounding the table here a little bit, right?  To bring up the specter of China, to bring up the specter of these censors, as if that is what we are arguing for, is really misrepresenting the argument, okay?  

    The facts and the law are on our side.  So if you look at the facts of what is happening out there, the internet is this incredible, liberating technology that allows people to talk to anyone around the world, right?  It also has these deep, dark spaces. You can choose what space you want to be in. The question is, should these companies be serving up this kind of information to you at the top of your news feed, at the top of your Twitter feed and the like?

    Now, again, you can choose to go into Gab, you can choose to go into Twitter.  

    David French is wrong that each one of these platforms has said that what they’re about is providing free speech opportunities for every person around the world and the like.  That actually is Twitter’s principal position, that they are trying to allow individuals to broadcast everywhere around the world.

  • 01:03:04

    And you know, for that reason, they have an anonymity policy.  Ten to 15 percent of their accounts are bots and the like. Facebook actually has a different mission; so does Google, right? These companies, they decide, you know what? We’re going to build transnational safe communities and the like, something along those lines.

    Now, it’s not as if the resolution forces us into the position of saying, yes, Mark Zuckerberg should censor Republicans, right?  That’s not what we’re arguing here. We are simply saying that the same rules that apply to governments are not the rules that should apply to platforms.  And so that’s why I focused in the rebuttal on anonymity, right? We all agree that anonymity is Constitutionally protected. But if you are going to get at any of the problems that we all agree are happening online, whether it’s foreign election interference, hate speech, bots that are essentially polluting the marketplace of ideas, you cannot respect the Constitutional right of anonymity in online conversations.

    The final point, and one that I think Marietje made very well, is that you cannot just think of these as American platforms with an American audience.  80 percent of Facebook users are outside the United States. It’s wrong for this company to then export its values to the rest of the world.

  • 01:04:14

    John Donvan:

    Thank you, Nate Persily.

    [applause]

    Resolution one more time, “Constitutional free speech principles can save social media companies from themselves.”  Making his closing for the resolution, David French, senior writer at National Review.

    David French:

    So let’s be really clear here.  My worthy opponent said he came to bury these companies, not to praise them.  It is a very odd method of burying them to grant them greater power over your speech, to grant them greater authority over the reach of your words.  We’re talking about companies that, in many ways, have demonstrated their untrustworthiness not just with your privacy, but with your voice. That is not burying the company.  That is empowering a company.

  • 01:05:00

    If you vote “no” on this resolution, if you vote “no” on this resolution, you’re voting yes to a committee somewhere — no, that’s best case.  

    Real world, you’re voting yes to a person who’s in a cubicle somewhere halfway across the country looking at pictures and deciding with a click of a mouse, “yes,” “no,” after a PowerPoint training in community standards who’s maybe seen enough weird stuff that they’re now a flat-earther.  That was actually a report recently. Some of these content moderators are now flat-earthers and 9/11 truthers, and they’re deciding whether you get to speak.  No. No. The bottom line is the person who decides whether you speak or not should be you. You do — you have the power to see what you want to see on social media.  You should also have the power to say what you want to say on social media, and if you don’t like what’s out there, the answer is to mute.

  • 01:06:05

    The answer is to block, or, God forbid, shut down the app or put down the phone.  Now, I can’t do that as a journalist. I’m just hopelessly addicted.

    But that is something that you have in your power. The last thing that you should do when troubled by speech is to turn to Silicon Valley and say, “Help me, Mark Zuckerberg, you’re our only hope.”  And that’s why you should vote for viewpoint neutrality and say yes to this motion.


    [applause]


    John Donvan:
Thank you, David French.  The resolution again: Constitutional Free Speech Principles Can Save Social Media Companies From Themselves.  Our final speaker will be speaking against the resolution: Marietje Schaake, Dutch politician and member of the European Parliament. 


    Marietje Schaake:
I think from everything we’ve heard from both sides, it is safe to conclude that technology is not neutral, that no one wants censorship, at least not here on either one of the teams, and that a minister of truth, so to say, would be a very bad idea. And I appreciate your concerns about that even more with a president who calls out journalists as being the fake news media on a daily basis.

  • 01:07:18

     

    But there are many places in the world where we should be very concerned about the notion of a minister of truth. So then why would we effectively accept that someone like Mark Zuckerberg, or the CEOs of the other social media companies, is in a position of power to operate, with their business models, like a minister of truth? And this goes far beyond free speech and the First Amendment.  We’re talking about the collection of data, the selling of data, the micro-targeting of people with ads based on categories such as being a fan of, or interested in, Josef Goebbels, or having searched for information about depression and suicide.

  • 01:08:02

    So, what I find baffling and unacceptable is that these companies, on the one hand, have made billions and billions of dollars by perfecting the targeting of ads to people on the basis of ever more precise categories, and yet they claim, “How can we be part of preventing conspiracies and all kinds of lies that jeopardize public health, for example, from spreading around, or child pornography, for that matter?”  The reason why they do not want to engage in it is that it would make them liable, and if they were liable, their entire business would be over. And then there is the idea that it should be all the responsibility of the individual — anyone among you — to decide what you want to see, what you want to engage in, and what you don’t want to engage in, down to what we heard about the responsibility of the Syrian journalist to engage or not.  I believe that people are not informed enough currently to know what they’re saying yes to, and in fact, these companies have violated agreements by collecting and selling data invisibly all the time.

  • 01:09:08

    So, I think it is safe to say that people don’t know enough to make informed decisions; that this is not only about free speech and the First Amendment, but about much more; and that certainly we can conclude that the First Amendment cannot save social media companies from themselves.  So, I urge you to vote against the motion. 


    [applause]


    John Donvan:
Thank you, Marietje Schaake.  And that concludes our closing statements and Round Three of this Intelligence Squared U.S. debate.

    I want to thank our debaters.  The thing that we like to do is to show that two sides can disagree with respect, with facts, with argument, with logic, with humor, with charm.  It doesn’t have to be nasty and sarcastic and full of put-downs. You did nothing like that. You were all professionals. You were all really, really interesting.  

    And you all showed respect for what we’re doing here. So I want to thank you for doing all of that.

    [applause]

  • 01:10:01

    So what I do want to do: sometimes when we have a little bit of time while we’re tabulating results, we like to have a little bit of a chat that’s not part of the competition.  This is not competitive. You know, we’re heading into the 2020 elections here in this country. This is an American-focused question, but actually, we would really like an international perspective on it.  We’re in a world where these companies are under enormous pressure to do something about the mess. Everybody agrees that it’s a mess. But what do you anticipate in terms of regulations actually being imposed on these companies by governments abroad, and especially here, because of concerns about the last election being repeated this time around?  And, Nate, I’ll start with you because you’re nodding as you think about this one.

    Nathaniel Persily:

    Yeah.  Well, one of the problems I am worried about is that, because the social media companies have dropped the ball on getting at some issues like foreign interference, the governments are going to overreach, right, and really have an impact on free speech online.

  • 01:11:03

    And we are seeing that around the world, where the governments are then saying, well, Facebook isn’t able to clean up this mess, so therefore they’re going to pass real regulations, whether it’s viewpoint-based discrimination or something else like that. So I am concerned about that.

    You know, at the risk of getting too complicated here, all the different areas of law that regulate the internet intersect with each other.  So whether you’re talking about antitrust, privacy, or these content moderation issues, I actually think Europe may be the tail that wags the American dog here a little bit, and so the decisions that Marietje makes are going to be very important.

    John Donvan:

    What are those decisions, Marietje?  Where are you going with this? Yeah.

    Marietje Schaake:

    Well, we —

    [laughter]

    We also have elections, and the concerns are actually very, very similar.  And what we see is that a number of these social media companies are now coming out with some initial measures, for example, making transparent who is buying political ads.  Well, hallelujah.

    [laughter]

  • 01:12:01

    You know, that should not be rocket science, frankly.  I mean, if you want to put an ad in the newspaper, it should be identifiable as an ad, not as a report.  And if it’s a political message, I believe, especially also here in the U.S., you get some kind of notification that this is approved by the super PAC of whichever side.

    And so, this kind of transparency, I believe, is a first step.  But it doesn’t even begin to touch on the impact of the business models and the algorithms that I mentioned earlier that we don’t know enough about.  

    So I believe algorithmic accountability will be coming, and I also believe that this exemption from liability that I briefly mentioned to you is going to come under a lot of pressure in both the U.S. and the E.U., because it simply doesn’t add up anymore.

    John Donvan:

    Okay.  Take it to the other side.  What do you see coming down the road in terms of [unintelligible] regulation?  Corynne, do you want to go first?

    Corynne McSherry:

    Well, so, a couple of things.  One thing I see most immediately is that a bunch of the major companies, desperate to avoid regulation, are going to be ever more aggressive in their content moderation in the hope that somehow that will stave off the regulators.  

  • 01:13:04

    And I think that effort’s going to fail. But nonetheless, that’s what we’re going to see immediately.

    What I worry about is that we are going to have regulations proposed here and abroad that are going to backfire, not just in the sense that they might be too heavy-handed and shut down certain kinds of speech, but also in that they may be difficult to comply with.  And what that’s going to mean is that only the big companies like Facebook and Google can afford to comply with whatever the new regulations are and put in place all the, you know, filtering mechanisms or whatever it is that they have to.  And that means we’re going to be stuck with only Facebook and Google, you know, for time immemorial. I want the next Google, I want the next Pinterest, I want the next YouTube; I want, you know, a thousand flowers to bloom so that people can really choose between more and less moderated forums.  

    But you can only do that if you actually have those choices out there. 


    John Donvan:
David?

  • 01:14:00

    David French:
Yeah, you know, I think that the danger of government overreach is very big, but I think that one of the problems that we have here is that social media has put human nature on blast.  So, we’re able to see our fellow citizens’ political perspectives more readily and easily than we ever have in the past, and a shocking number of people find it compelling when they see a meme of Jesus arm-wrestling Satan, you know, or when they see a news story that, if you’re somebody who’s closely and carefully engaged in the process, seems to you to be facially ridiculous, yet they find it extremely compelling.  And what this has done is expose a massive problem we have in this country on two fronts: polarization and civic ignorance. And so, social media is not going to fix that, and we keep trying to look to social media to say, “Do something about all the crappy memes.”

  • 01:15:01

    You know, “Do something about this ridiculous news story.” But the problem here is us, more broadly.

    It’s the failure of civic education; it’s the rise of contempt and hate, all of these things that long predated social media.  And that’s why, you know, I feel like the government is going to come in with some of these regulatory efforts and say, “Well, human nature has been put on blast. We’re going to have to tamp it down in some way.” [unintelligible] terrible at that. 


    John Donvan:
All right, giving us a lot to think about for future debates.  Okay, I now have the results, and I want to let you know that voting continues online for people who are watching via livestream.  

    You can vote online, and even if you watch or listen to this debate afterwards on video or on podcast, you can continue to vote online. So, on the resolution, Constitutional Free Speech Principles Can Save Social Media Companies From Themselves, we had you vote twice. In the first vote, 38 percent of you agreed with this resolution; 27 percent disagreed; 35 percent were undecided.

    Those are the first results. Remember, again, it’s the difference between the first vote and the second vote that determines our winner.

  • 01:16:03

    In the second vote, the team arguing for the resolution, Constitutional Free Speech Principles Can Save Social Media Companies From Themselves: their first vote was 38 percent; their second vote, 36 percent. They lost two percentage points. The team arguing against the resolution: their first vote was 27 percent; their second vote was 59 percent.  They picked up 32 percentage points. That makes them the winners: the team arguing against the resolution, Constitutional Free Speech Principles Can Save Social Media Companies From Themselves. 


    [applause]

    John Donvan:
All right, winners.  Congratulations to them.  Thank you from me, John Donvan, and Intelligence Squared U.S.  We’ll see you next time.

    [end of transcript]

    This is a rough transcript. Please excuse any errors.

Pre-Debate

For the motion: 42%
Against the motion: 25%
Undecided: 33%

Post-Debate

For the motion: 42%
Against the motion: 53% (biggest shift)
Undecided: 5%

Breakdown: change in voter behavior

Against the motion (53%): 21% remained against, 15% swung from for, 16% swung from undecided
For the motion (42%): 26% remained for, 4% swung from against, 13% swung from undecided
Undecided (5%): 4% remained undecided, 1% swung from for, 0% swung from against
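The winner, as the moderator explains, is the side whose share rises most between the first and second vote. The sketch below works through that arithmetic in Python, using the live-hall percentages read out in the transcript (which differ slightly from the published figures above); the dictionary keys and variable names are illustrative only, not part of any official tally.

    # Percentages read out in the transcript (first vote, second vote).
    votes = {
        "for":       {"first": 38, "second": 36},
        "against":   {"first": 27, "second": 59},
        "undecided": {"first": 35, "second": 5},
    }

    # The winning team is the side (for/against) with the largest gain in points.
    gains = {
        side: v["second"] - v["first"]
        for side, v in votes.items()
        if side != "undecided"
    }

    for side, gain in gains.items():
        print(f"{side}: {gain:+d} percentage points")

    winner = max(gains, key=gains.get)
    print(f"winner: the team arguing {winner} the resolution")
    # Prints: for: -2, against: +32, winner: the team arguing against the resolution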