Social Media: Free Speech or Censorship?

President Trump continues to threaten Twitter after the platform fact-checked some of his tweets.
So far, it is not clear how Twitter, a private company with 330 million accounts, plans to police its entire platform for accuracy rather than just the president's tweets.
And is the fact-checking legal? Exactly who is making the “true” or “false” decisions, and what are their biases?

The American Bar Association recognizes that social media is a hot legal battle ground.

Retired U.S. Supreme Court Justice Anthony Kennedy, in a 2017 opinion in a First Amendment case, called the cyber age a revolution of historic proportions, noting that “we cannot appreciate yet its full dimensions and vast potential to alter how we think, express ourselves, and define who we want to be.”

Kennedy said cyberspace, and social media in particular, was among the “most important places … for the exchange of views.” He compared the internet to a public forum, akin to a public street or park. Although Justice Samuel A. Alito concurred in the opinion, he also chastised Kennedy for his “undisciplined dicta” and “unnecessary rhetoric.”

This hot battleground raises serious concerns about the future of free speech, including attempts at censorship by government actors critical of comments on social media, the shifting standards private platforms apply when censoring online expression, and the rise of hate and extremist speech in the digital world.

One issue involves government officials blocking or removing critical comments online. In a sense, this violates the core First Amendment principle that individuals have the right to criticize government officials. In the landmark free-press/libel decision New York Times Co. v. Sullivan (1964), Justice William Brennan wrote that there is a “profound national commitment that debate on public issues should be uninhibited, robust, and wide-open, and that it may well include vehement, caustic, and sometimes unpleasantly sharp attacks on government and public officials.”

More recently, the case of Knight First Amendment Institute at Columbia University v. Trump presents these issues in pristine form. President Donald Trump and a staffer named Daniel Scavino were accused of violating the First Amendment by blocking several people from Trump’s engine of self-expression, his personal Twitter account, @realDonaldTrump. The plaintiffs’ tweets were not vulgar, but they criticized the president and his policies. For example, one of the plaintiffs was blocked after tweeting: “To be fair, you didn’t win the WH: Russia won it for you.”

In May 2018, Judge Naomi Reice Buchwald of the U.S. District Court for the Southern District of New York ruled that the president violated the blocked users’ First Amendment rights by engaging in impermissible viewpoint discrimination. She reasoned that while Twitter is a private company, Trump and his staffer exercised government control over the content of the tweets by blocking users who criticized the president in the interactive space on Twitter. The judge determined that this interactive space was a designated public forum and that the president could not discriminate against speakers because of their viewpoints.

The government appealed the decision to the New York City-based 2nd U.S. Circuit Court of Appeals. In its appellate brief, the government argues that the district court decision is “fundamentally misconceived” in part because “the @realDonaldTrump account belongs to Donald Trump in his personal capacity and is subject to his personal control, not the control of the government.” In other words, the government contends that Trump’s Twitter feed is not the speech of the government and thus not subject to First Amendment dictates.

On the other hand, the Knight First Amendment Institute at Columbia contends that the interactive space on Twitter, where individuals can tweet responses to the president’s expression, represents a designated public forum—a space the government has intentionally opened up for the expression of views. The Knight Institute contends that Trump and Scavino violated the most fundamental of all free-speech principles: that the government cannot engage in viewpoint discrimination against private speakers.

“The case is a game-changer for both free speech and the right to petition the government,” says Clay Calvert, director of the Marion B. Brechner First Amendment Project in the University of Florida College of Journalism and Communications. “The district court’s ruling highlights not only the importance of online social media platforms’ forums for interacting with government officials, but also confirms that when government officials use nongovernment entities like Twitter to comment on policy and personnel matters, the First Amendment comes into play.”

“I think it is potentially very important,” agrees constitutional law expert Erwin Chemerinsky, dean of the University of California at Berkeley School of Law and a contributor to the ABA Journal. “It is not just about Trump, but ultimately about the ability of government officials at all levels to exclude those who disagree with them from important media of communication.”

The decision is also important because there are countless disputes involving state and local government officials who have blocked users or removed comments that are critical of them. In April 2018, Maryland Gov. Larry Hogan agreed to a settlement with the American Civil Liberties Union of Maryland in a federal lawsuit over his blocking of those who criticized him on his Facebook page.

Under the settlement, the government admitted no liability but did agree to a new social media policy and the creation of a new “Constituent Message Page” that allows individuals to post their political expression, even if critical.

The blocking of critical speakers from Twitter feeds or comment sections of government pages is far from the only First Amendment issue on social media. The internet has led to a cottage industry of defamation lawsuits arising from intemperate online expression.

For example, a federal district court in California recently reasoned that the president did not defame Stormy Daniels, an adult film actress who claimed she engaged in an intimate relationship with Trump in 2006. Daniels, whose real name is Stephanie Clifford, says that in 2011 she faced threats from an unknown man who said she must leave Trump alone. Daniels worked with a sketch artist to produce a picture of the man after Trump was elected president. Trump tweeted: “A sketch years later about a nonexistent man. A total con job, playing the Fake News Media for Fools (but they know)!” Daniels sued the president for defaming her, but the U.S. District Court for the Central District of California in Clifford v. Trump (2018) dismissed the suit, explaining that Trump’s tweet was protected rhetorical hyperbole rather than a defamatory statement of fact.
Private censorship

Much of the censorship on social media does not emanate directly from the government. Often, the censorship comes from social media companies that police content pursuant to their own terms-of-service agreements. Outcries of political censorship abound. Recent controversies include radio show provocateur Alex Jones being removed from Facebook, YouTube and Apple for engaging in hateful speech; Facebook at least temporarily removing a newspaper’s online postings of sections of the Declaration of Independence; and uneven or inconsistent application of hate speech removal policies.

President Trump entered the arena—via Twitter, as he usually does—accusing Google of censoring conservative speech. He tweeted: “Google & others are suppressing voices of Conservatives and hiding information and news that is good. They are controlling what we can & cannot see. This is a very serious situation-will be addressed!”

If such private entities are not subject to First Amendment constraints, what should be the obligation of social media platforms when it comes to regulating private expression, particularly expression that advocates hate or includes calls for violence?

These issues are becoming more important as hate and extremist speech proliferates on the internet.

“While I don’t believe any hard numbers exist, with the ever-increasing number of available online platforms, it seems very likely that hate speech has risen significantly over the past decade,” says Shannon Martinez, program manager for Free Radicals Project, a group that provides support for those seeking to leave hate groups. “The internet is the main recruiting ground for most of today’s violence and hate-based groups.”

Under the First Amendment, hate speech is a form of protected speech unless it crosses the line into narrow unprotected categories of speech, such as true threats, incitement to imminent lawless action, or fighting words. Controversy abounds over what actually constitutes hate speech. In her recent book, Hate: Why We Should Resist It with Free Speech, Not Censorship, Nadine Strossen writes that hate speech “has no single legal definition, and in our popular discourse it has been used loosely to demonize a wide array of disfavored views.”

But this raises the question of whether such private entities will do more to respect freedom of expression while still removing the speech that genuinely needs to be removed. “There is no one-size-fits-all answer to this question because these platforms operate differently and with different commitments to a transparent process for users,” says Suzanne Nossel, CEO of PEN America, a human rights and First Amendment advocacy organization for writers. “But we are concerned about the discretion that exists at the hands of these platforms, and we advocate for greater transparency for the public to understand how Facebook, Twitter, Instagram, etc., make decisions that affect their individual online expression.”

What is clear is that Kennedy was correct when he talked about the importance of the cyber age on free expression. “The online world and social media have drastically changed the way we engage with each other and how we consume information,” Nossel says. “The law will necessarily begin setting boundaries to define acceptable forms of online expression, and it’s already doing that to some degree.”