Year in Review Part IV
February 12, 2019
As 2018 lies firmly behind us, CDEP Program Director Bastiaan Vanacker takes a look at some of the major digital ethics and policy issues of the past year that will shape the debate in 2019. The first three installments of this overview can be found here, here, and here.
October: A Supreme Court Case Rankles Silicon Valley
As social media companies collectively kicked Alex Jones off their platforms last summer, some critics invoked the First Amendment to label the move an attack on freedom of speech. Others were quick to point out the flaw in this censorship argument: social networks are privately owned enterprises, free to decide what speech to ban from their digital spaces. However, last October the Supreme Court decided to hear a case that could upend this analysis.
At first sight, the case does not seem to have anything to do with social media, as it revolves around the question of whether public-access television networks should be considered state actors. The case stemmed from a pair of videographers who claimed they had been barred from a public-access TV network because it disapproved of the content of their program. While such actions would be perfectly legal if taken by private actors, under the First Amendment government actors may not restrict speech on that basis.
The network in question is privately owned under a city licensing agreement. Reversing a lower court, the U.S. Court of Appeals for the Second Circuit ruled that privately owned public-access networks are public forums because they are closely connected to government authority. As a result, the court held, owners of these forums are government actors bound by the First Amendment when regulating speech.
It is quite possible that the Supreme Court will issue a narrow ruling that applies only to the peculiar ownership structure of public-access TV. However, if the Court were to uphold the decision with a broader ruling that treats such networks as public forums, the consequences for social media companies’ ability to regulate speech on their networks could be significant. In that case, only speech unprotected by the First Amendment could be removed from their networks, and the government could at the same time dictate certain rules for content moderation. While the chances of the Supreme Court issuing a ruling broad enough to have this consequence seem slim, the mere possibility is sure to make this a closely watched case.
November: Neo-Nazi Gets Sued for Online Hate Campaign
What is the difference between incitement and a threat? At first glance, the answer seems straightforward: incitement requires that a speaker tell other people to engage in an illegal action, while a threat requires that a speaker credibly communicate to an individual an intent to harm that individual.
Neither type of speech is protected. Incitement is illegal because it leads to illegal actions that can harm someone’s safety. Threats are illegal because they put people in fear for their physical safety (which is why there is no requirement that the sender intend to carry out the threat). But on the Internet this distinction is not always clear. What if a person posts the home addresses of abortion clinic staff online, crossing out the names of those who have been killed? Does it matter if the poster claims to have wanted to act merely as a record keeper?
Or what if an extremist Muslim group posts the names and work addresses of the creators of South Park after they aired an episode mocking the Prophet Muhammad? Does it matter that the group claims it only wanted to “warn” them, if the message is accompanied by a picture of a slain filmmaker, killed by an extremist after being accused of mocking Islam?
In those instances, the messages are a mixture of threat and incitement. They are threats because they put the intended targets in fear of their lives, but at the same time, the senders do not communicate any intention of their own to commit an act of violence. They merely suggest that others might, or should, commit these acts, rendering them more incitement than threat.
However, ever since Brandenburg v. Ohio (1969), the standard for establishing incitement has been that the speech is “directed to inciting or producing imminent lawless action” and is “likely to incite or produce such action.” This is a high bar to clear, particularly for mediated Internet speech, where speakers are rarely in close proximity to one another and where there is often a time lapse between the sending and reception of a message. An online statement is therefore unlikely to meet the definition of incitement.
Consequently, speech that appears to be online incitement is often treated as a threat or intimidation. Take, for example, the case of Tanya Gersh, a Jewish woman from Whitefish, MT, who found herself in the crosshairs of a “troll storm” by the neo-Nazi site the Daily Stormer. She had drawn the ire of its founder, Andrew Anglin, after the mother of white nationalist Richard Spencer accused Gersh of strong-arming her into selling her property in Whitefish because of her son’s radical politics.
Through the Daily Stormer, Anglin called on his followers to contact Gersh and to tell her what they “thought about her,” resulting in Gersh and her family being bombarded with vicious anti-Semitic hate messages. Some of these messages clearly constituted illegal threats, but they came from anonymous senders, not from Anglin, who had warned his followers not to engage in threats of violence.
Gersh nevertheless sued Anglin for invasion of privacy, intentional infliction of emotional distress, and violations of Montana’s Anti-Intimidation Act. In November, a federal judge denied Anglin’s motion to dismiss the lawsuit on First Amendment grounds. How the Anti-Intimidation Act (essentially an anti-threat statute) is applied in this case will provide further guidance on the applicability of such statutes to these types of online incitement.
December: Tumblr Bans Adult Content
In December, Tumblr’s ban on pornography took effect. The ban was rumored to have been precipitated by the removal of Tumblr’s app from Apple’s App Store over the presence of child pornography on its network. Banning all adult content may simply have been more convenient than policing every account containing nudity for underage subjects. The ban has been criticized because Tumblr was a preferred platform for people interested in less conventional ways of experiencing their sexuality, who used it to express themselves and find like-minded souls.
Even though these users might ultimately find a platform and community elsewhere, the episode once again brought to light the ultimate powerlessness of users against arbitrary content-restricting decisions made by the powers that be in Silicon Valley. Mark Zuckerberg has a suggestion for making this process more democratic and transparent: a Supreme Court for Facebook, in which various stakeholders could be involved in content decisions. While this seems more a thought experiment than a concrete plan, the mere suggestion of farming out this crucial decision-making task illustrates how exasperated social media platforms have grown with the damned-if-you-do, damned-if-you-don’t reality of censoring online content, a dilemma that is unlikely to be resolved anytime soon.
Bastiaan Vanacker's work focuses on media ethics and law and international communication. He has been published in the Journal of Mass Media Ethics. He is the author of Global Medium, Local Laws: Regulating Cross-border Cyberhate and the editor of Ethics for a Digital Age.