Year in Review Part I
January 22, 2019
Looking Ahead by Looking Behind: The Year 2018 in Digital Ethics and Policy, Part One (January–March)
With 2018 firmly behind us, CDEP Program Director Bastiaan Vanacker takes a look at some of the major digital ethics and policy issues of the past year that will shape the debate in 2019.
January: Crucial Court Win for Section 230
Often credited as the law that “made the web,” section 230 of the Communications Decency Act (“section 230”) shields internet service providers from legal liability for what third parties post on their platforms or networks. For example, if a student were to falsely write on a professor review site that Professor Vanacker has a habit of drinking scotch in the classroom, I would not be able to successfully sue the review site. Even if the site owners did not verify the information or refused to take it down after I showed it to be false, section 230 would preclude me from successfully suing them. The only person I could potentially sue would be the anonymous poster (provided I could find out who that is), which might be more trouble than it is worth.
Section 230 was passed in 1996 after lawmakers grew concerned about a court verdict that held the operator of a financial bulletin board responsible for a libelous comment made by one of its users. Because the service’s moderators edited posts, the court reasoned that it exercised editorial control over the content and could therefore be held liable. Lawmakers thought this ruling sent the wrong message: by imposing liability on those who tried to moderate content, it created an incentive not to moderate online communications at all. Consequently, they enacted a Good Samaritan law of sorts that shields intermediaries from legal liability in these instances.
Section 230 stems from the notion that, just as a library cannot be held responsible for a libelous statement contained in one of the publications it houses, internet providers should not be held responsible for what takes place on their platforms. Courts have interpreted the law generously, protecting social media networks and websites from a wide range of civil suits; its protection does not extend, however, to matters of federal criminal law, intellectual property law, and electronic communications privacy law.
Some think the law has gone too far. For example, when a Cincinnati Bengals cheerleader sued TheDirty.com over an anonymous post suggesting that she had had intercourse with half of the football team and had contracted STDs in the process, her lawsuit failed because of section 230. Even though the web site actively solicited this type of unverified gossip and added commentary to the posts, it found protection under section 230 when the 6th U.S. Circuit Court of Appeals overturned a lower court’s ruling.
On the other hand, some have argued that no provision of federal law has generated more wealth and prosperity than section 230. They contend that this law has enabled the tech industry’s boom by allowing companies to dedicate their resources to innovation and development rather than to compliance and legal fees.
But section 230 has been chipped away at; last April, Congress enacted a law that excluded from section 230’s protection certain activities linked to human sex trafficking and prostitution. In other words, sites can now be held civilly and criminally responsible for content posted by their users that promotes or facilitates prostitution and sex trafficking. Critics have argued that this would result in sites shutting down parts of their platforms rather than policing them, and some fear this is the beginning of the end for section 230.
However, while this legislation was pending, section 230 netted an important victory last January when a U.S. district court in California ruled that Airbnb could not be held liable for renters violating their lease agreements by subletting their units on the platform. An L.A. housing company sued the online home rental service because it allowed this practice to continue, profited from it, and refused to reveal the names or addresses of the renters subleasing their units. The court ruled in Airbnb’s favor, granting it protection under section 230.
How the courts will interpret section 230 and whether or not lawmakers will further limit its scope will be crucial policy issues for years to come.
February: Embedding a Tweet Might Violate Copyright
One of many yet-to-be-resolved issues in digital copyright law is whether one can use images from social media platforms in news reporting. Since these images are copyrighted, the question is whether this practice would be covered under fair use. There is not enough case law available to answer this question with certainty.
However, until recently, embedding content was a way to sidestep these copyright concerns. When one embeds content, for instance a tweet, one does not make an actual copy of the tweet. Instead, a line of code is inserted instructing a user’s browser where to locate the image to be displayed, in this case on Twitter’s servers. Since the creator of the site never makes a copy or hosts one on her own server, there is no copyright infringement and the fair use question is never even reached.
From a copyright perspective, this practice of inline linking, where a line of code links to an object or image hosted on another site, was generally considered to be safe, as courts applied this “server test” to copyright questions. As long as you only inline link and don’t make or host a copy on your server, a judge applying the server test would not consider this a “public display” of a copyrighted work.
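To make that distinction concrete, here is a minimal, hypothetical sketch of the two approaches in HTML (the URLs and file names are invented for illustration). Both lines make the same photo appear on a news site’s page, but only the first serves a copy from the publisher’s own server:

    <!-- Hosting a copy: the image file has been copied to the publisher's own server -->
    <img src="https://example-news-site.com/images/photo.jpg" alt="Embedded photo">

    <!-- Inline linking: the reader's browser fetches the file directly from the
         platform's server; the publisher never stores a copy -->
    <img src="https://pbs.twimg.com/media/photo.jpg" alt="Embedded photo">

To the reader the page looks identical either way; under the server test, only the first version counts as a public display by the publisher.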
This test was adopted in 2007 by the 9th Circuit in a case in which a pornographic site sued Google for copyright infringement for displaying images of the site through image search. According to cyberlaw expert Eric Goldman, this ruling, as well as a ruling from the 7th Circuit, “dating back over a decade and with minimal conflicting precedent, led to many settled expectations and the proliferation of in-line linking.”
However, in February, a giant asterisk was added to this general rule, as a New York district court ruled that inline linking of a social media post could constitute an infringing public display of a copyrighted work. The case stemmed from a Tom Brady-related Snapchat post that subsequently made the rounds on Twitter, leading numerous media organizations to embed the tweet in stories they posted about the matter. (The post contained newsworthy information.)
The February ruling merely answered “yes” to the question of whether or not embedding the tweet constituted a “public display.” The court could still rule in later proceedings that this use was protected by fair use or accept other defenses from those defendants who did not settle. However, even if defendants were to prevail on those defenses, this ruling still means that inline linking is much riskier in the states covered by the Second Circuit than was previously thought. In the summer of 2018, an appeal was denied, leaving the applicability of the server test in instances of embedding social content very much in doubt.
March: Can the President Block Twitter Users?
In March, oral arguments were held in a case asking whether the First Amendment allows the president to block Twitter users from his account. The case was brought by seven citizens who were blocked by President Trump after replying critically to his tweets. Represented by the Knight First Amendment Institute, the plaintiffs argued that the space on Twitter where users can interact with the president’s tweets is a government-controlled space. Two months later, the U.S. District Court for the Southern District of New York issued an opinion agreeing with the plaintiffs.
Applying the forum doctrine, the court argued that the part of the president’s account where people can interact with the president is a designated public forum. It is a long-standing principle in American jurisprudence that government officials cannot deny access to a forum designated for public discussion on the basis of viewpoint. The case is currently being appealed.
Government officials taking to privately owned social media platforms to communicate with their constituents should realize that the nature of social media allows for feedback, reactions and criticism. And if government officials open themselves up to reactions, they cannot then selectively close themselves off from them.
Government officials would probably not be held responsible if a social media platform deleted a reaction to one of their posts because it violated the platform’s own policies, even if that speech would be protected by the First Amendment. Since government officials do not exercise control over a platform’s acceptable use policies and their enforcement, they cannot be held accountable for a platform’s decision to delete a reaction or an account. But in this case, Twitter did not have a problem with the tweets; the president did.
There are currently a number of lawsuits stemming from public officials blocking people from their social media accounts. Public officials should realize that they cannot use their accounts as a pulpit to preach to the converted without expecting some amount of criticism and dissent.
At the center of cases like this one is the question of who truly controls these accounts. The government entity’s control will always be limited, and can easily be overridden by the social media platform. Whether this degree of control qualifies these accounts as (designated) public forums will be an important question for courts to answer.
Bastiaan Vanacker's work focuses on media ethics and law and international communication. He has been published in the Journal of Mass Media Ethics. He is the author of Global Medium, Local Laws: Regulating Cross-border Cyberhate and the editor of Ethics for a Digital Age.