US Supreme Court – Oral Argument
On February 21, 2023, the US Supreme Court heard oral argument in Gonzalez v. Google, a case in which the plaintiffs seek to hold Google and other companies liable for recommending terrorist content on platforms such as YouTube.
From a legal perspective, this case matters because it raises questions about the direct or indirect liability of internet platforms such as Google, YouTube, and Twitter for distributing harmful user-generated content online, in particular content related to terrorism.
The Supreme Court's upcoming decision will determine whether the interpretation of the legal provisions regulating internet platforms needs to be narrowed. It may reshape the relationship between internet platforms as hosts and their users, and its impact on free speech online may extend beyond the US.
Questions, Questions and More Questions
The hearing, which lasted over two hours, raised several questions for which clear answers are not yet available. These questions point to legal measures that could be taken to reduce violent and harmful online content, but there is uncertainty about which direction to take. Here are some of those questions:
- Are internet platforms liable for the algorithms they use to sort and present content? Are they liable for recommending a video or a book through an algorithm, even without making any further explicit comment on it (“Go watch this ISIS video! It’s the greatest of all time!”) and without otherwise promoting or repeating anything about that video or book?
- Is an algorithmic recommendation neutral? What about cases where videos and articles produced by ISIS are ranked higher than articles on ISIS published by third parties that take a far more critical view of the terrorist organization? Where is the line between hosting user content and amplifying or encouraging its consumption?
- Should internet platforms be treated as publishers of content created by third parties? Unlike internet platforms, a newspaper publisher who decides to put offensive and harmful content on page one of the newspaper can be held liable: publishers are legally responsible for the content they print, whereas online platforms are relieved of this liability.
- Should internet platforms be legally required to remove content deemed harmful when a court so orders? Would such a measure mean the end of free speech online, or the beginning of something else that we cannot yet define?
Current Legal Framework
Under Section 230, a provision of the Communications Decency Act of 1996, internet platforms cannot be treated as the publishers or speakers of information provided by their users. Section 230 embodies the idea that the internet should be a place where users feel free to share ideas, and that internet platforms, as intermediaries, should not be held responsible for the actions and statements of their users. Thus, internet platforms cannot be held liable for content created and developed by others but available on their platforms, even when that content is harmful.
In essence, any claim against Twitter, for example, over defamatory tweets posted by a user of the platform will be dismissed. Why? Because Twitter is merely the host, not the creator of the content. And Twitter cannot be forced to remove those tweets.
Section 230 protects not only big platforms such as Google, YouTube, and Twitter, but also any website, small blog, or mobile app against harmful posts, photos, and videos that users share on their services.
The law has also been read as shielding internet platforms from legal responsibility for targeted recommendations made through algorithms. But are targeted recommendations of ISIS videos covered by Section 230?
Case Background
Nohemi Gonzalez, a 23-year-old U.S. citizen, was studying in Paris, France, in the fall of 2015. On November 13, 2015, while Nohemi was having dinner with friends at a café, three ISIS terrorists fired into the crowd of diners, killing her. The shooting was part of a broader series of attacks ISIS carried out across Paris that day (the Paris Attacks), which included several suicide bombings and mass shootings, among them the massacre at the Bataclan theatre. The day after the attacks, ISIS claimed responsibility in a written statement and a YouTube video.
The Gonzalez family argued that YouTube, which is part of Google, aided and abetted the terrorist group, because its algorithms “recommended ISIS videos to users,” which helped spread its message. The Gonzalez complaint alleges that YouTube “has become an essential and integral part of ISIS’s program of terrorism,” and that ISIS uses YouTube to recruit members, plan terrorist attacks, issue terrorist threats, instill fear, and intimidate civilian populations.
They also allege that Google uses computer algorithms to match and suggest content to users based on their viewing history, that in this way Google has “recommended ISIS videos to users” and enabled users to “locate other videos and accounts related to ISIS,” and that by doing so, Google assists ISIS in spreading its message.
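For readers curious how this kind of history-based matching can work in principle, here is a minimal, purely illustrative sketch. The catalog, tags, video IDs, and scoring below are all hypothetical; this is an assumption-laden toy, not a description of YouTube's actual recommendation system.

```python
# Purely illustrative sketch of history-based recommendation.
# The catalog, tags, and scoring are hypothetical; this is NOT how
# YouTube's real system works.
from collections import Counter

# Hypothetical catalog: video id -> set of topic tags.
CATALOG = {
    "v1": {"cooking", "travel"},
    "v2": {"travel", "photography"},
    "v3": {"cooking", "baking"},
    "v4": {"photography", "gear"},
}

def recommend(watch_history: list[str], top_n: int = 2) -> list[str]:
    """Rank unwatched videos by tag overlap with the user's viewing history."""
    seen_tags = Counter()
    for vid in watch_history:
        seen_tags.update(CATALOG.get(vid, set()))

    def score(vid: str) -> int:
        # A video scores higher the more its tags match what was watched.
        return sum(seen_tags[tag] for tag in CATALOG[vid])

    unwatched = [v for v in CATALOG if v not in watch_history]
    return sorted(unwatched, key=score, reverse=True)[:top_n]

print(recommend(["v1"]))  # ['v2', 'v3'] -- each shares one tag with "v1"
```

Even in this toy version, the system surfaces a ranked list the user never asked for, which is precisely the behavior at issue: is presenting such a list still "hosting" third-party content, or something closer to the platform's own speech?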
Decision of the US Supreme Court
The Supreme Court's decision is expected by the end of June 2023. However, some observers have already suggested that the Court may rule in favor of the platforms. So let's wait and see…