The US Supreme Court heard oral arguments from lawyers representing Google, the Department of Justice, and the family of a 23-year-old woman killed in Paris by terrorists in 2015.
The case, Gonzalez v. Google, is a landmark test of how the US legal system holds large technology platforms like Google responsible for the content they host.
The family of Nohemi Gonzalez argues that Google acted as a recruiting platform for the Islamic State group, which the US State Department describes as a terrorist organisation. By recommending Islamic State-related videos on YouTube, Google violated US laws against providing aid to terrorist groups, the family argues.
Google, however, has argued that it is legally immune from such suits thanks to Section 230 of the Communications Decency Act, which shields internet-based companies from liability for user-generated content.
The hearing was a contentious one, with the assembled justices peppering the lawyers for each party with questions.
Google’s lawyer, Lisa Blatt, argued strenuously that algorithmically generated recommendations for content are covered by Section 230, and that the legal immunity provided by that law is a fundamental building block of the modern internet.
Without Section 230, Blatt said, every content-driven platform on the internet, from Yelp to Zillow to Amazon, would be liable for each and every piece of content that they host.
Google says eliminating liability protections threatens the internet
“Exposing websites to liability for implicitly recommending third-party content defies the text [of Section 230] and threatens today's internet,” she said.
The thrust of Google's defence was echoed and backed up by multiple briefs filed with the Supreme Court by big tech companies including Microsoft, Twitter and Facebook parent company Meta.
The lawyer for the Gonzalez family, University of Washington law professor Eric Schnapper, argued that recommendations provided by platforms like YouTube are essentially editorial choices — those platforms could have been designed such that they don’t surface or recommend harmful or defamatory content, but they were not.
The decision to let YouTube recommend that harmful content, therefore, is one that the platform providers made consciously, which means that they should be held accountable for its publication.
“In some circumstances, the manner in which third-party content is organised or presented could convey other information from the defendant itself,” he said, underscoring the point that the ability to provide recommendations is not necessarily neutral.
Twitter liability case also goes before Supreme Court
In the Gonzalez case, as well as the closely related matter of Twitter v. Taamneh, which the court is scheduled to hear separately, the stakes are high. Any finding that large tech companies are liable for the content they promote or recommend, even in an automated, algorithmic way, could represent a sea change in the way tech giants operate.
In the Taamneh case, the suit by the family of a Jordanian national killed in a terrorist attack alleges that Twitter wasn’t sufficiently aggressive in prohibiting the Islamic State group from using that platform. It’s a similar “aiding-and-abetting” issue to Gonzalez.
Liability for user-generated content could have any number of follow-on effects, from vastly increased oversight and heavier restrictions imposed by internet-based companies, to simply invalidating the business model of companies that rely on user-generated content to function.
The justices seemed to be concerned that any change to Section 230 could generate a wave of new lawsuits against big tech.
“Really anytime you have content, you [would] also have these presentational and prioritisation choices that can be subject to suit,” said Associate Justice Elena Kagan.
A decision is expected by the time the court’s term ends in June.