Reason Foundation’s amicus brief in Gonzalez v. Google answers many of the questions raised by Supreme Court justices 

Congress originally made clear that Section 230 is part of a law intended not to limit free speech but to allow the internet to grow “with a minimum of government regulation.” 

On Feb. 21, the United States Supreme Court heard oral arguments in Gonzalez v. Google. (You can listen to the arguments here via C-SPAN.) Reason Foundation submitted an amicus brief on the case in January, and I found it interesting to see how some of the justices dug in on the issues raised in our brief.  

This is a matter for Congress. Supreme Court Justice Brett Kavanaugh asked, “Isn’t it better for–to keep it the way it is, for us, and Congress–to put the burden on Congress to change that, and they can consider the implications and make these predictive judgments?” 

Reason’s brief pointed out that the questions raised in this case are matters of policy, not law, and Congress, not the Supreme Court, should resolve them.

“Whether Section 230 creates good policy is not a question for this Court to decide. That question remains where it was in 1996—with Congress,” Reason’s brief says.  

Congress originally made it clear that Section 230 is part of a law intended not to limit free speech but to allow the Internet to grow “with a minimum of government regulation.” 

Recommendations and “thumbnails” are not content creation. The petitioners argued that when a site creates “thumbnails” that summarize or in some way represent the content it suggests you might want to click on, it is creating content. Chief Justice John Roberts questioned that argument, saying, “…it seems to me that the language of the statute doesn’t go that far. It says that –their claim is limited, as I understand it, to the recommendations themselves.”

This is central to the question before the Supreme Court: Is recommending or suggesting content that a user might want to see the same as creating that content in terms of liability?

As we argued in our amicus brief, this is not content creation. The central value proposition most online platforms offer customers is a way to find the content they want to consume, which requires some means of making recommendations. If any form of “you might like this” is equivalent to “here is what we think about this” in terms of liability, customers will no longer be able to get recommendations. 

Section 230 explicitly excludes most digital platforms from liability. Indeed, Justice Neil Gorsuch pointed out that Section 230 itself says that a content provider is defined by doing more than “picking, choosing, analyzing or digesting content” (it also includes “transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content”). These things are exactly what Google does to online content for its users, as do many other platforms, and so the letter of the law in Section 230 clearly states that Google’s core service is not content creation. 

Justice Kavanaugh stated, “[petitioner’s] position, I think, would mean that the very thing that makes the website an interactive computer service also means that it loses the protection of 230. And just as a textual and structural matter, we don’t usually read a statute to, in essence, defeat itself.”

As we put it in our brief:

“…as both a provider and user of such software, [Google] falls squarely within the class protected by Section 230(c)(1). Insofar as Petitioners are seeking to hold Google liable for the consequences of having presented or organized the ‘information provided by another,’ rather than for creating and publishing Google’s own information content, Section 230(c)(1) bars such liability.” 

Actively adopting or endorsing content is required to be liable. There was a lengthy conversation about whether the algorithms are “neutral” when they recommend like to like or if they could cross the line to adopt or endorse some content or be “designed to push a particular message,” as Justice Elena Kagan put it. Google’s lawyers argued that even if their algorithm did in some way push a piece of content, any harm from that content (like libel) flows from the original content, not the platform’s actions with respect to it.

In our brief, we argue that there is a line that can be crossed, but it would have to go beyond the activities defined in Section 230 as immune from liability:  

“There is, after all, a difference between a provider or user suggesting the content of others to its users or followers based on their prior history or some other predictive judgment about likely interest and a provider or user actively adopting such content as its own, such as by endorsing the truth or correctness of a particular message or statement. … YouTube is not taking a stance when it, having collected enormous amounts of data on a user’s interests, points that user to content relevant to those interests. For example, if YouTube sends a list of new cat videos to a user that has watched cat videos in the past, the separate information content of that organizational effort is no more than: ‘You seem to like cats, here is more cat content’.” 

It would be madness to make users of digital platforms liable for likes and shares. Finally, Justice Amy Coney Barrett raised the critical question of how the petitioner’s arguments would affect internet users like you and me:

“So, Section 230 protects not only providers but also users. So, I’m thinking about these recommendations. Let’s say I retweet an ISIS video. On your theory, am I aiding and abetting and does the statute protect me, or does my putting the thumbs-up on it create new content? … [B]ut the logic of your position, I think, is that retweets or likes or check this out, for users, the logic of your position would be that 230 would not protect in that situation either, correct?”

To which the petitioners responded that, yes, that was correct: on their theory, Section 230 would not protect users in that situation.  

As we point out in our brief:

“Section 230 provides its protection not only to the ‘providers’ of interactive computer services, but to the ‘users’ of such services as well. Removing immunity from Google here would equally remove immunity for persons hosting humble chat rooms, interest- or politics-focused blogs, and even for persons who ‘like’ or repost the information content of others on their blog, their Facebook page, or their Twitter account… Petitioners’ theory is wrong and would lead to absurd results. Section 230 protects both providers and users of interactive computer services from liability for sharing, recommending, or displaying the speech of another. Any attempt to split liability regimes between the ‘providers’ and ‘users’ of interactive computer services, or to distinguish the choices made manually by individual users about what to recommend or highlight to others versus the automated incorporation of the same or comparable choices into an algorithm, would be completely divorced from the text of the statute.”  

Indeed, Justice Kavanaugh pointed out that many of the amici, including Reason Foundation, argued there would be significant damage to the digital economy if Section 230 were pulled back and people could no longer share a broad range of useful information via digital platforms.  

While we still have to wait months for the Supreme Court’s decision in Gonzalez v. Google, seeing the justices’ questions hitting on these crucial points was heartening. The exchanges in oral arguments seemed to crystallize that petitioners are asking the Supreme Court to go against the explicit language of the law Congress put in place, expanding online platforms’ liability for shared content and, further, making users of online platforms liable for any content they like or share. That would be disastrous.