15 Wake Forest L. Rev. Online 46

William Gilchrist

Enacted as part of the Telecommunications Act of 1996, section 230 of the Communications Decency Act was originally introduced to shield children from inappropriate content online.[1] Despite being passed for a relatively limited purpose, section 230’s broad liability protections for interactive computer services have since been credited with shaping the modern internet.[2] Today, it stands as one of the few federal statutes recognized for having “fundamentally changed American life.”[3]

As social media and internet use have evolved, the language of section 230 has generally adapted to new technologies. But with the rise of artificial intelligence (AI) as a mainstream tool, section 230’s scope has become increasingly uncertain. Due in part to its brevity and resulting ambiguity, questions have emerged over whether its liability protections extend to online service providers’ use of AI,[4] particularly in recommender systems.[5] The Supreme Court first addressed section 230’s applicability to AI use in Gonzalez v. Google.[6] Although many hoped the case would bring clarity, the Court issued a three-page per curiam opinion declining to reach the statutory question, concluding only that the complaint appeared to state little, if any, plausible claim for relief and leaving stakeholders back at square one.[7]

In Gonzalez, the Supreme Court considered for the first time whether section 230 shields online platforms from liability for using AI to recommend third-party content.[8] While the case was a critical first step in addressing AI-related liability, the Court’s ruling left concerned parties with more questions than answers. Critics argue the opinion fell short of fulfilling the judiciary’s responsibility to “say what the law is,” emphasizing the need for additional guidance on section 230’s scope.[9] Ultimately, the Court’s decision in Gonzalez not only reflects the judiciary’s lack of understanding of AI but also kicks the can down the road, leaving future courts unable to fairly and consistently interpret section 230’s scope. Accordingly, clearer legal standards are essential to help U.S. companies assess their liability exposure when deploying new products and to ensure they remain competitive in the global AI race.[10]

Today, hundreds of active AI-related lawsuits are making their way through the American legal system, typically involving intellectual property, amplification of dangerous content, and discrimination issues.[11] And while AI offers undeniable economic benefits, its widespread and varied application has made it difficult for lawmakers to understand and regulate.[12] As AI becomes increasingly embedded in daily life, AI-related litigation is only expected to increase.[13]

This Comment begins with an explanation of what AI is and how it is currently being used in American society. It then provides background on Gonzalez, analyzes the Court’s opinion and its implications, and argues that the Court should have directly addressed section 230’s applicability. Because a more effective resolution of Gonzalez would have defined section 230’s scope, this Comment critiques the Court’s decision and argues that affirming a broad interpretation of section 230 would have been the better outcome. Finally, this Comment examines the difficulties of applying a broad interpretation of section 230, ending with a discussion of the challenges associated with current and future AI regulation.

I. Background

Prior to the 1950s, AI existed only in science fiction.[14] But after Alan Turing introduced the concept in his 1950 paper, Computing Machinery and Intelligence, AI began its gradual evolution into the tool it is today.[15] Beginning as “little more than a series of simple rules and patterns,” AI has advanced exponentially and is now “capable of performing tasks that were once thought impossible.”[16]

The private sector has embraced this expansion, with many companies taking advantage of the technology and incorporating it into various parts of their operations.[17] While doing so offers clear advantages, it has also raised new and increasingly frequent questions about potential liability exposure.[18] Until recently, U.S. courts have reliably turned to section 230 for guidance when evaluating liability arising from online AI use.[19] And while section 230’s text provided sufficient guidance in AI’s early stages, the technology’s growing complexity and evolving uses have rendered section 230’s applicability increasingly unclear.

Since section 230’s adoption in 1996, Americans’ internet access and use have dramatically increased.[20] As internet access has improved, so has Americans’ exposure to and awareness of AI.[21] The AI of the 1990s was virtually nonexistent compared to the AI of today, and new capabilities allow for the technology to be used in ways never before thought possible.[22] These advancements have seamlessly integrated AI into nearly every aspect of daily life, often in ways that go unnoticed.[23] Nevertheless, with new technology comes new legal issues, and AI is no exception.[24]

To understand Gonzalez and its global implications, it is first necessary to define what constitutes AI. At the highest level, AI is “a machine’s ability to perform the cognitive functions we associate with human minds, such as perceiving, reasoning, learning, interacting with an environment, problem solving, and exercising creativity.”[25] And while AI use continues to evolve, the following discussion outlines the broad categories of AI and how they are currently being used.

A. A Spectrum of Systems

There are seven general categories of AI: three based on capabilities and four based on functionalities.[26] The three kinds of AI based on capabilities are Artificial Narrow AI, General AI, and Super AI.[27] Artificial Narrow AI—the only type of AI in use today—refers to technology that is “designed to perform a specific task or a set of closely related tasks.”[28] The other two types of AI based on capabilities—General AI and Super AI—remain theoretical, as neither has been successfully developed.[29] These forms are expected to match or surpass human intelligence.[30]

The four types of AI based on functionalities are Reactive Machine, Limited Memory, Theory of Mind, and Self-Aware.[31] Reactive Machine systems include AI “with no memory [that is] designed to perform a very specific task,” such as Netflix’s movie and TV show recommendation system.[32] Limited Memory AI differs from Reactive Machine AI because it can recall past events and monitor objects and situations over time.[33] Limited Memory AI includes generative AI such as ChatGPT, virtual assistants such as Siri and Alexa, and self-driving vehicles.[34] Theory of Mind and Self-Aware AI are forms that are still in development or entirely theoretical.[35] Theory of Mind AI would allow machines to understand the thoughts and emotions of other entities, while Self-Aware AI would allow machines to understand their own internal conditions and traits.[36]

B. Teaching the Machine: How AI Learns

For each category of AI, there are several tools that software developers can use to create and enhance their systems.[37] One of these tools is machine learning (ML), a term that is often incorrectly used interchangeably with AI.[38] Though AI and ML are closely related, ML is a subset of AI[39] that involves “developing algorithms and statistical models that computer systems use to perform tasks without explicit instructions, relying on patterns and inference instead.”[40] While AI is “the ability of a machine to act and think like a human,” ML is a type of AI that involves humans “relying on data and feeding it to computers so they can simulate what they think we’re doing.”[41] ML’s broad capabilities allow it to be used in a variety of contexts: it can rapidly process large datasets, employ algorithms that change and improve over time, and spot patterns or identify anomalies.[42]
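To make the “patterns and inference” point concrete, consider the minimal sketch below, written in Python with the widely used scikit-learn library. The transaction data, parameters, and threshold are hypothetical, and the sketch is illustrative only: no explicit rule defines an anomaly, yet the model infers what “normal” data look like and flags outliers.

```python
# Illustrative sketch: anomaly detection without explicit rules.
# The data are hypothetical; the model infers patterns from them.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# 500 hypothetical transactions: [dollar amount, hour of day]
normal_activity = rng.normal(loc=[50.0, 14.0], scale=[15.0, 3.0], size=(500, 2))

# Fit an anomaly detector; no human ever writes a rule defining "fraud."
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_activity)

# Score new transactions: 1 = consistent with learned patterns, -1 = anomaly.
new_activity = np.array([[48.0, 13.0], [950.0, 3.0]])
print(model.predict(new_activity))  # expected output: [ 1 -1]
```

No line of this sketch tells the computer what an anomalous transaction looks like; that determination emerges from the data itself, which is precisely what separates ML from conventionally programmed software.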

Broadly put, ML works by “exploring data and identifying patterns.”[43] Most tasks involving data-defined patterns or rule sets can be automated with ML,[44] which can be used to explore data and identify patterns in two ways: supervised learning and unsupervised learning.[45] Supervised learning involves humans labeling inputs and outputs that train an algorithm to accurately classify data and predict outcomes.[46] In contrast, unsupervised learning models work independently to discover the structure of unlabeled data; for example, an unsupervised learning model could be used to identify products often purchased together online.[47] Supervised learning, which is more widely used than unsupervised learning because of its relative ease of use, is the type of ML behind the recommender systems at issue in Gonzalez.[48]
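The two approaches can be illustrated side by side. The following sketch, again in Python with scikit-learn, uses hypothetical toy features and labels as stand-ins for real training data.

```python
# Supervised vs. unsupervised learning in miniature (hypothetical toy data).
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Supervised: a human labels each input (1 = spam, 0 = not spam),
# and the algorithm learns to predict labels for new inputs.
X = [[0, 1], [1, 1], [8, 9], [9, 8]]  # hypothetical feature vectors
y = [0, 0, 1, 1]                      # human-provided labels
classifier = LogisticRegression().fit(X, y)
print(classifier.predict([[7, 8]]))   # -> [1]

# Unsupervised: the same inputs with no labels at all; the model
# discovers structure on its own (here, two natural clusters),
# analogous to finding products often purchased together.
clustering = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(clustering.labels_)             # e.g., [1 1 0 0]
```

In the first half, the human-supplied labels in y do the teaching; in the second, the algorithm receives only the inputs and must find the grouping itself.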

C. Recommender Systems and Content Curation

Recommender systems, like those in Gonzalez, are “algorithms providing personalized suggestions for items that are most relevant to each user.”[49] Today, many social media platforms use AI and ML recommender systems in a variety of ways.[50] For example, YouTube uses AI and ML to automatically remove objectionable content, label imagery for video background editing, and recommend videos.[51] Beyond YouTube, recommender systems are commonly used by platforms like Spotify, Amazon, Netflix, TikTok, and Instagram to tailor content and product suggestions to their users.[52]
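As a rough illustration of the mechanics, the sketch below implements a bare-bones user-based recommender in Python with NumPy. The ratings matrix is hypothetical, and production systems are vastly more sophisticated, but the core idea is the same: find users with similar histories and suggest what those users engaged with.

```python
# Bare-bones user-based recommender (hypothetical ratings matrix).
# Rows are users, columns are items; 0 means the user has not seen the item.
import numpy as np

ratings = np.array([
    [5.0, 4.0, 0.0, 1.0],  # user 0
    [4.0, 5.0, 1.0, 0.0],  # user 1
    [0.0, 1.0, 5.0, 4.0],  # user 2
])

def cosine_similarity(u, v):
    """Cosine similarity between two users' rating vectors."""
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def recommend(user, ratings):
    """Recommend the unseen item best liked by the most similar other user."""
    similarities = [cosine_similarity(ratings[user], ratings[other])
                    if other != user else -1.0
                    for other in range(len(ratings))]
    neighbor = int(np.argmax(similarities))
    unseen = ratings[user] == 0
    # Rank only the target user's unseen items by the neighbor's ratings.
    scores = np.where(unseen, ratings[neighbor], -np.inf)
    return int(np.argmax(scores))

print(recommend(0, ratings))  # -> 2 (item 2 is user 0's only unseen item)
```

Every new rating a user submits changes the similarity calculation, which is why such systems grow more personalized, and more consequential, the longer a user interacts with a platform.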

AI, ML, and recommender systems are also being adopted outside the social media context.[53] “From manufacturing to retail and banking to bakeries, even legacy companies are using machine learning to unlock new value or boost efficiency.”[54] As explained by Aleksander Madry, Director of the MIT Center for Deployable Machine Learning, “machine learning is changing, or will change, every industry.”[55]

Though statistics about the adoption of AI differ widely, the share of global companies that use AI likely falls between 35 and 55 percent, with some estimates as high as 67 percent.[56] Beyond its use by companies, individuals are increasingly incorporating AI into their daily lives.[57] But despite the increasing popularity of AI in American society, the only real framework federal courts have to interpret liability for AI use is section 230, an almost thirty-year-old federal statute that was initially passed to promote commercial internet use and shield children from harmful content online.[58]

II. The Legal Backbone of the Internet

In 1996, Congress passed section 230 in response to the “rapidly developing array of Internet and other interactive services.”[59] At the time, section 230 was necessary because of the First Amendment’s inability to adequately protect online platforms providing forums for third-party content.[60] A key catalyst for the legislation was the decision in Stratton Oakmont, Inc. v. Prodigy Services Co., a libel case from 1995.[61]

In Stratton Oakmont, the Supreme Court of New York, Nassau County, found that Prodigy Services, the owner-operator of a computer network that sponsored subscriber communication through online bulletin boards, was liable for third-party statements posted on its site.[62] The court reasoned that Prodigy was liable as a “publisher” because it “monitor[ed] and edit[ed]” the individual bulletin board at issue, which gave Prodigy the benefit of editorial control.[63] In response, “to ensure that Internet platforms would not be penalized for attempting to engage in content moderation, Congress enacted Section 230.”[64]

A. Where Immunity Begins: Section 230(c)(1)

Known as “the twenty-six words that created the internet,”[65] the operative provision of the Communications Decency Act is section 230(c)(1),[66] which states that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[67]

Section 230(c)(1) generally “protects websites from liability for material posted on the website by someone else.”[68] But interactive service providers are only protected from liability if they are not also an information content provider, or “someone who is ‘responsible, in whole or in part, for the creation or development of’ the offending content.”[69] As explained by Chief Judge Kozinski in Fair Housing Council v. Roommates.com:

A website operator can be both a service provider and a content provider: If it passively displays content that is created entirely by third parties, then it is only a service provider with respect to that content. But as to content that it creates itself, or is “responsible, in whole or in part” for creating or developing, the website is also a content provider. Thus, a website may be immune from liability for some of the content it displays to the public but be subject to liability for other content.[70]

Thus, the key question in assessing recommender system liability is whether the system contains content for which the operator is “responsible in whole or in part for creating or developing,” or whether the system simply dictates how existing content is displayed.

Although section 230 does not expressly address the use of AI or recommender systems, it was drafted in response to the internet’s rapid growth and evolution.[71] To account for the inevitable emergence of more advanced technologies, section 230 was drafted in a technology-neutral manner that would allow the statute to be applied to emerging and future technology.[72] Unsurprisingly, the exponential increase in the commercial use and complexity of AI has also led to a high volume of litigation, as well as contradictory state and federal court rulings.[73] But despite the expectation that section 230 would apply to future technology, the exceedingly complex nature of today’s AI has outgrown the statute’s clear bounds.

B. Uncertainty and Calls for Change

Increasing litigation and uncertainty have led to growing calls for regulation—calls that have not gone unnoticed by lawmakers and courts.[74] One of these lawmakers, Senator Dick Durbin, Chairman of the Senate Judiciary Committee, compared the rise of AI to that of the social media industry.[75] “When it came to online platforms, the inclination of the government was to get out of the way. I’m not sure I’m happy with the outcome as I look at online platforms and the harms they have created . . . I don’t want to make that mistake again,” he said.[76] Other senators have agreed, with Senator Lindsey Graham even calling for an entirely new agency to regulate the technology.[77]

Even with these increasing calls for regulation, the majority of current AI-related laws and regulations have been implemented by individual states with little to no guidance from Congress or the Supreme Court.[78] And despite bipartisan support and a potential model statute from the European Union,[79] Congress has yet to pass any meaningful regulation.[80] This lack of guidance at the federal level has led companies and courts to rely on conflicting interpretations of section 230 in AI-related claims. This growing uncertainty has also made Supreme Court guidance necessary to achieve clarity and consistency in future litigation.

III. Gonzalez v. Google: A Ripple, Not a Wave

In response to these concerns and calls for action, the Supreme Court granted certiorari to hear Gonzalez v. Google. As Gonzalez moved through the courts, it became a focal point for many AI executives and other stakeholders seeking guidance on how section 230 applies to AI.[81]

The case involved claims brought against Google under the Anti-Terrorism Act (ATA)[82] by the father of Nohemi Gonzalez, a 23-year-old who was murdered while studying abroad in Paris, France.[83] Gonzalez was one of 130 people killed during a series of attacks—known as the “Paris Attacks”—carried out by ISIS on November 13, 2015.[84] The Gonzalez plaintiffs claimed that Google was liable for the victims’ deaths because it “aided and abetted international terrorism and provided material support to international terrorists by allowing ISIS to use YouTube.”[85] Specifically, they argued that because Google’s YouTube algorithms “match and suggest content to users based upon their viewing history,” YouTube actively recommended ISIS videos to users and, in effect, “facilitat[ed] social networking among jihadists.”[86] The plaintiffs further alleged that YouTube “has become an essential and integral part of ISIS’s program of terrorism,” serving as “a unique and powerful tool of communication that enables ISIS to achieve its goals.”[87]

The district court concluded that the plaintiffs’ claims were barred by section 230 and dismissed the case pursuant to Rule 12(b)(6).[88] On appeal, the Ninth Circuit consolidated Gonzalez with Taamneh v. Twitter, Inc. and Clayborn v. Twitter, Inc., two cases with similar facts and claims.[89] Taamneh was brought by the survivors of a victim killed in the Reina nightclub attack in Istanbul, Turkey, on January 1, 2017, while Clayborn was brought by the survivors of a victim killed in a 2015 attack on an office Christmas party in San Bernardino, California.[90] As in Gonzalez, the attacks in Taamneh and Clayborn were later connected to ISIS.[91]

In each case, the plaintiffs sought damages from Google, Twitter, and Facebook under the ATA, which “allows United States nationals to recover damages for injuries suffered ‘by reason of an act of international terrorism.’”[92] The scope of the ATA was broadened in 2016 by the Justice Against Sponsors of Terrorism Act (JASTA), which “amended the ATA to include secondary civil liability for ‘any person who aids and abets, by knowingly providing substantial assistance, or who conspires with the person who committed’ an act of international terrorism.”[93] The claims theorized that the defendants were liable under the ATA because their “social media platforms allowed ISIS to post videos and other content to communicate the terrorist group’s message, to radicalize new recruits, and to generally further its mission,” effectively aiding and abetting international terrorism.[94]

The district court granted Google’s motion to dismiss in Gonzalez after concluding that all of the plaintiffs’ claims were barred by section 230 except for the revenue-sharing claims,[95] which were instead dismissed for failure to allege proximate cause.[96] The courts in Taamneh and Clayborn also granted the defendants’ motions to dismiss for failure to allege secondary liability under the ATA.[97] The Ninth Circuit affirmed the dismissals in Gonzalez and Clayborn and reversed and remanded for further proceedings in Taamneh.[98] The Gonzalez plaintiffs filed a petition for a writ of certiorari on April 4, 2022, followed by the Taamneh plaintiffs on May 26. The Supreme Court granted both petitions on October 3, 2022.[99]

Prior to Gonzalez, the Supreme Court had never addressed how section 230 applies to liability stemming from the use of AI by a social media company, or any company in general.[100] And while any case before the Supreme Court has the potential to have a significant impact, the rapid growth and increasing pervasiveness of AI in American society, combined with the lack of meaningful regulation, have created an urgent need for guidance in the industry. Because section 230 is one of the “most important laws in tech policy,” organizations across the political spectrum would be impacted by the Supreme Court’s interpretation of its scope.[101]

The significance of the Court’s decision in Gonzalez is underscored by the unusually high number of amicus briefs filed. Since 2010, Supreme Court cases have averaged about a dozen amicus briefs each.[102] In Gonzalez, seventy-eight organizations filed amicus curiae briefs in hopes of influencing the Court’s opinion.[103] While each organization had its own motives, one thing is clear: Many organizations had a stake in the outcome of Gonzalez, and the Court’s opinion left them without the answers they sought.[104]

A. Confusion at Oral Argument: A Decision in Twitter v. Taamneh

Many of the issues raised by amici were discussed during oral arguments.[105] The oral arguments—lasting nearly three hours in each case—were held in February 2023.[106] The Justices posed questions about everything from the use of AI to generate content[107] to hypotheticals about a bank’s potential liability for allowing Osama Bin Laden to open an account.[108] On multiple occasions, several of the Justices expressed confusion—not only about the arguments being made, but also about the questions before the Court.[109] And after countless hypotheticals and extended back-and-forth between counsel and the bench, the Justices were apparently left with more questions than answers.

The Court’s opinion highlighted its confusion over the issues, the available options, and the potential consequences of various interpretations of section 230. After hundreds of pages of amicus briefs and oral arguments that went over the time limit by an hour and thirty-four minutes,[110] the Court’s three-page per curiam opinion was released on May 18, 2023.[111] Despite high hopes from stakeholders and members of the AI community, the Court declined to address the application of section 230, concluding that the plaintiffs’ complaint appeared to state “little, if any, plausible claim for relief.”[112] This conclusion led the Court to vacate the Ninth Circuit’s judgment and remand the case for consideration in light of the decision in Taamneh.[113]

The Court overturned the Ninth Circuit’s ruling in the more robust Taamneh opinion. Although Taamneh provided significantly more analysis than Gonzalez, the analysis focused on what it means to “aid and abet” and “what precisely must the defendant have ‘aided and abetted’” when determining liability under JASTA.[114] The Court looked to Halberstam v. Welch[115] to provide the legal framework for “civil aiding and abetting and conspiracy liability.”[116] After acknowledging that “the point of aiding and abetting is to impose liability on those who consciously and culpably participated in the tort at issue,” the Court noted that the nexus between the defendants and the terrorist attack was far removed.[117] Seemingly skeptical, the Court acknowledged the plaintiffs’ allegations that Twitter “failed to do ‘enough’ to remove ISIS-affiliated users and ISIS-related content—out of hundreds of millions of users worldwide and an immense ocean of content—from their platforms.”[118] But because the plaintiffs ultimately failed to allege intentional aid or systematic assistance, the Court held the allegations were insufficient under the ATA.

B. Gonzalez, Taamneh, and Their Effects

While the Court offered a relatively substantive aiding and abetting analysis in Taamneh, its decisions in both Gonzalez and Taamneh ultimately fell short. An exercise in misguided judicial minimalism, the decisions “simultaneously avoid[ed] the risk of erroneous judgment on a technical question with far-reaching consequences and [left] the politically contentious issue of § 230’s scope to the democratically accountable Congress.”[119] And although doing so may have been the safer short-term course given the Court’s questionable understanding of the ins and outs of recommender systems and AI,[120] deferring the decision to Congress is hardly likely to yield meaningful regulations anytime soon.

Nonetheless, the Court’s decision not to rule on section 230 did not stem from a lack of awareness of the need for guidance on the issue. Although Gonzalez was the first such petition the Court granted, it was not the first case asking the Court to define or clarify the scope of section 230.[121] The Court denied cert in Doe v. Facebook, a case involving allegations that a sexual predator used Facebook to groom the plaintiff for sex trafficking.[122] Concurring in the denial of certiorari, Justice Thomas noted that “‘the United States Supreme Court—or better yet, Congress—may soon resolve the burgeoning debate about whether the federal courts have thus far correctly interpreted section 230.’ Assuming Congress does not step in to clarify § 230’s scope, we should do so in an appropriate case.”[123]

Gonzalez was the appropriate case. Yet the Court’s questions and admitted confusion at oral argument[124] indicate that it ultimately took the advice outlined by Justice Thomas in Doe—that “before we close the door on such serious charges, ‘we should be certain that is what the law demands.’”[125] But even though the Justices may remain uncertain about what the law demands, the Court’s internal justifications for avoiding the substance of section 230 will have lasting consequences for social media conglomerates and other companies that have come to rely on recommender systems and other forms of AI.

IV. Critical Error: The Need to Affirm Section 230’s Broad Scope

As lower courts have consistently held in the past, immunity should be withheld only when an interactive service provider makes “substantial or material edits and additions” to content.[126] Here, the Court ultimately reached the correct outcome in Gonzalez by dismissing the plaintiffs’ claims, but its fatal flaw was failing to validate section 230’s broad immunity for future litigants.

An affirmance of the broad scope of section 230 was necessary for two reasons. First, providing current and future online service providers with a dependable, broad grant of immunity is in line with the plain language of the statute and Congress’s intent for section 230—“to protect Internet platforms’ ability to publish and present user-generated content in real time, and to encourage them to screen and remove illegal or offensive content.”[127] Second, policy considerations support a broad application of section 230 because, as the evolution of the internet has shown, strong liability protections encourage beneficial technological and economic development in the United States, particularly for small businesses.[128]

A. Gonzalez Ignores Congressional Intent and the Plain Language of Section 230

Two primary purposes of section 230 were “to protect Internet speech from content regulation by the government,” and to reverse a New York Supreme Court case that held “an online service provider’s decision to moderate the content of its message boards rendered it a ‘publisher’ of users’ defamatory comments on the boards.”[129] Both purposes were aimed at promoting the continued development of the internet, and while AI and the internet were once separate and distinct, they have become increasingly intertwined.[130]

Like the internet, AI has evolved, and continues to evolve, at extreme speed.[131] The drafters were aware of the rapidly changing nature of the internet, and section 230’s immunity for “publisher[s]” and “speaker[s]” was drafted without highly specific or limiting language to account for inevitable and unforeseeable technological changes.[132] The first web page was launched in 1991, just five years before section 230 was passed.[133] In the early 1990s, people were only just beginning to hear about the new information superhighway that would one day change their lives.[134] Today, contemporary AI—including recommender systems and ML algorithms—is viewed much as the internet was when section 230 was first drafted in the early 1990s.[135]

As highlighted by Senator Ron Wyden and former Representative Christopher Cox, “many of the major Internet platforms engaged in content curation [were] a precursor to the targeted recommendations that today are employed by YouTube and other contemporary platforms.”[136] Senator Wyden and former Representative Cox agree that the recommender systems at issue in Gonzalez—which are representative of typical AI systems used by online service providers—are the “direct descendants” of early content curation efforts.[137] And just as Wyden, Cox, and other regulators of the 1990s sought to promote the development of the internet, regulators are now seeking to promote AI.[138] Because the internet and AI are intrinsically linked, companies’ use of AI should likewise fall within the scope of section 230.

Beyond the original intent and plain language of section 230, the statute has also been applied as a broad shield to protect online service providers from liability since its inception.[139] As noted by Justice Thomas in Malwarebytes, Inc. v. Enigma Software Group, USA, LLC, “the first appellate court to consider the statute held that . . . § 230 confers immunity even when a company distributes content that it knows is illegal.”[140] This broad interpretation set the stage for future section 230 jurisprudence, and subsequent decisions “adopted this holding as a categorical rule across all contexts.”[141]

Courts have also upheld the principle that section 230 should be interpreted broadly, even in the context of AI.[142] Although Gonzalez was the first time the issue reached the Supreme Court, it is not the first time a court considered whether AI use could fall within the scope of the statute.[143]

In Force v. Facebook, Inc., the Second Circuit interpreted section 230 to protect AI use.[144] There, the court noted that because the algorithms at issue were “content ‘neutral,’ . . . merely arranging and displaying others’ content . . . [was] not enough to hold Facebook responsible.”[145] However, the court went further, providing additional clarification on section 230’s scope:

We do not mean that Section 230 requires algorithms to treat all types of content the same. To the contrary, Section 230 would plainly allow Facebook’s algorithms to, for example, de-promote or block content it deemed objectionable. We emphasize only—assuming that such conduct could constitute “development” of third-party content—that plaintiffs do not plausibly allege that Facebook augments terrorist-supporting content primarily on the basis of its subject matter.[146]

By giving effect to the plain language and overall intent behind the statute—allowing online service providers to monitor what is on their sites while recognizing that no provider could prevent all illegal or undesirable content—the court in Force reached the conclusion the Supreme Court should have affirmed in Gonzalez.

The plain language of section 230, express legislative intent behind its drafting, and the subsequent interpretation of the statute all support the prevailing view that section 230 should be interpreted broadly. When considering these aspects of section 230, as well as others discussed below, the decision is clear: The Supreme Court should have used Gonzalez as an opportunity to affirm the broad scope of section 230 and extend liability protection to online service providers that incorporate AI recommender systems into their platforms.

B. Congress or the Courts? Promoting Beneficial AI Development in the United States

Interpreting section 230’s liability protections to include AI was necessary to foster innovation and strengthen AI development in the United States. As noted by section 230’s drafters, “[b]y providing legal certainty for platforms, the law has enabled the development of innumerable internet business models based on user-created content.”[147] Like the internet, AI has the potential to transform our lives,[148] and while AI has become increasingly integrated into large-scale business models, small and midsize businesses have begun to fall behind.[149] This is partly because larger businesses typically have the resources and capital to implement AI and are better able to offset the costs and litigation risks associated with testing and developing cutting-edge technology.

Despite litigation risks and other obstacles, AI use more than doubled between 2017 and 2022.[150] However, the proportion of global businesses that use AI has plateaued between 50 and 60 percent,[151] and a May 2023 report found that only 25 percent of small businesses have begun testing or using AI in their operations.[152] Yet AI’s benefits have the potential to generate an even greater impact for small businesses than for larger companies; those benefits include cost savings through improved processes, accelerated time from production to market for new products, and access to talent that would otherwise be too expensive.[153]

Despite its many benefits, AI is still largely underutilized by small businesses.[154] Fortunately, even small percentage increases in AI adoption have the potential to make a major impact, as businesses of 500 employees or fewer make up 99.9 percent of all U.S. businesses.[155] Promoting small business growth is a high priority among government regulators,[156] and lawmakers should be doing everything in their power to help. Accordingly, because the legal certainty provided by section 230 “enabled the development of innumerable internet business models,”[157] interpreting section 230 to include AI would provide crucial opportunities and support for small businesses, just as it did for early internet sites.

Finally, although the Gonzalez courts focused solely on whether recommender systems fall within the scope of section 230, the decision’s applicability would not have been limited to that technology. Increasingly popular generative AI products, such as ChatGPT and other chatbots, “can and do rely on and relay information that is provided by another.”[158] Thus, a broad interpretation in Gonzalez would likely have extended to other forms of AI, like generative AI.

In sum, a broad application of section 230 is supported by the plain text of the statute, the legislative intent of its drafters, subsequent interpretation by lower courts, and prevailing policy considerations. Gonzalez presented a prime opportunity to settle these questions by affirming section 230’s broad scope, and the Court’s decision not to reach the issue was misguided.

V. Guidance from Abroad and the Potential for Regulation by Default

By default, the Gonzalez decision left lower courts and AI-reliant companies in the same position as before the Court granted certiorari. But questions about the scope of section 230 and companies’ liability for their use of AI are not going away; as AI advances and becomes more prevalent in society, these questions will arise with greater frequency. Although the Supreme Court may argue that the decision is better left for Congress, continued inaction risks allowing foreign regulations to dictate the outcome instead.

For example, a decision may come in the form of AI or speech regulations from the European Union (EU). In 2018, the EU passed the General Data Protection Regulation (GDPR), the self-proclaimed “strongest privacy and security law in the world.”[159] Even though the GDPR is only targeted towards protecting EU residents, many companies “made global changes to their services to comply with European regulations.”[160] Shortly after the GDPR was passed, the European Union passed the Digital Services Act (DSA), which came into effect on November 16, 2022.[161] The DSA requires big tech companies, like Google and Facebook, “to police their platforms more strictly to better protect European users from hate speech, disinformation, and other harmful online content.”[162] Both the GDPR and DSA threaten large fines for noncompliant companies,[163] and while the laws only require compliance inside the EU, it is often more practical to make global changes rather than region-specific adjustments.

On December 9, 2023, the European Parliament reached a provisional agreement with the European Council for “a bill to ensure AI in Europe is safe, respects fundamental rights and democracy, [and allows] businesses [to] thrive and expand.”[164] Known as the AI Act, the bill would be the world’s first comprehensive AI law, creating “obligations for providers and users depending on the level of risk” from artificial intelligence.[165] Although still in its early stages, the AI Act would, among other things, ban categorization systems that use sensitive characteristics, such as political, religious, or philosophical beliefs, as well as sexual orientation and race.[166] If passed, the Act’s effects would likely mirror those of the GDPR and DSA: The risk of noncompliance and the practical difficulties of making region-specific changes would lead companies to tailor their algorithms outside the EU to ensure compliance. So, by failing to outline the protections section 230 affords AI, the Supreme Court missed an opportunity to set the rule for what is protected in the United States, opening the door for EU regulations to set the standard.

VI. No Perfect Solution

Although a broad interpretation of section 230 is the best solution, it is not a perfect one. The online world is a dangerous place, and bad actors will inevitably take advantage of or work around online algorithms to commit crimes and other bad acts. Beyond concerns that algorithms help promote terrorism, interest groups have warned that several other problems—including human trafficking, child exploitation, and the spread of misinformation—will become worse if section 230 is interpreted broadly.[167] While mitigating these harms is difficult, a highly specific and restrictive interpretation would cause more harm than good, and the novel, dynamic nature of AI makes comprehensive regulation currently impractical. As such, a broad interpretation is the only reasonable step at this stage.

As highlighted by the National Center on Sexual Exploitation (NCOSE), the internet is the primary location for the sexual exploitation of children, and section 230 “was never intended to provide legal protection to websites that . . . facilitate traffickers in advertising the sale of unlawful sex acts.”[168] Both points are uncontroverted and address abhorrent societal problems that require continued commitment and action by regulators to eradicate. But preventing exploitation and human trafficking online is a complex challenge. And while narrowing the scope of section 230 might provide limited assistance in addressing these specific issues, altering the interpretation of a broad statute based on the concerns of a small subset of stakeholders would do more harm than good. As noted in an amicus brief filed by Reddit Inc., “[j]udicial interpretation should not move at Internet speeds, and there is no telling what a sweeping order removing targeted recommendations from the protection of Section 230 would do to the Internet as we know it.”[169]

Section 230 has been interpreted broadly since its enactment.[170] Although the significant immunity from liability given to online service providers has resulted in negative consequences, the broader implications of a drastic change would be difficult for the Court to predict. Thus, a narrow interpretation of section 230’s scope would have been misguided.

In the realm of free speech, less regulation has traditionally been associated with more freedom.[171] But some argue that AI has the potential to disrupt that balance. In its July 2023 report, PEN America argued that “generative A.I. threatens free expression by ‘supercharging’ the dissemination of disinformation and online abuse,” resulting in “the potential for people to lose trust in language itself, and thus in one another.”[172] While the dissemination of misinformation online is of increasing concern, online service providers are already taking steps to mitigate misinformation risks on their platforms.[173] And while there is always more that can be done, the “massive volume of content and the nuanced nature of misinformation”[174] make creating effective regulations difficult, if not impossible. Interpreting section 230 narrowly in hopes of addressing these concerns would still fail to effectively confront these issues, while chilling freedom of the press by discouraging journalists from reporting on issues that might lead to legal trouble.[175]

Despite the pitfalls of interpreting section 230 broadly, the novel and increasingly complex nature of AI has resulted in a lack of currently feasible alternatives. AI is particularly difficult to regulate because it is used to perform a wide variety of tasks, exists in many different forms with distinct characteristics, often involves the use of multiple algorithms working together, and consistently evolves through updates and new data.[176]

These characteristics are part of what makes AI so useful. It is dynamic, easily adaptable, and able to advance on its own. Unfortunately, Congress does not share these characteristics, and targeted regulations in the near future are unlikely. As a result, it is important to make do with what we have—section 230. Drafted nearly thirty years ago, section 230 has served as an effective regulator of internet speech since its creation, and even though applying its language to AI is by no means a perfect solution, it is currently the best available option.

Conclusion

AI is new, complex, and changing daily—as a result, lawmakers have struggled to develop and pass regulations that can keep up with AI’s rapid development. Referring to the European AI Act,[177] Tom Siebel, founder and CEO of C3.ai, an emerging AI company, said that “[i]f you can understand one sentence of it, you will understand one more sentence than I, and I think you will understand one more sentence than the people who wrote it.”[178] Regulating AI presents a significant challenge, but so does regulating any emerging technology. Industry leaders still have not found the perfect solution, and a perfect web of AI laws will not emerge overnight.

Still, it is important to maximize the effectiveness of existing regulations by tailoring our interpretation of existing law to include AI. In Gonzalez, the Supreme Court had the opportunity to do just that by affirming the way many lower courts have interpreted section 230 in the past. By failing to affirm those interpretations, the Supreme Court merely preserved the status quo—that section 230 might be applied to protect online service providers from liability—while perpetuating uncertainty about companies’ future exposure to liability for the use of AI.

  1.  47 U.S.C. § 230; Gonzalez v. Google LLC, 2 F.4th 871, 942 (9th Cir. 2021).
  2. Interactive computer services are “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server.” See 47 U.S.C. § 230(f)(2); see also Jeff Kosseff, The Twenty-Six Words That Created the Internet 1 (2019).
  3. Kosseff, supra note 2, at 3.
  4. Brief of Senator Ron Wyden and Former Representative Christopher Cox as Amici Curiae in Support of Respondent, Gonzalez v. Google LLC, 143 S. Ct. 1191 (2023) (No. 21-1333); see, e.g., Gonzalez, 2 F.4th 871; Dyroff v. Ultimate Software Grp., 934 F.3d 1093 (9th Cir. 2019); Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019).
  5. Recommender systems generate “personalized suggestions for items that are most relevant to each user.” See Francesco Casalegno, Recommender Systems – A Complete Guide to Machine Learning Models, Medium (Nov. 25, 2022), https://towardsdatascience.com/recommender-systems-a-complete-guide-to-machine-learning-models-96d3f94ea748.
  6. 143 S. Ct. 1191 (2023) (per curiam); see also Ron Wyden & Christopher Cox, The Authors of Section 230: ‘The Supreme Court Has Provided Much-Needed Certainty About the Landmark Internet Law–but AI Is Uncharted Territory,’ Fortune (Sept. 7, 2023), https://fortune.com/2023/09/07/authors-of-section-230-supreme-court-certainty-landmark-internet-law-ai-uncharted-territory-politics-tech-wyden-cox/; Gonzalez, 2 F.4th at 942.
  7. Gonzalez, 143 S. Ct. 1191.
  8. Id. at 1191–92.
  9. Leading Case, Twitter, Inc. v. Taamneh, 137 Harv. L. Rev. 400, 400 (2023) (quoting Marbury v. Madison, 5 U.S. (1 Cranch) 137, 177 (1803)).
  10. See Riccardo Righi et al., Eur. Comm’n, JRC 125613, EU in the Global Artificial Intelligence Landscape (2021).
  11. John Kell, AI Is About to Face Many More Legal Risks. Here’s How Businesses Can Prepare, Fortune (Nov. 8, 2023), https://fortune.com/2023/11/08/ai-playbook-legality/.
  12. Shari Davidson, The Growth of AI Law: Exploring Legal Challenges in Artificial Intelligence, Nat’l L. Rev. (Jan. 28, 2025), https://natlawreview.com/article/growth-ai-law-exploring-legal-challenges-artificial-intelligence.
  13. Kell, supra note 11.
  14. Michael Haenlein & Andreas Kaplan, A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence, Cal. Mgmt. Rev., Aug. 2019, at 5, 6–7.
  15. Id.
  16. Tanya Roy, The History and Evolution of Artificial Intelligence, AI’s Present and Future, All Tech Mag. (July 19, 2023), https://alltechmagazine.com/the-evolution-of-ai/.
  17. Kell, supra note 11.
  18. Id.
  19. See Doe v. Facebook, Inc., 142 S. Ct. 1087, 1088 (2022) (Thomas, J., concurring in denial of certiorari).
  20. Susannah Fox & Lee Rainie, Pew Rsch. Ctr., The Web at 25 in the U.S. 9 (2014) (finding that only 14% of U.S. adults had internet access in 1995).
  21. See Brian Kennedy et al., Pew Rsch. Ctr., Public Awareness of Artificial Intelligence in Everyday Activities (2023).
  22. See Max Roser, The Brief History of Artificial Intelligence: The World Has Changed Fast – What Might Be Next?, Our World in Data (Dec. 6, 2022), https://ourworldindata.org/brief-history-of-ai.
  23. AI is now used in everything from determining airline ticket prices to deciding who is released from jail. See id.
  24. See Lyria B. Moses, Recurring Dilemmas: The Law’s Race to Keep up with Technological Change 4 (Univ. of New S. Wales Working Paper No. 2007-21, 2007), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=979861.
  25. What is AI?, McKinsey & Co. (Apr. 3, 2024), https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-ai; see Understanding the Different Types of Artificial Intelligence, IBM Data & AI Team (Oct. 12, 2023), https://www.ibm.com/think/topics/artificial-intelligence-types.
  26. IBM Data & AI Team, supra note 25; see also Naveen Joshi, 7 Types of Artificial Intelligence, Forbes (June 19, 2019), https://www.forbes.com/sites/cognitiveworld/2019/06/19/7-types-of-artificial-intelligence/.
  27. IBM Data & AI Team, supra note 25. General AI and Super AI are both strictly theoretical concepts; even OpenAI’s ChatGPT is considered a form of Narrow AI because it’s limited to the single task of text-based chat. Id.
  28. Narrow AI, DeepAI, https://deepai.org/machine-learning-glossary-and-terms/narrow-ai (last visited May 24, 2025).
  29. Ben Nancholas, What Are the Different Types of Artificial Intelligence?, Univ. Wolverhampton (June 7, 2023), https://online.wlv.ac.uk/what-are-the-different-types-of-artificial-intelligence/. General AI, also known as Artificial General Intelligence (AGI), uses “previous learnings and skills to accomplish new tasks in a different context without the need for [humans] to train the underlying models.” IBM Data & AI Team, supra note 25. Super AI, if ever successfully developed, “would think, reason, learn, make judgments and possess cognitive abilities that surpass those of human beings.” Id.
  30. IBM Data & AI Team, supra note 25.
  31. Id. The four types of AI based on functionalities all fit into the broader category of Artificial Narrow AI. Id.; see also Joshi, supra note 26.
  32. IBM Data & AI Team, supra note 25; see also How Netflix’s Recommendations System Works, Netflix: Help Ctr., https://help.netflix.com/en/node/100639 (last visited May 24, 2025).
  33. IBM Data & AI Team, supra note 25.
  34. Id.
  35. Id.
  36. Id. Theory of Mind AI is currently being developed, and Self-Aware AI is strictly theoretical. Id.
  37. See Artificial Intelligence (AI) vs. Machine Learning, Columbia Eng’g, https://ai.engineering.columbia.edu/ai-vs-machine-learning/ (last visited May 24, 2025).
  38. See Artificial Intelligence (AI) vs. Machine Learning (ML), Microsoft Azure, https://azure.microsoft.com/en-us/resources/cloud-computing-dictionary/artificial-intelligence-vs-machine-learning (last visited May 24, 2025).
  39. Id.
  40. What’s the Difference Between Business Intelligence and Machine Learning?, AWS, https://aws.amazon.com/compare/the-difference-between-business-intelligence-and-machine-learning/ (last visited May 24, 2025).
  41. Kristin Burnham, Artificial Intelligence vs. Machine Learning: What’s the Difference?, Ne. Univ. Graduate Programs (May 6, 2020), https://graduate.northeastern.edu/resources/artificial-intelligence-vs-machine-learning-whats-the-difference/.
  42. Id.
  43. The Evolution and Techniques of Machine Learning, DataRobot (Jan. 7, 2025), https://www.datarobot.com/blog/how-machine-learning-works/.
  44. Id.
  45. Julianna Delua, Supervised Versus Unsupervised Learning: What’s the Difference?, IBM (Mar. 12, 2021), https://www.ibm.com/blog/supervised-vs-unsupervised-learning/.
  46. Id.
  47. Id.
  48. See Gaudenz Boesch, Supervised vs Unsupervised Learning for Computer Vision, viso.ai (Dec. 21, 2023), https://viso.ai/deep-learning/supervised-vs-unsupervised-learning/; Alyshai Nadeem, Machine Learning 101: Supervised, Unsupervised, Reinforcement Learning Explained, datasciencedojo (Sept. 15, 2022), https://datasciencedojo.com/blog/machine-learning-101/.
  49. Gonzalez v. Google, LLC, 2 F.4th 871, 881 (9th Cir. 2021). Recommender systems fall into the category of Artificial Narrow and are a type of reactive machine AI. See IBM Data & AI Team, supra note 25; Casalegno, supra note 5.
  50. See Rem Darbinyan, How AI Transforms Social Media, Forbes (Mar. 16, 2023), https://www.forbes.com/sites/forbestechcouncil/2023/03/16/how-ai-transforms-social-media/.
  51. Bernard Marr, The Amazing Ways YouTube Uses Artificial Intelligence and Machine Learning, Forbes (Aug. 23, 2019), https://www.forbes.com/sites/bernardmarr/2019/08/23/the-amazing-ways-youtube-uses-artificial-intelligence-and-machine-learning/.
  52. Id.; see Nadeem, supra note 48; see also Tamara Biljman, AI in Social Media: Benefits, Tools, and Challenges, Sendible (Jun. 4, 2024), https://www.sendible.com/insights/ai-in-social-media.
  53. Sara Brown, Machine Learning, Explained, MIT Mgmt. Sloan Sch.: Ideas Made to Matter (Apr. 21, 2021), https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained; see Katherine Haan & Robb Watts, How Businesses Are Using Artificial Intelligence, Forbes Advisor (Apr. 24, 2023), https://www.forbes.com/advisor/business/software/ai-in-business/.
  54. Brown, supra note 53.
  55. Id.
  56. Id.; Anthony Cardillo, How Many Companies Use AI? (New Data), Exploding Topics, https://explodingtopics.com/blog/companies-using-ai (May 1, 2025); IBM, IBM Global AI Adoption Index 2022 (May 2022), https://www.ibm.com/downloads/cas/GVAGA3JP; The State of AI in 2023: Generative AI’s Breakout Year, McKinsey & Co. (Aug. 1, 2023), https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year#steady.
  57. Ryan Tracy, ChatGPT’s Sam Altman Warns Congress That AI ‘Can Go Quite Wrong,’ Wall St. J. (May 16, 2023), https://www.wsj.com/tech/ai/chatgpts-sam-altman-faces-senate-panel-examining-artificial-intelligence-4bb6942a.
  58. See Wyden & Cox, supra note 6, at 2; Stratton Oakmont, Inc. v. Prodigy Serv. Co., No. 31063/94, 1995 WL 323710 (N.Y. Sup. Ct. May 24, 1995).
  59. 47 U.S.C. § 230(a)(1).
  60. See Kosseff, supra note 2, at 9–10.
  61. Stratton Oakmont, 1995 WL 323710; Wyden & Cox, supra note 6, at 2; see also Kosseff, supra note 2, at 45–56.
  62. Stratton Oakmont, 1995 WL 323710, at *1.
  63. Id. at *4–5.
  64. Wyden & Cox, supra note 6, at 2.
  65. See Kosseff, supra note 2, at 2.
  66. Id.; Gonzalez v. Google LLC, 2 F.4th 871, 886 (9th Cir. 2021).
  67. 47 U.S.C. § 230(c)(1).
  68. Gonzalez, 2 F.4th at 886–87 (quoting Doe v. Internet Brands, Inc., 824 F.3d 846, 850 (9th Cir. 2016)).
  69. Fair Hous. Council v. Roommates.com, LLC, 521 F.3d 1157, 1162 (9th Cir. 2008) (quoting 47 U.S.C. § 230(f)(3)).
  70. Id. at 1162–63.
  71. Section 230, EFF, https://www.eff.org/issues/cda230 (last visited May 24, 2025).
  72. Id.
  73. Rebecca Kern, SCOTUS to Hear Challenge to Section 230 Protections, Politico (Oct. 3, 2022), https://www.politico.com/news/2022/10/03/scotus-section-230-google-twitter-youtube-00060007. Compare Prager Univ. v. Google LLC, 85 Cal. App. 5th 1022 (Cal. Ct. App. 2022), and Dyroff v. Ultimate Software Grp., Inc., 934 F.3d 1093 (9th Cir. 2019), with Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019).
  74. Zach Schonfeld, Chief Justice Centers Supreme Court Annual Report on AI’s Dangers, Hill (Dec. 31, 2023), https://thehill.com/regulation/court-battles/4383324-chief-justice-centers-supreme-court-annual-report-on-ais-dangers/.
  75. Tracy, supra note 57.
  76. Id.
  77. Id.
  78. Lawrence Norden & Benjamin Lerude, States Take the Lead on Regulating Artificial Intelligence, Brennan Ctr. for Just. (Nov. 6, 2023), https://www.brennancenter.org/our-work/research-reports/states-take-lead-regulating-artificial-intelligence.
  79. See EU AI Act: First Regulation on Artificial Intelligence, Eur. Parl.: Topics (Feb. 19, 2025), https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence.
  80. Norden & Lerude, supra note 78.
  81. Kern, supra note 73.
  82. 18 U.S.C. § 2333.
  83. Gonzalez v. Google LLC, 2 F.4th 871, 880 (9th Cir. 2021). Gonzalez’s initial complaint was later amended and joined by other family members and similarly situated plaintiffs. Id. at 882.
  84. Id. at 880; Lori Hinnant, 2015 Paris Attacks Suspect: Deaths of 130 ‘Nothing Personal,’ AP News (Sept. 15, 2021), https://apnews.com/article/europe-france-trials-paris-brussels-f2031a79abfae46cbd10d4315cf29163.
  85. Gonzalez, 2 F.4th at 882.
  86. Id. at 881.
  87. Id.
  88. See Gonzalez v. Google, Inc., 282 F. Supp. 3d 1150, 1171 (N.D. Cal. 2017); Fed. R. Civ. P. 12(b)(6).
  89. Gonzalez, 2 F.4th at 880. Taamneh and Clayborn involve claims against Google, Twitter, and Facebook. Id.
  90. Gonzalez, 2 F.4th at 879, 883, 884; 1 Artificial Intelligence: Law and Litigation § 3.02, Lexis (database updated May 2024).
  91. Gonzalez, 2 F.4th at 879.
  92. Id. at 880 (quoting 18 U.S.C. § 2333(a)).
  93. Id. at 885 (quoting Justice Against Sponsors of Terrorism Act (JASTA), Pub. L. No. 114-222, 130 Stat. 852 (2016)).
  94. Id. at 880.
  95. The Gonzalez plaintiffs’ revenue-sharing theory is distinct from their other theories of liability because the allegations were not based on the content ISIS placed on YouTube. Id. at 898. Instead, the allegations were “premised on Google providing ISIS with material support by giving ISIS money.” Id. The revenue-sharing allegations stemmed from Google’s AdSense program, which involved “Google shar[ing] a percentage of revenues generated from those advertisements with ISIS.” Id.
  96. Id. at 882.
  97. Id. at 880. The district court in Taamneh did not reach the issue of section 230 immunity. Id.
  98. Id. The Taamneh plaintiffs only appealed the dismissal of their aiding and abetting claim. Id. at 908. The Ninth Circuit reversed the district court’s dismissal after concluding that the complaint’s allegations “that defendants provided services that were central to ISIS’s growth and expansion, and that this assistance was provided over many years,” adequately alleged the defendants’ assistance to ISIS was substantial. Id. at 910.
  99. Gonzalez v. Google LLC, 143 S. Ct. 80 (2022) (mem.); Twitter, Inc. v. Taamneh, 143 S. Ct. 81 (2022) (mem.).
  100. Gonzalez v. Google, Elec. Priv. Info. Ctr., https://epic.org/documents/gonzalez-v-google/ (last visited May 24, 2025); see also Gonzalez v. Google LLC, 143 S. Ct. 1191, 1191–92 (2023) (per curiam).
  101. See Danielle Draper & Sean Long, Summarizing the Amicus Briefs Arguments in Gonzalez v. Google LLC, Bipartisan Pol’y Ctr. (Feb. 21, 2023), https://bipartisanpolicy.org/blog/arguments-gonzalez-v-google/.
  102. Richard L. Pacelle, Jr., Amicus Curiae Briefs in the Supreme Court, Oxford Rsch. Encyclopedias (April 20, 2022), https://doi.org/10.1093/acrefore/9780190228637.013.1992.
  103. Draper & Long, supra note 101.
  104. Id.
  105. See generally Transcript of Oral Argument, Gonzalez v. Google, 143 S. Ct. 1191 (2023) (No. 21-1333) [hereinafter Gonzalez Oral Argument Transcript]; Transcript of Oral Argument, Twitter v. Taamneh, 143 S. Ct. 1206 (2023) (No. 21-1496) [hereinafter Taamneh Oral Argument Transcript].
  106. See Gonzalez Oral Argument Transcript, supra note 105, at 1, 164; Taamneh Oral Argument Transcript, supra note 105, at 1, 151.
  107. Gonzalez Oral Argument Transcript, supra note 105, at 49.
  108. Taamneh Oral Argument Transcript, supra note 105, at 72–73.
  109. Gonzalez Oral Argument Transcript, supra note 105, at 34, 64, 72; Taamneh Oral Argument Transcript, supra note 105, at 12–13, 54, 126.
  110. Kate Klonick, How 236,471 Words of Amici Briefing Gave Us the 565 Word Gonzalez Decision, Klonickles (May 29, 2023), https://klonick.substack.com/p/how-236471-words-of-amici-briefing.
  111. Gonzalez v. Google, 143 S. Ct. 1191 (2023) (per curiam).
  112. Id. at 1192.
  113. Id.
  114. Taamneh, 143 S. Ct. at 1218.
  115. 705 F.2d 472 (D.C. Cir. 1983).
  116. Taamneh, 143 S. Ct. at 1218 (quoting Justice Against Sponsors of Terrorism Act (JASTA), Pub. L. No. 114-222, § 2(a)(5), 130 Stat. 852, 852 (2016)).
  117. Id. at 1230.
  118. Id. at 1230–31.
  119. See Leading Case, supra note 9, at 404–06. “Judicial minimalism is the principle that judges should ‘say[] no more than necessary to justify an outcome.’” Id. at 405 (alteration in original) (quoting Cass R. Sunstein, The Supreme Court, 1995 Term — Foreword: Leaving Things Undecided, 110 Harv. L. Rev. 4, 6 (1996)).
  120. See Gonzalez Oral Argument Transcript, supra note 105, at 34, 64, 72; Taamneh Oral Argument Transcript, supra note 105, at 12–13, 54, 126.
  121. See Doe v. Facebook, Inc., 142 S. Ct. 1087, 1088–89 (2022) (Thomas, J., concurring in denial of certiorari).
  122. See id. at 1087.
  123. Id. at 1088 (quoting In re Facebook, 625 S.W.3d 80 (Tex. 2021)).
  124. Gonzalez Oral Argument Transcript, supra note 105, at 34, 64, 72.
  125. Doe, 142 S. Ct. at 1088 (Thomas, J., concurring in denial of certiorari) (quoting Malwarebytes, Inc. v. Enigma Software Grp. USA, LLC, 141 S. Ct. 13, 18 (2020)).
  126. See Malwarebytes, 141 S. Ct. at 16.
  127. Wyden & Cox, supra note 6, at 2.
  128. See Kosseff, supra note 2, at 2.
  129. Wyden & Cox, supra note 6, at 6.
  130. See George Glover, It’s Time to See Whether AI Is the New Internet — or the Next ‘Metaverse,’ Bus. Insider (July 11, 2023), https://www.businessinsider.com/ai-chatgpt-artificial-intelligence-internet-dot-com-metaverse-crypto-blockchain-2023-7; Einaras Von Gravrock, How AI Empowers the Evolution of the Internet, Forbes (Nov. 15, 2018), https://www.forbes.com/sites/forbeslacouncil/2018/11/15/how-ai-empowers-the-evolution-of-the-internet/.
  131. See generally How Has the Internet Changed in the Last 20 Years, in.house.media, https://www.ihm.co.uk/blog/how-has-the-internet-changed-in-the-last-20-years/ (last visited May 24, 2025).
  132. 47 U.S.C. § 230(c)(1); see Wyden & Cox, supra note 6, at 2 (“Congress drafted Section 230 in light of its understanding of the capabilities of then-extant online platforms and the evident trajectory of Internet development.”).
  133. Josie Fischels, A Look Back at the Very First Website Ever Launched, 30 Years Later, NPR (Aug. 6, 2021), https://www.npr.org/2021/08/06/1025554426/a-look-back-at-the-very-first-website-ever-launched-30-years-later.
  134. See Fox & Rainie, supra note 20.
  135. See Danny Hajek et al., What Is AI and How Will It Change Our Lives? NPR Explains., NPR (May 25, 2023), https://www.npr.org/2023/05/25/1177700852/ai-future-dangers-benefits; How Artificial Intelligence Is Changing Your Life Unknowingly, Econ. Times (Mar. 15, 2023), https://economictimes.indiatimes.com/news/how-to/how-artificial-intelligence-is-changing-your-life-unknowingly/articleshow/98455922.cms?from=mdr; Mike Thomas, The Future of AI: How Artificial Intelligence Will Change the World, builtin, https://builtin.com/artificial-intelligence/artificial-intelligence-future (Jan. 28, 2025).
  136. Wyden & Cox, supra note 6, at 8.
  137. Id. at 12–13.
  138. See, e.g., Exec. Order No. 14,110, 88 Fed. Reg. 75,191 (Oct. 30, 2023).
  139. See Zeran v. Am. Online, Inc., 129 F.3d 327, 331–34 (4th Cir. 1997).
  140. Malwarebytes, Inc. v. Enigma Software Grp. USA, LLC, 141 S. Ct. 13, 15 (2020) (Thomas, J., concurring in the denial of certiorari) (citing Zeran, 129 F.3d at 331–34).
  141. Malwarebytes, 141 S. Ct. at 15 (Thomas, J., concurring in the denial of certiorari) (citations omitted).
  142. See Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019).
  143. See id.
  144. Id. In Force, victims of terrorist attacks in Israel alleged that Facebook provided material support to Hamas terrorists by enabling Hamas “to disseminate its messages directly to its intended audiences and to carry out communication components of its terror attacks.” Id. at 59.
  145. Id. at 70.
  146. Id. at 70 n.24.
  147. Christopher Cox, The Origins and Original Intent of Section 230 of the Communications Decency Act, Rich. J.L. & Tech. Blog (Aug. 27, 2020), https://jolt.richmond.edu/2020/08/27/the-origins-and-original-intent-of-section-230-of-the-communications-decency-act/.
  148. See sources cited supra note 135.
  149. See Poornima Apte, How AI Is Leveling the Marketing Playing Field Between SMBs and Big Business, U.S. Chamber of Comm.: CO (Aug. 7, 2023), https://www.uschamber.com/co/good-company/launch-pad/how-small-businesses-are-using-ai.
  150. Michael Chui et al., The State of AI in 2022—and a Half Decade in Review, McKinsey & Co. (Dec. 6, 2022), https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2022-and-a-half-decade-in-review.
  151. Id.
  152. Report: Small Business Owners Embrace the Future – Majority Say They Will Adopt Generative AI, FreshBooks, https://www.freshbooks.com/press/data-research/data-research-majority-of-small-business-owners-will-use-ai (last visited May 24, 2025); see also Michelle Kumar, Navigating the Era of AI: Implications for Small Businesses, Bipartisan Pol’y Ctr. (Nov. 3, 2023), https://bipartisanpolicy.org/blog/navigating-the-era-of-ai-implications-for-small-businesses (highlighting a recent survey that found that 23% of small businesses use AI in some form).
  153. See Apte, supra note 149.
  154. See id.
  155. Martin Rowinski, How Small Businesses Drive the American Economy, Forbes (Mar. 25, 2022), https://www.forbes.com/councils/forbesbusinesscouncil/2022/03/25/how-small-businesses-drive-the-american-economy/.
  156. See, e.g., FACT SHEET: The Small Business Boom Under the Biden-Harris Administration, White House (Apr. 28, 2022), https://bidenwhitehouse.archives.gov/briefing-room/statements-releases/2022/04/28/fact-sheet-the-small-business-boom-under-the-biden-harris-administration/.
  157. Cox, supra note 147.
  158. Christopher MacColl, Defamatory Bots and Section 230: Navigating Liability in the Age of Artificial Intelligence, JD Supra (July 18, 2023), https://www.jdsupra.com/legalnews/defamatory-bots-and-section-230-3202468 (quoting 47 U.S.C. § 230(c)(1)).
  159. The General Data Protection Regulation, Eur. Council (June 13, 2024), https://www.consilium.europa.eu/en/policies/data-protection-regulation/.
  160. Jared Schroeder, Meet the EU Law That Could Reshape Online Speech in the U.S., Slate (Oct. 27, 2022), https://slate.com/technology/2022/10/digital-services-act-european-union-content-moderation.html.
  161. See Questions and Answers on the Digital Services Act, Eur. Comm’n (Feb. 23, 2024), https://ec.europa.eu/commission/presscorner/detail/en/qanda_20_2348.
  162. Kelvin Chan & Raf Casert, EU Law Targets Big Tech Over Hate Speech, Disinformation, AP News (Apr. 23, 2022), https://apnews.com/article/technology-business-police-social-media-reform-52744e1d0f5b93a426f966138f2ccb52.
  163. See Schroeder, supra note 160.
  164. Artificial Intelligence Act: Deal on Comprehensive Rules for Trustworthy AI, Eur. Parl.: News (Dec. 9, 2023), https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai.
  165. See EU AI Act: First Regulation on Artificial Intelligence, supra note 79; The Digital Services Act Package, Eur. Comm’n, https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package (Feb. 12, 2025).
  166. Artificial Intelligence Act, supra note 164.
  167. See, e.g., Brief of the National Center on Sexual Exploitation, the National Trafficking Sheltered Alliance, and RAINN, as Amici Curiae in Support of Petitioners, Gonzalez v. Google LLC, 143 S. Ct. 1191 (2023) (No. 21-1333) [hereinafter NCSE Brief]. See generally Sivile Manene et al., Mitigating Misinformation About the COVID-19 Infodemic on Social Media: A Conceptual Framework, NIH Nat’l Libr. Med., May 2023, at 1, 2 (“Social media platforms have taken steps to mitigate the spread of COVID-19 misinformation by implementing policies . . . which prohibit[] users from using the platform’s services to share false or misleading information about COVID-19.”).
  168. NCSE Brief, supra note 167.
  169. Brief for Reddit, Inc. and Reddit Moderators as Amici Curiae in Support of Respondent, Gonzalez, 143 S. Ct. 1191 (No. 21-1333).
  170. See Zeran v. Am. Online, Inc., 129 F.3d 327, 331–34 (4th Cir. 1997).
  171. See John Samples, Why the Government Should Not Regulate Content Moderation of Social Media, Cato Inst. (Apr. 9, 2019), https://www.cato.org/policy-analysis/why-government-should-not-regulate-content-moderation-social-media.
  172. Sue Halpern, The Year A.I. Ate the Internet, New Yorker (Dec. 8, 2023), https://www.newyorker.com/culture/2023-in-review/the-year-ai-ate-the-internet.
  173. See Manene et al., supra note 167, at 2 (“Social media platforms have taken steps to mitigate the spread of COVID-19 misinformation by implementing policies . . . which prohibit[] users from using the platform’s services to share false or misleading information about COVID-19.”).
  174. See Nandita Krishnan et al., Research Note: Examining How Various Social Media Platforms Have Responded to COVID-19 Misinformation, Harv. Kennedy Sch. Misinformation Rev. (Dec. 15, 2021), https://misinforeview.hks.harvard.edu/article/research-note-examining-how-various-social-media-platforms-have-responded-to-covid-19-misinformation/.
  175. See Gabrielle Lim & Samantha Bradshaw, Chilling Legislation: Tracking the Impact of “Fake News” Laws on Press Freedom Internationally, Ctr. for Int’l Media Assistance (July 19, 2023), https://www.cima.ned.org/publication/chilling-legislation/.
  176. See Cary Coglianese, Regulating Machine Learning: The Challenge of Heterogeneity, Competition Pol’y Int’l, Feb. 2023, at 1, 3.
  177. Artificial Intelligence Act, supra note 164.
  178. Kell, supra note 8.

By Mary Catherine Young

Last month, an Azerbaijani journalist was forced to deactivate her social media accounts after receiving sexually explicit and violent threats in response to a piece she wrote about Azerbaijan’s cease-fire with Armenia.[1] Some online users called for the Azerbaijani government to revoke columnist Arzu Geybulla’s citizenship—others called for her death.[2] Days later, an Irish man, Brendan Doolin, was criminally charged with online harassment of four female journalists.[3] The charges came on the heels of a three-year jail sentence imposed in 2019 for stalking six female writers and journalists online, one of whom reported receiving over 450 messages from Doolin.[4] Online harassment of journalists is a problem of international scale.

Online harassment of journalists abounds in the United States as well, with women bearing the brunt of the abuse.[5] According to a 2019 survey conducted by the Committee to Protect Journalists, 90 percent of female or gender nonconforming American journalists said that online harassment is “the biggest threat facing journalists today.”[6] Fifty percent of those surveyed reported that they have been threatened online.[7] While online harassment plagues journalists around the world, the legal ramifications of such harassment are far from uniform.[8] Before diving into how the law can protect journalists from this abuse, it is necessary to explain what online harassment actually looks like in the United States.

In a survey conducted in 2017 by the Pew Research Center, 41 percent of 4,248 American adults reported that they had personally experienced harassing behavior online.[9] The same study found that 66 percent of Americans said that they have witnessed harassment targeted at others.[10] Online harassment, however, takes many shapes.[11] For example, people may experience “doxing,” which occurs when one’s personal information is revealed on the internet.[12] Or, they may experience a “technical attack,” which includes harassers hacking an email account or preventing traffic to a particular webpage.[13] Much of online harassment takes the form of “trolling,” which occurs when “a perpetrator seeks to elicit anger, annoyance or other negative emotions, often by posting inflammatory messages.”[14] Trolling can encompass situations in which harassers intend to silence women with sexualized threats.[15]

The consequences of online harassment of internet users can be significant, causing mental distress and sometimes fear for one’s physical safety.[16] In the context of journalists, however, the implications of harassment commonly reach beyond the individual journalist—the free flow of information in the media is frequently disrupted by journalists’ fear of cyberbullying.[17] How legal systems punish those who harass journalists online varies greatly both internationally and domestically.[18]

For example, the United States provides several federal criminal and civil paths to recourse for victims of online harassment, though none specifically geared toward journalists.[19] In terms of criminal law, provisions protecting individuals against cyber-stalking are included in 18 U.S.C. § 2261A, which criminalizes stalking in general.[20] According to this statute, “[w]hoever . . . with the intent to kill, injure, harass, intimidate, or place under surveillance with intent to . . . harass, or intimidate another person, uses . . . any interactive computer service . . . [and] causes, attempts to cause, or would be reasonably expected to cause substantial emotional distress to a person . . .” may be imprisoned.[21] In terms of civil law, plaintiffs may be able to bring defamation or copyright infringement claims.[22] For example, when the harassment takes the form of sharing an individual’s self-taken photographs without the photographer’s consent, whether or not they are explicit, the circumstances may allow the victim to pursue a claim under the Digital Millennium Copyright Act.[23]

Some states provide their own criminal laws against online harassment, though states differ in whether the provisions are included in anti-harassment legislation or in anti-stalking laws.[24] For example, Alabama,[25] Arizona,[26] and Hawaii[27] all provide for criminal prosecution of cyberbullying in their laws against harassment, whereas Wyoming,[28] California,[29] and North Carolina[30] include anti-online-harassment provisions in their laws against stalking.[31] North Carolina’s stalking statute, however, was recently held unconstitutional as applied under the First Amendment after a defendant was charged over a slew of Google Plus posts describing his bizarre wish to marry the victim.[32] The North Carolina Court of Appeals decision in Shackelford seems to reflect a distinctly American reluctance to interfere with individuals’ ability to post freely online out of extreme deference to First Amendment rights.

Other countries have taken more targeted approaches to legally protecting journalists from online harassment.[33] France, in particular, has several laws pertaining to cyberbullying and online harassment in general, and these laws have recently provided relief for journalists.[34] For example, in July 2018, two perpetrators were given six-month suspended prison sentences after targeting a journalist online.[35] The defendants subjected Nadia Daam, a French journalist and radio broadcaster, to months of online harassment after she condemned users of an online platform for harassing feminist activists.[36] Scholars who examine France’s willingness to prosecute perpetrators of online harassment against journalists and non-journalists alike point out that while the country certainly holds freedom of expression in high regard, that freedom is balanced against other rights, including individuals’ right to privacy and “right to human dignity.”[37]

Some call for more rigorous criminalization of online harassment in the United States, particularly harassment of journalists, to reduce its potential to create a “crowding-out effect” that prevents valuable online speech from being heard.[38] It seems, however, that First Amendment interests may prevent many journalists from finding relief—at least for now.


[1] Aneeta Mathur-Ashton, Campaign of Hate Forces Azeri Journalist Offline, VOA (Jan. 8, 2021), https://www.voanews.com/press-freedom/campaign-hate-forces-azeri-journalist-offline.

[2] Id.

[3] Tom Tuite, Dubliner Charged with Harassing Journalists Remanded in Custody, The Irish Times (Jan. 18, 2021), https://www.irishtimes.com/news/crime-and-law/courts/district-court/dubliner-charged-with-harassing-journalists-remanded-in-custody-1.4461404.

[4] Brion Hoban & Sonya McLean, ‘Internet Troll’ Jailed for Sending Hundreds of Abusive Messages to Six Women, The Journal.ie (Nov. 14, 2019), https://www.thejournal.ie/brendan-doolin-court-case-4892196-Nov2019/.

[5] Lucy Westcott & James W. Foley, Why Newsrooms Need a Solution to End Online Harassment of Reporters, Comm. to Protect Journalists (Sept. 4, 2019), https://cpj.org/2019/09/newsrooms-solution-online-harassment-canada-usa/.

[6] Id.

[7] Id.

[8] See Anya Schiffrin, How to Protect Journalists from Online Harassment, Project Syndicate (July 1, 2020), https://www.project-syndicate.org/commentary/french-laws-tackle-online-abuse-of-journalists-by-anya-schiffrin-2020-07.

[9] Maeve Duggan, Online Harassment in 2017, Pew Rsch. Ctr. (July 11, 2017), https://www.pewresearch.org/internet/2017/07/11/online-harassment-2017/.

[10] Id.

[11] Autumn Slaughter & Elana Newman, Journalists and Online Harassment, Dart Ctr. for Journalism & Trauma (Jan. 14, 2020), https://dartcenter.org/resources/journalists-and-online-harassment.

[12] Id.

[13] Id.

[14] Id.

[15] Id.

[16] Duggan, supra note 9.

[17] Law Libr. of Cong., Laws Protecting Journalists from Online Harassment 1 (2019), https://www.loc.gov/law/help/protecting-journalists/compsum.php.

[18] See id. at 3–4; Marlisse Silver Sweeney, What the Law Can (and Can’t) Do About Online Harassment, The Atl. (Nov. 12, 2014), https://www.theatlantic.com/technology/archive/2014/11/what-the-law-can-and-cant-do-about-online-harassment/382638/.

[19] Hollaback!, Online Harassment: A Comparative Policy Analysis for Hollaback! 37 (2016), https://www.ihollaback.org/app/uploads/2016/12/Online-Harassment-Comparative-Policy-Analysis-DLA-Piper-for-Hollaback.pdf.

[20] 18 U.S.C. § 2261A.

[21] § 2261A(2)(B).

[22] Hollaback!, supra note 19, at 38.

[23] Id.; see also 17 U.S.C. §§ 1201–1332.

[24] Hollaback!, supra note 19, at 38–39.

[25] Ala. Code § 13A-11-8.

[26] Ariz. Rev. Stat. Ann. § 13-2916.

[27] Haw. Rev. Stat. § 711-1106.

[28] Wyo. Stat. Ann. § 6-2-506.

[29] Cal. Penal Code § 646.9.

[30] N.C. Gen. Stat. § 14-277.3A.

[31] Hollaback!, supra note 19, at 39 (providing more states that cover online harassment in their penal codes).

[32] State v. Shackelford, 825 S.E.2d 689, 701 (N.C. Ct. App. 2019), https://www.nccourts.gov/documents/appellate-court-opinions/state-v-shackelford. After meeting the victim once at a church service, the defendant promptly made four separate Google Plus posts in which he referenced the victim by name. Id. at 692. In one post, the defendant stated that “God chose [the victim]” to be his “soul mate,” and in a separate post wrote that he “freely chose [the victim] as his wife.” Id. After nearly a year of increasingly invasive posts in which he repeatedly referred to the victim as his wife, defendant was indicted by a grand jury on eight counts of felony stalking. Id. at 693–94.

[33] Law Libr. of Cong., supra note 17, at 1–2.

[34] Id. at 78–83.

[35] Id. at 83.

[36] Id.

[37] Id. at 78.

[38] Schiffrin, supra note 8.

Post Image by Kaur Kristjan on Unsplash.

By Rachel L. Golden

To mitigate the spread of COVID-19, millions of students have been forced to move from in-person to distance learning. The success of distance learning hinges on a student’s ability to access the virtual classroom.[1] For two girls in East Salinas, California, distance learning meant having to sit in a Taco Bell parking lot to complete their homework.[2] In August 2020, a photo of these two young girls sitting in the Taco Bell parking lot went viral on Twitter because the parking lot provided something that their home environment could not: access to the internet.[3]

For many Americans, access to online services is not a given.[4] A 2018 Federal Communications Commission (“FCC”) study found that “there are more than 14 million people without any internet access and 25 million without faster and more reliable broadband access.”[5] The COVID-19 pandemic has further illuminated this digital divide.[6] The digital divide “refers to the growing gap between the underprivileged members of society . . . who do not have access to computers or the internet” and the more affluent Americans who do.[7] The divide stems not only from lacking access to the internet, but also from lacking access to a device that can connect to it.[8]

The digital divide does not exclusively affect school-aged children, but its consequences are especially clear when examining these children.[9] Even prior to the current public health crisis, a 2018 Pew Research Center analysis showed that, due to a lack of broadband internet access, poor school-aged children were less likely to finish their homework than more affluent students with internet access.[10] This problem has been exacerbated during the COVID-19 pandemic, when the primary mode of teaching, at all levels, has switched to virtual learning.[11] Moreover, to complete remote schoolwork, students may be forced “to go outside and ignore quarantine or shelter-in-place guidelines” to find internet access—actions contrary to the original health and safety purposes of distance learning.[12]

However, COVID-19’s illumination of the digital divide has “produced new political will to reduce inequality in the global digital economy.”[13] In the most recent COVID-19 response and relief package, Congress acknowledged the need for broadband funding and access.[14] The Consolidated Appropriations Act, 2021[15] (“Act”) establishes a $3.2 billion Emergency Broadband Connectivity Fund (“Fund”).[16] The Act directs the FCC to use the Fund “to establish an Emergency Broadband Benefit Program, under which eligible households may receive a discount off the cost of broadband service and certain connected devices . . . relating to the COVID-19 pandemic.”[17]

Broadband providers’ participation in the Emergency Broadband Benefit Program (“Benefit Program”) is entirely voluntary.[18] A provider that chooses to participate, however, must be designated as an eligible telecommunications carrier or be approved by the FCC.[19] Once approved, the provider will give eligible households monthly discounts of up to $50 “off the standard rate for an Internet service offering and associated equipment.”[20] Providers are then entitled to reimbursement from the Benefit Program for the discounts they have provided.[21] Moreover, the Benefit Program not only enables discounted internet service but also encourages providers to supply eligible households with a connected device, such as a laptop, desktop computer, or tablet.[22] The Benefit Program, however, is not without its limitations. For example, an eligible household that seeks a connected device may receive only one supported device.[23]

The Act directs the FCC to provide a public comment period and a public reply comment period, each of twenty days, before the rules of the Benefit Program are established.[24] The FCC seeks comment on a variety of provisions.[25] Examples include “the eligibility and election process for participating providers” and how to define “household” for purposes of the Act’s requirement that the discounts and connected devices be provided to “eligible households.”[26] The twenty-day public comment window closed on Jan. 25, 2021, but the public reply comment window closes on Feb. 16, 2021, so the scope of the Benefit Program’s rules is yet to be determined.[27]

The true aim of the Benefit Program is to provide broadband internet access to low-income households at affordable rates—especially those households with school-aged children.[28] Whether the Benefit Program will fulfill this goal remains to be seen. However, it is clear that the Benefit Program is “an important Band-Aid that [will help] Americans [stay] connected,” even if solving the nation’s digital divide requires stitches.[29] Ultimately, the hope is that with increased access to internet services and connected devices, Taco Bell parking lots will remain parking lots and not double as schools.


[1] Strengths and Weaknesses of Online Learning, Univ. Ill. Springfield, https://www.uis.edu/ion/resources/tutorials/online-education-overview/strengths-and-weaknesses/ (last visited Feb. 9, 2021).

[2] Lizzy Francis, Viral Photo Shows Kids with No Internet Using Taco Bell Wifi To Do Homework, Yahoo! News (Sept. 2, 2020), https://news.yahoo.com/viral-photo-shows-kids-no-171809219.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAG4dqe2tNs1lEJ4bvk99l0BosLqbgsIR5cnnqVYqWpXkh0dQy4YyB0GXkfPVoaWaSQUcKWHskKFOLhweLRqI1lj6_8sOHiIRvdtwAZjvKDYtmVdPKXr7YohJudkZUlOXPra-UbYSQeSCq9cfo1xuiry5ZcyLyV2OY1h2OVqUvwoX.

[3] Id.

[4] See Emmanuel Martinez, How Many Americans Lack High-Speed Internet?, The Markup (Mar. 26, 2020),  https://themarkup.org/ask-the-markup/2020/03/26/how-many-americans-lack-high-speed-internet#:~:text=There%20are%20more%20than%2014,census%20blocks%20and%20not%20households.

[5] Id.

[6] Id.

[7] Digital Divide, Stan. Univ., https://cs.stanford.edu/people/eroberts/cs181/projects/digital-divide/start.html (last visited Feb. 9, 2021).

[8] Id.

[9] See Martinez, supra note 4.

[10] Id.

[11] See id.

[12] Id.

[13] Closing Digital Divide in the Covid Era: Four Big Data Strategies, Digit. Divide Inst., https://digitaldivide.org/ (last visited Feb. 9, 2021).

[14] See Kelcee Griffis, COVID Bill Includes Broadcaster Loans, Broadband Funds, Law360 (Dec. 21, 2020), https://www.law360.com/articles/1339770/covid-bill-includes-broadcaster-loans-broadband-funds.

[15] Consolidated Appropriations Act, 2021, Pub. L. No. 116-260 (2020), available at https://www.congress.gov/bill/116th-congress/house-bill/133/text (Consolidated Appropriations Act) (enrolled bill).

[16] FCC Seeks Public Input on New $3.2 Billion Emergency Broadband Benefit Program, Fed. Commc’ns Comm’n (Jan. 4, 2021), https://docs.fcc.gov/public/attachments/DA-21-6A1.pdf.

[17] Id.

[18] Id. The discount on Tribal lands may be up to $75 per month, as opposed to $50 per month. Id.

[19] Id.

[20] Id.

[21] Id.

[22] Id.

[23] Id.

[24] Id.

[25] Id.

[26] Id.

[27] Id.

[28] Creating (Finally) an Emergency Broadband Benefit, Benton Inst. for Broadband & Soc’y (Jan. 5, 2021), https://www.benton.org/blog/creating-finally-emergency-broadband-benefit#:~:text=In%20the%20Consolidated%20Appropriations%20Act,the%20Emergency%20Broadband%20Benefit%20Program.&text=Broadband%20providers%20will%20be%20reimbursed,household%20is%20on%20Tribal%20land.

[29] Griffis, supra note 14.  

Post image: Two girls in East Salinas, California, rely on wifi from a Taco Bell restaurant to complete homework in a viral photo from August 2020. Via Luis Alejo on Twitter.

By Alexander F. Magee

The internet has long been championed as a marketplace of ideas that fosters unprecedented access to different viewpoints and vast amounts of information and media. At least in the eyes of some, Section 230 of the Communications Decency Act (“CDA”)[1] is largely responsible for the internet gaining that reputation, and the Section has therefore become something of a beacon for free speech.[2] In recent years, however, the Section has received considerable negative attention from both sides of the political spectrum, including explicit denunciation from both President Donald Trump and Democratic presidential nominee Joe Biden.[3] What started as dissatisfied grumbling about unfair censorship orchestrated by tech companies culminated in President Trump issuing an Executive Order in May calling for changes to the Section that would create greater liability for companies such as Facebook, Twitter, and Google.[4]

The CDA was first enacted in 1996 as an attempt to prevent children from accessing indecent material on the internet.[5] The Act made it a crime to knowingly send obscene material to minors or to publish such material in a way that facilitates its being seen by minors.[6] Section 230 was conceived in part as a way to further this goal by allowing websites to “self-regulate themselves,” removing indecent material at their discretion.[7] While certain parts of the Act were quickly declared unconstitutional in the Supreme Court’s decision in Reno v. American Civil Liberties Union,[8] Section 230 survived to become arguably the most important law in the growth of the internet.

The relevant language of the Section is contained in a “Good Samaritan” provision that states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” and that the provider shall not “be held liable on account of any action . . . taken in good faith to restrict access to or availability of material that the provider . . . considers to be obscene, lewd, lascivious . . . or otherwise objectionable, whether or not such material is constitutionally protected.”[9] This means that Twitter, or a similar site, cannot be held liable for the objectionable material a third party posts on its platform, subject to limited exceptions.[10] It also means that any action Twitter takes to remove content it deems offensive or objectionable is protected, a rule meant to encourage sites to remove offensive content without fear of liability.[11]

President Trump apparently takes issue with this “Good Samaritan” protection. In his May Executive Order, President Trump called social media’s moderation behavior “fundamentally un-American and anti-democratic,” and specifically accused Twitter of flagging and removing user content in a way that “clearly reflects political bias.”[12] President Trump also accused unspecified U.S. companies of “profiting from and promoting the aggression and disinformation spread by foreign governments like China.”[13] To address these concerns, the Executive Order calls for a narrowing of Section 230 protections so that social media companies could be held liable for what their users post or for moderating those posts in a way that is “unfair and deceptive.”[14] Four months later, the Department of Justice proposed legislation aimed at weakening Section 230 protections.[15] The legislation is drafted in the spirit of the Executive Order, with special emphasis on holding platforms accountable for hosting “egregious” and “criminal” content while retaining their immunity from defamation claims.[16]

Presidential nominee Biden, for his part, seems more focused on holding tech companies liable for misinformation spread on their websites. In a January interview, Biden stated that tech companies should be liable for “propagating falsehoods they know to be false.”[17] Biden took particular umbrage at Facebook’s hosting of political ads that accused Biden of “blackmailing” the Ukrainian government, and he further stated that Mark Zuckerberg should be subject to civil liability for allowing such behavior.[18]

For a law that has garnered so much recent controversy, and one the public took for granted until relatively recently, it is worth considering what the implications of removing Section 230 protections would be. Internet advocacy groups have vehemently criticized proposals to amend Section 230 and have generally painted a bleak picture of the ramifications of such changes.[19] These groups’ prognostications of a legal landscape without Section 230 protections generally predict that social media sites would face a legal quagmire. Theoretically, sites would be exposed to liability not only for taking down certain third-party content but also for failing to take down other third-party material, which would effectively create a minefield of liability.[20] Internet Association, a trade association that represents preeminent tech companies such as Amazon, Facebook, and Google, has repeatedly attacked any threat to amend Section 230 as detrimental to the internet economy, and it recently invoked the First Amendment as reason enough for social media companies to be able to “set and enforce rules for acceptable content on their services.”[21]

The latest serious threat to Section 230 has come from the FCC. On October 15, FCC Chairman Ajit Pai expressed his intention to move forward with a rulemaking request, stating that, while social media companies have a right to free speech, they do not have a “First Amendment right to special immunity denied to other outlets, such as newspapers and broadcasters.”[22] Several Democrats have challenged the FCC’s motives and its overall authority to amend the Section.[23] The FCC, in response, asserts a fairly simple argument: its authority rests in the language of the Communications Act of 1934, which, in Section 201(b), gives the FCC explicit rulemaking power to carry out the provisions of that Act.[24] In 1996, Congress added Section 230 to the Communications Act, thereby giving the FCC power to resolve any ambiguities in Section 230.[25] According to the FCC, two Supreme Court cases, AT&T v. Iowa Utilities Board[26] and City of Arlington v. FCC,[27] uphold its power to amend Section 230 pursuant to Section 201(b).[28]

The FCC’s push toward rulemaking came quickly after conservative-led criticism of Section 230 reached a fever pitch following the circulation of a New York Post story containing potentially damaging pictures and information about Joe Biden’s son, Hunter Biden.[29] Twitter and Facebook removed posts linking to the story on the basis that it contained hacked and private information.[30] The two sites have consistently denied suppressing conservative views[31] but, regardless, the Senate Judiciary Committee voted 12-0 to issue subpoenas to Jack Dorsey and Mark Zuckerberg, the sites’ respective CEOs, regarding their content moderation.[32] In anticipation of their hearings, Dorsey and Zuckerberg continued to passionately defend the Section, while Dorsey committed to making moderation changes at Twitter and Zuckerberg advocated for greater governmental regulation of tech companies in general.[33] Alphabet CEO Sundar Pichai, another tech leader subpoenaed, called Section 230 “foundational.”[34] The hearing took place on Wednesday and, according to early reports, was grueling.[35]

Lastly, on October 13, social media companies started to feel pressure from the Supreme Court. Justice Clarence Thomas voiced his concerns with the Section, stating that “extending §230 immunity beyond the natural reading of the text can have serious consequences,” and that it would “behoove” the Court to take up the issue in the future.[36] In the face of an impending election, uncertainties abound. However, one thing seems undeniable: Section 230 has never felt more heat than it does right now.


[1] 47 U.S.C. § 230.

[2] See Section 230 of the Communications Decency Act, Elec. Frontier Found., https://www.eff.org/issues/cda230 (declaring Section 230 to be “The Most Important Law Protecting Internet Speech”).

[3] Cristiano Lima, Trump, Biden Both Want to Repeal Tech Legal Protections — For Opposite Reasons, Politico (May 29, 2020), https://www.politico.com/news/2020/05/29/trump-biden-tech-legal-protections-289306.

[4] Exec. Order No. 13,925, 85 Fed. Reg. 34,079 (May 28, 2020).

[5] See Robert Cannon, The Legislative History of Senator Exon’s Communications Decency Act, 49 Fed. Comm. L.J. 51, 57 (1996).

[6] See id. at 58.

[7] 141 Cong. Rec. H8,470 (daily ed. Aug. 4, 1995) (statement of Rep. Joe Barton), https://www.congress.gov/104/crec/1995/08/04/CREC-1995-08-04-pt1-PgH8460.pdf.

[8] 521 U.S. 844 (1997).

[9] 47 U.S.C. § 230(c)(1)–(2)(A).

[10] For instance, the protection is not available as a defense to sex trafficking offenses. 47 U.S.C. § 230(e)(5).

[11] See Content Moderation: Section 230 of the Communications Decency Act, Internet Assoc., https://internetassociation.org/positions/content-moderation/section-230-communications-decency-act/  (last visited Oct. 24, 2020) (providing explanation of “Good Samaritan” provision).

[12] Exec. Order 13,925, 85 Fed. Reg. at 34,079.

[13] Id.

[14] Id. at 34,081–82.

[15] The Justice Department Unveils Proposed Section 230 Legislation, Dep’t of Just. (Sept. 23, 2020), https://www.justice.gov/opa/pr/justice-department-unveils-proposed-section-230-legislation.

[16] Department of Justice’s Review of Section 230 of the Communications Decency Act of 1996, Dep’t of Just., https://www.justice.gov/ag/department-justice-s-review-section-230-communications-decency-act-1996 (last visited Oct. 23, 2020).

[17] The Times Editorial Board, Opinion: Joe Biden Says Age Is Just a Number, N.Y. Times (Jan. 17, 2020), https://www.nytimes.com/interactive/2020/01/17/opinion/joe-biden-nytimes-interview.html.

[18] Id.

[19] See New IA Survey Reveals Section 230 Enables Best Parts of the Internet, Internet Assoc. (June 26, 2019), https://internetassociation.org/news/new-ia-survey-reveals-section-230-enable-best-parts-of-the-internet/ (putting forth a survey to show that Americans rely on Section 230 protections to a significant degree in their day-to-day use of the internet). 

[20] See Derek E. Bambauer, Trump’s Section 230 Reform Is Repudiation in Disguise, Brookings: TechStream (Oct. 8, 2020), https://www.brookings.edu/techstream/trumps-section-230-reform-is-repudiation-in-disguise/.

[21] See Statement on Today’s Executive Order Concerning Social Media and CDA 230, Internet Assoc. (May 28, 2020), https://internetassociation.org/news/statement-on-todays-executive-order-concerning-social-media-and-cda-230/; Statement in Response to FCC Chairman Pai’s Interest in Opening a Section 230 Rulemaking, Internet Assoc. (Oct. 15, 2020), https://internetassociation.org/news/statement-in-response-to-fcc-chairman-pais-interest-in-opening-a-section-230-rulemaking/.

[22] Ajit Pai (@AjitPaiFCC), Twitter (Oct. 15, 2020, 2:30 PM), https://twitter.com/AjitPaiFCC/status/1316808733805236226.

[23] See Ron Wyden (@RonWyden), Twitter (Oct. 15, 2020, 3:40 PM), https://twitter.com/RonWyden/status/1316826228754538496; Pallone & Doyle on FCC Initiating Section 230 Rulemaking, House Comm. on Energy & Com. (Oct. 19, 2020), https://energycommerce.house.gov/newsroom/press-releases/pallone-doyle-on-fcc-initiating-section-230-rulemaking.

[24] 47 U.S.C. § 201(b); Thomas M. Johnson Jr., The FCC’s Authority to Interpret Section 230 of the Communications Decency Act, FCC (Oct. 21, 2020), https://www.fcc.gov/news-events/blog/2020/10/21/fccs-authority-interpret-section-230-communications-act.

[25] Johnson Jr., supra note 24.

[26] 525 U.S. 366 (1999).

[27] 569 U.S. 290 (2013).

[28] Johnson Jr., supra note 24.

[29] See Katie Glueck et al., Allegations on Biden Prompt Pushback From Social Media Companies, N.Y. Times (Oct. 14, 2020), https://www.nytimes.com/2020/10/14/us/politics/hunter-biden-ukraine-facebook-twitter.html.

[30] See id.

[31] See id.

[32] Siobhan Hughes & Sarah E. Needleman, Senate Judiciary Committee Authorizes Subpoenas for Twitter and Facebook CEOs, Wall St. J. (Oct. 22, 2020), https://www.wsj.com/articles/senate-judiciary-committee-authorizes-subpoenas-for-twitter-and-facebook-ceos-11603374015.

[33] See Michelle Gao, Facebook, Google, Twitter CEOs to Tell Senators Changing Liability Law Will Destroy How We Communicate Online, CNBC (Oct. 28, 2020), https://www.cnbc.com/amp/2020/10/27/twitter-google-facebook-ceos-prepared-statements-defend-section-230.html.  

[34] Id.

[35] David McCabe & Cecilia Kang, Republicans Blast Social Media CEOs While Democrats Deride Hearing, N.Y. Times (Oct. 28, 2020), https://www.nytimes.com/2020/10/28/technology/senate-tech-hearing-section-230.html (stating that the hearing lasted for four hours and the CEOs were asked over 120 questions).

[36] Malwarebytes, Inc. v. Enigma Software Grp. USA, LLC, 592 U.S. ____ (2020) (Thomas, J., concurring in the denial of certiorari), https://www.supremecourt.gov/orders/courtorders/101320zor_8m58.pdf.