15 Wake Forest L. Rev. Online 46

William Gilchrist

Enacted as part of the Telecommunications Act of 1996, section 230 of the Communications Decency Act was originally introduced to shield children from inappropriate content online.[1] Despite being passed for a relatively limited purpose, section 230’s broad liability protections for interactive computer services have since been credited with shaping the modern internet.[2] Today, it stands as one of the few federal statutes recognized for having “fundamentally changed American life.”[3]

As social media and internet use have evolved, courts have generally adapted section 230’s language to new technologies. But with the rise of artificial intelligence (AI) as a mainstream tool, section 230’s scope has become increasingly uncertain. Due in part to its brevity and resulting ambiguity, questions have emerged over whether its liability protections extend to online service providers’ use of AI,[4] particularly in recommender systems.[5] The Supreme Court first confronted section 230’s applicability to AI use in Gonzalez v. Google.[6] Although many hoped the case would bring clarity, the Court issued a three-page per curiam opinion that declined to reach the question, vacating and remanding because the complaint appeared to state little, if any, plausible claim for relief and leaving stakeholders back at square one.[7]

In Gonzalez, the Supreme Court considered for the first time whether section 230 shields online platforms from liability for using AI to recommend third-party content.[8] While the case was a critical first step in addressing AI-related liability, the Court’s ruling left concerned parties with more questions than answers. Critics argue the opinion fell short of fulfilling the judiciary’s responsibility to “say what the law is,” emphasizing the need for additional guidance on section 230’s scope.[9] Ultimately, the Court’s decision in Gonzalez not only reflects the judiciary’s lack of understanding of AI but also kicks the can down the road, leaving future courts unable to fairly and consistently interpret section 230’s scope. Accordingly, clearer legal standards are essential to help U.S. companies assess their liability exposure when deploying new products and to ensure they remain competitive in the global AI race.[10]

Today, hundreds of active AI-related lawsuits are making their way through the American legal system, typically involving intellectual property, amplification of dangerous content, and discrimination issues.[11] And while AI offers undeniable economic benefits, its widespread and varied application has made it difficult for lawmakers to understand and regulate.[12] As AI becomes increasingly embedded in daily life, AI-related litigation is only expected to increase.[13]

This Comment begins with an explanation of what AI is and how it is currently being used in American society. It then provides background on Gonzalez, analyzes the Court’s opinion and its implications, and argues that the Court should have directly addressed section 230’s applicability. Because a more effective resolution of Gonzalez would have defined section 230’s scope, this Comment critiques the Court’s decision and argues that affirming a broad interpretation of section 230 would have been the better outcome. Finally, this Comment examines the difficulties of applying a broad interpretation of section 230 and concludes with a discussion of the challenges associated with current and future AI regulation.

I. Background

Prior to the 1950s, AI existed only in science fiction.[14] But after Alan Turing introduced the concept in his 1950 paper, Computing Machinery and Intelligence, AI began its gradual evolution into the tool it is today.[15] Beginning as “little more than a series of simple rules and patterns,” AI has advanced exponentially and is now “capable of performing tasks that were once thought impossible.”[16]

The private sector has embraced this expansion, with many companies taking advantage of the technology and incorporating it into various parts of their operations.[17] While doing so offers clear advantages, it has also raised new and increasingly frequent questions about potential liability exposure.[18] Until recently, U.S. courts reliably turned to section 230 for guidance when evaluating liability arising from online AI use.[19] And while section 230’s text provided sufficient guidance in AI’s early stages, the technology’s growing complexity and evolving uses have rendered its applicability increasingly unclear.

Since section 230’s adoption in 1996, Americans’ internet access and use have dramatically increased.[20] As internet access has improved, so has Americans’ exposure to and awareness of AI.[21] The AI of the 1990s was virtually nonexistent compared to the AI of today, and new capabilities allow for the technology to be used in ways never before thought possible.[22] These advancements have seamlessly integrated AI into nearly every aspect of daily life, often in ways that go unnoticed.[23] Nevertheless, with new technology comes new legal issues, and AI is no exception.[24]

To understand Gonzalez and its global implications, it is first necessary to define what constitutes AI. At the highest level, AI is “a machine’s ability to perform the cognitive functions we associate with human minds, such as perceiving, reasoning, learning, interacting with an environment, problem solving, and exercising creativity.”[25] And while AI use continues to evolve, the following discussion outlines the broad categories of AI and how they are currently being used.

A. A Spectrum of Systems

There are seven general categories of AI: three based on capabilities and four based on functionalities.[26] The three kinds of AI based on capabilities are Artificial Narrow AI, General AI, and Super AI.[27] Artificial Narrow AI—the only type of AI in use today—refers to technology that is “designed to perform a specific task or a set of closely related tasks.”[28] The other two types of AI based on capabilities—General AI and Super AI—remain theoretical, as neither has been successfully developed.[29] These forms are expected to match or surpass human intelligence.[30]

The four types of AI based on functionalities are Reactive Machine, Limited Memory, Theory of Mind, and Self-Aware.[31] Reactive Machine systems include AI “with no memory [that is] designed to perform a very specific task,” such as Netflix’s movie and TV show recommendation system.[32] Limited Memory AI differs from Reactive Machine AI because it can recall past events and monitor objects and situations over time.[33] Limited Memory AI includes generative AI such as ChatGPT, virtual assistants such as Siri and Alexa, and self-driving vehicles.[34] Theory of Mind and Self-Aware AI are forms that are still in development or entirely theoretical.[35] Theory of Mind AI would allow machines to understand the thoughts and emotions of other entities, while Self-Aware AI would allow machines to understand their own internal conditions and traits.[36]

B. Teaching the Machine: How AI Learns

For each category of AI, there are several tools that software developers can use to create and enhance their systems.[37] One of these tools is machine learning (ML), a term that is often incorrectly used interchangeably with AI.[38] Though AI and ML are closely related, ML is a subset of AI[39] that involves “developing algorithms and statistical models that computer systems use to perform tasks without explicit instructions, relying on patterns and inference instead.”[40] While AI is “the ability of a machine to act and think like a human,” ML is a type of AI that involves humans “relying on data and feeding it to computers so they can simulate what they think we’re doing.”[41] ML’s broad advantages allow it to be used in a variety of contexts: it can rapidly process large datasets, use algorithms that change and improve over time, and spot patterns or identify anomalies.[42]

Broadly put, ML works by “exploring data and identifying patterns.”[43] Most tasks involving data-defined patterns or rule sets can be automated with ML,[44] which can be used to explore data and identify patterns in two ways: supervised learning and unsupervised learning.[45] Supervised learning involves humans labeling inputs and outputs that train an algorithm to accurately classify data and predict outcomes.[46] In contrast, unsupervised learning models work independently to discover the structure of unlabeled data. For example, an unsupervised learning model could be used to identify products often purchased together online.[47] Supervised learning, which is more widely used than unsupervised due to its ease of use, is the type of ML behind the recommender systems at issue in Gonzalez.[48]
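To make the distinction concrete, consider the following minimal sketch contrasting the two approaches. It is illustrative only: the toy data, feature choices, and label meanings are hypothetical, and it uses the open-source scikit-learn library rather than any platform’s actual system.

```python
# Illustrative sketch: supervised vs. unsupervised learning.
# All data, features, and labels here are hypothetical.
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Each row describes a viewing session: [fraction watched, minutes watched].
sessions = [[0.9, 12.0], [0.1, 1.0], [0.8, 30.0], [0.2, 2.0]]

# Supervised learning: a human labels each session (1 = watched to the
# end, 0 = clicked away), and the model learns to predict the label.
labels = [1, 0, 1, 0]
classifier = LogisticRegression().fit(sessions, labels)
print(classifier.predict([[0.7, 20.0]]))  # predicted label for a new session

# Unsupervised learning: the same data with no labels; the model
# discovers structure on its own by grouping similar sessions.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(sessions)
print(clusters)  # cluster assignment for each session
```

In the supervised half, the human-provided labels are what teach the model to classify new data; in the unsupervised half, the model must find the grouping itself, much like the products-purchased-together example above.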

C. Recommender Systems and Content Curation

Recommender systems, like those in Gonzalez, are “algorithms providing personalized suggestions for items that are most relevant to each user.”[49] Today, many social media platforms use AI and ML recommender systems in a variety of ways.[50] For example, YouTube uses AI and ML to automatically remove objectionable content, label imagery for video background editing, and recommend videos.[51] Beyond YouTube, recommender systems are commonly used by platforms like Spotify, Amazon, Netflix, TikTok, and Instagram to tailor content and product suggestions to their users.[52]
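The mechanics of such a system can be illustrated with a minimal sketch. The Python example below is a simplified form of user-based collaborative filtering with hypothetical data; it is not any platform’s actual algorithm, which would draw on far more signals and far larger models.

```python
# Minimal collaborative-filtering sketch (illustrative only).
import numpy as np

# Hypothetical watch history: rows are users, columns are videos;
# 1 means the user watched the video.
history = np.array([
    [1, 1, 0, 0],  # user 0
    [1, 1, 1, 0],  # user 1
    [0, 0, 1, 1],  # user 2
])

def recommend(user: int, history: np.ndarray) -> int:
    """Return the index of the unwatched video with the highest score."""
    # Cosine similarity between this user and every other user.
    norms = np.linalg.norm(history, axis=1) * np.linalg.norm(history[user])
    similarity = history @ history[user] / np.where(norms == 0, 1, norms)
    similarity[user] = 0  # ignore the user's similarity to themselves
    # Score each video by how heavily similar users watched it,
    # then exclude videos the user has already seen.
    scores = similarity @ history
    scores[history[user] == 1] = -np.inf
    return int(np.argmax(scores))

print(recommend(0, history))  # video 2, favored by the most similar user
```

Note that the system never creates content; it only ranks existing third-party items for each user, which is the function at issue in Gonzalez.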

AI, ML, and recommender systems are also being adopted outside the social media context.[53] “From manufacturing to retail and banking to bakeries, even legacy companies are using machine learning to unlock new value or boost efficiency.”[54] As explained by Aleksander Madry, Director of the MIT Center for Deployable Machine Learning, “machine learning is changing, or will change, every industry.”[55]

Though statistics about the adoption of AI differ widely, the share of global companies that use AI is likely between 35 and 55 percent, with some estimates as high as 67 percent.[56] Beyond its use by companies, individuals are increasingly incorporating AI into their daily lives.[57] But despite the increasing popularity of AI in American society, the only real framework federal courts have to interpret liability for AI use is section 230, an almost thirty-year-old federal statute that was initially passed to promote commercial internet use and shield children from harmful content online.[58]

II. The Legal Backbone of the Internet

In 1996, Congress passed section 230 in response to the “rapidly developing array of Internet and other interactive services.”[59] At the time, section 230 was necessary because of the First Amendment’s inability to adequately protect online platforms providing forums for third-party content.[60] A key catalyst for the legislation was the decision in Stratton Oakmont, Inc. v. Prodigy Services Co., a libel case from 1995.[61]

In Stratton Oakmont, the Supreme Court of New York, Nassau County, found that Prodigy Services, the owner-operator of a computer network that sponsored subscriber communication through online bulletin boards, was liable for third-party statements posted on its site.[62] The court reasoned that Prodigy was liable as a “publisher” because it “monitor[ed] and edit[ed]” the individual bulletin board at issue, which gave Prodigy the benefit of editorial control.[63] In response, “to ensure that Internet platforms would not be penalized for attempting to engage in content moderation, Congress enacted Section 230.”[64]

A. Where Immunity Begins: Section 230(c)(1)

Known as “the twenty-six words that created the internet,”[65] the operative provision of the Communications Decency Act is section 230(c)(1),[66] which states that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[67]

Section 230(c)(1) generally “protects websites from liability for material posted on the website by someone else.”[68] But an interactive service provider is only protected from liability if it is not also an information content provider, or “someone who is ‘responsible, in whole or in part, for the creation or development of’ the offending content.”[69] As explained by Chief Judge Kozinski in Fair Housing Council v. Roommates.com:

A website operator can be both a service provider and a content provider: If it passively displays content that is created entirely by third parties, then it is only a service provider with respect to that content. But as to content that it creates itself, or is “responsible, in whole or in part” for creating or developing, the website is also a content provider. Thus, a website may be immune from liability for some of the content it displays to the public but be subject to liability for other content.[70]

Thus, the key question in assessing recommender system liability is whether the operator is “responsible, in whole or in part, for creating or developing” the content at issue, or whether the system simply dictates how existing content is displayed.

Although section 230 does not expressly address the use of AI or recommender systems, it was drafted in response to the internet’s rapid growth and evolution.[71] To account for the inevitable emergence of more advanced technologies, section 230 was drafted in a technology-neutral manner that would allow the statute to be applied to emerging and future technology.[72] Unsurprisingly, the exponential increase in the commercial use and complexity of AI has also led to a high volume of litigation, as well as subsequent contradictory state and federal court rulings.[73] But despite the expectation that section 230 would be applied to future technology, the exceedingly complex nature of today’s AI has surpassed the clear bounds of section 230.

B. Uncertainty and Calls for Change

Increasing litigation and uncertainty have led to growing calls for regulation—calls that have not gone unnoticed by lawmakers and courts.[74] One of these lawmakers, Senator Dick Durbin, Chairman of the Senate Judiciary Committee, compared the rise of AI to that of the social media industry.[75] “When it came to online platforms, the inclination of the government was to get out of the way. I’m not sure I’m happy with the outcome as I look at online platforms and the harms they have created . . . I don’t want to make that mistake again,” he said.[76] Other senators have agreed, with Senator Lindsey Graham even calling for an entirely new agency to regulate the technology.[77]

Even with increasing calls for regulation, the majority of current AI-related laws and regulations have been implemented by individual states with little to no guidance from Congress or the Supreme Court.[78] And even with bipartisan support and a potential model statute from the European Union,[79] Congress has yet to pass any meaningful regulation.[80] This lack of guidance at the federal level has led companies and courts to rely on conflicting interpretations of section 230 in AI-related claims. This growing uncertainty has also made Supreme Court guidance necessary to achieve clarity and consistency in future litigation.

III. Gonzalez v. Google: A Ripple, Not a Wave

In response to these concerns and calls for action, the Supreme Court granted certiorari to hear Gonzalez v. Google. As Gonzalez moved through the courts, it became a focal point for many AI executives and other stakeholders seeking guidance on how section 230 applies to AI.[81]

The case involved claims brought against Google under the Anti-Terrorism Act (ATA)[82] by the father of Nohemi Gonzalez, a 23-year-old who was murdered while studying abroad in Paris, France.[83] Gonzalez was one of 130 people killed during a series of attacks—known as the “Paris Attacks”—carried out by ISIS on November 13, 2015.[84] The Gonzalez plaintiffs claimed that Google was liable for the victims’ deaths because it “aided and abetted international terrorism and provided material support to international terrorists by allowing ISIS to use YouTube.”[85] Specifically, they argued that because Google’s YouTube algorithms “match and suggest content to users based upon their viewing history,” YouTube actively recommended ISIS videos to users and, in effect, “facilitat[ed] social networking among jihadists.”[86] The plaintiffs further alleged that YouTube “has become an essential and integral part of ISIS’s program of terrorism,” serving as “a unique and powerful tool of communication that enables ISIS to achieve its goals.”[87]

The district court concluded that the plaintiffs’ claims were barred by section 230 and dismissed the case pursuant to Rule 12(b)(6).[88] On appeal, the Ninth Circuit consolidated Gonzalez with Taamneh v. Twitter, Inc. and Clayborn v. Twitter, Inc., two cases with similar facts and claims.[89] Taamneh was brought by the survivors of a victim killed in the Reina nightclub attack in Istanbul, Turkey, on January 1, 2017, while Clayborn was brought by the survivors of a victim killed in a 2015 attack on an office Christmas party in San Bernardino, California.[90] As in Gonzalez, the attacks in Taamneh and Clayborn were later connected to ISIS.[91]

In each case, the plaintiffs sought damages from Google, Twitter, and Facebook under the ATA, which “allows United States nationals to recover damages for injuries suffered ‘by reason of an act of international terrorism.’”[92] The scope of the ATA was broadened in 2016 by the Justice Against Sponsors of Terrorism Act (JASTA), which “amended the ATA to include secondary civil liability for ‘any person who aids and abets, by knowingly providing substantial assistance, or who conspires with the person who committed’ an act of international terrorism.”[93] The plaintiffs theorized that the defendants were liable under the ATA because their “social media platforms allowed ISIS to post videos and other content to communicate the terrorist group’s message, to radicalize new recruits, and to generally further its mission,” effectively aiding and abetting international terrorism.[94]

The district court granted Google’s motion to dismiss in Gonzalez after concluding that all of the plaintiffs’ claims were barred by section 230 except for the revenue-sharing claims,[95] which were dismissed for failure to allege proximate cause.[96] The courts in Taamneh and Clayborn also granted the defendants’ motions to dismiss for failure to allege secondary liability under the ATA.[97] The Ninth Circuit affirmed the dismissals in Gonzalez and Clayborn, and reversed and remanded for further proceedings in Taamneh.[98] The Gonzalez plaintiffs filed a petition for a writ of certiorari on April 4, 2022, and Twitter, the losing party on appeal in Taamneh, filed its own petition on May 26. The Supreme Court granted both petitions on October 3, 2022.[99]

Prior to Gonzalez, the Supreme Court had never addressed how section 230 applies to liability stemming from the use of AI by a social media company, or by any company at all.[100] And while any case before the Supreme Court has the potential to have a significant impact, the rapid growth and increasing pervasiveness of AI in American society, combined with the lack of meaningful regulation, have created an urgent need for guidance in the industry. Because section 230 is one of the “most important laws in tech policy,” organizations across the political spectrum would be impacted by the Supreme Court’s interpretation of its scope.[101]

The significance of the Court’s decision in Gonzalez is underscored by the unusually high number of amicus briefs filed. Since 2010, Supreme Court cases have averaged about a dozen amicus briefs each.[102] In Gonzalez, seventy-eight organizations filed amicus curiae briefs in hopes of influencing the Court’s opinion.[103] While each organization had its own motives, one thing is clear: Many organizations had a stake in the outcome of Gonzalez, and the Court’s opinion left them with more questions than answers.[104]

A. Confusion at Oral Argument: A Decision in Twitter v. Taamneh

Many of the issues raised by amici were discussed during oral arguments.[105] The oral arguments—lasting nearly three hours in each case—were held in February 2023.[106] The Justices posed questions about everything from the use of AI to generate content[107] to hypotheticals about a bank’s potential liability for allowing Osama Bin Laden to open an account.[108] On multiple occasions, several of the Justices expressed confusion—not only about the arguments being made, but also about the questions before the Court.[109] But after countless hypotheticals and endless back-and-forth with counsel, the Justices were apparently left with more questions than answers.

The Court’s opinion highlighted its confusion over the issues, the available options, and the potential consequences of various interpretations of section 230. After hundreds of pages of amicus briefs and oral arguments that went over the time limit by an hour and thirty-four minutes,[110] the Court’s three-page per curiam opinion was released on May 18, 2023.[111] Despite high hopes from stakeholders and members of the AI community, the Court declined to address the application of section 230, concluding that the plaintiffs’ complaint appeared to state “little, if any, plausible claim for relief.”[112] This conclusion led the Court to vacate the Ninth Circuit’s judgment and remand the case for consideration in light of the decision in Taamneh.[113]

The Court overturned the Ninth Circuit’s ruling in the more robust Taamneh opinion. Although Taamneh provided significantly more analysis than Gonzalez, the analysis focused on what it means to “aid and abet” and “what precisely must the defendant have ‘aided and abetted’” when determining liability under JASTA.[114] The Court looked to Halberstam v. Welch[115] to provide the legal framework for “civil aiding and abetting and conspiracy liability.”[116] After acknowledging that “the point of aiding and abetting is to impose liability on those who consciously and culpably participated in the tort at issue,” the Court noted that the nexus between the defendants and the terrorist attack was far removed.[117] Seemingly skeptical, the Court acknowledged the plaintiffs’ allegations that Twitter “failed to do ‘enough’ to remove ISIS-affiliated users and ISIS-related content—out of hundreds of millions of users worldwide and an immense ocean of content—from their platforms.”[118] However, because the plaintiffs ultimately failed to allege intentional aid or systematic assistance, the Court held the allegations were insufficient under the ATA.

B. Gonzalez, Taamneh, and Their Effects

While the Court offered a relatively substantive aiding and abetting analysis in Taamneh, the Court’s decisions in both Gonzalez and Taamneh ultimately fell short. Defended by some as an act of judicial minimalism, the Court’s decisions “simultaneously avoid[ed] the risk of erroneous judgment on a technical question with far-reaching consequences and [left] the politically contentious issue of § 230’s scope to the democratically accountable Congress.”[119] And although doing so may have been the safer short-term decision given the Court’s questionable understanding of the ins and outs of recommender systems and AI,[120] deferring the decision to Congress is hardly likely to yield meaningful regulations anytime soon.

Nonetheless, the Court’s decision not to rule on section 230 did not stem from a lack of awareness of the need for guidance on the issue. While Gonzalez was the first such petition the Court granted, it was not the first petition asking the Court to define or clarify the scope of section 230.[121] The Court denied cert in Doe v. Facebook, a case involving allegations that a sexual predator used Facebook to groom the plaintiff for sex trafficking.[122] In his opinion concurring in the denial of certiorari, Justice Thomas noted that “‘the United States Supreme Court—or better yet, Congress—may soon resolve the burgeoning debate about whether the federal courts have thus far correctly interpreted section 230.’ Assuming Congress does not step in to clarify § 230’s scope, we should do so in an appropriate case.”[123]

Gonzalez was the appropriate case. Yet, the Court’s questions and admitted confusion at oral argument[124] indicate that it ultimately took the advice outlined by Justice Thomas in Doe—that “before we close the door on such serious charges, ‘we should be certain that is what the law demands.’”[125] But even though the Justices may remain uncertain about what the law demands, the Court’s internal justifications for avoiding the substance of section 230 will have lasting consequences for social media conglomerates and other companies that have come to rely on recommender systems and other forms of AI.

IV. Critical Error: The Need to Affirm Section 230’s Broad Scope

As lower courts have consistently held in the past, immunity should only be withheld when an interactive service provider makes “substantial or material edits and additions” to content.[126] Here, the Court ultimately reached the correct outcome in Gonzalez by dismissing the plaintiffs’ claims, but its fatal flaw was failing to validate section 230’s broad immunity for future litigants.

An affirmance of the broad scope of section 230 was necessary for two reasons. First, providing current and future online service providers with a dependable, broad grant of immunity is in line with the plain language of the statute and Congress’s intent for section 230—“to protect Internet platforms’ ability to publish and present user-generated content in real time, and to encourage them to screen and remove illegal or offensive content.”[127] Second, policy considerations support a broad application of section 230 because, as the evolution of the internet has shown, strong liability protections encourage beneficial technological and economic development in the United States, particularly for small businesses.[128]

A. Gonzalez Ignores Congressional Intent and the Plain Language of Section 230

Two primary purposes of section 230 were “to protect Internet speech from content regulation by the government,” and to reverse a New York Supreme Court case that held “an online service provider’s decision to moderate the content of its message boards rendered it a ‘publisher’ of users’ defamatory comments on the boards.”[129] Both purposes were aimed at promoting the continued development of the internet, and while AI and the internet were once separate and distinct, they have become increasingly intertwined.[130]

Like the internet, AI has evolved, and continues to evolve, at extreme speed.[131] The drafters were aware of the rapidly changing nature of the internet, and section 230’s immunity for “publisher[s]” and “speaker[s]” was drafted without highly specific or limiting language to account for inevitable and unforeseeable technological changes.[132] The first web page was launched in 1991, just five years before section 230 was passed.[133] In the early 1990s, people were only just beginning to hear about the new information superhighway that would one day change their lives.[134] In 2024, contemporary AI—including recommender systems and ML algorithms—is viewed much like the internet was when section 230 was first drafted in the early 1990s.[135]

As highlighted by Senator Ron Wyden and former Representative Christopher Cox, “many of the major Internet platforms engaged in content curation [were] a precursor to the targeted recommendations that today are employed by YouTube and other contemporary platforms.”[136] Senator Wyden and former Representative Cox agree that the recommender systems at issue in Gonzalez—which are representative of typical AI systems used by online service providers—are the “direct descendants” of early content curation efforts.[137] And just as Wyden, Cox, and other regulators of the 1990s were seeking to promote the development of the internet, regulators are now seeking to promote AI.[138] Because the internet and AI are intrinsically linked, liability arising from companies’ use of AI should fall within the scope of section 230.

Beyond the original intent and plain language of section 230, the statute has also been applied as a broad shield to protect online service providers from liability since its inception.[139] As noted by Justice Thomas in Malwarebytes, Inc. v. Enigma Software Group, USA, LLC, “the first appellate court to consider the statute held that . . . § 230 confers immunity even when a company distributes content that it knows is illegal.”[140] This broad interpretation set the stage for future section 230 jurisprudence, and subsequent decisions “adopted this holding as a categorical rule across all contexts.”[141]

Courts have also upheld the principle that section 230 should be interpreted broadly, even in the context of AI.[142] Although Gonzalez was the first time the issue reached the Supreme Court, it is not the first time a court considered whether AI use could fall within the scope of the statute.[143]

In Force v. Facebook, Inc., the Second Circuit interpreted section 230 to protect AI use.[144] There, the court noted that because the algorithms at issue were “content ‘neutral,’ . . . merely arranging and displaying others’ content . . . [was] not enough to hold Facebook responsible.”[145] However, the court went further, providing additional clarification on section 230’s scope:

We do not mean that Section 230 requires algorithms to treat all types of content the same. To the contrary, Section 230 would plainly allow Facebook’s algorithms to, for example, de-promote or block content it deemed objectionable. We emphasize only—assuming that such conduct could constitute “development” of third-party content—that plaintiffs do not plausibly allege that Facebook augments terrorist-supporting content primarily on the basis of its subject matter.[146]

By honoring the plain language and overall intent behind the statute—to allow online service providers to monitor what is on their sites while recognizing that no provider could prevent all illegal or undesirable content—the court in Force reached the conclusion the Supreme Court should have affirmed in Gonzalez.

The plain language of section 230, express legislative intent behind its drafting, and the subsequent interpretation of the statute all support the prevailing view that section 230 should be interpreted broadly. When considering these aspects of section 230, as well as others discussed below, the decision is clear: The Supreme Court should have used Gonzalez as an opportunity to affirm the broad scope of section 230 and extend liability protection to online service providers that incorporate AI recommender systems into their platforms.

B. Congress or the Courts? Promoting Beneficial AI Development in the United States

Interpreting section 230’s liability protections to include AI was necessary to foster innovation and strengthen AI development in the United States. As noted by section 230’s drafters, “[b]y providing legal certainty for platforms, the law has enabled the development of innumerable internet business models based on user-created content.”[147] Like the internet, AI has the potential to have a dramatic impact on our lives,[148] and while AI has become increasingly integrated into large-scale business models, small and midsize businesses have begun to fall behind.[149] This is partly because larger businesses typically have the resources and capital to implement AI and are better able to offset the costs and litigation risks associated with testing and developing cutting-edge technology.

Despite litigation risks and other obstacles, AI use more than doubled between 2017 and 2022.[150] However, the proportion of global businesses that use AI has plateaued between 50 and 60 percent,[151] and a May 2023 report found that only 25 percent of small businesses have begun testing or using AI in their operations.[152] The benefits of AI, which include cost savings through improved processes, accelerated time from production to market for new products, and access to talent that would otherwise be too expensive, have the potential to generate an even greater impact for small businesses than for larger companies.[153]

Despite its many benefits, AI is still largely underutilized by small businesses.[154] Fortunately, small percentage increases in AI adoption have the potential to have a major impact, as small businesses of 500 employees or fewer make up 99.9 percent of all U.S. businesses.[155] Promoting small business growth is a high priority among government regulators,[156] and lawmakers should be doing everything in their power to help wherever possible. Accordingly, because the legal certainty provided by section 230 “enabled the development of innumerable internet business models,”[157] interpreting section 230 to include AI would provide crucial opportunities and support for small businesses, just as it did for early internet sites.

Finally, the Gonzalez courts’ sole focus on whether recommender systems are within the scope of section 230 does not limit the applicability of the decision to other types of AI. Increasingly popular generative AI products, such as ChatGPT and other chatbots, “can and do rely on and relay information that is provided by another.”[158] Thus, it is likely that a broad interpretation in Gonzalez would extend to other forms of AI, like generative AI.

In sum, a broad application of section 230 is supported by the plain text of the statute, the legislative intent of the drafters, subsequent interpretation by lower courts, and prevailing policy considerations. Gonzalez presented a prime opportunity to settle these questions by affirming section 230’s broad scope, and the Court’s decision not to reach the issue was therefore misguided.

V. Guidance from Abroad and the Potential for Regulation by Default

By default, the Gonzalez decision left lower courts and AI-reliant companies in the same position as before the Court granted certiorari. But questions about the scope of section 230 and companies’ liability for their use of AI are not going away; as AI advances and becomes more prevalent in society, these questions will arise with greater frequency. Although the Supreme Court may argue that the decision is better left for Congress, continued inaction risks allowing foreign regulations to dictate the outcome instead.

For example, a decision may come in the form of AI or speech regulations from the European Union (EU). In 2018, the EU’s General Data Protection Regulation (GDPR), the self-proclaimed “strongest privacy and security law in the world,” took effect.[159] Even though the GDPR is only targeted towards protecting EU residents, many companies “made global changes to their services to comply with European regulations.”[160] The European Union then passed the Digital Services Act (DSA), which came into effect on November 16, 2022.[161] The DSA requires big tech companies, like Google and Facebook, “to police their platforms more strictly to better protect European users from hate speech, disinformation, and other harmful online content.”[162] Both the GDPR and DSA threaten large fines for noncompliant companies,[163] and while the laws only require compliance inside the EU, it is often more practical to make global changes than region-specific adjustments.

On December 9, 2023, the European Parliament reached a provisional agreement with the European Council for “a bill to ensure AI in Europe is safe, respects fundamental rights and democracy, [and allows] businesses [to] thrive and expand.”[164] Known as the AI Act, the bill would be the world’s first comprehensive AI law, creating “obligations for providers and users depending on the level of risk” from artificial intelligence.[165] Although still in its early stages, the AI Act would, among other things, ban categorization systems that use sensitive characteristics, such as political, religious, or philosophical beliefs, as well as sexual orientation and race.[166] If passed, the effects of the Act would likely be similar to the GDPR and DSA: The risk of non-compliance and practical difficulties of making region-specific changes would lead companies to tailor their algorithms in areas outside the EU to ensure compliance. So, by failing to outline the protections for AI stemming from section 230, the Supreme Court missed an opportunity to set the rule for what was protected in the United States, opening the door for EU regulations to set the standard.

VI. No Perfect Solution

Although a broad interpretation of section 230 is the best solution, it is not a perfect solution. The online world is a dangerous place, and bad actors will inevitably take advantage of or work around online algorithms to commit crimes and other bad acts. Beyond concerns that algorithms help promote terrorism, interest groups have warned that several other problems—including human trafficking, child exploitation, and the spread of misinformation—will become worse if section 230 is interpreted broadly.[167] While mitigating these harms is difficult, a highly specific and restrictive interpretation would cause more harm than good, and the novel, dynamic nature of AI makes comprehensive regulation currently impractical. As such, a broad interpretation is the only reasonable approach at this stage.

As highlighted by the National Center on Sexual Exploitation (NCOSE), the internet is the primary location for the sexual exploitation of children, and section 230 “was never intended to provide legal protection to websites that . . . facilitate traffickers in advertising the sale of unlawful sex acts.”[168] Both points are uncontroverted and address abhorrent societal problems that require continued commitment and action by regulators to eradicate. But preventing exploitation and human trafficking online is a complex challenge. And while narrowing the scope of section 230 might provide limited assistance in addressing these specific issues, altering the interpretation of a broad statute based on the concerns of a small subset of stakeholders would do more harm than good. As noted in an amicus brief filed by Reddit Inc., “[j]udicial interpretation should not move at Internet speeds, and there is no telling what a sweeping order removing targeted recommendations from the protection of Section 230 would do to the Internet as we know it.”[169]

Section 230 has been interpreted broadly since its enactment.[170] Although the significant immunity from liability given to online service providers has resulted in negative consequences, the broader implications of a drastic change would be difficult for the Court to predict. Thus, a narrow interpretation of section 230’s scope would have been misguided.

In the realm of free speech, less regulation has traditionally been associated with more freedom.[171] But some argue that AI has the potential to disrupt that balance. In its July 2023 report, PEN America argued that “generative A.I. threatens free expression by ‘supercharging’ the dissemination of disinformation and online abuse,” resulting in “the potential for people to lose trust in language itself, and thus in one another.”[172] While the dissemination of misinformation online is of increasing concern, online service providers are already taking steps to mitigate misinformation risks on their platforms.[173] And while there is always more that can be done, the “massive volume of content and the nuanced nature of misinformation”[174] make creating effective regulations difficult, if not impossible. Interpreting section 230 narrowly in hopes of addressing these concerns would still fail to effectively confront these issues, while chilling freedom of the press by discouraging journalists from reporting on issues that might lead to legal trouble.[175]

Despite the pitfalls of interpreting section 230 broadly, the novel and increasingly complex nature of AI has resulted in a lack of currently feasible alternatives. AI is particularly difficult to regulate because it is used to perform a wide variety of tasks, exists in many different forms with distinct characteristics, often involves the use of multiple algorithms working together, and consistently evolves through updates and new data.[176]

These characteristics are part of what makes AI so useful. It is dynamic, easily adaptable, and able to advance on its own. Unfortunately, Congress does not share these characteristics, and targeted regulations in the near future are unlikely. As a result, it is important to make do with what we have—section 230. Drafted nearly thirty years ago, section 230 has served as an effective regulator of internet speech since its creation, and even though applying its language to AI is by no means a perfect solution, it is currently the best available option.

Conclusion

AI is new, complex, and changing daily—as a result, lawmakers have struggled to develop and pass regulations that can keep up with AI’s rapid development. Referring to the European AI Act,[177] Tom Siebel, founder and CEO of C3.ai, an emerging AI company, said that “[i]f you can understand one sentence of it, you will understand one more sentence than I, and I think you will understand one more sentence than the people who wrote it.”[178] Regulating AI presents a significant challenge, but so does any emerging technology. Industry leaders have not yet found the perfect solution, and a perfect web of AI laws will not emerge overnight.

Still, it is important to maximize the effectiveness of the regulations already in existence by tailoring our interpretation of existing law to include AI. In Gonzalez, the Supreme Court had the opportunity to do just that, by affirming the way many lower courts have interpreted section 230 in the past. By failing to affirm lower courts’ previous interpretations, the Supreme Court effectively affirmed the status quo—that section 230 might be applied to protect online service providers from liability—while also spreading uncertainty about companies’ future exposure to liability for the use of AI.

  1.  47 U.S.C. § 230; Gonzalez v. Google LLC, 2 F.4th 871, 942 (9th Cir. 2021).
  2. Interactive computer services are “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server.” See 47 U.S.C. § 230(f)(2); see also Jeff Kosseff, The Twenty-Six Words That Created the Internet 1 (2019).
  3. Kosseff, supra note 2, at 3.
  4. Brief of Senator Ron Wyden and Former Representative Christopher Cox as Amici Curiae in Support of Respondent, Gonzalez v. Google LLC, 143 S. Ct. 1191 (2023) (No. 21-1333); see, e.g., Gonzalez, 2 F.4th 871; Dyroff v. Ultimate Software Grp., 934 F.3d 1093 (9th Cir. 2019); Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019).
  5. Recommender systems generate “personalized suggestions for items that are most relevant to each user.” See Francesco Casalegno, Recommender Systems – A Complete Guide to Machine Learning Models, Medium (Nov. 25, 2022), https://towardsdatascience.com/recommender-systems-a-complete-guide-to-machine-learning-models-96d3f94ea748.
  6. 143 S. Ct. 1191 (2023) (per curiam); see also Ron Wyden & Christopher Cox, The Authors of Section 230: ‘The Supreme Court Has Provided Much-Needed Certainty About the Landmark Internet Law–but AI Is Uncharted Territory,’ Fortune (Sept. 7, 2023), https://fortune.com/2023/09/07/authors-of-section-230-supreme-court-certainty-landmark-internet-law-ai-uncharted-territory-politics-tech-wyden-cox/; Gonzalez, 2 F.4th at 942.
  7. Gonzalez, 143 S. Ct. 1191.
  8. Id. at 1191–92.
  9. Leading Case, Twitter, Inc. v. Taamneh, 137 Harv. L. Rev. 400, 400 (2023) (quoting Marbury v. Madison, 5 U.S. (1 Cranch) 137, 177 (1803)).
  10. See Riccardo Righi et al., Eur. Comm’n, JRC 125613, EU in the Global Artificial Intelligence Landscape (2021).
  11. John Kell, AI Is About to Face Many More Legal Risks. Here’s How Businesses Can Prepare, Fortune (Nov. 8, 2023), https://fortune.com/2023/11/08/ai-playbook-legality/.
  12. Shari Davidson, The Growth of AI Law: Exploring Legal Challenges in Artificial Intelligence, Nat’l L. Rev. (Jan. 28, 2025), https://natlawreview.com/article/growth-ai-law-exploring-legal-challenges-artificial-intelligence.
  13. Kell, supra note 11.
  14. Michael Haenlein & Andreas Kaplan, A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence, Cal. Mgmt. Rev., Aug. 2019, at 5, 6–7.
  15. Id.
  16. Tanya Roy, The History and Evolution of Artificial Intelligence, AI’s Present and Future, All Tech Mag. (July 19, 2023), https://alltechmagazine.com/the-evolution-of-ai/.
  17. Kell, supra note 11.
  18. Id.
  19. See Doe v. Facebook, Inc., 142 S. Ct. 1087, 1088 (2022) (Thomas, J., concurring in denial of certiorari).
  20. Susannah Fox & Lee Rainie, Pew Rsch. Ctr., The Web at 25 in the U.S. 9 (2014) (finding that only 14% of U.S. adults had internet access in 1995).
  21. See Brian Kennedy et al., Pew Rsch. Ctr., Public Awareness of Artificial Intelligence in Everyday Activities (2023).
  22. See Max Roser, The Brief History of Artificial Intelligence: The World Has Changed Fast – What Might Be Next?, Our World in Data (Dec. 6, 2022), https://ourworldindata.org/brief-history-of-ai.
  23. AI is now used in everything from determining airline ticket prices to deciding who is released from jail. See id.
  24. See Lyria B. Moses, Recurring Dilemmas: The Law’s Race to Keep up with Technological Change 4 (Univ. of New S. Wales Working Paper No. 2007-21, 2007), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=979861.
  25. What is AI?, McKinsey & Co. (Apr. 3, 2024), https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-ai; see Understanding the Different Types of Artificial Intelligence, IBM Data & AI Team (Oct. 12, 2023), https://www.ibm.com/think/topics/artificial-intelligence-types.
  26. IBM Data & AI Team, supra note 25; see also Naveen Joshi, 7 Types of Artificial Intelligence, Forbes (June 19, 2019), https://www.forbes.com/sites/cognitiveworld/2019/06/19/7-types-of-artificial-intelligence/.
  27. IBM Data & AI Team, supra note 25. General AI and Super AI are both strictly theoretical concepts; even OpenAI’s ChatGPT is considered a form of Narrow AI because it’s limited to the single task of text-based chat. Id.
  28. Narrow AI, DeepAI, https://deepai.org/machine-learning-glossary-and-terms/narrow-ai (last visited May 24, 2025).
  29. Ben Nancholas, What Are the Different Types of Artificial Intelligence?, Univ. Wolverhampton (June 7, 2023), https://online.wlv.ac.uk/what-are-the-different-types-of-artificial-intelligence/. General AI, also known as Artificial General Intelligence (AGI), uses “previous learnings and skills to accomplish new tasks in a different context without the need for [humans] to train the underlying models.” IBM Data & AI Team, supra note 25. Super AI, if ever successfully developed, “would think, reason, learn, make judgments and possess cognitive abilities that surpass those of human beings.” Id.
  30. IBM Data & AI Team, supra note 25.
  31. Id. The four types of AI based on functionalities all fit into the broader category of Artificial Narrow AI. Id.; see also Joshi, supra note 26.
  32. IBM Data & AI Team, supra note 25; see also How Netflix’s Recommendations System Works, Netflix: Help Ctr., https://help.netflix.com/en/node/100639 (last visited May 24, 2025).
  33. IBM Data & AI Team, supra note 25.
  34. Id.
  35. Id.
  36. Id. Theory of Mind AI is currently being developed, and Self-Aware AI is strictly theoretical. Id.
  37. See Artificial Intelligence (AI) vs. Machine Learning, Columbia Eng’g, https://ai.engineering.columbia.edu/ai-vs-machine-learning/ (last visited May 24, 2025).
  38. See Artificial Intelligence (AI) vs. Machine Learning (ML), Microsoft Azure, https://azure.microsoft.com/en-us/resources/cloud-computing-dictionary/artificial-intelligence-vs-machine-learning (last visited May 24, 2025).
  39. Id.
  40. What’s the Difference Between Business Intelligence and Machine Learning?, AWS, https://aws.amazon.com/compare/the-difference-between-business-intelligence-and-machine-learning/ (last visited May 24, 2025).
  41. Kristin Burnham, Artificial Intelligence vs. Machine Learning: What’s the Difference?, Ne. Univ. Graduate Programs (May 6, 2020), https://graduate.northeastern.edu/resources/artificial-intelligence-vs-machine-learning-whats-the-difference/.
  42. Id.
  43. The Evolution and Techniques of Machine Learning, DataRobot (Jan. 7, 2025), https://www.datarobot.com/blog/how-machine-learning-works/.
  44. Id.
  45. Julianna Delua, Supervised Versus Unsupervised Learning: What’s the Difference?, IBM (Mar. 12, 2021), https://www.ibm.com/blog/supervised-vs-unsupervised-learning/.
  46. Id.
  47. Id.
  48. See Gaudenz Boesch, Supervised vs Unsupervised Learning for Computer Vision, viso.ai (Dec. 21, 2023), https://viso.ai/deep-learning/supervised-vs-unsupervised-learning/; Alyshai Nadeem, Machine Learning 101: Supervised, Unsupervised, Reinforcement Learning Explained, datasciencedojo (Sept. 15, 2022), https://datasciencedojo.com/blog/machine-learning-101/.
  49. Gonzalez v. Google, LLC, 2 F.4th 871, 881 (9th Cir. 2021). Recommender systems fall into the category of Artificial Narrow and are a type of reactive machine AI. See IBM Data & AI Team, supra note 25; Casalegno, supra note 5.
  50. See Rem Darbinyan, How AI Transforms Social Media, Forbes (Mar. 16, 2023), https://www.forbes.com/sites/forbestechcouncil/2023/03/16/how-ai-transforms-social-media/.
  51. Bernard Marr, The Amazing Ways YouTube Uses Artificial Intelligence and Machine Learning, Forbes (Aug. 23, 2019), https://www.forbes.com/sites/bernardmarr/2019/08/23/the-amazing-ways-youtube-uses-artificial-intelligence-and-machine-learning/.
  52. Id.; see Nadeem, supra note 48; see also Tamara Biljman, AI in Social Media: Benefits, Tools, and Challenges, Sendible (June 4, 2024), https://www.sendible.com/insights/ai-in-social-media.
  53. Sara Brown, Machine Learning, Explained, MIT Mgmt. Sloan Sch.: Ideas Made to Matter (Apr. 21, 2021), https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained; see Katherine Haan & Robb Watts, How Businesses Are Using Artificial Intelligence, Forbes Advisor (Apr. 24, 2023), https://www.forbes.com/advisor/business/software/ai-in-business/.
  54. Brown, supra note 53.
  55. Id.
  56. Id.; Anthony Cardillo, How Many Companies Use AI? (New Data), Exploding Topics, https://explodingtopics.com/blog/companies-using-ai (May 1, 2025); IBM, IBM Global AI Adoption Index 2022 (May 2022), https://www.ibm.com/downloads/cas/GVAGA3JP; The State of AI in 2023: Generative AI’s Breakout Year, McKinsey & Co. (Aug. 1, 2023), https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year#steady.
  57. Ryan Tracy, ChatGPT’s Sam Altman Warns Congress That AI ‘Can Go Quite Wrong,’ Wall St. J. (May 16, 2023), https://www.wsj.com/tech/ai/chatgpts-sam-altman-faces-senate-panel-examining-artificial-intelligence-4bb6942a.
  58. See Wyden & Cox, supra note 6, at 2; Stratton Oakmont, Inc. v. Prodigy Serv. Co., No. 31063/94, 1995 WL 323710 (N.Y. Sup. Ct. May 24, 1995).
  59. 47 U.S.C. § 230(a)(1).
  60. See Kosseff, supra note 2, at 9–10.
  61. Stratton Oakmont, 1995 WL 323710; Wyden & Cox, supra note 6, at 2; see also Kosseff, supra note 2, at 45–56.
  62. Stratton Oakmont, 1995 WL 323710, at *1.
  63. Id. at *4–5.
  64. Wyden & Cox, supra note 6, at 2.
  65. See Kosseff, supra note 2, at 2.
  66. Id.; Gonzalez v. Google LLC, 2 F.4th 871, 886 (9th Cir. 2021).
  67. 47 U.S.C. § 230(c)(1).
  68. Gonzalez, 2 F.4th at 886–87 (quoting Doe v. Internet Brands, Inc., 824 F.3d 846, 850 (9th Cir. 2016)).
  69. Fair Hous. Council v. Roommates.com, LLC, 521 F.3d 1157, 1162 (9th Cir. 2008) (quoting 47 U.S.C. § 230(f)(3)).
  70. Id. at 1162–63.
  71. Section 230, EFF, https://www.eff.org/issues/cda230 (last visited May 24, 2025).
  72. Id.
  73. Rebecca Kern, SCOTUS to Hear Challenge to Section 230 Protections, Politico (Oct. 3, 2022), https://www.politico.com/news/2022/10/03/scotus-section-230-google-twitter-youtube-00060007. Compare Prager Univ. v. Google LLC, 85 Cal. App. 5th 1022 (Cal. Ct. App. 2022), and Dyroff v. Ultimate Software Grp., Inc., 934 F.3d 1093 (9th Cir. 2019), with Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019).
  74. Zach Schonfeld, Chief Justice Centers Supreme Court Annual Report on AI’s Dangers, Hill (Dec. 31, 2023), https://thehill.com/regulation/court-battles/4383324-chief-justice-centers-supreme-court-annual-report-on-ais-dangers/.
  75. Tracy, supra note 57.
  76. Id.
  77. Id.
  78. Lawrence Norden & Benjamin Lerude, States Take the Lead on Regulating Artificial Intelligence, Brennan Ctr. for Just. (Nov. 6, 2023), https://www.brennancenter.org/our-work/research-reports/states-take-lead-regulating-artificial-intelligence.
  79. See EU AI Act: First Regulation on Artificial Intelligence, Eur. Parl.: Topics (Feb. 19, 2025), https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence.
  80. Norden & Lerude, supra note 78.
  81. Kern, supra note 73.
  82. 18 U.S.C. § 2333.
  83. Gonzalez v. Google LLC, 2 F.4th 871, 880 (9th Cir. 2021). Gonzalez’s initial complaint was later amended and joined by other family members and similarly situated plaintiffs. Id. at 882.
  84. Id. at 880; Lori Hinnant, 2015 Paris Attacks Suspect: Deaths of 130 ‘Nothing Personal,’ AP News (Sept. 15, 2021), https://apnews.com/article/europe-france-trials-paris-brussels-f2031a79abfae46cbd10d4315cf29163.
  85. Gonzalez, 2 F.4th at 882.
  86. Id. at 881.
  87. Id.
  88. See Gonzalez v. Google, Inc., 282 F. Supp. 3d 1150, 1171 (N.D. Cal. 2017); Fed. R. Civ. P. 12(b)(6).
  89. Gonzalez, 2 F.4th at 880. Taamneh and Clayborn involve claims against Google, Twitter, and Facebook. Id.
  90. Gonzalez, 2 F.4th at 879, 883, 884; 1 Artificial Intelligence: Law and Litigation § 3.02, Lexis (database updated May 2024).
  91. Gonzalez, 2 F.4th at 879.
  92. Id. at 880 (quoting 18 U.S.C. § 2333(a)).
  93. Id. at 885 (quoting Justice Against Sponsors of Terrorism Act (JASTA), Pub. L. No. 114-222, 130 Stat. 852 (2016)).
  94. Id. at 880.
  95. The Gonzalez plaintiffs’ revenue-sharing theory is distinct from their other theories of liability because the allegations were not based on the content ISIS placed on YouTube. Id. at 898. Instead, the allegations were “premised on Google providing ISIS with material support by giving ISIS money.” Id. The revenue-sharing allegations stemmed from Google’s AdSense program, which involved “Google shar[ing] a percentage of revenues generated from those advertisements with ISIS.” Id.
  96. Id. at 882.
  97. Id. at 880. The district court in Taamneh did not reach the issue of section 230 immunity. Id.
  98. Id. The Taamneh plaintiffs only appealed the dismissal of their aiding and abetting claim. Id. at 908. The Ninth Circuit reversed the district court’s dismissal after concluding that the complaint’s allegations “that defendants provided services that were central to ISIS’s growth and expansion, and that this assistance was provided over many years,” adequately alleged the defendants’ assistance to ISIS was substantial. Id. at 910.
  99. Gonzalez v. Google LLC, 143 S. Ct. 80 (2022) (mem.); Twitter, Inc. v. Taamneh, 143 S. Ct. 81 (2022) (mem.).
  100. Gonzalez v. Google, Elec. Priv. Info. Ctr., https://epic.org/documents/gonzalez-v-google/ (last visited May 24, 2025); see also Gonzalez v. Google LLC, 143 S. Ct. 1191, 1191–92 (2023) (per curiam).
  101. See Danielle Draper & Sean Long, Summarizing the Amicus Briefs Arguments in Gonzalez v. Google LLC, Bipartisan Pol’y Ctr. (Feb. 21, 2023), https://bipartisanpolicy.org/blog/arguments-gonzalez-v-google/.
  102. Richard L. Pacelle, Jr., Amicus Curiae Briefs in the Supreme Court, Oxford Rsch. Encyclopedias (April 20, 2022), https://doi.org/10.1093/acrefore/9780190228637.013.1992.
  103. Draper & Long, supra note 101.
  104. Id.
  105. See generally Transcript of Oral Argument, Gonzalez v. Google, 143 S. Ct. 1191 (2023) (No. 21-1333) [hereinafter Gonzalez Oral Argument Transcript]; Transcript of Oral Argument, Twitter v. Taamneh, 143 S. Ct. 1206 (2023) (No. 21-1496) [hereinafter Taamneh Oral Argument Transcript].
  106. See Gonzalez Oral Argument Transcript, supra note 105, at 1, 164; Taamneh Oral Argument Transcript, supra note 105, at 1, 151.
  107. Gonzalez Oral Argument Transcript, supra note 105, at 49.
  108. Taamneh Oral Argument Transcript, supra note 105, at 72–73.
  109. Gonzalez Oral Argument Transcript, supra note 105, at 34, 64, 72; Taamneh Oral Argument Transcript, supra note 105, at 12–13, 54, 126.
  110. Kate Klonick, How 236,471 Words of Amici Briefing Gave Us the 565 Word Gonzalez Decision, Klonickles (May 29, 2023), https://klonick.substack.com/p/how-236471-words-of-amici-briefing.
  111. Gonzalez v. Google, 143 S. Ct. 1191 (2023) (per curiam).
  112. Id. at 1192.
  113. Id.
  114. Taamneh, 143 S. Ct. at 1218.
  115. 705 F.2d 472 (D.C. Cir. 1983).
  116. Taamneh, 143 S. Ct. at 1218 (quoting Justice Against Sponsors of Terrorism Act (JASTA), Pub. L. No. 114-222, § 2(a)(5), 130 Stat. 852, 852 (2016)).
  117. Id. at 1230.
  118. Id. at 1230–31.
  119. See Leading Case, supra note 9, at 404–06. “Judicial minimalism is the principle that judges should ‘say[] no more than necessary to justify an outcome.’” Id. at 405 (alteration in original) (quoting Cass R. Sunstein, The Supreme Court, 1995 Term — Foreword: Leaving Things Undecided, 110 Harv. L. Rev. 4, 6 (1996)).
  120. See Gonzalez Oral Argument Transcript, supra note 105, at 34, 64, 72; Taamneh Oral Argument Transcript, supra note 105, at 12–13, 54, 126.
  121. See Doe v. Facebook, Inc., 142 S. Ct. 1087, 1088–89 (2022) (Thomas, J., concurring in denial of certiorari).
  122. See id. at 1087.
  123. Id. at 1088 (quoting In re Facebook, 625 S.W.3d 80 (Tex. 2021)).
  124. Gonzalez Oral Argument Transcript, supra note 105, at 34, 64, 72.
  125. Doe, 142 S. Ct. at 1088 (2022) (Thomas, J., concurring in denial of certiorari) (quoting Malwarebytes, Inc. v. Enigma Software Grp. USA, LLC, 141 S. Ct. 13, 18 (2020)).
  126. See Malwarebytes, 141 S. Ct. at 16.
  127. Wyden & Cox, supra note 6, at 2.
  128. See Kosseff, supra note 2, at 2.
  129. Wyden & Cox, supra note 6, at 6.
  130. See George Glover, It’s Time to See Whether AI Is the New Internet — or the Next ‘Metaverse,’ Bus. Insider (July 11, 2023), https://www.businessinsider.com/ai-chatgpt-artificial-intelligence-internet-dot-com-metaverse-crypto-blockchain-2023-7; Einaras Von Gravrock, How AI Empowers the Evolution of the Internet, Forbes (Nov. 15, 2018), https://www.forbes.com/sites/forbeslacouncil/2018/11/15/how-ai-empowers-the-evolution-of-the-internet/.
  131. See generally How Has the Internet Changed in the Last 20 Years, in.house.media, https://www.ihm.co.uk/blog/how-has-the-internet-changed-in-the-last-20-years/ (last visited May 24, 2025).
  132. 47 U.S.C. § 230(c)(1); see Wyden & Cox, supra note 6, at 2 (“Congress drafted Section 230 in light of its understanding of the capabilities of then-extant online platforms and the evident trajectory of Internet development.”).
  133. Josie Fischels, A Look Back at the Very First Website Ever Launched, 30 Years Later, NPR (Aug. 6, 2021), https://www.npr.org/2021/08/06/1025554426/a-look-back-at-the-very-first-website-ever-launched-30-years-later.
  134. See Fox & Rainie, supra note 20.
  135. See Danny Hajek et al., What Is AI and How Will It Change Our Lives? NPR Explains., NPR (May 25, 2023), https://www.npr.org/2023/05/25/1177700852/ai-future-dangers-benefits; How Artificial Intelligence Is Changing Your Life Unknowingly, Econ. Times (Mar. 15, 2023), https://economictimes.indiatimes.com/news/how-to/how-artificial-intelligence-is-changing-your-life-unknowingly/articleshow/98455922.cms?from=mdr; Mike Thomas, The Future of AI: How Artificial Intelligence Will Change the World, builtin, https://builtin.com/artificial-intelligence/artificial-intelligence-future (Jan. 28, 2025).
  136. Wyden & Cox, supra note 6, at 8.
  137. Id. at 12–13.
  138. See, e.g., Exec. Order No. 14,110, 88 Fed. Reg. 75,191 (Oct. 30, 2023).
  139. See Zeran v. Am. Online, Inc., 129 F.3d 327, 331–34 (4th Cir. 1997).
  140. Malwarebytes, Inc. v. Enigma Software Grp. USA, LLC, 141 S. Ct. 13, 15 (2020) (Thomas, J., concurring in the denial of certiorari) (citing Zeran, 129 F.3d at 331–34).
  141. Malwarebytes, 141 S. Ct. at 15 (Thomas, J., concurring in the denial of certiorari) (citations omitted).
  142. See Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019).
  143. See id.
  144. Id. In Force, victims of terrorist attacks in Israel alleged that Facebook provided material support to Hamas terrorists by enabling Hamas “to disseminate its messages directly to its intended audiences and to carry out communication components of its terror attacks.” Id. at 59.
  145. Id. at 70.
  146. Id. at 70 n.24.
  147. Christopher Cox, The Origins and Original Intent of Section 230 of the Communications Decency Act, Rich. J.L. & Tech. Blog (Aug. 27, 2020), https://jolt.richmond.edu/2020/08/27/the-origins-and-original-intent-of-section-230-of-the-communications-decency-act/.
  148. See sources cited supra note 135.
  149. See Poornima Apte, How AI Is Leveling the Marketing Playing Field Between SMBs and Big Business, U.S. Chamber of Comm.: CO (Aug. 7, 2023), https://www.uschamber.com/co/good-company/launch-pad/how-small-businesses-are-using-ai.
  150. Michael Chui et al., The State of AI in 2022—and A Half Decade in Review, McKinsey & Co. (Dec. 6, 2022), https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2022-and-a-half-decade-in-review.
  151. Id.
  152. Report: Small Business Owners Embrace the Future – Majority Say They Will Adopt Generative AI, FreshBooks, https://www.freshbooks.com/press/data-research/data-research-majority-of-small-business-owners-will-use-ai (last visited May 24, 2025); see also Michelle Kumar, Navigating the Era of AI: Implications for Small Businesses, Bipartisan Pol’y Ctr. (Nov. 3, 2023), https://bipartisanpolicy.org/blog/navigating-the-era-of-ai-implications-for-small-businesses (highlighting a recent survey that found that 23% of small businesses use AI in some form).
  153. See Apte, supra note 149.
  154. See id.
  155. Martin Rowinski, How Small Businesses Drive The American Economy, Forbes (Mar. 25, 2022), https://www.forbes.com/councils/forbesbusinesscouncil/2022/03/25/how-small-businesses-drive-the-american-economy/.
  156. See, e.g., FACT SHEET: The Small Business Boom Under the Biden-Harris Administration, White House (Apr. 28, 2022), https://bidenwhitehouse.archives.gov/briefing-room/statements-releases/2022/04/28/fact-sheet-the-small-business-boom-under-the-biden-harris-administration/.
  157. Cox, supra note 147.
  158. Christopher MacColl, Defamatory Bots and Section 230: Navigating Liability in the Age of Artificial Intelligence, JD Supra (July 18, 2023), https://www.jdsupra.com/legalnews/defamatory-bots-and-section-230-3202468 (quoting 47 U.S.C. § 230(c)(1)).
  159. The General Data Protection Regulation, Eur. Council (June 13, 2024), https://www.consilium.europa.eu/en/policies/data-protection-regulation/.
  160. Jared Schroeder, Meet the EU Law That Could Reshape Online Speech in the U.S., Slate (Oct. 27, 2022), https://slate.com/technology/2022/10/digital-services-act-european-union-content-moderation.html.
  161. See Questions and Answers on the Digital Services Act, Eur. Comm’n (Feb. 23, 2024), https://ec.europa.eu/commission/presscorner/detail/en/qanda_20_2348.
  162. Kelvin Chan & Raf Casert, EU Law Targets Big Tech Over Hate Speech, Disinformation, AP News (Apr. 23, 2022), https://apnews.com/article/technology-business-police-social-media-reform-52744e1d0f5b93a426f966138f2ccb52.
  163. See Schroeder, supra note 160.
  164. Artificial Intelligence Act: Deal on Comprehensive Rules for Trustworthy AI, Eur. Parl.: News (Dec. 9, 2023), https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai.
  165. See EU AI Act: First Regulation on Artificial Intelligence, Eur. Parl.: Topics, https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence (Feb. 19, 2025); The Digital Services Act Package, Eur. Comm’n, https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package (Feb. 12, 2025).
  166. Artificial Intelligence Act, supra note 164.
  167. See, e.g., Brief of the National Center on Sexual Exploitation, the National Trafficking Sheltered Alliance, and RAINN, as Amici Curiae in Support of Petitioners, Gonzalez v. Google LLC, 143 S. Ct. 1191 (2023) (No. 21-1333) [hereinafter NCSE Brief]. See generally Sivile Manene et al., Mitigating Misinformation About the COVID-19 Infodemic on Social Media: A Conceptual Framework, NIH Nat’l Libr. Med., May 2023, at 1, 2 (“Social media platforms have taken steps to mitigate the spread of COVID-19 misinformation by implementing policies . . . which prohibit[] users from using the platform’s services to share false or misleading information about COVID-19.”).
  168. NCSE Brief, supra note 167.
  169. Brief for Reddit, Inc. and Reddit Moderators as Amici Curiae in Support of Respondent, Gonzalez, 143 S. Ct. 1191 (No. 21-1333).
  170. See Zeran v. Am. Online, Inc., 129 F.3d 327, 331–34 (4th Cir. 1997).
  171. See John Samples, Why the Government Should Not Regulate Content Moderation of Social Media, CATO Inst. (Apr. 9, 2019), https://www.cato.org/policy-analysis/why-government-should-not-regulate-content-moderation-social-media.
  172. Sue Halpern, The Year A.I. Ate the Internet, New Yorker (Dec. 8, 2023), https://www.newyorker.com/culture/2023-in-review/the-year-ai-ate-the-internet.
  173. See Manene et al., supra note 167, at 2 (“Social media platforms have taken steps to mitigate the spread of COVID-19 misinformation by implementing policies . . . which prohibit[] users from using the platform’s services to share false or misleading information about COVID-19.”).
  174. See Nandita Krishnan et al., Research Note: Examining How Various Social Media Platforms Have Responded to COVID-19 Misinformation, Harv. Kennedy Sch. Misinformation Rev. (Dec. 15, 2021), https://misinforeview.hks.harvard.edu/article/research-note-examining-how-various-social-media-platforms-have-responded-to-covid-19-misinformation/.
  175. See Gabrielle Lim & Samantha Bradshaw, Chilling Legislation: Tracking the Impact of “Fake News” Laws on Press Freedom Internationally, Ctr. for Int’l Media Assistance (July 19, 2023), https://www.cima.ned.org/publication/chilling-legislation/.
  176. See Cary Coglianese, Regulating Machine Learning: The Challenge of Heterogeneity, Competition Pol’y Int’l, Feb. 2023, at 1, 3.
  177. Artificial Intelligence Act, supra note 164.
  178. Kell, supra note 8.

By Meredith Behrens

As many as forty million people are estimated to be trapped in modern-day slavery.[1] Rather than disappearing with its formal abolition in the nineteenth century, slavery persists through human trafficking, claiming victims not only in the United States but around the world.[2] Roughly five million of these victims are victims of sex trafficking,[3] a “booming” industry that turns a yearly profit of $99 billion.[4] Victims of sex trafficking are forced to engage in commercial sex acts such as prostitution or pornography through force, fraud, or coercion.[5]

Victims in the United States do not come from a set and predictable background.[6] Rather, victims of sex trafficking come from a variety of “races, ethnicities, sexual orientations, gender identities,”[7] socio-economic backgrounds, and educational levels across all fifty states.[8] However, victims are commonly minors, with the average age of entry falling between fourteen and sixteen years old.[9] An estimated 300,000 American minors are at risk of entering the sex trafficking industry every year.[10]

The Effects of FOSTA

The expansion of the internet has only made the sex trafficking of victims easier.[11] In fact, three of every four victims may be trafficked online.[12] In 2018, the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) was signed into law to combat the prevalence of sex trafficking online.[13] The text of the bill states that websites “that promote and facilitate prostitution have been reckless in allowing the sale of sex trafficking victims and have done nothing to prevent the trafficking of children and victims of force, fraud, and coercion.”[14] FOSTA seeks to correct this by amending both the Communications Decency Act and 18 U.S.C. § 2421, adding a new § 2421A.[15]

The Communications Decency Act (“CDA”) protects interactive computer service providers from being treated as the publisher or speaker of offensive material published on their sites.[16] Service providers that allowed the prostitution of children on their sites could claim immunity under the CDA.[17] However, FOSTA amends the CDA by adding to § 230(e) that the section has “no effect on sex trafficking law,” so websites that knowingly allow sex trafficking to take place are no longer protected.[18] Without this protection, service providers are more likely to monitor their sites to prevent and remove content related to the sex trafficking of individuals.[19] Section 2421A, the “centerpiece” of FOSTA,[20] mandates a fine or imprisonment for whoever “owns, manages, or operates an interactive computer service . . . or conspires or attempts to do so, with the intent to promote or facilitate the prostitution of another person.”[21] FOSTA thus both increases criminal liability and limits protections for interactive computer services with regard to sex trafficking content.[22] With such a substantial change, it is unsurprising that constitutionality concerns quickly arose.[23]

Constitutionality Concerns in Woodhull Freedom Foundation

On June 28, 2018, the Woodhull Freedom Foundation filed a complaint in the United States District Court for the District of Columbia.[24] In that complaint, the plaintiffs asserted that FOSTA violates both the First and Fifth Amendments to the United States Constitution, as well as the Ex Post Facto Clause.[25] The plaintiffs claim, in part, that FOSTA is “overbroad, vague, impermissibly targets speech based on viewpoint and content, pares back immunity from certain state law claims, erodes the scienter requirement, and wrongly criminalizes conduct that was lawful at the time committed.”[26]

Without addressing the constitutionality of FOSTA, the District Court granted the Government’s Motion to Dismiss after determining the plaintiffs had not adequately alleged standing.[27] The plaintiffs appealed the ruling to the District of Columbia Circuit.[28] The Appellate Court heard oral argument on September 20, 2019, at which time Judges Rogers, Griffith, and Katsas considered whether a plausible interpretation of the language of FOSTA allowed the plaintiffs to have standing.[29] The Appellee argued that the text of 18 U.S.C. § 2421A specifically requires intent and refers to specific acts of prostitution rather than the concept of prostitution as a whole.[30] While the court acknowledged that this was a reasonable interpretation of the text, it appeared unconvinced that the Appellants had failed to raise at least a plausible interpretation of their own.[31]

On January 24, 2020, the Appellate Court issued its opinion in Woodhull Freedom Foundation, concluding that at least two of the five plaintiffs, Alex Andrews and Eric Koszyk, had Article III standing to bring a pre-enforcement challenge to the statute.[32] Alex Andrews created Rate That Rescue, a website that provides reviews of resources available to sex workers.[33] Eric Koszyk is a licensed massage therapist whose advertisements were removed from Craigslist after FOSTA’s passage.[34] Koszyk has suffered monetary losses as a result.[35] The Appellate Court reversed the District Court’s holding and remanded for further proceedings.[36]

Moving Forward

The District Court will now have to determine the constitutionality of FOSTA. Two proposed interpretations of the statute are before the District Court.[37] The Government interprets FOSTA narrowly, reaching only the promotion and facilitation of specific criminal acts.[38] The plaintiffs interpret the text broadly, to include a wide range of speech beyond engagement in a specific criminal act.[39] Under the doctrine of constitutional avoidance, courts must reject an unconstitutional interpretation if another interpretation is both reasonable and constitutional.[40] The District Court has yet to rule.

If the court determines FOSTA is unconstitutional, it is unclear what the effect will be. Discovered online sex trafficking activity has declined significantly since FOSTA’s passage; however, FOSTA was signed into law just five days after the seizure of Backpage, the most well-known site that allowed sex trafficking.[41] While FOSTA did prompt sites such as Craigslist to remove relevant ad sections,[42] the test will be whether online sex trafficking activity spikes once more if FOSTA comes off the books.


[1] Slavery Today, International Justice Mission, https://www.ijm.org/slavery.

[2] Id.; What Is Modern Slavery? Anti-Slavery, https://www.antislavery.org/slavery-today/modern-slavery/.

[3] Sex Trafficking, End Slavery Now, http://www.endslaverynow.org/learn/slavery-today/sex-trafficking; Sex Trafficking, Polaris, https://polarisproject.org/human-trafficking/sex-trafficking.

[4] Human Trafficking by the Numbers, Human Rights First (Jan. 7, 2017), https://www.humanrightsfirst.org/resource/human-trafficking-numbers.

[5] What Is Sex Trafficking?, Shared Hope, https://sharedhope.org/the-problem/what-is-sex-trafficking/.

[6] The Victims, Human Trafficking Hotline, https://humantraffickinghotline.org/what-human-trafficking/human-trafficking/victims.

[7] Violence Prevention: Sex Trafficking, Centers for Disease Control and Prevention, https://www.cdc.gov/violenceprevention/sexualviolence/trafficking.html.

[8] The Victims, supra note 6.

[9] Demand: A Comparative Examination of Sex Tourism and Trafficking in Jamaica, Japan, the Netherlands, and the United States, Shared Hope International, at 5, https://sharedhope.org/wp-content/uploads/2012/09/DEMAND.pdf [hereinafter Demand].

[10] Erin Weaver, Human Trafficking Has Wide-Reaching Social Impact, Souderton Independent (Jan. 18, 2014), http://www.montgomerynews.com/soudertonindependent/news/human-trafficking-has-wide-reaching-social-impact/article_aedb4b5f-9d4c-5415-a8ab-1fceee5c9a3b.html.

[11] Demand, supra note 9, at 5.

[12] See Child Trafficking Statistics, Thorn, https://www.thorn.org/child-trafficking-statistics/; see also Robbie Couch, 70 Percent of Child Sex Trafficking Victims Are Sold Online: Study, Huffington Post (July 25, 2014), https://www.huffpost.com/entry/sex-trafficking-in-the-us_n_5621481 (“In 2014, buying a child for sex online can be just as easy as selling your old couch or posting an updated resume”).

[13] Allow States and Victims to Fight Online Sex Trafficking Act of 2017, Pub. L. No. 115-164, 132 Stat. 1253 (2018).

[14] Id.

[15] Id.

[16] Communications Decency Act, 47 U.S.C. § 230 (2018).

[17] Alina Selyukh, Section 230: A Key Legal Shield for Facebook, Google Is About to Change, NPR (Mar. 21, 2018, 5:11 AM), https://www.npr.org/sections/alltechconsidered/2018/03/21/591622450/section-230-a-key-legal-shield-for-facebook-google-is-about-to-change.

[18] Allow States and Victims to Fight Online Sex Trafficking Act of 2017, Pub. L. No. 115-164, 132 Stat. 1253 (2018).

[19] See Justice Department Seizes Classified Ads Website Backpage.com, Fox2News (updated Apr. 7, 2018, 9:13 PM), https://fox2now.com/2018/04/07/justice-department-seizes-classified-ads-website-backpage-com/.

[20] Woodhull Freedom Found. v. United States, 334 F. Supp. 3d 185, 190 (D.D.C. 2018).

[21] 18 U.S.C. § 2421A(a) (2018).

[22] Patrick J. Carome & Ari Holtzblatt, Congress Enacts Law Creating a Sex Trafficking Exception from the Immunity Provided by Section 230 of the Communications Decency Act, WilmerHale (Apr. 16, 2018), https://www.wilmerhale.com/en/insights/client-alerts/2018-04-16-congress-enacts-law-creating-a-sex-trafficking-exception-from-the-immunity-provided-by-section-230-of-the-communications-decency-act.

[23] Woodhull, 334 F. Supp. 3d at 189.

[24] Woodhull, 334 F. Supp. 3d 185.

[25] Id. at 189.

[26] Id.

[27] Id. at 203.

[28] Id.

[29] Oral Argument, Woodhull Freedom Found. v. United States (D.C. Cir. 2019) (No. 18-5298), https://www.cadc.uscourts.gov/recordings/recordings2019.nsf/6AB1615EE1D6E7C58525847B00573B9A/$file/18-5298.mp3.

[30] Id. at 17:40.

[31] Id.

[32] Woodhull Freedom Found. v. United States, No. 18-5298, 2020 WL 398625 (D.C. Cir. Jan. 24, 2020).

[33] Id. at *8.

[34] Id. at *9.

[35] Id.

[36] Id. at *18.

[37] Oral Argument, supra note 29.

[38] Id.

[39] Id.

[40] Richard L. Hasen, Constitutional Avoidance and Anti-Avoidance by the Roberts Court, 2009 Sup. Ct. Rev. 181, 186.

[41] Eric Goldman, The Complicated Story of FOSTA and Section 230, 17 First Amend. L. Rev. 279, 285 (2018).

[42] Justice Department Seizes Classified Ads Website Backpage.com, supra note 19.

By Greg Berman

Controversy erupted last week after a George Washington University professor, Dave Karpf, tweeted a joke at New York Times columnist Bret Stephens’s expense.  Quoting an eight-word post about a bedbug infestation in the Times’ newsroom, Karpf joked that “[t]he bedbugs are a metaphor.  The bedbugs are Bret Stephens.”[1]  Although this tweet did not initially gain much traction, it later went viral when Stephens personally emailed Karpf, as well as the George Washington University provost, demanding an apology for the insult.[2]  After several more tweets and an unscheduled column by Stephens with visible references to the controversy, both sides of the feud seem to be slowing down.[3]  Although this back and forth is just one isolated incident between two individuals, it highlights a growing trend in our discourse.  With the growing usage of social media in our society, these sorts of ideological clashes have seemingly become more prevalent than ever.[4]  And even though these virtual arguments tend to be more of an annoyance than a liability, reputation-damaging attacks (even those made on the internet) still run the risk of triggering a costly libel lawsuit.[5]

The tort of libel is defined by Black’s Law Dictionary as “[a] defamatory statement expressed in a fixed medium, esp[ecially] writing but also a picture, sign, or electronic broadcast.”[6]  The enforcement of libel laws in the United States predates the ratification of the Constitution, most notably with the trial of John Peter Zenger, whose 1735 jury acquittal established the idea that someone cannot be charged with libel if the remark is true.[7]  Even today, the accuracy of the allegedly libelous statements continues to be one of the key factors for courts to consider in libel cases, with each state setting its own standards for liability.[8]  Another key consideration comes from New York Times v. Sullivan, where the Supreme Court differentiated defamation claims involving public figures from those involving private individuals, holding that any libel suit against a public figure requires the inaccurate statement to have been made with “actual malice.”[9]  Actual malice has been defined by the Court as “knowledge that (the statement) was false or with reckless disregard of whether it was false or not.”[10]  Additional protections against libel claims were recognized nine years later, when the Supreme Court limited libel laws to apply only to false statements of fact, leaving even baseless and incorrect opinions beyond their reach.[11]

Our ever-increasing move toward a digitalized world raises the question of how these libel laws apply to internet publications.  To start, no claim for libel can be made against a social media site, such as Facebook or Twitter, for content posted by a user of that site.[12]  This is primarily due to the expansive legal protections given to these “interactive computer services” by Section 230 of the Communications Decency Act of 1996.[13]  That being said, individuals may still be held liable for content that they post on the internet, with each state continuing to apply its own standards for libelous conduct even as information crosses state lines.[14]  When it comes to the question of jurisdiction, the Supreme Court clarified in Keeton v. Hustler Magazine, Inc. that a state can claim jurisdiction over a non-resident when injurious information is intentionally disseminated to its citizens.[15]  Specifically, the Court cited each state’s interest in protecting its citizens from intentional falsehoods as a key consideration in its decision.[16]  While online information is disseminated in a different manner than the magazines in Keeton, courts have begun to allow jurisdiction in internet libel cases when the online post directly targets one or more residents of the state.[17]

When applying libel laws to online statements, courts have used substantive principles similar to those used for print publications.  In 2009, musician Courtney Love was sued by her former attorney after tweeting allegedly libelous remarks.[18]  As this was the first reported case to reach a jury verdict over remarks made on Twitter, the trial court faced a case of first impression.[19]  In a landmark decision, the court opted to apply traditional libel laws.  A jury found that Love did not know that the statements were false at the time they were made; she therefore lacked the actual malice required for liability.[20]

There have also been other cases involving allegedly libelous comments made over Twitter.[21]  For example, one such case arose after a tenant complained on her personal Twitter account about her “moldy apartment.”[22]  After seeing the post, the landlord sued the tenant under Illinois libel laws; the case was later dismissed with prejudice because the tweet was too vague to meet the requisite legal standards for libel.[23]  Another lawsuit followed after a mid-game conversation between an NBA coach and a referee was overheard and tweeted out by an AP reporter.[24]  The referee insisted that the reported conversation never took place, and the subsequent lawsuit ultimately resulted in a $20,000 settlement.[25]  Each of these cases presents a factually unique scenario, but together they indicate a growing trend: even as the medium for public discourse rapidly shifts toward the digital sphere, traditional libel laws continue to apply.

In addition to substantive treatment, there remain unresolved legal questions stemming from courts’ application of the single publication rule.  The single publication rule provides that “any one edition of a book or newspaper, or any one radio or television broadcast, exhibition of a motion picture or similar aggregate communication is a single publication” and therefore “only one action for damages can be maintained.”[26]  The justification behind this rule is simple: by aggregating all damages allegedly caused by a publication into a single action, a party will not be perpetually bombarded with litigation long after its active role in publication has ended.[27]  This rule has already been adopted in “the great majority of states” and was adopted within the Fourth Circuit in Morrissey v. William Morrow & Co.[28]  However, some academics have proposed that the single publication rule should not always apply to social media posts, citing the possibility that a publisher could personally solicit shares or retweets and thereby maintain an active role in republishing libelous information.[29]  The issue of continual dissemination by means of retweeting seems primed to be raised in later litigation, but thus far it has not been brought before any court.[30]  Still, many courts have already begun applying the single publication rule to online posts in general (so far these cases have involved personal blogs rather than Facebook or Twitter posts), so it will be interesting to see how courts handle the issue if litigants eventually raise it down the road.[31]

As the social media presence in our society grows stronger each day, only time will tell whether courts will craft separate libel principles for online publications.  There are arguments to be made on both sides, especially now that online mediums are increasingly taking over many of the informational functions previously served by their print counterparts.[32]  For now, at least, courts are continuing to apply the same traditional libel laws that have been evolving since John Peter Zenger’s 1735 acquittal.[33]  And while the jury is still out on whether Dave Karpf actually thinks Bret Stephens is a metaphorical bedbug, he can likely rest easy knowing that current libel laws will protect his joke from any future legal trouble.


[1] Dave Karpf (@davekarpf), Twitter (Aug. 26, 2019, 5:07 PM), https://twitter.com/davekarpf/status/1166094950024515584.

[2] See Dave Karpf (@davekarpf), Twitter (Aug. 26, 2019, 9:22 PM), https://twitter.com/davekarpf/status/1166159027589570566; Dave Karpf (@davekarpf), Twitter (Aug. 26, 2019, 10:13 PM), https://twitter.com/davekarpf/status/1166171837082079232; see also Tim Elfrink & Morgan Krakow, A Professor Called Bret Stephens a ‘Bedbug.’ The New York Times Columnist Complained to the Professor’s Boss, Wash. Post (Aug. 27, 2019), https://www.washingtonpost.com/nation/2019/08/27/bret-stephens-bedbug-david-karpf-twitter/ (summarizing the context of Karpf’s tweet and the resulting controversy).

[3] See Dave Karpf (@davekarpf), Twitter (Aug. 30, 2019, 7:58 PM), https://twitter.com/davekarpf/status/1167587392292892672; Bret Stephens, Opinion, World War II and the Ingredients of Slaughter, N.Y. Times (Aug. 30, 2019), https://www.nytimes.com/2019/08/30/opinion/world-war-ii-anniversary.html.

[4] Jasmine Garsd, In An Increasingly Polarized America, Is It Possible To Be Civil On Social Media?, NPR (Mar. 31, 2019), https://www.npr.org/2019/03/31/708039892/in-an-increasingly-polarized-america-is-it-possible-to-be-civil-on-social-media.

[5] See id.; Adeline A. Allen, Twibel Retweeted: Twitter Libel and the Single Publication Rule, 15 J. High Tech. L. 63, 81 n.99 (2014).

[6]  Libel, Black’s Law Dictionary (11th ed. 2019).

[7] Michael Kent Curtis, J. Wilson Parker, William G. Ross, Davison M. Douglas & Paul Finkelman, Constitutional Law in Context 1038 (4th ed. 2018).

[8] James L. Pielemeier, Constitutional Limitations on Choice of Law: The Special Case of Multistate Defamation, 133 U. Pa. L. Rev. 381, 384 (1985).

[9] 376 U.S. 254, 279–80 (1964); see also Gertz v. Robert Welch, Inc., 418 U.S. 323, 351 (1974) (defining a public figure as either “an individual achiev[ing] such pervasive fame or notoriety” or an individual who “voluntarily injects himself or is drawn into a particular public controversy”).

[10] Sullivan, 376 U.S. at 280.

[11] See Gertz, 418 U.S. at 339 (“[u]nder the First Amendment, there is no such thing as a false idea.”).

[12] See Allen, supra note 5, at 82.  Of course, Facebook and Twitter are not immunized against suits for content that they post on their own platforms.  Cf. Force v. Facebook, Inc., ___ F.3d ___, No. 18-397, 2019 WL 3432818, slip op. at 41 (2d Cir. July 31, 2019), http://www.ca2.uscourts.gov/decisions/isysquery/a9011811-1969-4f97-bef7-7eb025d7d66c/1/doc/18-397_complete_opn.pdf (“If Facebook was a creator or developer, even ‘in part,’ of the terrorism-related content upon which plaintiffs’ claims rely, then Facebook is an ‘information content provider’ of that content and is not protected by Section 230(c)(1) immunity.”).

[13] 47 U.S.C. § 230(c)(1) (2017) (“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”).  “Interactive computer service” is defined by the Act as “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server.”  Id. § 230(f)(2); see also Allen, supra note 5, at 82 n.100 (describing additional protections provided by the Communications Decency Act, including how Twitter falls under its definition of “interactive computer service”).

[14] See Allen, supra note 5, at 84; Pielemeier, supra note 8, at 384.

[15] 465 U.S. 770, 777 (1984); see also Calder v. Jones, 465 U.S. 783, 791 (1984) (holding that personal jurisdiction is proper over defendants who purposefully directed libelous information at the plaintiff’s home state with the intent of causing harm).

[16] Keeton, 465 U.S. at 777.

[17] See, e.g., Zippo Mfg. Co. v. Zippo Dot Com, Inc., 952 F. Supp. 1119, 1124 (W.D. Pa. 1997); Young v. New Haven Advocate, 315 F.3d 256, 263 (4th Cir. 2002); Tamburo v. Dworkin, 601 F.3d 693, 707 (7th Cir. 2010) (each applying traditional libel tests for personal jurisdiction to online publications, requiring the publication to be intentionally targeted towards citizens of the state).

[18] Gordon v. Love, No. B256367, 2016 WL 374950, at *2 (Cal. Ct. App. Feb. 1, 2016). The exact language of the tweet in question was “I was fucking devastated when Rhonda J. Holmes, Esquire, of San Diego was bought off @FairNewsSpears perhaps you can get a quote.”  Id.  The tweet was deleted five to seven minutes after it was posted.  Id. at *3.  This was Love’s second time being sued for defamation over comments made on her Twitter account, although the first lawsuit resulted in a $430,000 settlement before trial. Matthew Belloni, Courtney Love to Pay $430,000 in Twitter Case, Reuters (Mar. 3, 2011), https://www.reuters.com/article/us-courtneylove/courtney-love-to-pay-430000-in-twitter-case-idUSTRE7230F820110304.

[19] See Allen, supra note 5, at 81 n.99.

[20] Love, 2016 WL 374950, at *3.  Actual malice was required because Love’s attorney had attained public figure status, which was not disputed at trial.  Id.

[21] See Joe Trevino, From Tweets to Twibel*: Why the Current Defamation Law Does Not Provide for Jay Cutler’s Feelings, 19 Sports Law J. 49, 61–63 (2012) (describing a series of libel lawsuits stemming from social media posts).

[22] Id. at 61.

[23] Andrew L. Wang, Twitter Apartment Mold Libel Suit Dismissed, Chi. Trib. (Jan. 22, 2010), https://www.chicagotribune.com/news/ct-xpm-2010-01-22-1001210830-story.html.

[24] Trevino, supra note 21, at 63. 

[25] Lauren Dugan, The AP Settles Over NBA Twitter Lawsuit, Pays $20,000 Fine, Adweek (Dec. 8, 2011), https://www.adweek.com/digital/the-ap-settles-over-nba-twitter-lawsuit-pays-20000-fine/.

[26] Restatement (Second) of Torts § 577A(3–4) (Am. Law Inst. 1977).

[27] Id. at § 577A cmt. b.

[28] 739 F.2d 962, 967 (4th Cir. 1984) (quoting Keeton, 465 U.S. at 777 n.8).

[29] Allen, supra note 5, at 87–88.

[30] See Lori A. Wood, Cyber-Defamation and the Single Publication Rule, 81 B.U. L. Rev. 895, 915 (2001) (calling for courts to define “republication” in the context of internet publications).

[31] See, e.g., Firth v. State, 775 N.E.2d 463, 466 (N.Y. 2002); Van Buskirk v. N.Y. Times Co., 325 F.3d 87, 90 (2d Cir. 2003); Oja v. U.S. Army Corps of Eng’rs, 440 F.3d 1122, 1130–31 (9th Cir. 2006); Nationwide Bi-Weekly Admin., Inc. v. Belo Corp., 512 F.3d 137, 144 (5th Cir. 2007).  But see Swafford v. Memphis Individual Prac. Ass’n, 1998 Tenn. App. LEXIS 361, at *38 (Tenn. App. 1998).

[32] See Allen, supra note 5, at 91 n.157.

[33] See Trevino, supra note 21, at 69.