15 Wake Forest L. Rev. Online 46
William Gilchrist
Enacted as part of the Telecommunications Act of 1996, section 230 of the Communications Decency Act was originally introduced to shield children from inappropriate content online.[1] Despite being passed for a relatively limited purpose, section 230’s broad liability protections for interactive computer services have since been credited with shaping the modern internet.[2] Today, it stands as one of the few federal statutes recognized for having “fundamentally changed American life.”[3]
As social media and internet use have evolved, the language of section 230 has generally adapted to new technologies. But with the rise of artificial intelligence (AI) as a mainstream tool, section 230’s scope has become increasingly uncertain. Due in part to its brevity and resulting ambiguity, questions have emerged over whether its liability protections extend to online service providers’ use of AI,[4] particularly in recommender systems.[5] The Supreme Court first addressed section 230’s applicability to AI use in Gonzalez v. Google.[6] Although many hoped the case would bring clarity, the Court issued a three-page per curiam opinion declining to reach the statutory question because the complaint appeared to state little, if any, plausible claim for relief, leaving stakeholders back at square one.[7]
In Gonzalez, the Supreme Court considered for the first time whether section 230 shields online platforms from liability for using AI to recommend third-party content.[8] While the case was a critical first step in addressing AI-related liability, the Court’s ruling left concerned parties with more questions than answers. Critics argue the opinion fell short of fulfilling the judiciary’s responsibility to “say what the law is,” emphasizing the need for additional guidance on section 230’s scope.[9] Ultimately, the Court’s decision in Gonzalez not only reflects the judiciary’s lack of understanding of AI but also kicks the can down the road, leaving future courts unable to fairly and consistently interpret section 230’s scope. Accordingly, clearer legal standards are essential to help U.S. companies assess their liability exposure when deploying new products and to ensure they remain competitive in the global AI race.[10]
Today, hundreds of active AI-related lawsuits are making their way through the American legal system, typically involving intellectual property, amplification of dangerous content, and discrimination issues.[11] And while AI offers undeniable economic benefits, its widespread and varied application has made it difficult for lawmakers to understand and regulate.[12] As AI becomes increasingly embedded in daily life, AI-related litigation is only expected to increase.[13]
This Comment begins with an explanation of what AI is and how it is currently being used in American society. It then provides background on Gonzalez, analyzes the Court’s opinion and its implications, and argues that the Court should have directly addressed section 230’s applicability. Because a more effective resolution of Gonzalez would have defined section 230’s scope, this Comment critiques the Court’s decision and argues that affirming a broad interpretation of section 230 would have been the better outcome. Finally, this Comment examines the challenges of applying a broad interpretation of section 230, ending with a discussion of the challenges associated with current and future AI regulation.
I. Background
Prior to the 1950s, AI existed only in science fiction.[14] But after Alan Turing introduced the concept in his 1950 paper, Computing Machinery and Intelligence, AI began its gradual evolution into the tool it is today.[15] Beginning as “little more than a series of simple rules and patterns,” AI has advanced exponentially and is now “capable of performing tasks that were once thought impossible.”[16]
The private sector has embraced this expansion, with many companies taking advantage of the technology and incorporating it into various parts of their operations.[17] While doing so offers clear advantages, it has also raised new and increasingly frequent questions about potential liability exposure.[18] Until recently, U.S. courts have reliably turned to section 230 for guidance when evaluating liability arising from online AI use.[19] And while section 230’s text provided sufficient guidance in AI’s early stages, the technology’s growing complexity and evolving uses have rendered section 230’s applicability increasingly unclear.
Since section 230’s adoption in 1996, Americans’ internet access and use have dramatically increased.[20] As internet access has improved, so has Americans’ exposure to and awareness of AI.[21] The AI of the 1990s was virtually nonexistent compared to the AI of today, and new capabilities allow for the technology to be used in ways never before thought possible.[22] These advancements have seamlessly integrated AI into nearly every aspect of daily life, often in ways that go unnoticed.[23] Nevertheless, with new technology comes new legal issues, and AI is no exception.[24]
To understand Gonzalez and its global implications, it is first necessary to define what constitutes AI. At the highest level, AI is “a machine’s ability to perform the cognitive functions we associate with human minds, such as perceiving, reasoning, learning, interacting with an environment, problem solving, and exercising creativity.”[25] And while AI use continues to evolve, the following discussion outlines the broad categories of AI and how they are currently being used.
A. A Spectrum of Systems
There are seven general categories of AI: three based on capabilities and four based on functionalities.[26] The three kinds of AI based on capabilities are Artificial Narrow AI, General AI, and Super AI.[27] Artificial Narrow AI—the only type of AI in use today—refers to technology that is “designed to perform a specific task or a set of closely related tasks.”[28] The other two types of AI based on capabilities—General and Super AI—remain theoretical, as neither has been successfully developed.[29] These forms are expected to match or surpass human intelligence.[30]
The four types of AI based on functionalities are Reactive Machine, Limited Memory, Theory of Mind, and Self-Aware.[31] Reactive Machine systems include AI “with no memory [that is] designed to perform a very specific task,” such as Netflix’s movie and TV show recommendation system.[32] Limited Memory AI differs from Reactive Machine AI because it can recall past events and monitor objects and situations over time.[33] Limited Memory AI includes generative AI such as ChatGPT, virtual assistants such as Siri and Alexa, and self-driving vehicles.[34] Theory of Mind and Self-Aware AI are forms that are still in development or entirely theoretical.[35] Theory of Mind AI would allow machines to understand the thoughts and emotions of other entities, while Self-Aware AI would allow machines to understand their own internal conditions and traits.[36]
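The reactive/limited-memory distinction is easier to see in code. The toy sketch below is purely illustrative—the functions and data are hypothetical, not drawn from any real system—but it contrasts a stateless system, whose output depends only on the current input, with one that accumulates a history:

```python
# Toy contrast between "Reactive Machine" and "Limited Memory" behavior.
# Purely illustrative; production AI systems are vastly more sophisticated.

def reactive_suggest(current_item: str) -> str:
    """Reactive machine: the output depends only on the present input."""
    similar = {"action movie": "thriller", "sitcom": "comedy special"}
    return similar.get(current_item, "popular pick")

class LimitedMemorySuggester:
    """Limited memory: past observations influence future outputs."""

    def __init__(self) -> None:
        self.history: list[str] = []

    def suggest(self, current_item: str) -> str:
        self.history.append(current_item)         # remembers earlier inputs
        if self.history.count(current_item) > 1:  # repeated interest -> adapt
            return f"more like {current_item}"
        return reactive_suggest(current_item)

suggester = LimitedMemorySuggester()
print(reactive_suggest("sitcom"))   # always 'comedy special'
print(suggester.suggest("sitcom"))  # 'comedy special' the first time
print(suggester.suggest("sitcom"))  # 'more like sitcom' once a pattern emerges
```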
B. Teaching the Machine: How AI Learns
For each category of AI, there are several tools that software developers can use to create and enhance their systems.[37] One of these tools is machine learning (ML), a term that is often incorrectly used interchangeably with AI.[38] Though AI and ML are closely related, ML is a subset of AI[39] that involves “developing algorithms and statistical models that computer systems use to perform tasks without explicit instructions, relying on patterns and inference instead.”[40] While AI is “the ability of a machine to act and think like a human,” ML is a type of AI that involves humans “relying on data and feeding it to computers so they can simulate what they think we’re doing.”[41] ML’s broad advantages—rapidly processing large datasets, using algorithms that change and improve over time, and spotting patterns or identifying anomalies—allow it to be used in a variety of contexts.[42]
Broadly put, ML works by “exploring data and identifying patterns.”[43] Most tasks involving data-defined patterns or rule sets can be automated with ML,[44] which can be used to explore data and identify patterns in two ways: supervised learning and unsupervised learning.[45] Supervised learning involves humans labeling inputs and outputs that train an algorithm to accurately classify data and predict outcomes.[46] In contrast, unsupervised learning models work independently to discover the structure of unlabeled data. For example, an unsupervised learning model could be used to identify products often purchased together online.[47] Supervised learning, which is more widely used than unsupervised due to its ease of use, is the type of ML behind the recommender systems at issue in Gonzalez.[48]
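To make the supervised/unsupervised distinction concrete, the following minimal sketch trains one model on human-labeled examples and lets another discover structure on its own. It is an illustration only, using the open-source scikit-learn library and invented toy data—not any system at issue in Gonzalez:

```python
# Minimal sketch of supervised vs. unsupervised learning.
# Assumes scikit-learn is installed; the data here is invented for illustration.
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Supervised: humans label each input, and the model learns to predict
# those labels for new, unseen data.
features = [[0.9, 0.1], [0.8, 0.3], [0.1, 0.9], [0.2, 0.8]]  # numeric inputs
labels = ["spam", "spam", "not spam", "not spam"]            # human-provided labels
classifier = LogisticRegression().fit(features, labels)
print(classifier.predict([[0.85, 0.2]]))  # -> ['spam']

# Unsupervised: no labels; the model groups similar records on its own,
# e.g., finding products that tend to be purchased together.
baskets = [[1, 0, 1], [1, 0, 1], [0, 1, 0], [0, 1, 1]]  # item co-purchase records
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(baskets)
print(clusters)  # cluster assignments discovered without any labels
```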
C. Recommender Systems and Content Curation
Recommender systems, like those in Gonzalez, are “algorithms providing personalized suggestions for items that are most relevant to each user.”[49] Today, many social media platforms use AI and ML recommender systems in a variety of ways.[50] For example, YouTube uses AI and ML to automatically remove objectionable content, label imagery for video background editing, and to recommend videos.[51] In addition to YouTube, recommender systems are commonly used by platforms like Spotify, Amazon, Netflix, TikTok, and Instagram to tailor content and product suggestions to their users.[52]
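As a rough intuition for how such systems connect viewing histories to suggestions, consider the sketch below—a deliberately simplified, hypothetical Python example; real platform recommenders involve far more data and far more complex models. It scores candidate items by how much a user’s history overlaps with other users’ histories:

```python
# Deliberately simplified recommender sketch; hypothetical data and logic.
from collections import Counter

# Each user's viewing history: the third-party content they have engaged with.
histories = {
    "user_a": {"cooking", "travel", "news"},
    "user_b": {"cooking", "travel", "sports"},
    "user_c": {"news", "politics"},
}

def recommend(target: str, k: int = 2) -> list[str]:
    """Suggest unseen items, weighted by overlap with other users' histories."""
    seen = histories[target]
    scores: Counter[str] = Counter()
    for user, items in histories.items():
        if user == target:
            continue
        overlap = len(seen & items)   # similarity = shared viewing history
        for item in items - seen:     # candidates the target hasn't seen
            scores[item] += overlap
    return [item for item, _ in scores.most_common(k)]

print(recommend("user_a"))  # -> ['sports', 'politics']
```

Even in this toy form, the system only re-ranks existing third-party items; it neither creates nor alters the underlying content—precisely the distinction at the heart of the section 230 debate.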
AI, ML, and recommender systems are also being adopted outside the social media context.[53] “From manufacturing to retail and banking to bakeries, even legacy companies are using machine learning to unlock new value or boost efficiency.”[54] As explained by Aleksander Madry, Director of the MIT Center for Deployable Machine Learning, “machine learning is changing, or will change, every industry.”[55]
Though statistics about AI adoption differ widely, the share of global companies that use AI is likely between 35 and 55 percent, with some estimates as high as 67 percent.[56] Beyond its use by companies, individuals are increasingly incorporating AI into their daily lives.[57] But despite the increasing popularity of AI in American society, the only real framework federal courts have to interpret liability for AI use is section 230, an almost thirty-year-old federal statute that was initially passed to promote commercial internet use and shield children from harmful content online.[58]
II. The Legal Backbone of the Internet
In 1996, Congress passed section 230 in response to the “rapidly developing array of Internet and other interactive services.”[59] At the time, section 230 was necessary because of the First Amendment’s inability to adequately protect online platforms providing forums for third-party content.[60] A key catalyst for the legislation was the decision in Stratton Oakmont, Inc. v. Prodigy Services Co., a libel case from 1995.[61]
In Stratton Oakmont, the Supreme Court of New York, Nassau County, found that Prodigy Services, the owner-operator of a computer network that sponsored subscriber communication through online bulletin boards, was liable for third-party statements posted on its site.[62] The court reasoned that Prodigy was liable as a “publisher” because it “monitor[ed] and edit[ed]” the individual bulletin board at issue, which gave Prodigy the benefit of editorial control.[63] In response, “to ensure that Internet platforms would not be penalized for attempting to engage in content moderation, Congress enacted Section 230.”[64]
A. Where Immunity Begins: Section 230(c)(1)
Known as “the twenty-six words that created the internet,”[65] the operative provision of the Communications Decency Act is section 230(c)(1),[66] which states that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[67]
Section 230(c)(1) generally “protects websites from liability for material posted on the website by someone else.”[68] But interactive service providers are only protected from liability if they are not also an information content provider, or “someone who is ‘responsible, in whole or in part, for the creation or development of’ the offending content.”[69] As explained by Chief Judge Kozinski in Fair Housing Council v. Roommates.com:
A website operator can be both a service provider and a content provider: If it passively displays content that is created entirely by third parties, then it is only a service provider with respect to that content. But as to content that it creates itself, or is “responsible, in whole or in part” for creating or developing, the website is also a content provider. Thus, a website may be immune from liability for some of the content it displays to the public but be subject to liability for other content.[70]
Thus, the key question in assessing recommender system liability is whether the system contains content for which the operator is “responsible in whole or in part for creating or developing,” or whether the system simply dictates how existing content is displayed.
Although section 230 does not expressly address the use of AI or recommender systems, it was drafted in response to the internet’s rapid growth and evolution.[71] To account for the inevitable emergence of more advanced technologies, section 230 was drafted in a technology-neutral manner that would allow the statute to be applied to emerging and future technology.[72] Unsurprisingly, the exponential increase in the commercial use and complexity of AI has also led to a high volume of litigation, as well as subsequent contradictory state and federal court rulings.[73] But despite the expectation that section 230 would be applied to future technology, the exceedingly complex nature of today’s AI has surpassed the clear bounds of section 230.
B. Uncertainty and Calls for Change
Increasing litigation and uncertainty have led to growing calls for regulation—calls that have not gone unnoticed by lawmakers and courts.[74] One of these lawmakers, Senator Dick Durbin, Chairman of the Senate Judiciary Committee, compared the rise of AI to that of the social media industry.[75] “When it came to online platforms, the inclination of the government was to get out of the way. I’m not sure I’m happy with the outcome as I look at online platforms and the harms they have created . . . I don’t want to make that mistake again,” he said.[76] Other senators have agreed, with Senator Lindsey Graham even calling for an entirely new agency to regulate the technology.[77]
Even with increasing calls for regulation, the majority of current AI-related laws and regulations have been implemented by individual states with little to no guidance from Congress or the Supreme Court.[78] And even with bipartisan support and a potential model statute from the European Union,[79] Congress has yet to pass any meaningful regulation.[80] This lack of guidance at the federal level has led companies and courts to rely on conflicting interpretations of section 230 in AI-related claims. This growing uncertainty has also made Supreme Court guidance necessary to achieve clarity and consistency in future litigation.
III. Gonzalez v. Google: A Ripple, Not a Wave
In response to these concerns and calls for action, the Supreme Court granted certiorari to hear Gonzalez v. Google. As Gonzalez moved through the courts, it became a focal point for many AI executives and other stakeholders seeking guidance on how section 230 applies to AI.[81]
The case involved claims brought against Google under the Anti-Terrorism Act (ATA)[82] by the father of Nohemi Gonzalez, a 23-year-old who was murdered while studying abroad in Paris, France.[83] Gonzalez was one of 130 people killed during a series of attacks—known as the “Paris Attacks”—carried out by ISIS on November 13, 2015.[84] The Gonzalez plaintiffs claimed that Google was liable for the victims’ deaths because it “aided and abetted international terrorism and provided material support to international terrorists by allowing ISIS to use YouTube.”[85] Specifically, they argued that because Google’s YouTube algorithms “match and suggest content to users based upon their viewing history,” YouTube actively recommended ISIS videos to users and, in effect, “facilitat[ed] social networking among jihadists.”[86] The plaintiffs further alleged that YouTube “has become an essential and integral part of ISIS’s program of terrorism,” serving as “a unique and powerful tool of communication that enables ISIS to achieve its goals.”[87]
The district court concluded that the plaintiffs’ claims were barred by section 230 and dismissed the case pursuant to Rule 12(b)(6).[88] On appeal, the Ninth Circuit consolidated Gonzalez with Taamneh v. Twitter and Clayborn v. Twitter, two cases with similar facts and claims.[89] Taamneh was brought by the survivors of a victim killed in the Reina nightclub attack in Istanbul, Turkey, on January 1, 2017, while Clayborn was brought by the survivors of a victim killed in a 2015 attack on an office Christmas party in San Bernardino, California.[90] As in Gonzalez, the attacks in Taamneh and Clayborn were later connected to ISIS.[91]
In each case, the plaintiffs sought damages from Google, Twitter, and Facebook under the ATA, which “allows United States nationals to recover damages for injuries suffered ‘by reason of an act of international terrorism.’”[92] The scope of the ATA was broadened in 2016 by the Justice Against Sponsors of Terrorism Act (JASTA), which “amended the ATA to include secondary civil liability for ‘any person who aids and abets, by knowingly providing substantial assistance, or who conspires with the person who committed’ an act of international terrorism.”[93] The claims theorized that the defendants were liable under the ATA because their “social media platforms allowed ISIS to post videos and other content to communicate the terrorist group’s message, to radicalize new recruits, and to generally further its mission,” effectively aiding and abetting international terrorism.[94]
The district court granted Google’s motion to dismiss in Gonzalez after concluding that all of the plaintiffs’ claims were barred by section 230 except for the revenue-sharing claims,[95] which were dismissed for failure to allege proximate cause.[96] The courts in Taamneh and Clayborn also granted the defendants’ motions to dismiss for failure to allege secondary liability under the ATA.[97] The Ninth Circuit affirmed the dismissals in Gonzalez and Clayborn, and reversed and remanded for further proceedings in Taamneh.[98] The Gonzalez plaintiffs filed a petition for a writ of certiorari on April 4, 2022, followed by the Taamneh defendants on May 26. The Supreme Court granted both petitions on October 3, 2022.[99]
Prior to Gonzalez, the Supreme Court had never addressed how section 230 applies to liability stemming from the use of AI by a social media company, or any company in general.[100] And while any case before the Supreme Court has the potential to have a significant impact, the rapid growth and increasing pervasiveness of AI in American society, combined with the lack of meaningful regulation, have created an urgent need for guidance in the industry. Because section 230 is one of the “most important laws in tech policy,” organizations across the political spectrum would be impacted by the Supreme Court’s interpretation of its scope.[101]
The significance of the Court’s decision in Gonzalez is underscored by the unusually high number of amicus briefs filed. Since 2010, Supreme Court cases have averaged about a dozen amicus briefs each.[102] In Gonzalez, seventy-eight organizations filed amicus curiae briefs in hopes of influencing the Court’s opinion.[103] While each organization had its own motives, one thing is clear: Many organizations had a stake in the outcome of Gonzalez, and the Court’s opinion left them with more questions than answers.[104]
A. Confusion at Oral Argument: A Decision in Twitter v. Taamneh
Many of the issues raised by amici were discussed during oral arguments.[105] The oral arguments—lasting nearly three hours in each case—were held in February 2023.[106] The Justices posed questions about everything from the use of AI to generate content[107] to hypotheticals about a bank’s potential liability for allowing Osama Bin Laden to open an account.[108] On multiple occasions, several of the Justices expressed confusion—not only about the arguments being made, but also about the questions before the Court.[109] But after countless hypotheticals and endless back-and-forth with counsel, the Justices were apparently left with more questions than answers.
The Court’s opinion highlighted its confusion over the issues, the available options, and the potential consequences of various interpretations of section 230. After hundreds of pages of amicus briefs and oral arguments that went over the time limit by an hour and thirty-four minutes,[110] the Court’s three-page per curiam opinion was released on May 18, 2023.[111] Despite high hopes from stakeholders and members of the AI community, the Court declined to address the application of section 230, concluding that the plaintiffs’ complaint appeared to state “little, if any, plausible claim for relief.”[112] This conclusion led the Court to vacate the Ninth Circuit’s judgment and remand the case for consideration in light of the decision in Taamneh.[113]
The Court overturned the Ninth Circuit’s ruling in the more robust Taamneh opinion. Although Taamneh provided significantly more analysis than Gonzalez, the analysis focused on what it means to “aid and abet” and “what precisely must the defendant have ‘aided and abetted’” when determining liability under JASTA.[114] The Court looked to Halberstam v. Welch[115] to provide the legal framework for “civil aiding and abetting and conspiracy liability.”[116] After acknowledging that “the point of aiding and abetting is to impose liability on those who consciously and culpably participated in the tort at issue,” the Court noted that the nexus between the defendants and the terrorist attack was far removed.[117] Seemingly skeptical, the Court acknowledged the plaintiffs’ allegations that Twitter “failed to do ‘enough’ to remove ISIS-affiliated users and ISIS-related content—out of hundreds of millions of users worldwide and an immense ocean of content—from their platforms.”[118] However, because the plaintiffs ultimately failed to allege intentional aid or systematic assistance, the Court held the allegations were insufficient under the ATA.
B. Gonzalez, Taamneh, and Their Effects
While the Court offered a relatively substantive aiding and abetting analysis in Taamneh, the Court’s decisions in both Gonzalez and Taamneh ultimately fell short. Defended by some as an exercise in judicial minimalism, the Court’s decisions “simultaneously avoid[ed] the risk of erroneous judgment on a technical question with far-reaching consequences and [left] the politically contentious issue of § 230’s scope to the democratically accountable Congress.”[119] And although doing so may have been the safer short-term decision given the Court’s questionable understanding of the ins and outs of recommender systems and AI,[120] deferring the decision to Congress is hardly likely to yield meaningful regulations anytime soon.
Nonetheless, the Court’s decision not to rule on section 230 did not result from a lack of awareness of the need for guidance on the issue. Although Gonzalez was the first such petition the Court granted, it was not the first case asking the Court to define or clarify the scope of section 230.[121] The Court had denied certiorari in Doe v. Facebook, a case involving allegations that a sexual predator used Facebook to groom the plaintiff for sex trafficking.[122] Concurring in the denial of certiorari, Justice Thomas noted that “‘the United States Supreme Court—or better yet, Congress—may soon resolve the burgeoning debate about whether the federal courts have thus far correctly interpreted section 230.’ Assuming Congress does not step in to clarify § 230’s scope, we should do so in an appropriate case.”[123]
Gonzalez was the appropriate case. Yet, the Court’s questions and admitted confusion at oral argument[124] indicate that it ultimately took the advice outlined by Justice Thomas in Doe—that “before we close the door on such serious charges, ‘we should be certain that is what the law demands.’”[125] But even though the Justices may remain uncertain about what the law demands, the Court’s internal justifications for avoiding the substance of section 230 will have lasting consequences for social media conglomerates and other companies who have come to rely on recommender systems and other forms of AI.
IV. Critical Error: The Need to Affirm Section 230’s Broad Scope
As lower courts have consistently held in the past, immunity should only be withheld when an interactive service provider makes “substantial or material edits and additions” to content.[126] Here, the Court ultimately reached the correct outcome in Gonzalez by dismissing the plaintiffs’ claims, but its fatal flaw was failing to validate section 230’s broad immunity for future litigants.
An affirmance of the broad scope of section 230 was necessary for two reasons. First, providing current and future online service providers with a dependable, broad grant of immunity is in line with the plain language of the statute and Congress’s intent for section 230—“to protect Internet platforms’ ability to publish and present user-generated content in real time, and to encourage them to screen and remove illegal or offensive content.”[127] Second, policy considerations support a broad application of section 230 because, as the evolution of the internet has shown, strong liability protections encourage beneficial technological and economic development in the United States, particularly for small businesses.[128]
A. Gonzalez Ignores Congressional Intent and the Plain Language of Section 230
Two primary purposes of section 230 were “to protect Internet speech from content regulation by the government,” and to reverse a New York Supreme Court case that held “an online service provider’s decision to moderate the content of its message boards rendered it a ‘publisher’ of users’ defamatory comments on the boards.”[129] Both purposes were aimed at promoting the continued development of the internet, and while AI and the internet were once separate and distinct, they have become increasingly intertwined.[130]
Like the internet, AI has evolved and continues to evolve at extreme speed.[131] The drafters were aware of the rapidly changing nature of the internet, and section 230’s immunity for “publisher[s]” and “speaker[s]” was drafted without highly specific or limiting language to account for inevitable and unforeseeable technological changes.[132] The first web page was launched in 1991, just five years before section 230 was passed.[133] In the early 1990s, people were only just beginning to hear about the new information superhighway that would one day change their lives.[134] Today, contemporary AI—including recommender systems and ML algorithms—occupies much the same position the internet did when section 230 was drafted in the early 1990s.[135]
As highlighted by Senator Ron Wyden and former Representative Christopher Cox, “many of the major Internet platforms engaged in content curation [were] a precursor to the targeted recommendations that today are employed by YouTube and other contemporary platforms.”[136] Senator Wyden and former Representative Cox agree that the recommender systems at issue in Gonzalez—which are representative of typical AI systems used by online service providers—are the “direct descendants” of early content curation efforts.[137] And just as Wyden, Cox, and other regulators of the 1990s sought to promote the development of the internet, regulators are now seeking to promote AI.[138] Because the internet and AI are intrinsically linked, companies’ use of AI should fall within the scope of section 230.
Beyond the original intent and plain language of section 230, the statute has also been applied as a broad shield to protect online service providers from liability since its inception.[139] As noted by Justice Thomas in Malwarebytes, Inc. v. Enigma Software Group, USA, LLC, “the first appellate court to consider the statute held that . . . § 230 confers immunity even when a company distributes content that it knows is illegal.”[140] This broad interpretation set the stage for future section 230 jurisprudence, and subsequent decisions “adopted this holding as a categorical rule across all contexts.”[141]
Courts have also upheld the principle that section 230 should be interpreted broadly, even in the context of AI.[142] Although Gonzalez was the first time the issue reached the Supreme Court, it is not the first time a court considered whether AI use could fall within the scope of the statute.[143]
In Force v. Facebook, Inc., the Second Circuit interpreted section 230 to protect AI use.[144] There, the court noted that because the algorithms at issue were “content ‘neutral,’ . . . merely arranging and displaying others’ content . . . [was] not enough to hold Facebook responsible.”[145] However, the court went further, providing additional clarification on section 230’s scope:
We do not mean that Section 230 requires algorithms to treat all types of content the same. To the contrary, Section 230 would plainly allow Facebook’s algorithms to, for example, de-promote or block content it deemed objectionable. We emphasize only—assuming that such conduct could constitute “development” of third-party content—that plaintiffs do not plausibly allege that Facebook augments terrorist-supporting content primarily on the basis of its subject matter.[146]
By recognizing the plain language and overall intent behind the statute—to allow online service providers to monitor what is on their sites, while recognizing that no provider could prevent all illegal or undesirable content—the court in Force reached the conclusion the Supreme Court should have affirmed in Gonzalez.
The plain language of section 230, express legislative intent behind its drafting, and the subsequent interpretation of the statute all support the prevailing view that section 230 should be interpreted broadly. When considering these aspects of section 230, as well as others discussed below, the decision is clear: The Supreme Court should have used Gonzalez as an opportunity to affirm the broad scope of section 230 and extend liability protection to online service providers that incorporate AI recommender systems into their platforms.
B. Congress or the Courts? Promoting Beneficial AI Development in the United States
Interpreting section 230’s liability protections to include AI was necessary to foster innovation and strengthen AI development in the United States. As noted by section 230’s drafters, “[b]y providing legal certainty for platforms, the law has enabled the development of innumerable internet business models based on user-created content.”[147] Like the internet, AI has the potential to dramatically affect our lives,[148] and while AI has become increasingly integrated into large-scale business models, small and midsize businesses have begun to fall behind.[149] This is partly because larger businesses typically have the resources and capital to implement AI and are better able to offset the costs and litigation risks associated with testing and developing cutting-edge technology.
Despite litigation risks and other obstacles, AI use more than doubled between 2017 and 2022.[150] However, the proportion of global businesses that use AI has plateaued between 50 and 60 percent,[151] and a May 2023 report found that only 25 percent of small businesses have begun testing or using AI in their operations.[152] The benefits of AI—cost savings through improved processes, accelerated time from production to market for new products, and access to talent that would otherwise be too expensive—have the potential to generate an even greater impact for small businesses than for larger companies.[153]
Despite its many benefits, AI is still largely underutilized by small businesses.[154] Fortunately, even small percentage increases in AI adoption have the potential to make a major impact, as businesses of 500 employees or fewer make up 99.9 percent of all U.S. businesses.[155] Promoting small business growth is a high priority among government regulators,[156] and lawmakers should be doing everything in their power to help. Accordingly, because the legal certainty provided by section 230 “enabled the development of innumerable internet business models,”[157] interpreting section 230 to include AI would provide crucial opportunities and support for small businesses, just as it did for early internet sites.
Finally, although the Gonzalez courts focused solely on whether recommender systems fall within the scope of section 230, that focus does not limit the decision’s applicability to other types of AI. Increasingly popular generative AI products, such as ChatGPT and other chatbots, “can and do rely on and relay information that is provided by another.”[158] Thus, it is likely that a broad interpretation in Gonzalez would have extended to other forms of AI, like generative AI.
In sum, a broad application of section 230 is supported by the plain text of the statute, the legislative intent of the drafters, subsequent interpretation by lower courts, and prevailing policy considerations. Gonzalez presented a prime opportunity to settle these questions by affirming section 230’s broad scope, making the Court’s decision not to reach the issue all the more misguided.
V. Guidance from Abroad and the Potential for Regulation by Default
By default, the Gonzalez decision left lower courts and AI-reliant companies in the same position as before the Court granted certiorari. But questions about the scope of section 230 and companies’ liability for their use of AI are not going away; as AI advances and becomes more prevalent in society, these questions will arise with greater frequency. Although the Supreme Court may argue that the decision is better left for Congress, continued inaction risks allowing foreign regulations to dictate the outcome instead.
For example, a decision may come in the form of AI or speech regulations from the European Union (EU). In 2018, the EU passed the General Data Protection Regulation (GDPR), the self-proclaimed “strongest privacy and security law in the world.”[159] Even though the GDPR is targeted only towards protecting EU residents, many companies “made global changes to their services to comply with European regulations.”[160] The EU followed the GDPR with the Digital Services Act (DSA), which came into effect on November 16, 2022.[161] The DSA requires big tech companies, like Google and Facebook, “to police their platforms more strictly to better protect European users from hate speech, disinformation, and other harmful online content.”[162] Both the GDPR and DSA threaten large fines for noncompliant companies,[163] and while the laws only require compliance inside the EU, it is often more practical to make global changes rather than region-specific adjustments.
On December 9, 2023, the European Parliament reached a provisional agreement with the European Council for “a bill to ensure AI in Europe is safe, respects fundamental rights and democracy, [and allows] businesses [to] thrive and expand.”[164] Known as the AI Act, the bill would be the world’s first comprehensive AI law, creating “obligations for providers and users depending on the level of risk” from artificial intelligence.[165] Although still in its early stages, the AI Act would, among other things, ban categorization systems that use sensitive characteristics, such as political, religious, or philosophical beliefs, as well as sexual orientation and race.[166] If passed, the effects of the Act would likely be similar to the GDPR and DSA: The risk of non-compliance and practical difficulties of making region-specific changes would lead companies to tailor their algorithms in areas outside the EU to ensure compliance. So, by failing to outline the protections for AI stemming from section 230, the Supreme Court missed an opportunity to set the rule for what was protected in the United States, opening the door for EU regulations to set the standard.
VI. No Perfect Solution
Although a broad interpretation of section 230 is the best solution, it is not a perfect one. The online world is a dangerous place, and bad actors will inevitably exploit or work around online algorithms to commit crimes and other harms. Beyond concerns that algorithms help promote terrorism, interest groups have warned that several other problems—including human trafficking, child exploitation, and the spread of misinformation—will become worse if section 230 is interpreted broadly.[167] While mitigating these harms is difficult, a highly specific and restrictive interpretation would cause more harm than good, and the novel, dynamic nature of AI makes comprehensive regulation currently impractical. As such, broad regulation is the only reasonable step at this stage.
As highlighted by the National Center on Sexual Exploitation (NCOSE), the internet is the primary location for the sexual exploitation of children, and section 230 “was never intended to provide legal protection to websites that . . . facilitate traffickers in advertising the sale of unlawful sex acts.”[168] Both points are uncontroverted and address abhorrent societal problems that require continued commitment and action by regulators to eradicate. But preventing exploitation and human trafficking online is a complex challenge. And while narrowing the scope of section 230 might provide limited assistance in addressing these pinpoint issues, altering the interpretation of a broad statute based on the concerns of a small subset of stakeholders would do more harm than good. As noted in an amicus brief filed by Reddit, Inc., “[j]udicial interpretation should not move at Internet speeds, and there is no telling what a sweeping order removing targeted recommendations from the protection of Section 230 would do to the Internet as we know it.”[169]
Section 230 has been interpreted broadly since its enactment.[170] Although the significant immunity from liability given to online service providers has resulted in negative consequences, the broader implications of a drastic change would be difficult for the Court to predict. Thus, a narrow interpretation of section 230’s scope would have been misguided.
In the realm of free speech, less regulation has traditionally been associated with more freedom.[171] But some argue that AI has the potential to disrupt that balance. In its July 2023 report, PEN America argued that “generative A.I. threatens free expression by ‘supercharging’ the dissemination of disinformation and online abuse,” resulting in “the potential for people to lose trust in language itself, and thus in one another.”[172] While the dissemination of misinformation online is of increasing concern, online service providers are already taking steps to mitigate misinformation risks on their platforms.[173] And while there is always more that can be done, the “massive volume of content and the nuanced nature of misinformation”[174] make creating effective regulations difficult, if not impossible. Interpreting section 230 narrowly in hopes of addressing these concerns would still fail to effectively confront these issues, while chilling freedom of the press by discouraging journalists from reporting on issues that might lead to legal trouble.[175]
Despite the pitfalls of interpreting section 230 broadly, the novel and increasingly complex nature of AI has resulted in a lack of currently feasible alternatives. AI is particularly difficult to regulate because it is used to perform a wide variety of tasks, exists in many different forms with distinct characteristics, often involves the use of multiple algorithms working together, and consistently evolves through updates and new data.[176]
These characteristics are part of what makes AI so useful. It is dynamic, easily adaptable, and able to advance on its own. Unfortunately, Congress does not share these characteristics, and targeted regulations are unlikely in the near future. As a result, it is important to make do with what we have—section 230. Drafted nearly thirty years ago, section 230 has served as an effective regulator of internet speech since its creation, and even though applying its language to AI is by no means a perfect solution, it is currently the best available option.
Conclusion
AI is new, complex, and changing daily—as a result, lawmakers have struggled to develop and pass regulations that can keep up with its rapid development. Referring to the European AI Act,[177] Tom Siebel, founder and CEO of C3.ai, an emerging AI company, said that “[i]f you can understand one sentence of it, you will understand one more sentence than I, and I think you will understand one more sentence than the people who wrote it.”[178] Regulating AI presents a significant challenge, but so does any emerging technology. Industry leaders have yet to find the perfect solution, and a perfect web of AI laws will not emerge overnight.
Still, it is important to maximize the effectiveness of the regulations already in existence by tailoring our interpretation of existing law to include AI. In Gonzalez, the Supreme Court had the opportunity to do just that, by affirming the way many lower courts have interpreted section 230 in the past. By failing to affirm lower courts’ previous interpretations, the Supreme Court effectively affirmed the status quo—that section 230 might be applied to protect online service providers from liability—while also spreading uncertainty about companies’ future exposure to liability for the use of AI.
- 47 U.S.C. § 230; Gonzalez v. Google LLC, 2 F.4th 871, 942 (9th Cir. 2021). ↑
- Interactive computer services are “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server.” See 47 U.S.C. § 230(f)(2); see also Jeff Kosseff, The Twenty-Six Words That Created the Internet 1 (2019). ↑
- Kosseff, supra note 2, at 3. ↑
- Brief of Senator Ron Wyden and Former Representative Christopher Cox as Amici Curiae in Support of Respondent, Gonzalez v. Google LLC, 143 S. Ct. 1191 (2023) (No. 21-1333); see, e.g., Gonzalez, 2 F.4th 871; Dyroff v. Ultimate Software Grp., 934 F.3d 1093 (9th Cir. 2019); Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019). ↑
- Recommender systems generate “personalized suggestions for items that are most relevant to each user.” See Francesco Casalegno, Recommender Systems – A Complete Guide to Machine Learning Models, Medium (Nov. 25, 2022), https://towardsdatascience.com/recommender-systems-a-complete-guide-to-machine-learning-models-96d3f94ea748. ↑
- 143 S. Ct. 1191 (2023) (per curiam); see also Ron Wyden & Christopher Cox, The Authors of Section 230: ‘The Supreme Court Has Provided Much-Needed Certainty About the Landmark Internet Law–but AI Is Uncharted Territory,’ Fortune (Sept. 7, 2023), https://fortune.com/2023/09/07/authors-of-section-230-supreme-court-certainty-landmark-internet-law-ai-uncharted-territory-politics-tech-wyden-cox/; Gonzalez, 2 F.4th at 942. ↑
- Gonzalez, 143 S. Ct. 1191. ↑
- Id. at 1191–92. ↑
- Leading Case, Twitter, Inc. v. Taamneh, 137 Harv. L. Rev. 400, 400 (2023) (quoting Marbury v. Madison, 5 U.S. (1 Cranch) 137, 177 (1803)). ↑
- See Riccardo Righi et al., Eur. Comm’n, JRC 125613, EU in the Global Artificial Intelligence Landscape (2021). ↑
- John Kell, AI Is About to Face Many More Legal Risks. Here’s How Businesses Can Prepare, Fortune (Nov. 8, 2023), https://fortune.com/2023/11/08/ai-playbook-legality/. ↑
- Shari Davidson, The Growth of AI Law: Exploring Legal Challenges in Artificial Intelligence, Nat’l L. Rev. (Jan. 28, 2025), https://natlawreview.com/article/growth-ai-law-exploring-legal-challenges-artificial-intelligence. ↑
- Kell, supra note 11. ↑
- Michael Haenlein & Andreas Kaplan, A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence, Cal. Mgmt. Rev., Aug. 2019, at 5, 6–7. ↑
- Id. ↑
- Tanya Roy, The History and Evolution of Artificial Intelligence, AI’s Present and Future, All Tech Mag. (July 19, 2023), https://alltechmagazine.com/the-evolution-of-ai/. ↑
- Kell, supra note 11. ↑
- Id. ↑
- See Doe v. Facebook, Inc., 142 S. Ct. 1087, 1088 (2022) (Thomas, J., concurring in denial of certiorari). ↑
- Susannah Fox & Lee Rainie, Pew Rsch. Ctr., The Web at 25 in the U.S. 9 (2014) (finding that only 14% of U.S. adults had internet access in 1995). ↑
- See Brian Kennedy et al., Pew Rsch. Ctr., Public Awareness of Artificial Intelligence in Everyday Activities (2023). ↑
- See Max Roser, The Brief History of Artificial Intelligence: The World Has Changed Fast – What Might Be Next?, Our World in Data (Dec. 6, 2022), https://ourworldindata.org/brief-history-of-ai. ↑
- AI is now used in everything from determining airline ticket prices to deciding who is released from jail. See id. ↑
- See Lyria B. Moses, Recurring Dilemmas: The Law’s Race to Keep up with Technological Change 4 (Univ. of New S. Wales Working Paper No. 2007-21, 2007), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=979861. ↑
- What is AI?, McKinsey & Co. (Apr. 3, 2024), https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-ai; see Understanding the Different Types of Artificial Intelligence, IBM Data & AI Team (Oct. 12, 2023), https://www.ibm.com/think/topics/artificial-intelligence-types. ↑
- IBM Data & AI Team, supra note 25; see also Naveen Joshi, 7 Types of Artificial Intelligence, Forbes (June 19, 2019), https://www.forbes.com/sites/cognitiveworld/2019/06/19/7-types-of-artificial-intelligence/. ↑
- IBM Data & AI Team, supra note 25. General AI and Super AI are both strictly theoretical concepts; even OpenAI’s ChatGPT is considered a form of Narrow AI because it’s limited to the single task of text-based chat. Id. ↑
- Narrow AI, DeepAI, https://deepai.org/machine-learning-glossary-and-terms/narrow-ai (last visited May 24, 2025). ↑
- Ben Nancholas, What Are the Different Types of Artificial Intelligence?, Univ. Wolverhampton (June 7, 2023), https://online.wlv.ac.uk/what-are-the-different-types-of-artificial-intelligence/. General AI, also known as Artificial General Intelligence (AGI), uses “previous learnings and skills to accomplish new tasks in a different context without the need for [humans] to train the underlying models.” IBM Data & AI Team, supra note 25. Super AI, if ever successfully developed, “would think, reason, learn, make judgments and possess cognitive abilities that surpass those of human beings.” Id. ↑
- IBM Data & AI Team, supra note 25. ↑
- Id. The four types of AI based on functionalities all fit into the broader category of Artificial Narrow AI. Id.; see also Joshi, supra note 26. ↑
- IBM Data & AI Team, supra note 25; see also How Netflix’s Recommendations System Works, Netflix: Help Ctr., https://help.netflix.com/en/node/100639 (last visited May 24, 2025). ↑
- IBM Data & AI Team, supra note 25. ↑
- Id. ↑
- Id. ↑
- Id. Theory of Mind AI is currently being developed, and Self-Aware AI is strictly theoretical. Id. ↑
- See Artificial Intelligence (AI) vs. Machine Learning, Columbia Eng’g, https://ai.engineering.columbia.edu/ai-vs-machine-learning/ (last visited May 24, 2025). ↑
- See Artificial Intelligence (AI) vs. Machine Learning (ML), Microsoft Azure, https://azure.microsoft.com/en-us/resources/cloud-computing-dictionary/artificial-intelligence-vs-machine-learning (last visited May 24, 2025). ↑
- Id. ↑
- What’s the Difference Between Business Intelligence and Machine Learning?, AWS, https://aws.amazon.com/compare/the-difference-between-business-intelligence-and-machine-learning/ (last visited May 24, 2025). ↑
- Kristin Burnham, Artificial Intelligence vs. Machine Learning: What’s the Difference?, Ne. Univ. Graduate Programs (May 6, 2020), https://graduate.northeastern.edu/resources/artificial-intelligence-vs-machine-learning-whats-the-difference/. ↑
- Id. ↑
- The Evolution and Techniques of Machine Learning, DataRobot (Jan. 7, 2025), https://www.datarobot.com/blog/how-machine-learning-works/. ↑
- Id. ↑
- Julianna Delua, Supervised Versus Unsupervised Learning: What’s the Difference?, IBM (Mar. 12, 2021), https://www.ibm.com/blog/supervised-vs-unsupervised-learning/. ↑
- Id. ↑
- Id. ↑
- See Gaudenz Boesch, Supervised vs Unsupervised Learning for Computer Vision, viso.ai (Dec. 21, 2023), https://viso.ai/deep-learning/supervised-vs-unsupervised-learning/; Alyshai Nadeem, Machine Learning 101: Supervised, Unsupervised, Reinforcement Learning Explained, datasciencedojo (Sept. 15, 2022), https://datasciencedojo.com/blog/machine-learning-101/. ↑
- Gonzalez v. Google, LLC, 2 F.4th 871, 881 (9th Cir. 2021). Recommender systems fall into the category of Artificial Narrow and are a type of reactive machine AI. See IBM Data & AI Team, supra note 25; Casalegno, supra note 5. ↑
- See Rem Darbinyan, How AI Transforms Social Media, Forbes (Mar. 16, 2023), https://www.forbes.com/sites/forbestechcouncil/2023/03/16/how-ai-transforms-social-media/. ↑
- Bernard Marr, The Amazing Ways YouTube Uses Artificial Intelligence and Machine Learning, Forbes (Aug. 23, 2019), https://www.forbes.com/sites/bernardmarr/2019/08/23/the-amazing-ways-youtube-uses-artificial-intelligence-and-machine-learning/. ↑
- Id.; see Nadeem, supra note 48; see also Tamara Biljman, AI in Social Media: Benefits, Tools, and Challenges, Sendible (June 4, 2024), https://www.sendible.com/insights/ai-in-social-media. ↑
- Sara Brown, Machine Learning, Explained, MIT Mgmt. Sloan Sch.: Ideas Made to Matter (Apr. 21, 2021), https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained; see Katherine Haan & Robb Watts, How Businesses Are Using Artificial Intelligence, Forbes Advisor (Apr. 24, 2023), https://www.forbes.com/advisor/business/software/ai-in-business/. ↑
- Brown, supra note 53. ↑
- Id. ↑
- Id.; Anthony Cardillo, How Many Companies Use AI? (New Data), Exploding Topics, https://explodingtopics.com/blog/companies-using-ai (May 1, 2025); IBM, IBM Global AI Adoption Index 2022 (May 2022), https://www.ibm.com/downloads/cas/GVAGA3JP; The State of AI in 2023: Generative AI’s Breakout Year, McKinsey & Co. (Aug. 1, 2023), https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year#steady. ↑
- Ryan Tracy, ChatGPT’s Sam Altman Warns Congress That AI ‘Can Go Quite Wrong,’ Wall St. J. (May 16, 2023), https://www.wsj.com/tech/ai/chatgpts-sam-altman-faces-senate-panel-examining-artificial-intelligence-4bb6942a. ↑
- See Wyden & Cox, supra note 6, at 2; Stratton Oakmont, Inc. v. Prodigy Serv. Co., No. 31063/94, 1995 WL 323710 (N.Y. Sup. Ct. May 24, 1995). ↑
- 47 U.S.C. § 230(a)(1). ↑
- See Kosseff, supra note 2, at 9–10. ↑
- Stratton Oakmont, 1995 WL 323710; Wyden & Cox, supra note 6, at 2; see also Kosseff, supra note 2, at 45–56. ↑
- Stratton Oakmont, 1995 WL 323710, at *1. ↑
- Id. at *4–5. ↑
- Wyden & Cox, supra note 6, at 2. ↑
- See Kosseff, supra note 2, at 2. ↑
- Id.; Gonzalez v. Google LLC, 2 F.4th 871, 886 (9th Cir. 2021). ↑
- 47 U.S.C. § 230(c)(1). ↑
- Gonzalez, 2 F.4th at 886–87 (quoting Doe v. Internet Brands, Inc., 824 F.3d 846, 850 (9th Cir. 2016)). ↑
- Fair Hous. Council v. Roommates.com, LLC, 521 F.3d 1157, 1162 (9th Cir. 2008) (quoting 47 U.S.C. § 230(f)(3)). ↑
- Id. at 1162–63. ↑
- Section 230, EFF, https://www.eff.org/issues/cda230 (last visited May 24, 2025). ↑
- Id. ↑
- Rebecca Kern, SCOTUS to Hear Challenge to Section 230 Protections, Politico (Oct. 3, 2022), https://www.politico.com/news/2022/10/03/scotus-section-230-google-twitter-youtube-00060007. Compare Prager Univ. v. Google LLC, 85 Cal. App. 5th 1022 (Cal. Ct. App. 2022), and Dyroff v. Ultimate Software Grp., Inc., 934 F.3d 1093 (9th Cir. 2019), with Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019). ↑
- Zach Schonfeld, Chief Justice Centers Supreme Court Annual Report on AI’s Dangers, Hill (Dec. 31, 2023), https://thehill.com/regulation/court-battles/4383324-chief-justice-centers-supreme-court-annual-report-on-ais-dangers/. ↑
- Tracy, supra note 57. ↑
- Id. ↑
- Id. ↑
- Lawrence Norden & Benjamin Lerude, States Take the Lead on Regulating Artificial Intelligence, Brennan Ctr. for Just. (Nov. 6, 2023), https://www.brennancenter.org/our-work/research-reports/states-take-lead-regulating-artificial-intelligence. ↑
- See EU AI Act: First Regulation on Artificial Intelligence, Eur. Parl.: Topics (Feb. 19, 2025), https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence. ↑
- Norden & Lerude, supra note 78. ↑
- Kern, supra note 73. ↑
- 18 U.S.C. § 2333. ↑
- Gonzalez v. Google LLC, 2 F.4th 871, 880 (9th Cir. 2021). Gonzalez’s initial complaint was later amended and joined by other family members and similarly situated plaintiffs. Id. at 882. ↑
- Id. at 880; Lori Hinnant, 2015 Paris Attacks Suspect: Deaths of 130 ‘Nothing Personal,’ AP News (Sept. 15, 2021), https://apnews.com/article/europe-france-trials-paris-brussels-f2031a79abfae46cbd10d4315cf29163. ↑
- Gonzalez, 2 F.4th at 882. ↑
- Id. at 881. ↑
- Id. ↑
- See Gonzalez v. Google, Inc., 282 F. Supp. 3d 1150, 1171 (N.D. Cal. 2017); Fed. R. Civ. P. 12(b)(6). ↑
- Gonzalez, 2 F.4th at 880. Taamneh and Clayborn involve claims against Google, Twitter, and Facebook. Id. ↑
- Gonzalez, 2 F.4th at 879, 883, 884; 1 Artificial Intelligence: Law and Litigation § 3.02, Lexis (database updated May 2024). ↑
- Gonzalez, 2 F.4th at 879. ↑
- Id. at 880 (quoting 18 U.S.C. § 2333(a)). ↑
- Id. at 885 (quoting Justice Against Sponsors of Terrorism Act (JASTA), Pub. L. No. 114-222, 130 Stat. 852 (2016)). ↑
- Id. at 880. ↑
- The Gonzalez plaintiffs’ revenue-sharing theory is distinct from their other theories of liability because the allegations were not based on the content ISIS placed on YouTube. Id. at 898. Instead, the allegations were “premised on Google providing ISIS with material support by giving ISIS money.” Id. The revenue-sharing allegations stemmed from Google’s AdSense program, which involved “Google shar[ing] a percentage of revenues generated from those advertisements with ISIS.” Id. ↑
- Id. at 882. ↑
- Id. at 880. The district court in Taamneh did not reach the issue of section 230 immunity. Id. ↑
- Id. The Taamneh plaintiffs only appealed the dismissal of their aiding and abetting claim. Id. at 908. The Ninth Circuit reversed the district court’s dismissal after concluding that the complaint’s allegations “that defendants provided services that were central to ISIS’s growth and expansion, and that this assistance was provided over many years,” adequately alleged the defendants’ assistance to ISIS was substantial. Id. at 910. ↑
- Gonzalez v. Google LLC, 143 S. Ct. 80 (2022) (mem.); Twitter, Inc. v. Taamneh, 143 S. Ct. 81 (2022) (mem.). ↑
- Gonzalez v. Google, Elec. Priv. Info. Ctr., https://epic.org/documents/gonzalez-v-google/ (last visited May 24, 2025); see also Gonzalez v. Google LLC, 143 S. Ct. 1191, 1191–92 (2023) (per curiam). ↑
- See Danielle Draper & Sean Long, Summarizing the Amicus Briefs Arguments in Gonzalez v. Google LLC, Bipartisan Pol’y Ctr. (Feb. 21, 2023), https://bipartisanpolicy.org/blog/arguments-gonzalez-v-google/. ↑
- Richard L. Pacelle, Jr., Amicus Curiae Briefs in the Supreme Court, Oxford Rsch. Encyclopedias (Apr. 20, 2022), https://doi.org/10.1093/acrefore/9780190228637.013.1992. ↑
- Draper & Long, supra note 101. ↑
- Id. ↑
- See generally Transcript of Oral Argument, Gonzalez v. Google, 143 S. Ct. 1191 (2023) (No. 21-1333) [hereinafter Gonzalez Oral Argument Transcript]; Transcript of Oral Argument, Twitter v. Taamneh, 143 S. Ct. 1206 (2023) (No. 21-1496) [hereinafter Taamneh Oral Argument Transcript]. ↑
- See Gonzalez Oral Argument Transcript, supra note 105, at 1, 164; Taamneh Oral Argument Transcript, supra note 105, at 1, 151. ↑
- Gonzalez Oral Argument Transcript, supra note 105, at 49. ↑
- Taamneh Oral Argument Transcript, supra note 105, at 72–73. ↑
- Gonzalez Oral Argument Transcript, supra note 105, at 34, 64, 72; Taamneh Oral Argument Transcript, supra note 105, at 12–13, 54, 126. ↑
- Kate Klonick, How 236,471 Words of Amici Briefing Gave Us the 565 Word Gonzalez Decision, Klonickles (May 29, 2023), https://klonick.substack.com/p/how-236471-words-of-amici-briefing. ↑
- Gonzalez v. Google LLC, 143 S. Ct. 1191 (2023) (per curiam). ↑
- Id. at 1192. ↑
- Id. ↑
- Twitter, Inc. v. Taamneh, 143 S. Ct. 1206, 1218 (2023). ↑
- 705 F.2d 472 (D.C. Cir. 1983). ↑
- Taamneh, 143 S. Ct. at 1218 (quoting Justice Against Sponsors of Terrorism Act (JASTA), Pub. L. No. 114-222, § 2(a)(5), 130 Stat. 852, 852 (2016)). ↑
- Id. at 1230. ↑
- Id. at 1230–31. ↑
- See Leading Case, supra note 9, at 404–06. “Judicial minimalism is the principle that judges should ‘say[] no more than necessary to justify an outcome.’” Id. at 405 (alteration in original) (quoting Cass R. Sunstein, The Supreme Court, 1995 Term — Foreword: Leaving Things Undecided, 110 Harv. L. Rev. 4, 6 (1996)). ↑
- See Gonzalez Oral Argument Transcript, supra note 105, at 34, 64, 72; Taamneh Oral Argument Transcript, supra note 105, at 12–13, 54, 126. ↑
- See Doe v. Facebook, Inc., 142 S. Ct. 1087, 1088–89 (2022) (Thomas, J., concurring in denial of certiorari). ↑
- See id. at 1087. ↑
- Id. at 1088 (quoting In re Facebook, 625 S.W.3d 80 (Tex. 2021)). ↑
- Gonzalez Oral Argument Transcript, supra note 105, at 34, 64, 72. ↑
- Doe, 142 S. Ct. at 1088 (Thomas, J., concurring in denial of certiorari) (quoting Malwarebytes, Inc. v. Enigma Software Grp. USA, LLC, 141 S. Ct. 13, 18 (2020)). ↑
- See Malwarebytes, 141 S. Ct. at 16. ↑
- Wyden & Cox, supra note 6, at 2. ↑
- See Kosseff, supra note 2, at 2. ↑
- Wyden & Cox, supra note 6, at 6. ↑
- See George Glover, It’s Time to See Whether AI Is the New Internet — or the Next Metaverse, Bus. Insider (July 11, 2023), https://www.businessinsider.com/ai-chatgpt-artificial-intelligence-internet-dot-com-metaverse-crypto-blockchain-2023-7; Einaras Von Gravrock, How AI Empowers the Evolution of the Internet, Forbes (Nov. 15, 2018), https://www.forbes.com/sites/forbeslacouncil/2018/11/15/how-ai-empowers-the-evolution-of-the-internet/. ↑
- See generally How Has the Internet Changed in the Last 20 Years, in.house.media, https://www.ihm.co.uk/blog/how-has-the-internet-changed-in-the-last-20-years/ (last visited May 24, 2025). ↑
- 47 U.S.C. § 230(c)(1); see Wyden & Cox, supra note 6, at 2 (“Congress drafted Section 230 in light of its understanding of the capabilities of then-extant online platforms and the evident trajectory of Internet development.”). ↑
- Josie Fischels, A Look Back at the Very First Website Ever Launched, 30 Years Later, NPR (Aug. 6, 2021), https://www.npr.org/2021/08/06/1025554426/a-look-back-at-the-very-first-website-ever-launched-30-years-later. ↑
- See Fox & Rainie, supra note 20. ↑
- See Danny Hajek et al., What Is AI and How Will It Change Our Lives? NPR Explains., NPR (May 25, 2023), https://www.npr.org/2023/05/25/1177700852/ai-future-dangers-benefits; How Artificial Intelligence Is Changing Your Life Unknowingly, Econ. Times (Mar. 15, 2023), https://economictimes.indiatimes.com/news/how-to/how-artificial-intelligence-is-changing-your-life-unknowingly/articleshow/98455922.cms?from=mdr; Mike Thomas, The Future of AI: How Artificial Intelligence Will Change the World, builtin, https://builtin.com/artificial-intelligence/artificial-intelligence-future (Jan. 28, 2025). ↑
- Wyden & Cox, supra note 6, at 8. ↑
- Id. at 12–13. ↑
- See, e.g., Exec. Order No. 14,110, 88 Fed. Reg. 75,191 (Oct. 30, 2023). ↑
- See Zeran v. Am. Online, Inc., 129 F.3d 327, 331–34 (4th Cir. 1997). ↑
- Malwarebytes, Inc. v. Enigma Software Grp. USA, LLC, 141 S. Ct. 13, 15 (2020) (Thomas, J., concurring in the denial of certiorari) (citing Zeran, 129 F.3d at 331–34). ↑
- Malwarebytes, 141 S. Ct. at 15 (Thomas, J., concurring in the denial of certiorari) (citations omitted). ↑
- See Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019). ↑
- See id. ↑
- Id. In Force, victims of terrorist attacks in Israel alleged that Facebook provided material support to Hamas terrorists by enabling Hamas “to disseminate its messages directly to its intended audiences and to carry out communication components of its terror attacks.” Id. at 59. ↑
- Id. at 70. ↑
- Id. at 70 n.24. ↑
- Christopher Cox, The Origins and Original Intent of Section 230 of the Communications Decency Act, Rich. J.L. & Tech. Blog (Aug. 27, 2020), https://jolt.richmond.edu/2020/08/27/the-origins-and-original-intent-of-section-230-of-the-communications-decency-act/. ↑
- See sources cited supra note 135. ↑
- See Poornima Apte, How AI Is Leveling the Marketing Playing Field Between SMBs and Big Business, U.S. Chamber of Comm.: CO (Aug. 7, 2023), https://www.uschamber.com/co/good-company/launch-pad/how-small-businesses-are-using-ai. ↑
- Michael Chui et al., The State of AI in 2022—and a Half Decade in Review, McKinsey & Co. (Dec. 6, 2022), https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2022-and-a-half-decade-in-review. ↑
- Id. ↑
- Report: Small Business Owners Embrace the Future – Majority Say They Will Adopt Generative AI, FreshBooks, https://www.freshbooks.com/press/data-research/data-research-majority-of-small-business-owners-will-use-ai (last visited May 24, 2025); see also Michelle Kumar, Navigating the Era of AI: Implications for Small Businesses, Bipartisan Pol’y Ctr. (Nov. 3, 2023), https://bipartisanpolicy.org/blog/navigating-the-era-of-ai-implications-for-small-businesses (highlighting a recent survey that found that 23% of small businesses use AI in some form). ↑
- See Apte, supra note 149. ↑
- See id. ↑
- Martin Rowinski, How Small Businesses Drive The American Economy, Forbes (Mar. 25, 2022), https://www.forbes.com/councils/forbesbusinesscouncil/2022/03/25/how-small-businesses-drive-the-american-economy/. ↑
- See, e.g., FACT SHEET: The Small Business Boom Under the Biden-Harris Administration, White House (Apr. 28, 2022), https://bidenwhitehouse.archives.gov/briefing-room/statements-releases/2022/04/28/fact-sheet-the-small-business-boom-under-the-biden-harris-administration/. ↑
- Cox, supra note 147. ↑
- Christopher MacColl, Defamatory Bots and Section 230: Navigating Liability in the Age of Artificial Intelligence, JD Supra (July 18, 2023), https://www.jdsupra.com/legalnews/defamatory-bots-and-section-230-3202468 (quoting 47 U.S.C. § 230(c)(1)). ↑
- The General Data Protection Regulation, Eur. Council (June 13, 2024), https://www.consilium.europa.eu/en/policies/data-protection-regulation/. ↑
- Jared Schroeder, Meet the EU Law That Could Reshape Online Speech in the U.S., Slate (Oct. 27, 2022), https://slate.com/technology/2022/10/digital-services-act-european-union-content-moderation.html. ↑
- See Questions and Answers on the Digital Services Act, Eur. Comm’n (Feb. 23, 2024), https://ec.europa.eu/commission/presscorner/detail/en/qanda_20_2348. ↑
- Kelvin Chan & Raf Casert, EU Law Targets Big Tech over Hate Speech, Disinformation, AP News (Apr. 23, 2022), https://apnews.com/article/technology-business-police-social-media-reform-52744e1d0f5b93a426f966138f2ccb52. ↑
- See Schroeder, supra note 160. ↑
- Artificial Intelligence Act: Deal on Comprehensive Rules for Trustworthy AI, Eur. Parl.: News (Dec. 9, 2023), https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai. ↑
- See EU AI Act: First Regulation on Artificial Intelligence, Eur. Parl.: News, https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence (Feb. 19, 2025); The Digital Services Act Package, Eur. Comm’n, https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package (Feb. 12, 2025). ↑
- Artificial Intelligence Act, supra note 164. ↑
- See, e.g., Brief of the National Center on Sexual Exploitation, the National Trafficking Sheltered Alliance, and RAINN, as Amici Curiae in Support of Petitioners, Gonzalez v. Google LLC, 143 S. Ct. 1191 (2023) (No. 21-1333) [hereinafter NCSE Brief]. See generally Sivile Manene et al., Mitigating Misinformation About the COVID-19 Infodemic on Social Media: A Conceptual Framework, NIH Nat’l Libr. Med., May 2023, at 1, 2 (“Social media platforms have taken steps to mitigate the spread of COVID-19 misinformation by implementing policies . . . which prohibit[] users from using the platform’s services to share false or misleading information about COVID-19.”). ↑
- NCSE Brief, supra note 167. ↑
- Brief for Reddit, Inc. and Reddit Moderators as Amici Curiae in Support of Respondent, Gonzalez, 143 S. Ct. 1191 (No. 21-1333). ↑
- See Zeran v. Am. Online, Inc., 129 F.3d 327, 331–34 (4th Cir. 1997). ↑
- See John Samples, Why the Government Should Not Regulate Content Moderation of Social Media, Cato Inst. (Apr. 9, 2019), https://www.cato.org/policy-analysis/why-government-should-not-regulate-content-moderation-social-media. ↑
- Sue Halpern, The Year A.I. Ate the Internet, New Yorker (Dec. 8, 2023), https://www.newyorker.com/culture/2023-in-review/the-year-ai-ate-the-internet. ↑
- See Manene et al., supra note 167, at 2 (“Social media platforms have taken steps to mitigate the spread of COVID-19 misinformation by implementing policies . . . which prohibit[] users from using the platform’s services to share false or misleading information about COVID-19.”). ↑
- See Nandita Krishnan et al., Research Note: Examining How Various Social Media Platforms Have Responded to COVID-19 Misinformation, Harv. Kennedy Sch. Misinformation Rev. (Dec. 15, 2021), https://misinforeview.hks.harvard.edu/article/research-note-examining-how-various-social-media-platforms-have-responded-to-covid-19-misinformation/. ↑
- See Gabrielle Lim & Samantha Bradshaw, Chilling Legislation: Tracking the Impact of “Fake News” Laws on Press Freedom Internationally, Ctr. for Int’l Media Assistance (July 19, 2023), https://www.cima.ned.org/publication/chilling-legislation/. ↑
- See Cary Coglianese, Regulating Machine Learning: The Challenge of Heterogeneity, Competition Pol’y Int’l, Feb. 2023, at 1, 3. ↑
- Artificial Intelligence Act, supra note 164. ↑
- Kell, supra note 8. ↑