15 Wake Forest L. Rev. Online 46

William Gilchrist

Enacted as part of the Telecommunications Act of 1996, section 230 of the Communications Decency Act was originally introduced to shield children from inappropriate content online.[1] Despite being passed for a relatively limited purpose, section 230’s broad liability protections for interactive computer services have since been credited with shaping the modern internet.[2] Today, it stands as one of the few federal statutes recognized for having “fundamentally changed American life.”[3]

As social media and internet use have evolved, courts have generally adapted section 230’s language to new technologies. But with the rise of artificial intelligence (AI) as a mainstream tool, section 230’s scope has become increasingly uncertain. Due in part to its brevity and resulting ambiguity, questions have emerged over whether its liability protections extend to online service providers’ use of AI,[4] particularly in recommender systems.[5] The Supreme Court first addressed section 230’s applicability to AI use in Gonzalez v. Google.[6] Although many hoped the case would bring clarity, the Court issued a three-page per curiam opinion declining to reach the issue because the complaint appeared to state little, if any, plausible claim for relief, leaving stakeholders back at square one.[7]

In Gonzalez, the Supreme Court considered for the first time whether section 230 shields online platforms from liability for using AI to recommend third-party content.[8] While the case was a critical first step in addressing AI-related liability, the Court’s ruling left concerned parties with more questions than answers. Critics argue the opinion fell short of fulfilling the judiciary’s responsibility to “say what the law is,” emphasizing the need for additional guidance on section 230’s scope.[9] Ultimately, the Court’s decision in Gonzalez not only reflects the judiciary’s lack of understanding of AI but also kicks the can down the road, leaving future courts unable to fairly and consistently interpret section 230’s scope. Accordingly, clearer legal standards are essential to help U.S. companies assess their liability exposure when deploying new products and to ensure they remain competitive in the global AI race.[10]

Today, hundreds of active AI-related lawsuits are making their way through the American legal system, typically involving intellectual property, amplification of dangerous content, and discrimination issues.[11] And while AI offers undeniable economic benefits, its widespread and varied application has made it difficult for lawmakers to understand and regulate.[12] As AI becomes increasingly embedded in daily life, AI-related litigation is only expected to increase.[13]

This Comment begins with an explanation of what AI is and how it is currently being used in American society. It then provides background on Gonzalez, analyzes the Court’s opinion and its implications, and argues that the Court should have directly addressed section 230’s applicability. Because a more effective resolution of Gonzalez would have defined section 230’s scope, this Comment critiques the Court’s decision and argues that affirming a broad interpretation of section 230 would have been the better outcome. Finally, this Comment examines the difficulties of applying a broad interpretation of section 230 and closes with a discussion of the challenges associated with current and future AI regulation.

I. Background

Prior to the 1950s, AI existed only in science fiction.[14] But after Alan Turing introduced the concept in his 1950 paper, Computing Machinery and Intelligence, AI began its gradual evolution into the tool it is today.[15] Beginning as “little more than a series of simple rules and patterns,” AI has advanced exponentially and is now “capable of performing tasks that were once thought impossible.”[16]

The private sector has embraced this expansion, with many companies taking advantage of the technology and incorporating it into various parts of their operations.[17] While doing so offers clear advantages, it has also raised new and increasingly frequent questions about potential liability exposure.[18] Until recently, U.S. courts have reliably turned to section 230 for guidance when evaluating liability arising from online AI use.[19] And while section 230’s text provided sufficient guidance in AI’s early stages, the technology’s growing complexity and evolving uses have rendered section 230’s applicability increasingly unclear.

Since section 230’s adoption in 1996, Americans’ internet access and use have dramatically increased.[20] As internet access has improved, so has Americans’ exposure to and awareness of AI.[21] The AI of the 1990s was virtually nonexistent compared to the AI of today, and new capabilities allow for the technology to be used in ways never before thought possible.[22] These advancements have seamlessly integrated AI into nearly every aspect of daily life, often in ways that go unnoticed.[23] Nevertheless, with new technology comes new legal issues, and AI is no exception.[24]

To understand Gonzalez and its global implications, it is first necessary to define what constitutes AI. At the highest level, AI is “a machine’s ability to perform the cognitive functions we associate with human minds, such as perceiving, reasoning, learning, interacting with an environment, problem solving, and exercising creativity.”[25] And while AI use continues to evolve, the following discussion outlines the broad categories of AI and how they are currently being used.

A. A Spectrum of Systems

There are seven general categories of AI: three based on capabilities and four based on functionalities.[26] The three kinds of AI based on capabilities are Artificial Narrow AI, General AI, and Super AI.[27] Artificial Narrow AI—the only type of AI in use today—refers to technology that is “designed to perform a specific task or a set of closely related tasks.”[28] The other two types of AI based on capabilities—General and Super AI—remain theoretical, as neither has been successfully developed.[29] These forms are expected to match or surpass human intelligence.[30]

The four types of AI based on functionalities are Reactive Machine, Limited Memory, Theory of Mind, and Self-Aware.[31] Reactive Machine systems include AI “with no memory [that is] designed to perform a very specific task,” such as Netflix’s movie and TV show recommendation system.[32] Limited Memory AI differs from Reactive Machine AI because it can recall past events and monitor objects and situations over time.[33] Limited Memory AI includes generative AI such as ChatGPT, virtual assistants such as Siri and Alexa, and self-driving vehicles.[34] Theory of Mind and Self-Aware AI are forms that are still in development or entirely theoretical.[35] Theory of Mind AI would allow machines to understand the thoughts and emotions of other entities, while Self-Aware AI would allow machines to understand their own internal conditions and traits.[36]

B. Teaching the Machine: How AI Learns

For each category of AI, there are several tools that software developers can use to create and enhance their systems.[37] One of these tools is machine learning (ML), a term that is often incorrectly used interchangeably with AI.[38] Though AI and ML are closely related, ML is a subset of AI[39] that involves “developing algorithms and statistical models that computer systems use to perform tasks without explicit instructions, relying on patterns and inference instead.”[40] While AI is “the ability of a machine to act and think like a human,” ML is a type of AI that involves humans “relying on data and feeding it to computers so they can simulate what they think we’re doing.”[41] ML’s broad strengths, including its ability to rapidly process large datasets, improve its algorithms over time, and spot patterns or identify anomalies, allow it to be used in a wide variety of contexts.[42]

Broadly put, ML works by “exploring data and identifying patterns.”[43] Most tasks involving data-defined patterns or rule sets can be automated with ML,[44] which can be used to explore data and identify patterns in two ways: supervised learning and unsupervised learning.[45] Supervised learning involves humans labeling inputs and outputs that train an algorithm to accurately classify data and predict outcomes.[46] In contrast, unsupervised learning models work independently to discover the structure of unlabeled data. For example, an unsupervised learning model could be used to identify products often purchased together online.[47] Supervised learning, which is more widely used than unsupervised due to its ease of use, is the type of ML behind the recommender systems at issue in Gonzalez.[48]
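To make the distinction concrete, the short Python sketch below contrasts the two approaches using the open-source scikit-learn library. The data, labels, and tasks are invented solely for illustration and are not drawn from any system at issue in Gonzalez.

```python
# A minimal, purely illustrative sketch of supervised vs. unsupervised learning,
# assuming scikit-learn is installed; the toy data below is hypothetical.
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Supervised learning: humans label each training example (0 = benign, 1 = spam),
# and the algorithm learns to predict labels for new, unseen inputs.
features = [[0.1, 3], [0.9, 40], [0.2, 5], [0.8, 35]]   # e.g., link ratio, message length
labels = [0, 1, 0, 1]
classifier = LogisticRegression().fit(features, labels)
print(classifier.predict([[0.85, 38]]))                  # predicted label for a new message

# Unsupervised learning: no labels are provided; the algorithm groups similar
# shopping baskets on its own, e.g., to surface products often bought together.
baskets = [[1, 0, 1], [1, 0, 1], [0, 1, 0], [0, 1, 1]]   # item co-purchase vectors
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(baskets)
print(clusters)                                           # cluster assignment per basket
```

In the supervised example, the human-provided labels do the teaching; in the unsupervised example, structure emerges from the data alone.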

C. Recommender Systems and Content Curation

Recommender systems, like those in Gonzalez, are “algorithms providing personalized suggestions for items that are most relevant to each user.”[49] Today, many social media platforms use AI and ML recommender systems in a variety of ways.[50] For example, YouTube uses AI and ML to automatically remove objectionable content, label imagery for video background editing, and recommend videos.[51] Beyond YouTube, recommender systems are commonly used by platforms like Spotify, Amazon, Netflix, TikTok, and Instagram to tailor content and product suggestions to their users.[52]
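For a sense of the underlying mechanics, the following Python sketch implements a rudimentary recommender over a small, hypothetical matrix of user watch histories. Production systems such as YouTube’s are far more sophisticated, but the core idea is the same: ranking existing third-party items by their similarity to a user’s prior activity.

```python
# A simplified, hypothetical recommender sketch: rank unwatched videos by how
# often they were watched by users with similar viewing histories.
import numpy as np

# Rows are users, columns are videos; 1 means the user watched that video.
watch_history = np.array([
    [1, 1, 0, 0],   # user 0 watched videos 0 and 1
    [1, 0, 1, 0],   # user 1 watched videos 0 and 2
    [0, 1, 0, 1],   # user 2 watched videos 1 and 3
], dtype=float)

def recommend(user: int, k: int = 2) -> list[int]:
    """Return the indices of the top-k unwatched videos to suggest to a user."""
    norms = np.linalg.norm(watch_history, axis=1)
    # Cosine similarity between this user's history and every other user's.
    sims = watch_history @ watch_history[user] / (norms * norms[user] + 1e-9)
    sims[user] = 0.0                                # ignore self-similarity
    scores = sims @ watch_history                   # weight each video by similar users
    scores[watch_history[user] > 0] = -np.inf      # exclude already-watched videos
    return np.argsort(scores)[::-1][:k].tolist()

print(recommend(user=0))  # videos user 0 has not yet watched, ranked by similar users
```

Notably, a system of this kind only arranges and ranks third-party content already on the platform; it creates no content of its own, which is precisely the feature on which the section 230 analysis turns.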

AI, ML, and recommender systems are also being adopted outside the social media context.[53] “From manufacturing to retail and banking to bakeries, even legacy companies are using machine learning to unlock new value or boost efficiency.”[54] As explained by Aleksander Madry, Director of the MIT Center for Deployable Machine Learning, “machine learning is changing, or will change, every industry.”[55]

Though statistics about the adoption of AI differ widely, the share of global companies that use AI likely falls between 35 and 55 percent, with some estimates as high as 67 percent.[56] Beyond its use by companies, individuals are increasingly incorporating AI into their daily lives.[57] But despite the increasing popularity of AI in American society, the only real framework federal courts have to interpret liability for AI use is section 230, an almost thirty-year-old federal statute that was initially passed to promote commercial internet use and shield children from harmful content online.[58]

II. The Legal Backbone of the Internet

In 1996, Congress passed section 230 in response to the “rapidly developing array of Internet and other interactive services.”[59] At the time, section 230 was necessary because of the First Amendment’s inability to adequately protect online platforms providing forums for third-party content.[60] A key catalyst for the legislation was the decision in Stratton Oakmont, Inc. v. Prodigy Services Co., a libel case from 1995.[61]

In Stratton Oakmont, the Supreme Court of New York, Nassau County, found that Prodigy Services, the owner-operator of a computer network that sponsored subscriber communication through online bulletin boards, was liable for third party statements posted on its site.[62] The court reasoned that Prodigy was liable as a “publisher” because it “monitor[ed] and edit[ed]” the individual bulletin board at issue, which gave Prodigy the benefit of editorial control.[63] In response, “to ensure that Internet platforms would not be penalized for attempting to engage in content moderation, Congress enacted Section 230.”[64]

A. Where Immunity Begins: Section 230(c)(1)

Known as “the twenty-six words that created the internet,”[65] the operative provision of the Communications Decency Act is section 230(c)(1),[66] which states that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[67]

Section 230(c)(1) generally “protects websites from liability for material posted on the website by someone else.”[68] But interactive service providers are only protected from liability if they are not also an information content provider, or “someone who is ‘responsible, in whole or in part, for the creation or development of’ the offending content.”[69] As explained by Chief Judge Kozinski in Fair Housing Council v. Roommates.com:

A website operator can be both a service provider and a content provider: If it passively displays content that is created entirely by third parties, then it is only a service provider with respect to that content. But as to content that it creates itself, or is “responsible, in whole or in part” for creating or developing, the website is also a content provider. Thus, a website may be immune from liability for some of the content it displays to the public but be subject to liability for other content.[70]

Thus, the key question in assessing recommender system liability is whether the system contains content for which the operator is “responsible in whole or in part for creating or developing,” or whether the system simply dictates how existing content is displayed.

Although section 230 does not expressly address the use of AI or recommender systems, it was drafted in response to the internet’s rapid growth and evolution.[71] To account for the inevitable emergence of more advanced technologies, section 230 was drafted in a technology-neutral manner that would allow the statute to be applied to emerging and future technology.[72] Unsurprisingly, the exponential increase in the commercial use and complexity of AI has also led to a high volume of litigation, as well as subsequent contradictory state and federal court rulings.[73] But despite the expectation that section 230 would be applied to future technology, the exceedingly complex nature of today’s AI has surpassed the clear bounds of section 230.

B. Uncertainty and Calls for Change

Increasing litigation and uncertainty have led to growing calls for regulation—calls that have not gone unnoticed by lawmakers and courts.[74] One of these lawmakers, Senator Dick Durbin, Chairman of the Senate Judiciary Committee, compared the rise of AI to that of the social media industry.[75] “When it came to online platforms, the inclination of the government was to get out of the way. I’m not sure I’m happy with the outcome as I look at online platforms and the harms they have created . . . I don’t want to make that mistake again,” he said.[76] Other senators have agreed, with Senator Lindsey Graham even calling for an entirely new agency to regulate the technology.[77]

Even with increasing calls for regulation, the majority of current AI-related laws and regulations have been implemented by individual states with little to no guidance from Congress or the Supreme Court.[78] And even with bipartisan support and a potential model statute from the European Union,[79] Congress has yet to pass any meaningful regulation.[80] This lack of guidance at the federal level has led companies and courts to rely on conflicting interpretations of section 230 in AI-related claims. This growing uncertainty has also made Supreme Court guidance necessary to achieve clarity and consistency in future litigation.

III. Gonzalez v. Google: A Ripple, Not a Wave

In response to these concerns and calls for action, the Supreme Court granted certiorari to hear Gonzalez v. Google. As Gonzalez moved through the courts, it became a focal point for many AI executives and other stakeholders seeking guidance on how section 230 applies to AI.[81]

The case involved claims brought against Google under the Anti-Terrorism Act (ATA)[82] by the father of Nohemi Gonzalez, a 23-year-old who was murdered while studying abroad in Paris, France.[83] Gonzalez was one of 130 people killed during a series of attacks—known as the “Paris Attacks”—carried out by ISIS on November 13, 2015.[84] The Gonzalez plaintiffs claimed that Google was liable for the victims’ deaths because it “aided and abetted international terrorism and provided material support to international terrorists by allowing ISIS to use YouTube.”[85] Specifically, they argued that because Google’s YouTube algorithms “match and suggest content to users based upon their viewing history,” YouTube actively recommended ISIS videos to users and, in effect, “facilitat[ed] social networking among jihadists.”[86] The plaintiffs further alleged that YouTube “has become an essential and integral part of ISIS’s program of terrorism,” serving as “a unique and powerful tool of communication that enables ISIS to achieve its goals.”[87]

The district court concluded that the plaintiffs’ claims were barred by section 230 and dismissed the case pursuant to Rule 12(b)(6).[88] On appeal, the Ninth Circuit consolidated Gonzalez with Taamneh v. Twitter and Clayborn v. Twitter, two cases with similar facts and claims.[89] Taamneh was brought by the survivors of a victim killed in the Reina nightclub attack in Istanbul, Turkey, on January 1, 2017, while Clayborn was brought by the survivors of a victim killed in a 2015 attack on an office Christmas party in San Bernardino, California.[90] As in Gonzalez, the attacks in Taamneh and Clayborn were later connected to ISIS.[91]

In each case, the plaintiffs sought damages from Google, Twitter, and Facebook under the ATA, which “allows United States nationals to recover damages for injuries suffered ‘by reason of an act of international terrorism.’”[92] The scope of the ATA was broadened in 2016 by the Justice Against Sponsors of Terrorism Act (JASTA), which “amended the ATA to include secondary civil liability for ‘any person who aids and abets, by knowingly providing substantial assistance, or who conspires with the person who committed’ an act of international terrorism.”[93] The claims theorized that the defendants were liable under the ATA because their “social media platforms allowed ISIS to post videos and other content to communicate the terrorist group’s message, to radicalize new recruits, and to generally further its mission,” effectively aiding and abetting international terrorism.[94]

The district court granted Google’s motion to dismiss in Gonzalez after concluding that all of the plaintiffs’ claims were barred by section 230 except for the revenue-sharing claims,[95] which were dismissed for failure to allege proximate cause.[96] The courts in Taamneh and Clayborn also granted the defendants’ motions to dismiss for failure to allege secondary liability under the ATA.[97] The Ninth Circuit affirmed the dismissals in Gonzalez and Clayborn, and reversed and remanded for further proceedings in Taamneh.[98] The Gonzalez plaintiffs filed a petition for a writ of certiorari on April 4, 2022, and Twitter filed its own petition in Taamneh on May 26. The Supreme Court granted both petitions on October 3, 2022.[99]

Prior to Gonzalez, the Supreme Court had never addressed how section 230 applies to liability stemming from the use of AI by a social media company, or any company in general.[100] And while any case before the Supreme Court has the potential to have a significant impact, the rapid growth and increasing pervasiveness of AI in American society, combined with the lack of meaningful regulation, have created an urgent need for guidance in the industry. Because section 230 is one of the “most important laws in tech policy,” organizations across the political spectrum would be impacted by the Supreme Court’s interpretation of its scope.[101]

The significance of the Court’s decision in Gonzalez resulted in, and is underscored by, the unusually high number of amicus briefs filed. Since 2010, Supreme Court cases have averaged about a dozen amicus briefs each.[102] In Gonzalez, seventy-eight organizations filed amicus curiae briefs in hopes of influencing the Court’s opinion.[103] While each organization had its own motives, one thing is clear: Many organizations had a stake in the outcome of Gonzalez, and the Court’s opinion left them with more questions than answers.[104]

A. Confusion at Oral Argument: A Decision in Twitter v. Taamneh

Many of the issues raised by amici were discussed during oral arguments.[105] The oral arguments—lasting nearly three hours in each case—were held in February 2023.[106] The Justices posed questions about everything from the use of AI to generate content[107] to hypotheticals about a bank’s potential liability for allowing Osama Bin Laden to open an account.[108] On multiple occasions, several of the Justices expressed confusion—not only about the arguments being made, but also about the questions before the Court.[109] But after countless hypotheticals and endless back-and-forth between counsel and the bench, the Justices were apparently left with more questions than answers.

The Court’s opinion highlighted its confusion over the issues, the available options, and the potential consequences of various interpretations of section 230. After hundreds of pages of amicus briefs and oral arguments that went over the time limit by an hour and thirty-four minutes,[110] the Court’s three-page per curiam opinion was released on May 18, 2023.[111] Despite high hopes from stakeholders and members of the AI community, the Court declined to address the application of section 230, concluding that the plaintiffs’ complaint appeared to state “little, if any, plausible claim for relief.”[112] This conclusion led the Court to vacate the Ninth Circuit’s judgment and remand the case for consideration in light of the decision in Taamneh.[113]

The Court overturned the Ninth Circuit’s ruling in the more robust Taamneh opinion. Although Taamneh provided significantly more analysis than Gonzalez, the analysis focused on what it means to “aid and abet” and “what precisely must the defendant have ‘aided and abetted’” when determining liability under JASTA.[114] The Court looked to Halberstam v. Welch[115] to provide the legal framework for “civil aiding and abetting and conspiracy liability.”[116] After acknowledging that “the point of aiding and abetting is to impose liability on those who consciously and culpably participated in the tort at issue,” the Court noted that the nexus between the defendants and the terrorist attack was far removed.[117] Seemingly skeptical, the Court acknowledged the plaintiffs’ allegations that Twitter “failed to do ‘enough’ to remove ISIS-affiliated users and ISIS-related content—out of hundreds of millions of users worldwide and an immense ocean of content—from their platforms.”[118] However, because the plaintiffs ultimately failed to allege intentional aid or systematic assistance, the Court held the allegations were insufficient under the ATA.

B. Gonzalez, Taamneh, and Their Effects

While the Court offered a relatively substantive aiding and abetting analysis in Taamneh, the Court’s decisions in both Gonzalez and Taamneh ultimately fell short. Defended as an exercise in judicial minimalism, the Court’s decisions “simultaneously avoid[ed] the risk of erroneous judgment on a technical question with far-reaching consequences and [left] the politically contentious issue of § 230’s scope to the democratically accountable Congress.”[119] And although doing so may have been the safer short-term decision given the Court’s questionable understanding of the ins and outs of recommender systems and AI,[120] deferring the decision to Congress is hardly likely to yield meaningful regulations anytime soon.

Nonetheless, the Court’s decision not to rule on section 230 did not stem from a lack of awareness that guidance was needed. While it was the first petition the Court granted, Gonzalez was not the first case asking the Court to define or clarify the scope of section 230.[121] The Court denied cert in Doe v. Facebook, a case involving allegations that a sexual predator used Facebook to groom the plaintiff for sex trafficking.[122] Concurring in the denial of certiorari, Justice Thomas noted that “‘the United States Supreme Court—or better yet, Congress—may soon resolve the burgeoning debate about whether the federal courts have thus far correctly interpreted section 230.’ Assuming Congress does not step in to clarify § 230’s scope, we should do so in an appropriate case.”[123]

Gonzalez was the appropriate case. Yet, the Court’s questions and admitted confusion at oral argument[124] indicate that it ultimately took the advice outlined by Justice Thomas in Doe—that “before we close the door on such serious charges, ‘we should be certain that is what the law demands.’”[125] But even though the Justices may remain uncertain about what the law demands, the Court’s internal justifications for avoiding the substance of section 230 will have lasting consequences for social media conglomerates and other companies that have come to rely on recommender systems and other forms of AI.

IV. Critical Error: The Need to Affirm Section 230’s Broad Scope

As lower courts have consistently held in the past, immunity should only be withheld when an interactive service provider makes “substantial or material edits and additions” to content.[126] Here, the Court ultimately reached the correct outcome in Gonzalez by dismissing the plaintiffs’ claims, but its fatal flaw was failing to validate section 230’s broad immunity for future litigants.

An affirmance of the broad scope of section 230 was necessary for two reasons. First, providing current and future online service providers with a dependable, broad grant of immunity is in line with the plain language of the statute and Congress’s intent for section 230—“to protect Internet platforms’ ability to publish and present user-generated content in real time, and to encourage them to screen and remove illegal or offensive content.”[127] Second, policy considerations support a broad application of section 230 because, as the evolution of the internet has shown, strong liability protections encourage beneficial technological and economic development in the United States, particularly for small businesses.[128]

A. Gonzalez Ignores Congressional Intent and the Plain Language of Section 230

Two primary purposes of section 230 were “to protect Internet speech from content regulation by the government,” and to reverse a New York Supreme Court case that held “an online service provider’s decision to moderate the content of its message boards rendered it a ‘publisher’ of users’ defamatory comments on the boards.”[129] Both purposes were aimed at promoting the continued development of the internet, and while AI and the internet were once separate and distinct, they have become increasingly intertwined.[130]

Like the internet, AI has evolved, and continues to evolve, at extreme speed.[131] The drafters were aware of the rapidly changing nature of the internet, and section 230’s immunity for “publisher[s]” and “speaker[s]” was drafted without highly specific or limiting language to account for inevitable and unforeseeable technological changes.[132] The first web page was launched in 1991, just five years before section 230 was passed.[133] In the early 1990s, people were only just beginning to hear about the new information superhighway that would one day change their lives.[134] Today, contemporary AI—including recommender systems and ML algorithms—is viewed much as the internet was when section 230 was first drafted in the early 1990s.[135]

As highlighted by Senator Ron Wyden and former Representative Christopher Cox, “many of the major Internet platforms engaged in content curation [were] a precursor to the targeted recommendations that today are employed by YouTube and other contemporary platforms.”[136] Senator Wyden and former Representative Cox agree that the recommender systems at issue in Gonzalez—which are representative of typical AI systems used by online service providers—are the “direct descendants” of early content curation efforts.[137] And just as Wyden, Cox, and other regulators of the 1990s were seeking to promote the development of the internet, regulators are now seeking to promote AI.[138] Because the internet and AI are intrinsically linked, companies’ use of AI should fall within the scope of section 230.

Beyond the original intent and plain language of section 230, the statute has also been applied as a broad shield to protect online service providers from liability since its inception.[139] As noted by Justice Thomas in Malwarebytes, Inc. v. Enigma Software Group, USA, LLC, “the first appellate court to consider the statute held that . . . § 230 confers immunity even when a company distributes content that it knows is illegal.”[140] This broad interpretation set the stage for future section 230 jurisprudence, and subsequent decisions “adopted this holding as a categorical rule across all contexts.”[141]

Courts have also upheld the principle that section 230 should be interpreted broadly, even in the context of AI.[142] Although Gonzalez was the first time the issue reached the Supreme Court, it is not the first time a court considered whether AI use could fall within the scope of the statute.[143]

In Force v. Facebook, Inc., the Second Circuit interpreted section 230 to protect AI use.[144] There, the court noted that because the algorithms at issue were “content ‘neutral,’ . . . merely arranging and displaying others’ content . . . [was] not enough to hold Facebook responsible.”[145] However, the court went further, providing additional clarification on section 230’s scope:

We do not mean that Section 230 requires algorithms to treat all types of content the same. To the contrary, Section 230 would plainly allow Facebook’s algorithms to, for example, de-promote or block content it deemed objectionable. We emphasize only—assuming that such conduct could constitute “development” of third-party content—that plaintiffs do not plausibly allege that Facebook augments terrorist-supporting content primarily on the basis of its subject matter.[146]

By honoring the plain language and overall intent behind the statute—allowing online service providers to monitor what is on their sites while recognizing that no provider could prevent all illegal or undesirable content—the court in Force reached the conclusion the Supreme Court should have affirmed in Gonzalez.

The plain language of section 230, express legislative intent behind its drafting, and the subsequent interpretation of the statute all support the prevailing view that section 230 should be interpreted broadly. When considering these aspects of section 230, as well as others discussed below, the decision is clear: The Supreme Court should have used Gonzalez as an opportunity to affirm the broad scope of section 230 and extend liability protection to online service providers that incorporate AI recommender systems into their platforms.

B. Congress or the Courts? Promoting Beneficial AI Development in the United States

Interpreting section 230’s liability protections to include AI was necessary to foster innovation and strengthen AI development in the United States. As noted by section 230’s drafters, “[b]y providing legal certainty for platforms, the law has enabled the development of innumerable internet business models based on user-created content.”[147] Like the internet, AI has the potential to have a dramatic impact on our lives,[148] and while AI has become increasingly integrated into large-scale business models, small and midsize businesses have begun to fall behind.[149] This is partly because larger businesses typically have the resources and capital to implement AI and are better able to offset the costs and litigation risks associated with testing and developing cutting-edge technology.

Despite litigation risks and other obstacles, AI use more than doubled between 2017 and 2022.[150] However, the proportion of global businesses that use AI has plateaued between 50 and 60 percent,[151] and a May 2023 report found that only 25 percent of small businesses have begun testing or using AI in their operations.[152] Compared with larger companies, small businesses stand to gain even more from AI, whose benefits include cost savings through improved processes, accelerated time from production to market for new products, and access to talent that would otherwise be too expensive.[153]

Despite its many benefits, AI is still largely underutilized by small businesses.[154] Fortunately, even small percentage increases in AI adoption could have a major impact, as businesses with 500 or fewer employees make up 99.9 percent of all U.S. businesses.[155] Promoting small business growth is a high priority among government regulators,[156] and lawmakers should be doing everything in their power to help wherever possible. Accordingly, because the legal certainty provided by section 230 “enabled the development of innumerable internet business models,”[157] interpreting section 230 to include AI would provide crucial opportunities and support for small businesses, just as it did for early internet sites.

Finally, the Gonzalez courts’ sole focus on whether recommender systems are within the scope of section 230 does not limit the applicability of the decision to other types of AI. Increasingly popular generative AI products, such as ChatGPT and other chatbots, “can and do rely on and relay information that is provided by another.”[158] Thus, it is likely that a broad interpretation in Gonzalez would extend to other forms of AI, like generative AI.

In sum, a broad application of section 230 is supported by the plain text of the statute, the legislative intent of the drafters, subsequent interpretation by lower courts, and prevailing policy considerations. Gonzalez presented an ideal opportunity to affirm section 230’s broad scope, and the Court’s decision not to reach the issue was therefore misguided.

V. Guidance from Abroad and the Potential for Regulation by Default

By default, the Gonzalez decision left lower courts and AI-reliant companies in the same position as before the Court granted certiorari. But questions about the scope of section 230 and companies’ liability for their use of AI are not going away; as AI advances and becomes more prevalent in society, these questions will arise with greater frequency. Although the Supreme Court may argue that the decision is better left for Congress, continued inaction risks allowing foreign regulations to dictate the outcome instead.

For example, a decision may come in the form of AI or speech regulations from the European Union (EU). In 2018, the EU passed the General Data Protection Regulation (GDPR), the self-proclaimed “strongest privacy and security law in the world.”[159] Even though the GDPR is only targeted towards protecting EU residents, many companies “made global changes to their services to comply with European regulations.”[160] Shortly after the GDPR was passed, the European Union passed the Digital Services Act (DSA), which came into effect on November 16, 2022.[161] The DSA requires big tech companies, like Google and Facebook, “to police their platforms more strictly to better protect European users from hate speech, disinformation, and other harmful online content.”[162] Both the GDPR and DSA threaten large fines for noncompliant companies,[163] and while the laws only require compliance inside the EU, it is often more practical to make global changes rather than region-specific adjustments.

On December 9, 2023, the European Parliament reached a provisional agreement with the European Council for “a bill to ensure AI in Europe is safe, respects fundamental rights and democracy, [and allows] businesses [to] thrive and expand.”[164] Known as the AI Act, the bill would be the world’s first comprehensive AI law, creating “obligations for providers and users depending on the level of risk” from artificial intelligence.[165] Although still in its early stages, the AI Act would, among other things, ban categorization systems that use sensitive characteristics, such as political, religious, or philosophical beliefs, as well as sexual orientation and race.[166] If passed, the effects of the Act would likely be similar to those of the GDPR and DSA: The risk of non-compliance and practical difficulties of making region-specific changes would lead companies to tailor their algorithms in areas outside the EU to ensure compliance. So, by failing to outline the protections for AI stemming from section 230, the Supreme Court missed an opportunity to set the rule for what was protected in the United States, opening the door for EU regulations to set the standard.

VI. No Perfect Solution

Although a broad interpretation of section 230 is the best solution, it is not a perfect one. The online world is a dangerous place, and bad actors will inevitably take advantage of or work around online algorithms to commit crimes and other bad acts. Beyond concerns that algorithms help promote terrorism, interest groups have warned that several other problems—including human trafficking, child exploitation, and the spread of misinformation—will become worse if section 230 is interpreted broadly.[167] While mitigating these harms is difficult, a highly specific and restrictive interpretation would cause more harm than good, and the novel, dynamic nature of AI makes comprehensive regulation currently impractical. As such, a broad interpretation is the only reasonable approach at this stage.

As highlighted by the National Center on Sexual Exploitation (NCOSE), the internet is the primary location for the sexual exploitation of children, and section 230 “was never intended to provide legal protection to websites that . . . facilitate traffickers in advertising the sale of unlawful sex acts.”[168] Both points are uncontroverted and address abhorrent societal problems that require continued commitment and action by regulators to eradicate. But preventing exploitation and human trafficking online is a complex challenge. And while narrowing the scope of section 230 might provide limited assistance in addressing these specific issues, altering the interpretation of a broad statute based on the concerns of a small subset of stakeholders would do more harm than good. As noted in an amicus brief filed by Reddit Inc., “[j]udicial interpretation should not move at Internet speeds, and there is no telling what a sweeping order removing targeted recommendations from the protection of Section 230 would do to the Internet as we know it.”[169]

Section 230 has been interpreted broadly since its enactment.[170] Although the significant immunity from liability given to online service providers has resulted in negative consequences, the broader implications of a drastic change would be difficult for the Court to predict. Thus, a narrow interpretation of section 230’s scope would have been misguided.

In the realm of free speech, less regulation has traditionally been associated with more freedom.[171] But some argue that AI has the potential to disrupt that balance. In its July 2023 report, PEN America argued that “generative A.I. threatens free expression by ‘supercharging’ the dissemination of disinformation and online abuse,” resulting in “the potential for people to lose trust in language itself, and thus in one another.”[172] While the dissemination of misinformation online is of increasing concern, online service providers are already taking steps to mitigate misinformation risks on their platforms.[173] And while there is always more that can be done, the “massive volume of content and the nuanced nature of misinformation”[174] make creating effective regulations difficult, if not impossible. Interpreting section 230 narrowly in hopes of addressing these concerns would still fail to effectively confront these issues, while chilling freedom of the press by discouraging journalists from reporting on issues that might lead to legal trouble.[175]

Despite the pitfalls of interpreting section 230 broadly, the novel and increasingly complex nature of AI has resulted in a lack of currently feasible alternatives. AI is particularly difficult to regulate because it is used to perform a wide variety of tasks, exists in many different forms with distinct characteristics, often involves the use of multiple algorithms working together, and consistently evolves through updates and new data.[176]

These characteristics are part of what makes AI so useful. It is dynamic, easily adaptable, and able to advance on its own. Unfortunately, Congress does not share these characteristics, and targeted regulations are unlikely in the near future. As a result, it is important to make do with what we have—section 230. Drafted nearly thirty years ago, section 230 has served as an effective regulator of internet speech since its creation, and even though applying its language to AI is by no means a perfect solution, it is currently the best available option.

Conclusion

AI is new, complex, and changing daily—as a result, lawmakers have struggled to develop and pass regulations that can keep up with AI’s rapid development. Referring to the European AI Act,[177] Tom Siebel, founder and CEO of C3.ai, an emerging AI company, said that “[i]f you can understand one sentence of it, you will understand one more sentence than I, and I think you will understand one more sentence than the people who wrote it.”[178] Regulating AI presents a significant challenge, but obstacles accompany every emerging technology. Industry leaders have yet to find a perfect solution, and a perfect web of AI laws will not emerge overnight.

Still, it is important to maximize the effectiveness of the regulations already in existence by tailoring our interpretation of existing law to include AI. In Gonzalez, the Supreme Court had the opportunity to do just that, by affirming the way many lower courts have interpreted section 230 in the past. By declining to endorse those prior interpretations, the Supreme Court effectively preserved the status quo—that section 230 might be applied to protect online service providers from liability—while sowing uncertainty about companies’ future exposure to liability for the use of AI.

  1.  47 U.S.C. § 230; Gonzalez v. Google LLC, 2 F.4th 871, 942 (9th Cir. 2021).
  2. Interactive computer services are “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server.” See 47 U.S.C. § 230(f)(2); see also Jeff Kosseff, The Twenty-Six Words That Created the Internet 1 (2019).
  3. Kosseff, supra note 2, at 3.
  4. Brief of Senator Ron Wyden and Former Representative Christopher Cox as Amici Curiae in Support of Respondent, Gonzalez v. Google LLC, 143 S. Ct. 1191 (2023) (No. 21-1333); see, e.g., Gonzalez, 2 F.4th 871; Dyroff v. Ultimate Software Grp., 934 F.3d 1093 (9th Cir. 2019); Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019).
  5. Recommender systems generate “personalized suggestions for items that are most relevant to each user.” See Francesco Casalegno, Recommender Systems – A Complete Guide to Machine Learning Models, Medium (Nov. 25, 2022), https://towardsdatascience.com/recommender-systems-a-complete-guide-to-machine-learning-models-96d3f94ea748.
  6. 143 S. Ct. 1191 (2023) (per curiam); see also Ron Wyden & Christopher Cox, The Authors of Section 230: ‘The Supreme Court Has Provided Much-Needed Certainty About the Landmark Internet Law–but AI Is Uncharted Territory,’ Fortune (Sept. 7, 2023), https://fortune.com/2023/09/07/authors-of-section-230-supreme-court-certainty-landmark-internet-law-ai-uncharted-territory-politics-tech-wyden-cox/; Gonzalez, 2 F.4th at 942.
  7. Gonzalez, 143 S. Ct. 1191.
  8. Id. at 1191–92.
  9. Leading Case, Twitter, Inc. v. Taamneh, 137 Harv. L. Rev. 400, 400 (2023) (quoting Marbury v. Madison, 5 U.S. (1 Cranch) 137, 177 (1803)).
  10. See Riccardo Righi et al., Eur. Comm’n, JRC 125613, EU in the Global Artificial Intelligence Landscape (2021).
  11. John Kell, AI Is About to Face Many More Legal Risks. Here’s How Businesses Can Prepare, Fortune (Nov. 8, 2023), https://fortune.com/2023/11/08/ai-playbook-legality/.
  12. Shari Davidson, The Growth of AI Law: Exploring Legal Challenges in Artificial Intelligence, Nat’l L. Rev. (Jan. 28, 2025), https://natlawreview.com/article/growth-ai-law-exploring-legal-challenges-artificial-intelligence.
  13. Kell, supra note 11.
  14. Michael Haenlein & Andreas Kaplan, A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence, Cal. Mgmt. Rev., Aug. 2019, at 5, 6–7.
  15. Id.
  16. Tanya Roy, The History and Evolution of Artificial Intelligence, AI’s Present and Future, All Tech Mag. (July 19, 2023), https://alltechmagazine.com/the-evolution-of-ai/.
  17. Kell, supra note 11.
  18. Id.
  19. See Doe v. Facebook, Inc., 142 S. Ct. 1087, 1088 (2022) (Thomas, J., concurring in denial of certiorari).
  20. Susannah Fox & Lee Rainie, Pew Rsch. Ctr., The Web at 25 in the U.S. 9 (2014) (finding that only 14% of U.S. adults had internet access in 1995).
  21. See Brian Kennedy et al., Pew Rsch. Ctr., Public Awareness of Artificial Intelligence in Everyday Activities (2023).
  22. See Max Roser, The Brief History of Artificial Intelligence: The World Has Changed Fast – What Might Be Next?, Our World in Data (Dec. 6, 2022), https://ourworldindata.org/brief-history-of-ai.
  23. AI is now used in everything from determining airline ticket prices to deciding who is released from jail. See id.
  24. See Lyria B. Moses, Recurring Dilemmas: The Law’s Race to Keep up with Technological Change 4 (Univ. of New S. Wales Working Paper No. 2007-21, 2007), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=979861.
  25. What is AI?, McKinsey & Co. (Apr. 3, 2024), https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-ai; see Understanding the Different Types of Artificial Intelligence, IBM Data & AI Team (Oct. 12, 2023), https://www.ibm.com/think/topics/artificial-intelligence-types.
  26. IBM Data & AI Team, supra note 25; see also Naveen Joshi, 7 Types of Artificial Intelligence, Forbes (June 19, 2019), https://www.forbes.com/sites/cognitiveworld/2019/06/19/7-types-of-artificial-intelligence/.
  27. IBM Data & AI Team, supra note 25. General AI and Super AI are both strictly theoretical concepts; even OpenAI’s ChatGPT is considered a form of Narrow AI because it’s limited to the single task of text-based chat. Id.
  28. Narrow AI, DeepAI, https://deepai.org/machine-learning-glossary-and-terms/narrow-ai (last visited May 24, 2025).
  29. Ben Nancholas, What Are the Different Types of Artificial Intelligence?, Univ. Wolverhampton (June 7, 2023), https://online.wlv.ac.uk/what-are-the-different-types-of-artificial-intelligence/. General AI, also known as Artificial General Intelligence (AGI), uses “previous learnings and skills to accomplish new tasks in a different context without the need for [humans] to train the underlying models.” IBM Data & AI Team, supra note 25. Super AI, if ever successfully developed, “would think, reason, learn, make judgments and possess cognitive abilities that surpass those of human beings.” Id.
  30. IBM Data & AI Team, supra note 25.
  31. Id. The four types of AI based on functionalities all fit into the broader category of Artificial Narrow AI. Id.; see also Joshi, supra note 26.
  32. IBM Data & AI Team, supra note 25; see also How Netflix’s Recommendations System Works, Netflix: Help Ctr., https://help.netflix.com/en/node/100639 (last visited May 24, 2025).
  33. IBM Data & AI Team, supra note 25.
  34. Id.
  35. Id.
  36. Id. Theory of Mind AI is currently being developed, and Self-Aware AI is strictly theoretical. Id.
  37. See Artificial Intelligence (AI) vs. Machine Learning, Columbia Eng’g, https://ai.engineering.columbia.edu/ai-vs-machine-learning/ (last visited May 24, 2025).
  38. See Artificial Intelligence (AI) vs. Machine Learning (ML), Microsoft Azure, https://azure.microsoft.com/en-us/resources/cloud-computing-dictionary/artificial-intelligence-vs-machine-learning (last visited May 24, 2025).
  39. Id.
  40. What’s the Difference Between Business Intelligence and Machine Learning?, AWS, https://aws.amazon.com/compare/the-difference-between-business-intelligence-and-machine-learning/ (last visited May 24, 2025).
  41. Kristin Burnham, Artificial Intelligence vs. Machine Learning: What’s the Difference?, Ne. Univ. Graduate Programs (May 6, 2020), https://graduate.northeastern.edu/resources/artificial-intelligence-vs-machine-learning-whats-the-difference/.
  42. Id.
  43. The Evolution and Techniques of Machine Learning, DataRobot (Jan. 7, 2025), https://www.datarobot.com/blog/how-machine-learning-works/.
  44. Id.
  45. Julianna Delua, Supervised Versus Unsupervised Learning: What’s the Difference?, IBM (Mar. 12, 2021), https://www.ibm.com/blog/supervised-vs-unsupervised-learning/.
  46. Id.
  47. Id.
  48. See Gaudenz Boesch, Supervised vs Unsupervised Learning for Computer Vision, viso.ai (Dec. 21, 2023), https://viso.ai/deep-learning/supervised-vs-unsupervised-learning/; Alyshai Nadeem, Machine Learning 101: Supervised, Unsupervised, Reinforcement Learning Explained, datasciencedojo (Sept. 15, 2022), https://datasciencedojo.com/blog/machine-learning-101/.
  49. Gonzalez v. Google, LLC, 2 F.4th 871, 881 (9th Cir. 2021). Recommender systems fall into the category of Artificial Narrow and are a type of reactive machine AI. See IBM Data & AI Team, supra note 25; Casalegno, supra note 5.
  50. See Rem Darbinyan, How AI Transforms Social Media, Forbes (Mar. 16, 2023), https://www.forbes.com/sites/forbestechcouncil/2023/03/16/how-ai-transforms-social-media/.
  51. Bernard Marr, The Amazing Ways YouTube Uses Artificial Intelligence and Machine Learning, Forbes (Aug. 23, 2019), https://www.forbes.com/sites/bernardmarr/2019/08/23/the-amazing-ways-youtube-uses-artificial-intelligence-and-machine-learning/.
  52. Id.; see Nadeem, supra note 48; see also Tamara Biljman, AI in Social Media: Benefits, Tools, and Challenges, Sendible (Jun. 4, 2024), https://www.sendible.com/insights/ai-in-social-media.
  53. Sara Brown, Machine Learning, Explained, MIT Mgmt. Sloan Sch.: Ideas Made to Matter (Apr. 21, 2021), https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained; see Katherine Haan & Robb Watts, How Businesses Are Using Artificial Intelligence, Forbes Advisor (Apr. 24, 2023), https://www.forbes.com/advisor/business/software/ai-in-business/.
  54. Brown, supra note 53.
  55. Id.
  56. Id.; Anthony Cardillo, How Many Companies Use AI? (New Data), Exploding Topics, https://explodingtopics.com/blog/companies-using-ai (May 1, 2025); IBM, IBM Global AI Adoption Index 2022 (May 2022), https://www.ibm.com/downloads/cas/GVAGA3JP; The State of AI in 2023: Generative AI’s Breakout Year, McKinsey & Co. (Aug. 1, 2023), https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year#steady.
  57. Ryan Tracy, ChatGPT’s Sam Altman Warns Congress That AI ‘Can Go Quite Wrong,’ Wall St. J. (May 16, 2023), https://www.wsj.com/tech/ai/chatgpts-sam-altman-faces-senate-panel-examining-artificial-intelligence-4bb6942a.
  58. See Wyden & Cox, supra note 6, at 2; Stratton Oakmont, Inc. v. Prodigy Serv. Co., No. 31063/94, 1995 WL 323710 (N.Y. Sup. Ct. May 24, 1995).
  59. 47 U.S.C. § 230(a)(1).
  60. See Kosseff, supra note 2, at 9–10.
  61. Stratton Oakmont, 1995 WL 323710; Wyden & Cox, supra note 6, at 2; see also Kosseff, supra note 2, at 45–56.
  62. Stratton Oakmont, 1995 WL 323710, at *1.
  63. Id. at *4–5.
  64. Wyden & Cox, supra note 6, at 2.
  65. See Kosseff, supra note 2, at 2.
  66. Id.; Gonzalez v. Google LLC, 2 F.4th 871, 886 (9th Cir. 2021).
  67. 47 U.S.C. § 230(c)(1).
  68. Gonzalez, 2 F.4th at 886–87 (quoting Doe v. Internet Brands, Inc., 824 F.3d 846, 850 (9th Cir. 2016)).
  69. Fair Hous. Council v. Roommates.com, LLC, 521 F.3d 1157, 1162 (9th Cir. 2008) (quoting 47 U.S.C. § 230(f)(3)).
  70. Id. at 1162–63.
  71. Section 230, EFF, https://www.eff.org/issues/cda230 (last visited May 24, 2025).
  72. Id.
  73. Rebecca Kern, SCOTUS to Hear Challenge to Section 230 Protections, Politico (Oct. 3, 2022), https://www.politico.com/news/2022/10/03/scotus-section-230-google-twitter-youtube-00060007. Compare Prager Univ. v. Google LLC, 85 Cal. App. 5th 1022 (Cal. Ct. App. 2022), and Dyroff v. Ultimate Software Grp., Inc., 934 F.3d 1093 (9th Cir. 2019), with Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019).
  74. Zach Schonfeld, Chief Justice Centers Supreme Court Annual Report on AI’s Dangers, Hill (Dec. 31, 2023), https://thehill.com/regulation/court-battles/4383324-chief-justice-centers-supreme-court-annual-report-on-ais-dangers/.
  75. Tracy, supra note 57.
  76. Id.
  77. Id.
  78. Lawrence Norden & Benjamin Lerude, States Take the Lead on Regulating Artificial Intelligence, Brennan Ctr. for Just. (Nov. 6, 2023), https://www.brennancenter.org/our-work/research-reports/states-take-lead-regulating-artificial-intelligence.
  79. See EU AI Act: First Regulation on Artificial Intelligence, Eur. Parl.: Topics (Feb. 19, 2025), https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence.
  80. Norden & Lerude, supra note 78.
  81. Kern, supra note 73.
  82. 18 U.S.C. § 2333.
  83. Gonzalez v. Google LLC, 2 F.4th 871, 880 (9th Cir. 2021). Gonzalez’s initial complaint was later amended and joined by other family members and similarly situated plaintiffs. Id. at 882.
  84. Id. at 880; Lori Hinnant, 2015 Paris Attacks Suspect: Deaths of 130 ‘Nothing Personal,’ AP News (Sept. 15, 2021), https://apnews.com/article/europe-france-trials-paris-brussels-f2031a79abfae46cbd10d4315cf29163.
  85. Gonzalez, 2 F.4th at 882.
  86. Id. at 881.
  87. Id.
  88. See Gonzalez v. Google, Inc., 282 F. Supp. 3d 1150, 1171 (N.D. Cal. 2017); Fed. R. Civ. P. 12(b)(6).
  89. Gonzalez, 2 F.4th at 880. Taamneh and Clayborn involve claims against Google, Twitter, and Facebook. Id.
  90. Gonzalez, 2 F.4th at 879, 883, 884; 1 Artificial Intelligence: Law and Litigation § 3.02, Lexis (database updated May 2024).
  91. Gonzalez, 2 F.4th at 879.
  92. Id. at 880 (quoting 18 U.S.C. § 2333(a)).
  93. Id. at 885 (quoting Justice Against Sponsors of Terrorism Act (JASTA), Pub. L. No. 114-222, 130 Stat. 852 (2016)).
  94. Id. at 880.
  95. The Gonzalez plaintiffs’ revenue-sharing theory is distinct from their other theories of liability because the allegations were not based on the content ISIS placed on YouTube. Id. at 898. Instead, the allegations were “premised on Google providing ISIS with material support by giving ISIS money.” Id. The revenue-sharing allegations stemmed from Google’s AdSense program, which involved “Google shar[ing] a percentage of revenues generated from those advertisements with ISIS.” Id.
  96. Id. at 882.
  97. Id. at 880. The district court in Taamneh did not reach the issue of section 230 immunity. Id.
  98. Id. The Taamneh plaintiffs only appealed the dismissal of their aiding and abetting claim. Id. at 908. The Ninth Circuit reversed the district court’s dismissal after concluding that the complaint’s allegations “that defendants provided services that were central to ISIS’s growth and expansion, and that this assistance was provided over many years,” adequately alleged the defendants’ assistance to ISIS was substantial. Id. at 910.
  99. Gonzalez v. Google LLC, 143 S. Ct. 80 (2022) (mem.); Twitter, Inc. v. Taamneh, 143 S. Ct. 81 (2022) (mem.).
  100. Gonzalez v. Google, Elec. Priv. Info. Ctr., https://epic.org/documents/onzalez-v-google/ (last visited May 24, 2025); see also Gonzalez v. Google LLC, 143 S. Ct. 1191, 1191–92 (2023) (per curiam).
  101. See Danielle Draper & Sean Long, Summarizing the Amicus Briefs Arguments in Gonzalez v. Google LLC, Bipartisan Pol’y Ctr. (Feb. 21, 2023), https://bipartisanpolicy.org/blog/arguments-gonzalez-v-google/.
  102. Richard L. Pacelle, Jr., Amicus Curiae Briefs in the Supreme Court, Oxford Rsch. Encyclopedias (April 20, 2022), https://doi.org/10.1093/acrefore/9780190228637.013.1992.
  103. Draper & Long, supra note 101.
  104. Id.
  105. See generally Transcript of Oral Argument, Gonzalez v. Google, 143 S. Ct. 1191 (2023) (No. 21-1333) [hereinafter Gonzalez Oral Argument Transcript]; Transcript of Oral Argument, Twitter v. Taamneh, 143 S. Ct. 1206 (2023) (No. 21-1496) [hereinafter Taamneh Oral Argument Transcript].
  106. See Gonzalez Oral Argument Transcript, supra note 105, at 1, 164; Taamneh Oral Argument Transcript, supra note 105, at 1, 151.
  107. Gonzalez Oral Argument Transcript, supra note 105, at 49.
  108. Taamneh Oral Argument Transcript, supra note 105, at 72–73.
  109. Gonzalez Oral Argument Transcript, supra note 105, at 34, 64, 72; Taamneh Oral Argument Transcript, supra note 105, at 12–13, 54, 126.
  110. Kate Klonick, How 236,471 Words of Amici Briefing Gave Us the 565 Word Gonzalez Decision, Klonickles (May 29, 2023), https://klonick.substack.com/p/how-236471-words-of-amici-briefing.
  111. Gonzalez v. Google, 143 S. Ct. 1191 (2023) (per curiam).
  112. Id. at 1192.
  113. Id.
  114. Taamneh, 143 S. Ct. at 1218.
  115. 705 F.2d 472 (D.C. Cir. 1983).
  116. Taamneh, 143 S. Ct. at 1218 (quoting Justice Against Sponsors of Terrorism Act (JASTA), Pub. L. No. 114-222, § 2(a)(5), 130 Stat. 852, 852 (2016)).
  117. Id. at 1230.
  118. Id. at 1230–31.
  119. See Leading Case, supra note 9, at 404–06. “Judicial minimalism is the principle that judges should ‘say[] no more than necessary to justify an outcome.’” Id. at 405 (alteration in original) (quoting Cass R. Sunstein, The Supreme Court, 1995 Term — Foreword: Leaving Things Undecided, 110 Harv. L. Rev. 4, 6 (1996)).
  120. See Gonzalez Oral Argument Transcript, supra note 105, at 34, 64, 72; Taamneh Oral Argument Transcript, supra note 105, at 12–13, 54, 126.
  121. See Doe v. Facebook, Inc., 142 S. Ct. 1087, 1088–89 (2022) (Thomas, J., concurring in denial of certiorari).
  122. See id. at 1087.
  123. Id. at 1088 (quoting In re Facebook, 625 S.W.3d 80 (Tex. 2021)).
  124. Gonzalez Oral Argument Transcript, supra note 105, at 34, 64, 72.
  125. Doe, 142 S. Ct. at 1088 (2022) (Thomas, J., concurring in denial of certiorari) (quoting Malwarebytes, Inc. v. Enigma Software Grp. USA, LLC, 141 S. Ct. 13, 18 (2020)).
  126. See Malwarebytes, 141 S. Ct. at 16.
  127. Wyden & Cox, supra note 6, at 2.
  128. See Kosseff, supra note 2, at 2.
  129. Wyden & Cox, supra note 6, at 6.
  130. See George Glover, It’s Time to See Whether AI Is the New Internet — or the Next ‘Metaverse,’ Bus. Insider (July 11, 2023), https://www.businessinsider.com/ai-chatgpt-artificial-intelligence-internet-dot-com-metaverse-crypto-blockchain-2023-7; Einaras Von Gravrock, How AI Empowers the Evolution of the Internet, Forbes (Nov. 15, 2018), https://www.forbes.com/sites/forbeslacouncil/2018/11/15/how-ai-empowers-the-evolution-of-the-internet/.
  131. See generally How Has the Internet Changed in the Last 20 Years, in.house.media, https://www.ihm.co.uk/blog/how-has-the-internet-changed-in-the-last-20-years/ (last visited May 24, 2025).
  132. 47 U.S.C. § 230(c)(1); see Wyden & Cox, supra note 6, at 2 (“Congress drafted Section 230 in light of its understanding of the capabilities of then-extant online platforms and the evident trajectory of Internet development.”).
  133. Josie Fischels, A Look Back at the Very First Website Ever Launched, 30 Years Later, NPR (Aug. 6, 2021), https://www.npr.org/2021/08/06/1025554426/a-look-back-at-the-very-first-website-ever-launched-30-years-later.
  134. See Fox & Rainie, supra note 20.
  135. See Danny Hajek et al., What Is AI and How Will It Change Our Lives? NPR Explains., NPR (May 25, 2023), https://www.npr.org/2023/05/25/1177700852/ai-future-dangers-benefits; How Artificial Intelligence Is Changing Your Life Unknowingly, Econ. Times (Mar. 15, 2023), https://economictimes.indiatimes.com/news/how-to/how-artificial-intelligence-is-changing-your-life-unknowingly/articleshow/98455922.cms?from=mdr; Mike Thomas, The Future of AI: How Artificial Intelligence Will Change the World, builtin, https://builtin.com/artificial-intelligence/artificial-intelligence-future (Jan. 28, 2025).
  136. Wyden & Cox, supra note 6, at 8.
  137. Id. at 12–13.
  138. See, e.g., Exec. Order No. 14,110, 88 Fed. Reg. 75191 (Oct. 30, 2023).
  139. See Zeran v. Am. Online, Inc., 129 F.3d 327, 331–34 (4th Cir. 1997).
  140. Malwarebytes, Inc. v. Enigma Software Grp. USA, LLC, 141 S. Ct. 13, 15 (2020) (Thomas, J., concurring in the denial of certiorari) (citing Zeran, 129 F.3d at 331–34).
  141. Malwarebytes, 141 S. Ct. at 15 (Thomas, J., concurring in the denial of certiorari) (citations omitted).
  142. See Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019).
  143. See id.
  144. Id. In Force, victims of terrorist attacks in Israel alleged that Facebook provided material support to Hamas terrorists by enabling Hamas “to disseminate its messages directly to its intended audiences and to carry out communication components of its terror attacks.” Id. at 59.
  145. Id. at 70.
  146. Id. at 70 n.24.
  147. Christopher Cox, The Origins and Original Intent of Section 230 of the Communications Decency Act, Rich. J.L. & Tech. Blog (Aug. 27, 2020), https://jolt.richmond.edu/2020/08/27/the-origins-and-original-intent-of-section-230-of-the-communications-decency-act/.
  148. See sources cited supra note 135.
  149. See Poornima Apte, How AI is Leveling the Marketing Playing Field Between SMBs and Big Business, U.S. Chamber of Comm.: CO (Aug. 7, 2023), https://www.uschamber.com/co/good-company/launch-pad/how-small-businesses-are-using-ai.
  150. Michael Chui et al., The State of AI in 2022—and A Half Decade in Review, McKinsey & Co. (Dec. 6, 2022), https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2022-and-a-half-decade-in-review.
  151. Id.
  152. Report: Small Business Owners Embrace the Future – Majority Say They Will Adopt Generative AI, FreshBooks, https://www.freshbooks.com/press/data-research/data-research-majority-of-small-business-owners-will-use-ai (last visited May 24, 2025); see also Michelle Kumar, Navigating the Era of AI: Implications for Small Businesses, Bipartisan Pol’y Ctr. (Nov. 3, 2023), https://bipartisanpolicy.org/blog/navigating-the-era-of-ai-implications-for-small-businesses (highlighting a recent survey that found that 23% of small businesses use AI in some form).
  153. See Apte, supra note 149.
  154. See id.
  155. Martin Rowinski, How Small Businesses Drive The American Economy, Forbes (Mar. 25, 2022), https://www.forbes.com/councils/forbesbusinesscouncil/2022/03/25/how-small-businesses-drive-the-american-economy/.
  156. See, e.g., FACT SHEET: The Small Business Boom Under the Biden-Harris Administration, White House (Apr. 28, 2022), https://bidenwhitehouse.archives.gov/briefing-room/statements-releases/2022/04/28/fact-sheet-the-small-business-boom-under-the-biden-harris-administration/.
  157. Cox, supra note 147.
  158. Christopher MacColl, Defamatory Bots and Section 230: Navigating Liability in the Age of Artificial Intelligence, JD Supra (July 18, 2023), https://www.jdsupra.com/legalnews/defamatory-bots-and-section-230-3202468 (quoting 47 U.S.C. § 230(c)(1)).
  159. The General Data Protection Regulation, Eur. Council (June 13, 2024), https://www.consilium.europa.eu/en/policies/data-protection-regulation/.
  160. Jared Schroeder, Meet the EU Law That Could Reshape Online Speech in the U.S., Slate (Oct. 27, 2022), https://slate.com/technology/2022/10/digital-services-act-european-union-content-moderation.html.
  161. See Questions and Answers On the Digital Services Act, Eur. Comm’n (Feb. 23, 2024), https://ec.europa.eu/commission/presscorner/detail/en/qanda_20_2348.
  162. Kelvin Chan & Raf Casert, EU law targets Big Tech Over Hate Speech, Disinformation, Associated Press (April 23, 2022), https://apnews.com/article/technology-business-police-social-media-reform-52744e1d0f5b93a426f966138f2ccb52.
  163. See Schroeder, supra note 160.
  164. Artificial Intelligence Act: Deal on Comprehensive Rules for Trustworthy AI, Eur. Parl.: News (Sept. 12, 2023), https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai.
  165. See EU AI Act: First Regulation on Artificial Intelligence, Eur. Parl.: News, https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence (Feb. 19, 2025); The Digital Services Act Package, Eur. Comm’n, https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package (Feb. 12, 2025).
  166. Artificial Intelligence Act, supra note 164.
  167. See, e.g., Brief of the National Center on Sexual Exploitation, the National Trafficking Sheltered Alliance, and RAINN, as Amici Curiae in Support of Petitioners, Gonzalez v. Google LLC, 143 S. Ct. 1191 (2023) (No. 21-1333) [hereinafter NCSE Brief]. See generally Sivile Manene et al., Mitigating Misinformation About the COVID-19 Infodemic on Social Media: A Conceptual Framework, NIH Nat’l Libr. Med., May 2023, at 1, 2 (“Social media platforms have taken steps to mitigate the spread of COVID-19 misinformation by implementing policies . . . which prohibit[] users from using the platform’s services to share false or misleading information about COVID-19.”).
  168. NCSE Brief, supra note 167.
  169. Brief for Reddit, Inc. and Reddit Moderators as Amici Curiae in Support of Respondent, Gonzalez, 143 S. Ct. 1191 (No. 21-1333).
  170. See Zeran v. Am. Online, Inc., 129 F.3d 327, 331–34 (4th Cir. 1997).
  171. See John Samples, Why the Government Should Not Regulate Content Moderation of Social Media, CATO Inst. (Apr. 9, 2019), https://www.cato.org/policy-analysis/why-government-should-not-regulate-content-moderation-social-media.
  172. Sue Halpern, The Year A.I. Ate the Internet, New Yorker (Dec. 8, 2023), https://www.newyorker.com/culture/2023-in-review/the-year-ai-ate-the-internet.
  173. See Manene et al., supra note 167, at 2 (“Social media platforms have taken steps to mitigate the spread of COVID-19 misinformation by implementing policies . . . which prohibit[] users from using the platform’s services to share false or misleading information about COVID-19.”).
  174. See Nandita Krishnan et al., Research Note: Examining How Various Social Media Platforms Have Responded to COVID-19 Misinformation, Harv. Kennedy Sch. Misinformation Rev. (Dec. 15, 2021), https://misinforeview.hks.harvard.edu/article/research-note-examining-how-various-social-media-platforms-have-responded-to-covid-19-misinformation/.
  175. See Gabrielle Lim & Samantha Bradshaw, Chilling Legislation: Tracking the Impact of “Fake News” Laws on Press Freedom Internationally, Ctr. for Int’l Media Assistance (July 19, 2023), https://www.cima.ned.org/publication/chilling-legislation/.
  176. See Cary Coglianese, Regulating Machine Learning: The Challenge of Heterogeneity, Competition Pol’y Int’l, Feb. 2023, at 1, 3.
  177. Artificial Intelligence Act, supra note 164.
  178. Kell, supra note 8.

Benjamin Riley

Social Media’s Rise to the Forefront

Over the last few decades, social media platforms have gained immense popularity with Americans,[1] and statistics suggest the average American holds accounts on multiple platforms.[2] Yet, as with many trends, this growth has not come without its fair share of controversy. These platforms have taken center stage in many recent legal battles, perhaps most notably a high-profile case decided by the Supreme Court this summer that explored First Amendment issues and the dissemination of information through social media platforms.[3] There has also been a wide array of legislative proposals relating to social media in 2024.[4] Apart from constitutional disputes and state legislation, questions have been raised about worrisome political ramifications[5] and potential health effects.[6] Needless to say, social media’s rise to the forefront of the American consciousness has not been unanimously applauded.

Government Officials Take Action

Recently, concerns over social media’s health effects on children and teenagers have become a frequent topic of discussion.[7] The Surgeon General of the United States, Vivek H. Murthy, addressed this concern in an advisory released in mid-2023 warning that social media can affect the well-being of the country’s young people.[8] In June 2024, he escalated that advisory into a forcefully worded public message to Congress and the country explaining that a surgeon general’s warning on social media platforms is needed.[9] The message, which appeared as an opinion piece in The New York Times, draws attention to social media’s effects on children’s anxiety, depression, and self-image.[10] It also points to the success of surgeon general’s warning labels in combating tobacco use to establish the efficacy of such warnings.[11] Along with calling for warnings on the platforms, the Surgeon General challenged parents, medical professionals, schools, and companies to play a role in limiting social media’s adverse effects.[12]

This opinion received a powerful show of support when a coalition of forty-two attorneys general, including North Carolina’s Attorney General Josh Stein, wrote a letter in support of the Surgeon General’s call for a warning on social media platforms.[13] The letter, which was addressed to Speaker of the House Mike Johnson, Senate Majority Leader Chuck Schumer, and Senate Minority Leader Mitch McConnell, argues that Congress can take action against the threats of social media and “protect future generations of Americans.”[14]

The letter explains that social media is contributing to a “mental health crisis” in children and teenagers.[15] This language makes clear the urgency with which the writers believe the issue must be addressed. More specifically, the letter takes issue with “algorithm-driven social media platforms” and reinforces many of the concerns presented in the Surgeon General’s New York Times opinion.[16] It highlights previous legislation and legal action taken by state legislatures and state attorneys general, as well as ongoing state investigations and litigation against the social media powerhouse TikTok.[17] The letter nonetheless contends that “this ubiquitous problem requires federal action.”[18] According to the group, a surgeon general’s warning on social media platforms “would be a consequential step” in addressing this problem.[19] The letter follows legal action taken by a similar coalition of state attorneys general last fall, when lawsuits were filed against social media giant Meta alleging that features on Meta’s social media platforms adversely affect children.[20]

One of the more interesting aspects of this letter is the impressively bipartisan nature of the coalition. The alliance of forty-two attorneys general comprises officials of differing political ideologies from across the country. The uniqueness of this cooperation is not lost on the letter’s authors, who explain that “[a]s State Attorneys General we sometimes disagree about important issues, but all of us share an abiding concern for the safety of the kids in our jurisdiction.”[21] The willingness of officials to work together to combat the adverse effects of social media can also be seen in recent legislation at the federal level. The Kids Online Safety Act, which was proposed by Senator Richard Blumenthal, a Democrat, has been cosponsored by many lawmakers on both sides of the aisle.[22]

It is also worth noting what this letter signals to social media companies. The letter accuses social media companies of complacency in the crisis, stating that the “problem will not solve itself and the social media platforms have demonstrated an unwillingness to fix the problem on their own.”[23] Moreover, with attorneys general making children’s online safety a priority,[24] this letter should serve as a reminder to social media companies that policymakers are unlikely to relent in their pursuit of greater safety measures on social media.

Future Implications

At this time, it is unclear if Congress will follow the advice given by the Surgeon General and subsequently endorsed by many attorneys general. Similarly, it is also unclear whether these warnings would have any effect on children’s social media usage and the associated health effects.

However, while the viability and actual efficacy of a surgeon general’s warning cannot yet be known, developments like this show that officials are unlikely to relieve any of the pressure they have placed on social media companies. Officials’ calls for these warnings should be read as an escalation in the campaign against the youth mental health crisis, and consequently against social media companies. In short, social media companies should expect further bipartisan action to counteract the negative side effects of social media, and citizens should be prepared for the possibility that some of their favorite platforms may soon carry a warning about the potential health effects of scrolling.


[1]See Belle Wong, Top Social Media Statistics and Trends of 2024, Forbes Advisor,  https://www.forbes.com/advisor/business/social-media-statistics/ (May 18, 2023, 2:09 PM).

[2] Id.

[3] See Murthy v. Missouri, 144 S. Ct. 1972 (2024).

[4] See Social Media and Children 2024 Legislation, National Conference of State Legislatures, https://www.ncsl.org/technology-and-communication/social-media-and-children-2024-legislation (June 14, 2024).

[5] See Stephanie Burnett & Helen Coster, Fake U.S. Election-Related Accounts Proliferating on X, Study Says, Reuters (May 24, 2024, 8:31 AM), https://www.reuters.com/world/us/fake-us-election-related-accounts-proliferating-x-study-says-2024-05-24/; U.S. Groups Urge Social Media Companies to Fight ‘Big Lie,’ Election Misinformation, Reuters (May 12, 2022, 10:07 AM), https://www.reuters.com/world/us/us-groups-urge-social-media-companies-fight-big-lie-election-disinformation-2022-05-12/; Tiffany Hsu, Steven Lee Myers & Stuart A. Thompson, Elections and Disinformation Are Colliding Like Never Before in 2024, N.Y. Times, https://www.nytimes.com/2024/01/09/business/media/election-disinformation-2024.html (Jan. 11, 2024).

[6] See Teens and Social Media Use: What’s the Impact?, Mayo Clinic (Jan. 18, 2024), https://www.mayoclinic.org/healthy-lifestyle/tween-and-teen-health/in-depth/teens-and-social-media-use/art-20474437.

[7] See Claire Cain Miller, Everyone Says Social Media is Bad for Teens. Proving it is Another Thing, N.Y. Times: The Upshot (June 17, 2023), https://www.nytimes.com/2023/06/17/upshot/social-media-teen-mental-health.html; Natalie Proulx, Does Social Media Harm Young People’s Mental Health?, N.Y. Times (May 25, 2023) https://www.nytimes.com/2023/05/25/learning/does-social-media-harm-young-peoples-mental-health.html.

[8] Surgeon General Issues New Advisory About Effects Social Media Use Has on Youth Mental Health, U.S. Department of Health and Human Services (May 23, 2023), https://www.hhs.gov/about/news/2023/05/23/surgeon-general-issues-new-advisory-about-effects-social-media-use-has-youth-mental-health.html.

[9] See Vivek H. Murthy, Surgeon General: Why I’m Calling for a Warning Label on Social Media Platforms, N.Y. Times (June 17, 2024), https://www.nytimes.com/2024/06/17/opinion/social-media-health-warning.html.

[10] Id.

[11] Id.

[12] Id.

[13] Letter from Rob Bonta, Cal. Att’y Gen., Phil Weiser, Colo. Att’y Gen., Russell Coleman, Ky. Att’y Gen., Lynn Fitch, Miss. Att’y Gen., Matthew J. Platkin, N.J. Att’y Gen., Letitia James, N.Y. Att’y Gen., Jonathan Skrmetti, Tenn. Att’y Gen., Steve Marshall, Ala. Att’y Gen., Fainu’ulelei Falefatu Ala’ilima-Uta, Am. Sam. Att’y Gen., Tim Griffin, Ark. Att’y Gen., William Tong, Conn. Att’y Gen., Kathleen Jennings, Del. Att’y Gen., Brian Schwalb, D.C. Att’y Gen., Ashley Moody, Fla. Att’y Gen., Christopher M. Carr, Ga. Att’y Gen., Anne E. Lopez, Haw. Att’y Gen., Raúl Labrador, Idaho Att’y Gen., Kwame Raoul, Ill. Att’y Gen., Todd Rokita, Ind. Att’y Gen., Aaron M. Frey, Me. Att’y Gen., Anthony G. Brown, Md. Att’y Gen., Andrea Joy Campbell, Mass. Att’y Gen., Dana Nessel, Mich. Att’y Gen., Keith Ellison, Minn. Att’y Gen., Aaron D. Ford, Nev. Att’y Gen., John M. Formella, N.H. Att’y Gen., Raúl Torrez, N.M. Att’y Gen., Josh Stein, N.C. Att’y Gen., Drew H. Wrigley, N.D. Att’y Gen., Gentner Drummond, Okla. Att’y Gen., Ellen F. Rosenblum, Or. Att’y Gen., Michelle Henry, Pa. Att’y Gen., Peter F. Neronha, R.I. Att’y Gen., Alan Wilson, S.C. Att’y Gen., Marty Jackley, S.D. Att’y Gen., Gordon C. Rhea, V.I. Att’y Gen. (Nominee), Sean D. Reyes, Utah Att’y Gen., Charity Clark, Vt. Att’y Gen., Jason S. Miyares, Va. Att’y Gen., Robert W. Ferguson, Wash. Att’y Gen., Joshua L. Kaul, Wis. Att’y Gen., Bridget Hill, Wyo. Att’y Gen., to Mike Johnson, Speaker of the House, Chuck Schumer, Senate Majority Leader, Mitch McConnell, Senate Minority Leader (Sept. 9, 2024) (on file with the National Association of Attorneys General).

[14] Id.

[15] Id.

[16] Id.

[17] Id.

[18] Id.

[19] Id.

[20] See Barbara Ortutay, States Sue Meta Claiming its Social Platforms are Addictive and Harm Children’s Mental Health, Associated Press https://apnews.com/article/instagram-facebook-children-teens-harms-lawsuit-attorney-general-1805492a38f7cee111cbb865cc786c28 (Oct. 24, 2023); Cristiano Lima-Strong & Naomi Nix, 41 States Sue Meta, Claiming Instagram, Facebook are Addictive, Harm Kids, Washington Post, https://www.washingtonpost.com/technology/2023/10/24/meta-lawsuit-facebook-instagram-children-mental-health/ (Oct. 24, 2024, 3:25 PM).

[21] Letter from Rob Bonta et al. to Mike Johnson et al., supra note 13.

[22] The Kids Online Safety Act, S. 1409, 118th Cong. (2023).

[23] Letter from Rob Bonta et al. to Mike Johnson et al., supra note 13.

[24] Attorney General Josh Stein Urges Congress to Require Warning on Social Media Platforms, N.C. Department of Justice (Sept. 11, 2024), https://ncdoj.gov/attorney-general-josh-stein-urges-congress-to-require-warning-on-social-media-platforms/; see Ortutay, supra note 20.


Will Coltzer

The Supreme Court is set to determine whether the government can regulate the way social media platforms (“Platforms”) like X,[1] Facebook, and YouTube moderate third-party content.[2] Although social media has become ubiquitous and has been described as the modern “public forum,”[3] there remain serious questions about the authority of the government to require private entities to host certain third-party content. Must people rely on Elon Musk and Mark Zuckerberg—two of the wealthiest people in the world—to ensure “free speech around the globe”?[4]

The Freedom of Speech is one of the most essential tenets of American democracy, yet that right is not absolute.[5] The First Amendment prohibits States from passing laws that “abridg[e] the Freedom of Speech.”[6] Thus, because Platforms are private businesses, individuals cannot use the First Amendment to pursue recourse against censorship on a private platform.[7] Instead, States have attempted to enforce the ideals of free speech by regulating Platforms’ content moderation policies.[8] The question remains whether this regulation infringes the Platforms’ own right to control their “speech.”

On February 26, 2024, the Court will hear oral arguments to address these questions in Moody v. NetChoice[9] and NetChoice v. Paxton.[10] In 2021, Texas and Florida passed laws that prevented large Platforms from censoring content created by third parties.[11] The proponents of these laws argue Platforms “have unfairly censored” and “shadow banned” users based on political speech, particularly conservative speech.[12] In response, NetChoice, a trade association that represents large technology businesses including Meta,[13] filed actions in the Northern District of Florida and the Western District of Texas seeking preliminary injunctions against the States’ regulation of Platforms.[14]

On appeal, the Eleventh and Fifth Circuits split on the key constitutional questions. Now, the two main issues before the Court are: (1) whether Platforms’ moderation of content is considered “speech” for First Amendment analysis, and (2) whether Platforms are “common carriers” who hold themselves open to the public.[15] This article will address both issues in turn, concluding that the Court should uphold the States’ regulations under the common carrier doctrine.

I. The “Speech” Issue

The Court must first ascertain whether Texas and Florida’s regulations affect the Platform’s “Speech.”[16] In exercising some “doctrinal gymnastics,”[17] the Eleventh Circuit found Florida’s statute violates the Platform’s First Amendment rights because it removes its “editorial judgment” over the content published on its private platform.[18] On the other hand, the Fifth Circuit found the Texas statute “does not regulate the Platform’s speech at all; it protects other people’s speech and regulates the Platform’s conduct.”[19]

These conflicting interpretations derive from a complex body of case law that has attempted to apply the same First Amendment principles to vastly different mediums of communication.[20] The Court is tasked with comparing social media to the mediums in four major cases: Miami Herald Pub. Co. v. Tornillo,[21] Hurley v. Irish-Am. Gay, Lesbian & Bisexual Grp. of Bos.,[22] PruneYard Shopping Center v. Robins,[23] and Rumsfeld v. Forum for Acad. & Inst. Rts., Inc. (“FAIR”).[24] These cases establish two lines of precedent.

A. Editorial Judgments

The first line of precedent, which derives from Miami Herald and Hurley, establishes the right of publishers to exercise “editorial judgment” over the content they publish.[25] In Miami Herald, the Court held that a newspaper’s “choice of material” and its “treatment of public issues and public officials—whether fair or unfair—constitute the exercise of editorial control and judgment” protected by the First Amendment.[26] Most recently, the Court extended the editorial-judgment principle in Hurley.[27] There, the Court rejected a Massachusetts public accommodation statute because it infringed on the parade organizer’s First Amendment right to control the message of the parade.[28]

Together, these editorial judgment cases can be read two ways. First, they may establish that a private entity’s decisions about disseminating third-party content are “editorial judgments protected by the First Amendment,” as the Eleventh Circuit found.[29] Alternatively, editorial judgments may be merely a factor rather than a “freestanding category of protected expression,” as the Fifth Circuit found.[30] The first reading is more persuasive; the decision to accept or reject third-party content creates a message that a reasonable user would perceive. A private speaker “who chooses to speak may also decide ‘what not to say’ and ‘tailor’ the content of his message as he sees fit.”[31] The message need not be substantially tailored.[32] Before evaluating the first issue here, however, these editorial judgment cases must be contrasted with the “hosting speech” cases.

B. Hosting Speech

The second line of precedent, which derives from PruneYard and FAIR, establishes that the government may sometimes compel private actors to “host others’ speech.”[33] In PruneYard, the Court affirmed a state court’s decision requiring a privately owned shopping mall to allow members of the public to circulate pamphlets on its property.[34] Importantly, the mall owner did not allege that this circulation affected the owner’s autonomy to speak.[35] Extending PruneYard, the Court in FAIR unanimously upheld a federal statute—the Solomon Amendment—that required law schools to allow military recruiters the same access to campuses as other employers.[36] The Court distinguished FAIR from the editorial judgment cases by noting that “the schools are not speaking when they host interviews and recruiting receptions.”[37] Together, these cases apply to a narrow set of facts in which “hosting” third-party speech does not interfere with the owner’s right to speak.[38]

How will the Court decide the “Speech” issue?

The Court is likely to find that Platforms have First Amendment protections under the editorial judgment line of cases. Platforms require terms and conditions, remove content based on their guidelines, and are in the business of curating edited experiences.[39] Algorithms curate content for users based on past activity.[40] The fact that this is accomplished by an algorithm does not change the constitutional analysis.[41] Because Platforms are in the business of curating a tailored experience and exercise substantial control over the content published, the Court will likely find social media more analogous to the newspaper publisher in Miami Herald than to the law schools in FAIR. Furthermore, the very justification for Texas and Florida passing these statutes was the alleged threat of a leftist agenda in Big Tech against conservative speech.[42] Overall, social media companies should retain First Amendment protection over third-party speech published on their platforms. However, social media platforms that hold themselves out as public forums may still be vulnerable to public accommodation laws under the common carrier doctrine.

II. Common Carrier Issue

The States have an alternative argument that is gaining steam among key Supreme Court Justices: the “common carrier” doctrine.[43] This common law doctrine allows States to pass public accommodation laws that regulate businesses that hold themselves open to the public, even if that regulation affects the private actor’s speech.[44] The doctrine derives from English common law and was incorporated early on into the Court’s analysis of the original meaning of “Freedom of Speech.”[45]

The Supreme Court’s recent decision in 303 Creative v. Elenis[46] illuminates the doctrine’s potential application to online platforms. In 303 Creative, the Court held that a Colorado statute requiring a private website to accommodate certain messages was an unconstitutional infringement on the website’s Freedom of Speech because the website did not have “monopoly power” over a public utility.[47] Importantly, the three dissenting Justices critiqued the majority for requiring “monopoly power,” which may signal a lower threshold for upholding public accommodation laws among the liberal wing of the Court.[48] Still, the Court has not addressed the unique application of the doctrine to social media, which is likely distinguishable from the small website in 303 Creative.

The common carrier doctrine is the States’ best argument for upholding Texas and Florida’s regulations for three reasons. First, several key Justices have signaled support for the theory.[49] Second, it is the best tool to align our modern understanding of social media with the original meaning of the Constitution while leaving needed room to apply the same legal principles to past and future technology. Finally, using the monopoly power concept espoused in 303 Creative, the Court could distinguish large social media companies that hold themselves out as “public forums” from other websites that do not receive the liability benefits of this common carrier designation.[50] Social media companies are not liable for the content of third parties under Section 230.[51] Because these Platforms receive the legal benefit of a common carrier by avoiding liability, States should have the power to ensure the platforms comply with constitutionally permissible public accommodations laws.[52] They cannot have their cake and eat it too: either social media businesses open their Platforms to the public, like a restaurant, or they close their doors and accept liability for the third-party content they circulate, like a newspaper publisher.

III. Conclusion

In short, the Court should uphold the regulations in Moody and Paxton to promote public discourse. The Court must reconcile competing precedents and use century-old doctrines to evaluate our First Amendment rights on social media.[53] If social media is to remain a “public square,”[54] the Court should ensure these businesses are subject to some legal accountability. The States’ best argument is perhaps the most intuitive: the First Amendment should not be morphed into a tool for upholding censorship of political speech on the modern equivalent of the public square.[55] The Court should recognize the unique way social media affects modern discourse and use these flexible legal standards, especially the common carrier doctrine, to uphold the ideals of free speech.


[1] Twitter was renamed to X in the summer of 2023. See Ryan Mac & Tiffany Hsu, From Twitter to X: Elon Musk Begins Erasing an Iconic Internet Brand, N.Y. TIMES (July 24, 2023), https://www.nytimes.com/2023/07/24/technology/twitter-x-elon-musk.html#:~:text=Late%20on%20Sunday%2C%20Elon%20Musk,letter%20of%20the%20Latin%20alphabet.

[2] NetChoice, L.L.C. v. Paxton, 49 F.4th 439, 447 (5th Cir. 2022), cert. granted in part sub nom. Netchoice, LLC v. Paxton, 216 L. Ed. 2d 1313 (Sept. 29, 2023) (hereinafter “Paxton”); NetChoice, LLC v. Att’y Gen., Fla., 34 F.4th 1196, 1212 (11th Cir. 2022), cert. granted in part sub nom. Moody v. Netchoice, LLC, 216 L. Ed. 2d 1313 (Sept. 29, 2023), and cert. denied sub nom. NetChoice, LLC v. Moody, 144 S. Ct. 69 (2023) (hereinafter “Moody”).

[3] Packingham v. North Carolina, 582 U.S. 98, 107–08 (2017).

[4] Billy Perrigo, ‘The Idea Exposes His Naiveté.’ Twitter Employees On Why Elon Musk Is Wrong About Free Speech, Time (Apr. 14, 2022, 2:04 PM), https://time.com/6167099/twitter-employees-elon-musk-free-speech/ (noting that Musk claimed his reason for purchasing Twitter was to spread free speech in an SEC filing report).

[5] Gitlow v. People of State of New York, 268 U.S. 652, 666 (1925) (“It is a fundamental principle, long established, that the freedom of speech and of the press which is secured by the Constitution, does not confer an absolute right to speak or publish, without responsibility[.]”); Schenck v. United States, 249 U.S. 47, 52 (1919) (“The most stringent protection of free speech would not protect a man in falsely shouting fire in a theatre and causing a panic.”).

[6] U.S. Const. amend. I (“Congress shall make no law . . . prohibiting the free exercise thereof; or abridging the Freedom of Speech, or of the press.”); Gitlow, 268 U.S. at 666 (incorporating the Freedom of Speech against the States through the Due Process Clause of the Fourteenth Amendment).

[7] Grace Slicklen, For Freedom or Full of It? State Attempts to Silence Social Media, 78 U. Miami L. Rev. 297, 319–23 (2023); see also Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921, 1926 (2019) (noting that the Freedom of Speech is a shield that “constrains governmental actors and protects private actors”).

[8] See S.B. 7072, 123rd Reg. Sess. (Fla. 2021); H.B. 20, 87th Leg. Sess. § 1201.002(a) (Tex. 2021).

[9] Supreme Court Docket for NetChoice v. Moody, Supreme Court, https://www.supremecourt.gov/docket/docketfiles/html/public/22-277.html (last visited Jan. 21, 2023).

[10] Supreme Court Docket for NetChoice v. Paxton, Supreme Court, https://www.supremecourt.gov/docket/docketfiles/html/public/22-555.html (last visited Jan. 21, 2023).

[11] See S.B. 7072, 123rd Reg. Sess. (Fla. 2021); H.B. 20, 87th Leg. Sess. § 1201.002(a) (Tex. 2021).

[12] Moody, 34 F.4th at 1205.

[13] Slicklen, supra note 7, at 307.

[14] NetChoice, LLC v. Moody, 546 F. Supp. 3d 1082, 1096 (N.D. Fla. 2021) (finding Florida’s legislation “is plainly content-based and subject to strict scrutiny . . . [which] [t]he legislation does not survive”); NetChoice, LLC v. Paxton, 573 F. Supp. 3d 1092, 1100–01 (W.D. Tex. 2021) (granting a preliminary injunction against the State’s enforcement of the Texas legislation, but finding the constitutional question a close call).

[15] Moody, 34 F.4th at 1210.

[16]Id. at 1209 (“In assessing whether the Act likely violates the First Amendment, we must initially consider whether it triggers First Amendment scrutiny in the first place—i.e., whether it regulates ‘speech’ within the meaning of the Amendment at all. In other words, we must determine whether social-media platforms engage in First Amendment-protected activity.” (citations omitted)).

[17] Paxton, 49 F.4th at 455 (rejecting the “Platforms’ efforts to reframe their censorship as speech” because “no amount of doctrinal gymnastics can turn the First Amendment’s protections for free speech into protections for free censoring”).

[18] Moody, 34 F.4th at 1213–14 (“Social-media platforms exercise editorial judgment that is inherently expressive.”).

[19] Paxton, 49 F.4th at 448.

[20] Brown v. Ent. Merchants Ass’n, 564 U.S. 786, 790 (2011) (“[W]hatever the challenges of applying the Constitution to ever-advancing technology, ‘the basic principles of freedom of speech and the press, like the First Amendment’s command, do not vary’ when a new and different medium for communication appears.”).

[21] 418 U.S. 241 (1974).

[22] 515 U.S. 557 (1995).

[23] 447 U.S. 74 (1980).

[24] 547 U.S. 47 (2006).

[25] Moody, 34 F.4th at 1210–1211.

[26] Miami Herald, 418 U.S. at 258.

[27] Id.; see Moody, 34 F.4th at 1211 (describing the extension of Miami Herald’s editorial judgment principle to several subsequent Supreme Court decisions); Pac. Gas & Elec. Co. v. Pub. Utilities Comm’n of California, 475 U.S. 1, 9–12 (1986) (plurality opinion); Turner Broad. Sys., Inc. v. F.C.C., 512 U.S. 622, 636 (1994).

[28] Hurley, 515 U.S. at 570–75 (noting that the choice “not to propound a particular point of view” was a form of expressive speech that was “presumed to lie beyond the government’s power to control”).

[29] Moody, 34 F.4th at 1210–12.

[30] Paxton, 49 F.4th at 463.

[31] Hurley, 515 U.S. at 576.

[32] See id. at 574–75 (finding the parade organizer exercised editorial control over its message by rejecting a “particular point of view” even though it generally did not provide “considered judgment” for most forms of content).

[33] Paxton, 49 F.4th at 462.

[34] PruneYard Shopping Center v. Robins, 447 U.S. 74, 76–77 (1980).

[35] Moody, 34 F.4th at 1215 (noting that the PruneYard decision was narrowed significantly by Pacific Gas and Hurley and arguing that “PruneYard is inapposite” to social-media content); Hurley, 515 U.S. at 580 (“The principle of speaker’s autonomy was simply not threatened in [PruneYard].”).

[36] FAIR, 547 U.S. at 70.

[37] Id. at 56, 60, 64.

[38] 303 Creative LLC v. Elenis, 600 U.S. 570, 588–89 (2023) (noting that the key factor in Hurley and other editorial-judgment cases was the regulation “affect[ed] their message”).

[39] See Moody, 34 F.4th at 1204–05 (noting that “social-media platforms aren’t ‘dumb pipes,’” and that “the platforms invest significant time and resources into editing and organizing—the best word, we think, is curating—users’ posts into collections of content that they then disseminate to others”).

[40] Id.

[41] Slicklen, supra note 7, at 332.

[42] Moody, 34 F.4th at 1203.

[43] See NetChoice, L.L.C. v. Paxton, 142 S. Ct. 1715, 1716 (2022) (Alito, J., joined by Thomas and Gorsuch, JJ., dissenting from grant of application to vacate stay) (noting that the issue of whether social media platforms are common carriers raises “issues of great importance that will plainly merit this Court’s review”); see also Biden v. Knight First Amend. Inst., 141 S. Ct. 1220, 1224 (2021) (Thomas, J., concurring) (“There is a fair argument that some digital platforms are sufficiently akin to common carriers or places of accommodation to be regulated in this manner.”); Paxton, 49 F.4th at 493 (“The Eleventh Circuit quickly dismissed the common carrier doctrine without addressing its history or propounding a test for how it should apply.”).

[44] For a more in-depth discussion of the common carrier doctrine, see Eugene Volokh, Treating Social Media Platforms Like Common Carriers?, 1 J. Free Speech L. 377 (2021); Ashutosh Bhagwat, Why Social Media Platforms Are Not Common Carriers, 2 J. Free Speech L. 127 (2022); Christopher S. Yoo, The First Amendment, Common Carriers, and Public Accommodations: Net Neutrality, Digital Platforms, and Privacy, 1 J. Free Speech L. 463 (2021).

[45] Paxton, 49 F.4th at 469–73 (describing the historical root of common carrier and its application prior to the 20th century); Adam Candeub, Bargaining for Free Speech: Common Carriage, Network Neutrality, and Section 230, 22 Yale J.L. & Tech. 391, 401–402 (2020).

[46] 600 U.S. 570 (2023).

[47] Id. at 590–92.

[48] Id. at 610–611 (Sotomayor, J., joined by Kagan and Jackson, JJ., dissenting).

[49] NetChoice, L.L.C. v. Paxton, 142 S. Ct. 1715, 1716 (2022) (Alito, J., joined by Thomas and Gorsuch, JJ., dissenting from grant of application to vacate stay).

[50] See Candeub, supra note 45, at 403–13 (noting that the “history of telecommunications regulation” demonstrates the common carriage doctrine was a “regulatory deal” where the carrier gets “special liability breaks in return for the carrier refraining from using some market power to further some public good”); id. at 418–22 (“Section 230 can be seen as a common carriage-type deal—but without the government demanding much in return from internet platforms.”).

[51] Communications Decency Act of 1996, 47 U.S.C. § 230 (2018); Candeub, supra note 45, at 395 (“[S]ection 230 exempts internet platforms from liability arising from third-party speech.”).

[52] Id. at 429–433.

[53] Biden v. Knight First Amend. Inst. at Columbia Univ., 141 S. Ct. 1220, 1221 (2021) (“Today’s digital platforms provide avenues for historically unprecedented amounts of speech, including speech by government actors. Also unprecedented, however, is the concentrated control of so much speech in the hands of a few private parties. We will soon have no choice but to address how our legal doctrines apply to highly concentrated, privately owned information infrastructure such as digital platforms.”).

[54] Packingham, 582 U.S. at 107–08 (“[Social media platforms] are the principal sources for knowing current events, checking ads for employment, speaking and listening in the modern public square, and otherwise exploring the vast realms of human thought and knowledge.” (emphasis added)).

[55] Paxton, 49 F.4th at 445 (“[W]e reject the idea that corporations have a freewheeling First Amendment right to censor what people say.”); id. at 455 (“We reject the Platforms’ efforts to reframe their censorship as speech. . . . [N]o amount of doctrinal gymnastics can turn the First Amendment’s protections for free speech into protections for free censoring.”).



Trinity Chapman 

On October 24, 2023, thirty-three states filed suit against Meta,[1] alleging that its social media content harms and exploits young users.[2] The plaintiffs further allege that Meta’s services are intentionally addictive, promoting compulsive use and leading to severe mental health problems in younger users.[3] The lawsuit points to specific aspects of Meta’s services that the states believe cause harm. The complaint asserts that “Meta’s recommendation Algorithms encourage compulsive use” and are harmful to minors’ mental health,[4] and that the use of “social comparison features such as ‘likes’” causes further harm.[5] The suit further asserts that the push notifications from Meta’s products disrupt minors’ sleep and that the company’s use of visual filters “promote[s] eating disorders and body dysmorphia in youth.”[6]

Social media plays a role in the lives of most young people.  A recent Advisory by the U.S. Surgeon General revealed that 95% of teens ages thirteen to seventeen and 40% of children ages eight to twelve report using social media.[7] The report explains that social media has both negative and positive effects.[8]  On one hand, social media connects young people with like-minded individuals online, offers a forum for self-expression, fosters a sense of acceptance, and promotes social connections.[9]  Despite these positive effects, social media harms many young people; researchers have linked greater social media use to poor sleep, online harassment, lower self-esteem, and symptoms of depression.[10]  Social media content undoubtedly impacts the minds of young people—often negatively.  However, the question remains as to whether companies like Meta should be held liable for these effects.

This is far from the first time that Meta has faced suit for its alleged harm to minors.  For example, in Rodriguez v. Meta Platforms, Inc., the mother of Selena Rodriguez, an eleven-year-old social media user, sued Meta after her daughter’s death by suicide.[11]  There, the plaintiff alleged that Selena’s tragic death was caused by her “addictive use and exposure to [Meta’s] unreasonabl[y] dangerous and defective social media products.”[12]  Similarly, in Heffner v. Meta Platforms, Inc., a mother sued Meta after her eleven-year-old son’s suicide.[13]  That complaint alleged that Meta’s products “psychologically manipulat[ed]” the boy, leading to social media addiction.[14]  Rodriguez and Heffner are illustrative of the type of lawsuit regularly filed against Meta.

A.        The Communications Decency Act

In defending such suits, Meta invariably invokes the Communications Decency Act. Section 230 of the Act dictates that interactive online services “shall not be treated as the publisher or speaker of any information provided by another information content provider.”[15] In effect, the statute shields online services from liability arising from the effects of third-party content. In asserting the Act, defendant internet companies present a “hands off” picture of their activities; rather than playing an active role in the content that users consume, companies depict themselves as merely opening a forum through which third parties may produce content.[16]

Plaintiffs have responded with incredulity to this application of the Act by online service providers, and the Act’s exact scope remains unsettled.[17] In Gonzalez v. Google LLC, the parents of a man who died during an ISIS terrorist attack sued Google, alleging that YouTube’s algorithm recommended ISIS videos to some users, leading to increased success in ISIS’s recruitment efforts.[18] In defense, Google relied on Section 230 of the Communications Decency Act.[19] The Ninth Circuit ruled that Section 230 barred the plaintiffs’ claims,[20] but the Supreme Court vacated the Ninth Circuit’s ruling on other grounds, leaving unanswered questions about the Act’s scope.[21]

Despite that uncertainty, the defense retains a high likelihood of success. In the October 24 lawsuit, Meta’s success on its Section 230 defense will depend on how active a role the court determines Meta played in recommending harmful content and exposing minors to it.

B.        Product Liability

The October 24 complaint against Meta alleges theories of product liability.[22] In framing their product liability claims, plaintiffs focus on the harmful design of Meta’s “products” rather than the harmful content to which users may be exposed.[23] The most recent lawsuit alleges that “Meta designed and deployed harmful and psychologically manipulative product features to induce young users’ compulsive and extended use.”[24]

A look at Meta’s defense in Rodriguez is predictive of how the company will respond to the October 24 suit. There, the company disputed that Instagram even qualifies as a “product.”[25] Meta’s Motion to Dismiss remarked that product liability law focuses on “tangible goods” or “physical articles” and contrasted these concepts with the “algorithm” used by Instagram to recommend content.[26] Given traditional notions about what constitutes a “product,” Meta’s defenses are poised to succeed. As Meta suggested in its motion to dismiss Rodriguez’s suit, recommendations about content, features such as “likes,” and communications from third parties fall outside of what courts typically consider a “product.”[27]

To succeed on a product liability theory, plaintiffs must advocate for a more modernized conception of what counts as a “product” for purposes of product liability law.  Strong arguments may exist for shifting this conception; the world of technology has transformed completely since the ALI defined product liability in the Restatement (Second) of Torts.[28]  Still, considering this well-settled law, plaintiffs are likely to face an uphill battle.

 C.        Whose job is it anyway?

Lawsuits against Meta pose large societal questions about the role of courts and parents in ensuring minors’ safety.  Some advocates place the onus on companies themselves, urging top-down prevention of minors’ access to social media.[29]  Others emphasize the role of parents and families in preventing minors’ unsafe exposure to social media content;[30] parents, families, and communities may be in better positions than tech giants to know, understand, and combat the struggles that teens face.  Regardless of who is to blame, nearly everyone can agree that the problem needs to be addressed.


[1] In 2021, the Facebook Company changed its name to Meta. Meta now encompasses social media apps like WhatsApp, Messenger, Facebook, and Instagram. See Introducing Meta: A Social Technology Company, Meta (Oct. 28, 2021), https://about.fb.com/news/2021/10/facebook-company-is-now-meta/.

[2] Complaint at 1, Arizona v. Meta Platforms, Inc., 4:23-cv-05448 (N.D. Cal. Oct. 24, 2023) [hereinafter October 24 Complaint] (“[Meta’s] [p]latforms exploit and manipulate its most vulnerable users: teenagers and children.”).

[3] Id. at 23.

[4] Id. at 28.

[5] Id. at 41.

[6] Id. at 56.

[7] U.S. Surgeon General, Advisory: Social Media and Youth Mental Health 4 (2023).

[8] Id. at 5.

[9] Id. at 6.

[10] Id. at 7.

[11] Complaint at 2, Rodriguez v. Meta Platforms, Inc., 3:22-cv-00401 (Jan. 20, 2022) [hereinafter Rodriguez Complaint].

[12] Id.

[13] Complaint at 2, Heffner v. Meta Platforms, Inc., 3:22-cv-03849 (June 29, 2022).

[14] Id. at 13.

[15] 47 U.S.C.S. § 230 (LEXIS through Pub. L. No. 118-19).

[16] See, e.g., Dimeo v. Max, 433 F. Supp. 2d 523, 34 Media L. Rep. (BNA) 1921, 2006 U.S. Dist. LEXIS 34456 (E.D. Pa. 2006), aff’d, 248 Fed. Appx. 280, 2007 U.S. App. LEXIS 22467 (3d Cir. 2007). Dimeo is just one example of the strategy used repeatedly by Meta and other social media websites.

[17] Gonzalez v. Google LLC, ACLU, https://www.aclu.org/cases/google-v-gonzalez-llc#:~:text=Summary-,Google%20v.,content%20provided%20by%20their%20users (last updated May 18, 2023).

[18] Gonzalez v. Google LLC, 2 F.4th 871, 880–81 (9th Cir. 2021).

[19] Id. at 882.

[20] Id. at 881.

[21] Gonzalez v. Google LLC, 598 U.S. 617, 622 (2023).

[22] October 24 Complaint, supra note 2, at 145–98.

[23] Id. at 197.

[24] Id. at 1.

[25] Motion to Dismiss, Rodriguez v. Meta Platforms, Inc., 3:22-cv-00401 (June 24, 2022).

[26] Id.

[27] Id.

[28] Restatement (Second) of Torts § 402A (Am. L. Inst. 1965).

[29] Rachel Sample, Why Kids Shouldn’t Get Social Media Until They Are Eighteen, Medium (June 14, 2020), https://medium.com/illumination/why-kids-shouldnt-get-social-media-until-they-are-eighteen-2b3ef6dcbc3b.

[30] Jill Filipovic, Opinion: Parents, Get Your Kids Off Social Media, CNN (May 23, 2023, 6:10 PM), https://www.cnn.com/2023/05/23/opinions/social-media-kids-surgeon-general-report-filipovic/index.html.


By Mary Catherine Young

Last month, an Azerbaijani journalist was forced to deactivate her social media accounts after receiving sexually explicit and violent threats in response to a piece she wrote about Azerbaijan’s cease-fire with Armenia.[1] Some online users called for the Azerbaijan government to revoke columnist Arzu Geybulla’s citizenship—others called for her death.[2] Days later, an Irish man, Brendan Doolin, was criminally charged for online harassment of four female journalists.[3] The charges came on the heels of a three-year jail sentence rendered in 2019 based on charges for stalking six female writers and journalists online, one of whom reported receiving over 450 messages from Doolin.[4] Online harassment of journalists is pervasive on an international scale.

Online harassment of journalists abounds in the United States as well, with women bearing the brunt of the abuse.[5] According to a 2019 survey conducted by the Committee to Protect Journalists, 90 percent of female or gender-nonconforming American journalists said that online harassment is “the biggest threat facing journalists today.”[6] Fifty percent of those surveyed reported that they have been threatened online.[7] While online harassment plagues journalists around the world, the legal ramifications of such harassment are far from uniform.[8] Before diving into how the law can protect journalists from this abuse, it is necessary to explain what online harassment actually looks like in the United States.

In a survey conducted in 2017 by the Pew Research Center, 41 percent of 4,248 American adults reported that they had personally experienced harassing behavior online.[9] The same study found that 66 percent of Americans said that they have witnessed harassment targeted at others.[10] Online harassment, however, takes many shapes.[11] For example, people may experience “doxing” which occurs when one’s personal information is revealed on the internet.[12] Or, they may experience a “technical attack,” which includes harassers hacking an email account or preventing traffic to a particular webpage.[13] Much of online harassment takes the form of “trolling,” which occurs when “a perpetrator seeks to elicit anger, annoyance or other negative emotions, often by posting inflammatory messages.”[14] Trolling can encompass situations in which harassers intend to silence women with sexualized threats.[15]

The consequences of online harassment can be significant, causing mental distress and sometimes fear for one’s physical safety.[16] In the context of journalists, however, the implications of harassment commonly extend beyond the individual journalist—the free flow of information in the media is frequently disrupted by journalists’ fear of cyberbullying.[17] How legal systems punish those who harass journalists online varies greatly both internationally and domestically.[18]

For example, the United States provides several federal criminal and civil paths to recourse for victims of online harassment, though not specifically geared toward journalists.[19] In terms of criminal law, provisions protecting individuals against cyber-stalking are included in 18 U.S.C. § 2261A, which criminalizes stalking in general.[20] According to this statute, “[w]hoever . . . with the intent to kill, injure, harass, intimidate, or place under surveillance with intent to . . . harass, or intimidate another person, uses . . . any interactive computer service . . . [and] causes, attempts to cause, or would be reasonably expected to cause substantial emotional distress to a person . . .” may be imprisoned.[21] In terms of civil law, plaintiffs may be able to allege defamation or copyright infringement claims.[22] For example, when the harassment takes the form of sharing an individual’s self-taken photographs without the photographer’s consent, whether they are explicit or not, the circumstances may allow the victim to pursue a claim under the Digital Millennium Copyright Act.[23]

Some states provide their own criminal laws against online harassment, though states differ in whether these provisions appear in their anti-harassment legislation or their anti-stalking laws.[24] For example, Alabama,[25] Arizona,[26] and Hawaii[27] all provide for criminal prosecution of cyberbullying in their laws against harassment, whereas Wyoming,[28] California,[29] and North Carolina[30] include anti-online-harassment provisions in their laws against stalking.[31] North Carolina’s stalking statute, however, was recently held unconstitutional as applied under the First Amendment after a defendant was charged for a slew of Google Plus posts about his bizarre wish to marry the victim.[32] The North Carolina Court of Appeals’ decision in Shackelford seems to reflect a distinctly American reluctance, grounded in strong deference to First Amendment rights, to interfere with individuals’ ability to post freely online.

Other countries have taken more targeted approaches to legally protecting journalists from online harassment.[33] France, in particular, has several laws pertaining to cyberbullying and online harassment in general, and these laws have recently provided relief for journalists.[34] For example, in July 2018, two perpetrators were given six-month suspended prison sentences after targeting a journalist online.[35] The defendants subjected Nadia Daam, a French journalist and radio broadcaster, to months of online harassment after she condemned users of an online platform for harassing feminist activists.[36] Scholars who examine France’s willingness to prosecute perpetrators of online harassment against journalists and non-journalists alike point to the fact that while the country certainly holds freedom of expression in high regard, this freedom is held in check against other rights, including individuals’ right to privacy and “right to human dignity.”[37]

Some call for more rigorous criminalization of online harassment in the United States, particularly harassment of journalists, to reduce its potential to create a “crowding-out effect” that prevents genuinely helpful online speech from being heard.[38] It seems, however, that First Amendment interests may prevent many journalists from finding relief, at least for now.


[1] Aneeta Mathur-Ashton, Campaign of Hate Forces Azeri Journalist Offline, VOA (Jan. 8, 2021), https://www.voanews.com/press-freedom/campaign-hate-forces-azeri-journalist-offline.

[2] Id.

[3] Tom Tuite, Dubliner Charged with Harassing Journalists Remanded in Custody, The Irish Times (Jan. 18, 2021), https://www.irishtimes.com/news/crime-and-law/courts/district-court/dubliner-charged-with-harassing-journalists-remanded-in-custody-1.4461404.

[4] Brion Hoban & Sonya McLean, ‘Internet Troll’ Jailed for Sending Hundreds of Abusive Messages to Six Women, The Journal.ie (Nov. 14, 2019), https://www.thejournal.ie/brendan-doolin-court-case-4892196-Nov2019/.

[5] Lucy Westcott & James W. Foley, Why Newsrooms Need a Solution to End Online Harassment of Reporters, Comm. to Protect Journalists (Sept. 4, 2019), https://cpj.org/2019/09/newsrooms-solution-online-harassment-canada-usa/.

[6] Id.

[7] Id.

[8] See Anya Schiffrin, How to Protect Journalists from Online Harassment, Project Syndicate (July 1, 2020), https://www.project-syndicate.org/commentary/french-laws-tackle-online-abuse-of-journalists-by-anya-schiffrin-2020-07.

[9] Maeve Duggan, Online Harassment in 2017, Pew Rsch. Ctr. (July 11, 2017), https://www.pewresearch.org/internet/2017/07/11/online-harassment-2017/.

[10] Id.

[11] Autumn Slaughter & Elana Newman, Journalists and Online Harassment, Dart Ctr. for Journalism & Trauma (Jan. 14, 2020), https://dartcenter.org/resources/journalists-and-online-harassment.

[12] Id.

[13] Id.

[14] Id.

[15] Id.

[16] Duggan, supra note 9.

[17] Law Libr. of Cong., Laws Protecting Journalists from Online Harassment 1 (2019), https://www.loc.gov/law/help/protecting-journalists/compsum.php.

[18] See id. at 3–4; Marlisse Silver Sweeney, What the Law Can (and Can’t) Do About Online Harassment, The Atl. (Nov. 12, 2014), https://www.theatlantic.com/technology/archive/2014/11/what-the-law-can-and-cant-do-about-online-harassment/382638/.

[19] Hollaback!, Online Harassment: A Comparative Policy Analysis for Hollaback! 37 (2016), https://www.ihollaback.org/app/uploads/2016/12/Online-Harassment-Comparative-Policy-Analysis-DLA-Piper-for-Hollaback.pdf.

[20] 18 U.S.C. § 2261A.

[21] § 2261A(2)(b).

[22] Hollaback!, supra note 19, at 38.

[23] Id.; see also 17 U.S.C. §§ 1201–1332.

[24] Hollaback!, supra note 19, at 38–39.

[25] Ala. Code § 13A-11-8.

[26] Ariz. Rev. Stat. Ann. § 13-2916.

[27] Haw. Rev. Stat. § 711-1106.

[28] Wyo. Stat. Ann. § 6-2-506.

[29] Cal. Penal Code § 646.9.

[30] N.C. Gen. Stat. § 14-277.3A.

[31] Hollaback!, supra note 19, at 39 (providing more states that cover online harassment in their penal codes).

[32] State v. Shackelford, 825 S.E.2d 689, 701 (N.C. Ct. App. 2019), https://www.nccourts.gov/documents/appellate-court-opinions/state-v-shackelford. After meeting the victim once at a church service, the defendant promptly made four separate Google Plus posts in which he referenced the victim by name. Id. at 692. In one post, the defendant stated that “God chose [the victim]” to be his “soul mate,” and in a separate post wrote that he “freely chose [the victim] as his wife.” Id. After nearly a year of increasingly invasive posts in which he repeatedly referred to the victim as his wife, defendant was indicted by a grand jury on eight counts of felony stalking. Id. at 693–94.

[33] Law Libr. of Cong., supra note 17, at 1–2.

[34] Id. at 78–83.

[35] Id. at 83.

[36] Id.

[37] Id. at 78.

[38] Schiffrin, supra note 8.


By Christopher R. Taylor

On August 6th, President Trump issued Executive Order 13,942 (“TikTok Prohibition Order”) prohibiting transactions with ByteDance Ltd. (“ByteDance”), TikTok’s parent company, because of the company’s data collection practices regarding U.S. users and its close relationship with the People’s Republic of China (“PRC”).[1] Eight days later, President Trump issued a subsequent order (“Disinvestment Order”) calling for ByteDance to disinvest from Musical.ly, an application that was acquired by ByteDance and later merged with TikTok’s application.[2] TikTok is now engulfed in a legal battle against the Trump administration over both orders and was recently granted a partial preliminary injunction against the TikTok Prohibition Order.[3] However, the question remains: how successful will TikTok be in stopping the orders, and what effect will this dispute have on future cross-border transactions?

The foundation for President Trump’s TikTok orders was laid over a year earlier with Executive Order 13,873.[4] That order declared a national emergency under the International Emergency Economic Powers Act (“IEEPA”) because of the “unusual and extraordinary threat” of “foreign adversaries . . . exploiting vulnerabilities in information and communication technology services.”[5] The national emergency was renewed for another year on May 13th, 2020.[6] Shortly after this renewal, the Trump administration issued both TikTok orders.

The TikTok Prohibition Order delegated to the Secretary of Commerce the task of defining specific prohibited transactions with ByteDance within 45 days of the execution of the order.[7] Following the President’s directive, the Secretary issued five phased prohibitions on transactions with TikTok, all with the stated purpose of limiting TikTok’s spread of U.S. users’ sensitive personal information to the PRC.[8] The Department of Commerce implemented these prohibitions based primarily on two threats: (1) TikTok would share U.S. users’ personal data with the PRC to further efforts of espionage on the U.S. government, U.S. corporations, and U.S. persons, and (2) TikTok would use censorship on the application to shape U.S. users’ perceptions of the PRC.[9]

While the Trump administration was at work attempting to remove or substantially change TikTok’s U.S. presence, TikTok did not stand idly by. Instead, TikTok and ByteDance initiated an action challenging the Trump administration’s authority under the Administrative Procedure Act (“APA”) and the U.S. Constitution.[10] After filing the action in the U.S. District Court for the District of Columbia, TikTok moved for a preliminary injunction.[11] On September 27th, the court partially granted the preliminary injunction.[12]

Among the various arguments presented for the preliminary injunction, TikTok’s strongest was that the Trump administration’s actions violated APA § 706(2)(C) by exceeding the administration’s statutory authority under the IEEPA.[13] The IEEPA prohibits the President from “directly or indirectly” regulating “personal communication, which does not involve a transfer of anything of value” or the importation or exportation of “information or information materials.”[14] The IEEPA does not define “information materials,” but it does provide examples, including photographs, films, artworks, and news wire feeds.[15]

TikTok argued both of these exceptions applied, making the Trump administration’s prohibitions unlawful.[16] First, TikTok argued that the information exchanged by its global users includes art, films, photographs, and news.[17] Therefore, the information exchanged on TikTok fits within the definition of information materials.[18] Second, TikTok argued most of the communications exchanged on the application are among friends, and thus do not involve anything of value.[19]

The government countered that neither exception applied, contending for a narrower interpretation of the IEEPA exceptions.[20] First, the government argued the information-materials exception did not apply because the TikTok prohibitions regulate only “business-to-business economic transactions” and do not regulate the exchange of “information materials” by TikTok users themselves.[21] In the alternative, the government asserted Congress did not intend to create an exception so broad that it would allow foreign adversaries to control data services.[22] Second, the government argued that some communications on TikTok are of value to users and that, even if not all communications are of value to all users, they are of value to TikTok itself.[23] The government asserted that use of the application alone provides value to TikTok, placing the exchanged communications outside the IEEPA exception.[24]

In partially granting TikTok’s preliminary injunction, the court found both exceptions applied to TikTok.[25] First, the court held the content on TikTok’s application constitutes “information materials.”[26] Although the prohibitions regulate only economic transactions, they still indirectly regulate the exchange of “information materials.”[27] Thus, the Trump administration’s actions fell within the IEEPA exception barring even indirect regulation of information materials.[28]

Turning to the second exception, on value, the court recognized that some information on TikTok was of value.[29] However, it found the majority of the information provided no value to users.[30] Furthermore, the government’s argument regarding the value of communications to TikTok was at odds with congressional intent.[31] The court reasoned that if Congress had meant for courts to look at the value provided to the company, as opposed to the value provided to users, the exception would be read out of existence.[32]

After finding that both exceptions applied, the court concluded that irreparable harm to TikTok and the balance of equities supported partially granting the preliminary injunction.[33] However, the court refused to grant an injunction blocking the whole TikTok Prohibition Order because only one of the prohibitions posed an imminent threat to TikTok.[34] The injunction blocked only the prohibition on TikTok downloads and updates from online application stores and marketplaces, leaving the remaining four prohibitions unaffected.[35]

While it appears TikTok has won the first round of this legal dispute, the fight is likely far from over. In response to the grant of the partial preliminary injunction, the Department of Commerce explained it is prepared to “vigorously defend the . . . [Executive order] and the Secretary’s implementation efforts from legal challenges.”[36] This strong reaction suggests that further disputes over the merits of both executive orders are likely.

The current TikTok dispute and the Trump administration’s willingness to use the IEEPA will likely also have broader implications for cross-border transactions, especially those involving the People’s Republic of China or personal data. Since the IEEPA’s enactment in 1977, presidential use of the statute has become more frequent and broader in scope.[37] Presidential reliance on the IEEPA is therefore likely to continue to grow regardless of who holds the presidency. Furthermore, the Trump administration’s strong stance toward the PRC has exacerbated tensions and led to an uptick in investigations into cross-border deals with Chinese companies.[38] In-depth scrutiny of deals with Chinese companies will likely remain the norm, at least for the remainder of the Trump presidency. To avoid disputes similar to TikTok’s, dealmakers should obtain clearance from the Committee on Foreign Investment in the United States before completing any cross-border transaction, especially one involving the PRC or personal data.[39]


[1] Exec. Order No. 13,942, 85 Fed. Reg. 48,637 (Aug. 6, 2020).

[2] Order on the Acquisition of Musical.ly by ByteDance Ltd, 2020 Daily Comp. Pres. Doc. 608 (Aug. 14, 2020).

[3] TikTok, Inc. v. Trump, No. 1:20-cv-02658, 2020 U.S. Dist. LEXIS 177250, at *11, *26 (D.D.C. Sept. 27, 2020).

[4] Exec. Order No. 13,873, 84 Fed. Reg. 22,689 (May 15, 2019).

[5] Id.

[6] Notice on Continuation of the National Emergency with Respect to Securing the Information and Communications Technology and Services Supply Chain, 2020 Daily Comp. Pres. Doc. 361 (May 13, 2020).

[7] Exec. Order 13,942, at 48,638.

[8] See Identification of Prohibited Transactions to Implement Executive Order 13942 and Address the Threat Posed by TikTok and the National Emergency with Respect to the Information and Communications Technology and Services Supply Chain, 85 Fed. Reg. 60,061 (Sept. 24, 2020) (prohibiting new downloads and updates from the app-store; servers supporting TikTok in the U.S.; content delivery services used by TikTok; internet transit or peering agreements; and the use of TikTok code, services or functions). The Secretary set up a phased implementation of this order, making the app store ban effective September 20th, 2020, and the remaining four prohibitions effective November 12th, 2020. Id.

[9] Defendants’ Memorandum in Opposition to Plaintiffs’ Motion for a Preliminary Injunction at Ex. 1, TikTok, Inc. v. Trump, No. 1:20-cv-02658, 2020 U.S. Dist. LEXIS 177250 (D.D.C. Sept. 27, 2020).

[10] Complaint at 30–42, TikTok, Inc. v. Trump, No. 1:20-cv-02658, 2020 U.S. Dist. LEXIS 177250 (D.D.C. Sept. 27, 2020). The specific counts in the complaint include allegations of (1) violations of APA § 706(2)(A) and § 706(2)(E), (2) violations of the First Amendment right to free speech, (3) violations of the Due Process Clause of the Fifth Amendment, (4) ultra vires action under the IEEPA because there is no national emergency, (5) ultra vires action because the prohibitions restrict personal communications and information in violation of the IEEPA, (6) violation of the nondelegation doctrine under the IEEPA, and (7) violation of the Fifth Amendment Takings Clause. Id.

[11] TikTok, Inc. v. Trump, No. 1:20-cv-02658, 2020 U.S. Dist. LEXIS 177250, at *11–12 (D.D.C. Sept. 27, 2020).

[12] Id. at *26.

[13] See id. at *21. 

[14] 50 U.S.C. § 1702(b)(1), (3).

[15] Id. § 1702(b)(3).

[16] TikTok, 2020 U.S. Dist. LEXIS 177250, at *14.

[17] Id. at *15–16.

[18] Id. at *15.

[19] See id. at *20.

[20] See id. at *16, *17–18, *20.

[21] Id. at *16.

[22] Id. at *17–18.

[23] Id. at *20. The government’s argument was that value is provided to TikTok simply by users’ presence on the application. Id.

[24] Id.

[25] See id. at *20–21 (“Plaintiffs have demonstrated that they are likely to succeed on their claim that the prohibitions constitute indirect regulation of ‘personal communication[s]’ or the exchange of ‘information or information materials.’”).

[26] Id. at *16.

[27] Id. at *16–17.

[28] See id. at *17.

[29] See id. at *20.

[30] Id.

[31] Id.

[32] Id.

[33] Id. at *21–25.

[34] Id. at *26.

[35] Id. at *25–26.

[36] Commerce Department Statement on U.S. District Court Ruling on TikTok Preliminary Injunction, U.S. Dept. of Commerce (Sept. 27, 2020), https://www.commerce.gov/news/press-releases/2020/09/commerce-department-statement-us-district-court-ruling-tiktok.

[37] Christopher A. Casey et al., Cong. Rsch. Serv., R45618, The International Emergency Economic Powers Act: Origins, Evolution, and Use 17 (2020).

[38] See Julia Horowitz, Under Trump, the US Government Gives Many Foreign Deals a Closer Look, CNN (Mar. 16, 2018, 12:11 AM), https://money.cnn.com/2018/03/16/news/economy/trump-cfius-china-technology/index.html; Jeanne Whalen, TikTok was Just the Beginning: Trump Administration is Stepping Up Scrutiny of Past Chinese Tech Investments, Wash. Post. (Sept. 29, 2020, 3:12 PM), https://www.washingtonpost.com/technology/2020/09/29/cfius-review-past-chinese-investment/.

[39] See Adam O. Emmerich et al., Cross-Border M&A–2019 Checklist for Successful Acquisitions in the United States, Harv. L. Sch. F. on Corp. Governance (Jan. 30, 2019), https://corpgov.law.harvard.edu/2019/01/30/cross-border-ma-2019-checklist-for-successful-acquisitions-in-the-united-states/.

By Gabriel L. Marx

Donald Trump is once again at the center of a legal dispute. The Forty-Fifth President of the United States has been no stranger to legal controversies during and before his presidency,[1] but the latest update in Knight First Amendment Institute at Columbia University v. Trump[2] has President Trump petitioning for a writ of certiorari to the Supreme Court after more than three years of litigation.[3]  

The case began in July 2017 when the Knight First Amendment Institute at Columbia University (“Knight Institute”) filed a lawsuit against President Trump in federal court alleging that he violated the First Amendment by blocking Twitter users from his @realDonaldTrump account after they criticized his policies and presidency.[4] The U.S. District Court for the Southern District of New York found that Donald Trump, as President, exercised sufficient control over the Twitter account such that the @realDonaldTrump account was “susceptible to analysis under the Supreme Court’s [First Amendment] forum doctrines, and is properly characterized as a designated public forum.”[5] The district court then held that President Trump’s blocking of these Twitter users was discrimination based on the users’ viewpoints and impermissible under the First Amendment.[6] In July 2019, a three-judge panel of the U.S. Court of Appeals for the Second Circuit unanimously affirmed the district court’s decision,[7] and the Second Circuit, sitting en banc, denied rehearing in March of this year.[8] Despite his lack of success so far, President Trump’s administration has continued the fight against the Knight Institute: Acting Solicitor General Jeffrey Wall submitted a petition for a writ of certiorari to the Supreme Court at the end of August.[9]

The petition includes both legal and policy-based arguments about the importance of the case.[10] In terms of legal arguments, Solicitor General Wall argues that the Second Circuit wrongly concluded that (1) President Trump’s blocking of the Twitter users was a state action subject to the First Amendment rather than an act of a private citizen; (2) the @realDonaldTrump account was a designated public forum; and (3) the government-speech doctrine, which would exempt President Trump’s account from a First Amendment challenge, did not apply to President Trump’s actions.[11] Putting the legal arguments aside, Solicitor General Wall also argues, “the court of appeals’ decision . . . has important legal and practical implications that reach beyond the circumstances of this case.”[12] That is, public officials are “increasingly likely to maintain social media accounts to communicate their views, both personal and official,”[13] so if the Second Circuit’s decision were allowed to stand, it would significantly hinder the ability of these public officials to choose whom they interact with on their own accounts: a choice afforded to every other social media user.[14] According to the petition, this choice—or lack thereof—takes on even greater significance when the public official in question is the President of the United States.[15]

In response, the Knight Institute filed its brief in opposition on September 21.[16] The Knight Institute first argues that there is no reason for the Court to hear the case because the lower courts that have dealt with this issue all agree that public officials violate the First Amendment when they block critics from their social media accounts.[17] It additionally argues that the Second Circuit properly concluded that blocking users from the @realDonaldTrump account was state action rather than government speech and that the account itself is a public forum.[18] The Knight Institute also counters Solicitor General Wall’s policy-based arguments, asserting that the Second Circuit’s decision has not hindered, and will not hinder, the President’s or other public officials’ use of social media to communicate with the general public.[19] Finally, the Knight Institute maintains that the only cases in which the Court has granted certiorari solely due to presidential implications, absent a circuit split, are those that deal with “fundamental issues of executive power” (such as separation-of-power concerns), unlike the case at hand, which concerns only whether President Trump may block Twitter users from his @realDonaldTrump account.[20]

Given the procedural history, the above arguments, and the fact that the Court usually hears only cases that have “national significance, might harmonize conflicting decisions in the federal circuit courts, and/or could have precedential value,”[21] it seems unlikely that the Court will grant certiorari. Looking at the procedural history, the two lower courts agreed that President Trump violated the First Amendment (with the Second Circuit panel so holding unanimously).[22] The Court therefore has little incentive to take up a case that has already been decided so clearly unless, as Solicitor General Wall argues, the court of appeals erred in its conclusions. The Second Circuit, however, denied the petition for rehearing en banc,[23] so the decision has already been affirmed in some sense. Along similar lines, there is no conflict among federal circuit or district courts on the issue of public officials blocking users from their social media accounts, as the Knight Institute points out.[24] On the other hand, there has been an influx of cases dealing with this issue as of late,[25] so the Court might want to decide the issue once and for all to deter future litigation. Nevertheless, given that so many lower courts are in agreement on the issue, the Court probably will not wish to devote time and resources to a well-settled area of the law simply to deter future litigation, particularly because the case does not raise a question of traditional significance to executive authority, such as a separation-of-powers concern. As a final matter, neither the Court’s current makeup of Justices nor the projected addition of Amy Coney Barrett should have much effect on the decision-making process in light of the factors weighing so heavily against granting certiorari.

While it is unlikely that the Court will grant President Trump’s petition, the case would be interesting to watch unfold at the nation’s highest court if certiorari were granted. If heard, Knight First Amendment Institute at Columbia University could set precedent on the ever-prevalent issue of freedom of speech on social media, so it is certainly worth keeping an eye out for the Court’s decision on the petition for a writ of certiorari in the coming weeks.


[1] See Peter Baker, Trump Is Fighting So Many Legal Battles, It’s Hard to Keep Track, N.Y. Times (Nov. 6, 2019), https://www.nytimes.com/2019/11/06/us/politics/donald-trump-lawsuits-investigations.html.

[2] 302 F. Supp. 3d 541 (S.D.N.Y. 2018), aff’d, 928 F.3d 226 (2d Cir. 2019).

[3] See Tucker Higgins, White House Asks Supreme Court to Let Trump Block Critics on Twitter, CNBC (Aug. 20, 2020, 12:00 PM), https://www.cnbc.com/2020/08/20/white-house-asks-supreme-court-to-let-trump-block-critics-on-twitter.html.

[4] See Knight Institute v. Trump, Knight First Amendment Inst. at Colum. Univ., https://knightcolumbia.org/cases/knight-institute-v-trump (last visited Oct. 8, 2020).

[5] Knight Inst., 302 F. Supp. 3d at 580.

[6] Id.

[7] See Knight First Amendment Inst. at Colum. Univ. v. Trump, 928 F.3d 226 (2d Cir. 2019); Knight First Amendment Inst. at Colum. Univ., supra note 4.

[8] See Knight First Amendment Inst. at Colum. Univ. v. Trump, 953 F.3d 216 (2d Cir. 2020) (en banc); Knight First Amendment Inst. at Colum. Univ., supra note 4.

[9] See Petition for Writ of Certiorari, Knight First Amendment Inst. at Colum. Univ. v. Trump, No. 20-197 (Aug. 20, 2020), https://www.supremecourt.gov/DocketPDF/20/20-197/150726/20200820102824291_Knight%20First%20Amendment%20Inst.pdf.

[10] See id.

[11] Id. at 11–27.

[12] See id. at 27.

[13] See id. at 27–28.

[14] Id. at 28–29.

[15] See id. at 29.

[16] See Brief in Opposition, Knight Inst., No. 20-197 (Sept. 21, 2020), https://www.supremecourt.gov/DocketPDF/20/20-197/154505/20200921141934655_20-197%20BIO.pdf.

[17] See id. at 11–15.

[18] See id. at 15–28.

[19] See id. at 29.

[20] See id. at 30.

[21] Supreme Court Procedures, U.S. Cts., https://www.uscourts.gov/about-federal-courts/educational-resources/about-educational-outreach/activity-resources/supreme-1 (last visited Oct. 8, 2020).

[22] See supra notes 5–8 and accompanying text.

[23] See supra note 8 and accompanying text.

[24] See supra note 17 and accompanying text.

[25] See Petition for Writ of Certiorari, supra note 9, at 28 n.2 (noting six recent cases from around the country concerning public officials’ blocking social media users on their personal accounts).

By: Hanna Monson and Sarah Spangenburg

Introduction

One recent issue circulating in the legal world is whether schools can discipline students for social media posts. In January 2018, the University of Alabama expelled a nineteen-year-old freshman after she posted two videos of her racist rantings to her Instagram account.[1] Another user recorded and posted the video to Twitter, where it quickly went viral and stirred anger both on the University of Alabama campus and across the country. Because the University of Alabama is a public university, the student’s expulsion has raised questions about the constitutionality of dismissing a student for offensive speech. To further consider this constitutional issue, this post highlights some of the arguments made in a factually similar case, Keefe v. Adams (8th Cir. 2016).[2] There, the Eighth Circuit concluded that a college did not violate a student’s First Amendment or due process rights when it removed him from its nursing program over Facebook posts expressing frustration with other students in the program. While this Eighth Circuit case is the focus of our discussion, it is worth noting that a similar case arose in the Fifth Circuit, Bell v. Itawamba County School Board, in which that court also ruled against the student and held that his First Amendment free speech rights were not violated.[3]

Facts

Craig Keefe was a student in the Associate Degree Nursing Program at Central Lakes College (“CLC”).[4] Two students complained about posts Keefe made on his Facebook account.[5] After a meeting with CLC Director of Nursing Connie Frisch, during which “[Keefe] was defensive and did not seem to feel responsible or remorseful,” Frisch decided that Keefe should no longer be in the program.[6] In a letter sent to Keefe after the meeting, Frisch expressed concerns about Keefe’s professionalism and his inability to represent the nursing profession because of his posts.[7] All students enrolled in the program had to follow the Nurses Association Code of Ethics, which included guidance on issues such as “relationships with colleagues and others,” “professional boundaries,” and “wholeness of character.”[8] Keefe appealed the decision to the Vice President of Academic Affairs, Kelly McCalla, but the appeal was denied, prompting this lawsuit.[9]

First Amendment Claims

Keefe first contended that his First Amendment rights were violated because “a college student may not be punished for off-campus speech . . . unless it is speech that is unprotected by the First Amendment, such as obscenity.”[10] The Eighth Circuit first addressed the threshold question of whether a public college may even adopt this Code of Ethics.[11] The court held that the state has a substantial interest in regulating the health professions, and acknowledged that “[b]ecause professional codes of ethics are broadly worded, they can be cited to restrict protected speech.”[12]

The court then considered Keefe’s contention that the college violated his First Amendment rights. The court held that “college administrators and educators in a professional school have discretion to require compliance with recognized standards of the profession, both on and off campus, ‘so long as their actions are reasonably related to legitimate pedagogical concerns.’”[13] Keefe’s words showed that he was acting contrary to the Code of Ethics, and “compliance with the Nurses Association Code of Ethics is a legitimate part of the Associate Degree Nursing Program’s curriculum . . . .”[14] The posts targeted and threatened his classmates and impacted their education, as one of the students stated she no longer wished to be in the same clinical as Keefe.[15] Keefe’s words also had the potential to impact patient care because adequate patient care requires nurses to communicate and work together.[16] The court declined to interfere with Frisch’s discretion in deciding that Keefe’s actions showed he was not fit for the profession, and the First Amendment did not prevent Frisch from making this decision.[17] Because the district court had granted the defendant’s motion for summary judgment on the First Amendment claims, the Eighth Circuit affirmed.[18]

Due Process Claims

The second issue presented in this case was whether a violation of due process existed. Keefe argued that the Defendants violated his Fourteenth Amendment right to due process when he was removed from the Associate Degree Nursing Program.[19] Supreme Court precedent states that “federal courts can review an academic decision of a public educational institution under a substantive due process standard.”[20] One key inquiry is whether the removal was based on academic judgment that is not beyond the pale of reasoned academic decision making.[21] Even if a substantive due process claim is cognizable in these circumstances, there is no violation of substantive due process unless misconduct of government officials that violates a fundamental right is “so egregious, so outrageous, that it may fairly be said to shock the contemporary conscience” of federal judges.[22] Here, the court determined that Keefe’s removal rested on academic judgment that was not beyond the pale of reasoned academic decision making.[23] Ultimately, the court determined that Keefe had no substantive due process claim.[24]

The court also analyzed Keefe’s procedural due process claim. Citing Goss v. Lopez,[25] the Eighth Circuit highlighted that the Supreme Court has held that even a short disciplinary suspension requires that the student “be given oral or written notice of the charges against him and, if he denies them, an explanation of the evidence the authorities have and an opportunity to present his side of the story.”[26] The court determined that Keefe’s removal after a disciplinary proceeding provided the kind of inquiry that involved effective notice and allowed Keefe to give his version of events, thereby preventing erroneous action.[27] Ultimately, the court concluded that Keefe received the process he was due under the Fourteenth Amendment.

Conclusion

Ultimately, this issue presents free speech concerns for students. The decisions of the Eighth and Fifth Circuits suggest that students’ free speech rights stop at the schoolhouse door, which is in tension with much Supreme Court precedent. The prevalence of social media in today’s society ensures that this issue will persist, and the Supreme Court may one day weigh in.

****

[1] Marwa Eltagouri, She Was Expelled from College After Her Racist Chants Went Viral. Her Mother Thinks She Deserves It., Wash. Post (Jan. 19, 2018), https://www.washingtonpost.com/news/grade-point/wp/2018/01/19/she-was-expelled-from-college-after-her-racist-rants-went-viral-her-mother-thinks-she-deserves-it/?utm_term=.b0cd4c397d35.

[2] The full opinion can be found at: http://media.ca8.uscourts.gov/opndir/16/10/142988P.pdf.

[3] Mark Joseph Stern, Judges Have No Idea What to Do About Student Speech on the Internet, Slate (Feb. 18, 2016, 5:15 PM), http://www.slate.com/articles/technology/future_tense/2016/02/in_bell_v_itawamba_county_school_board_scotus_may_rule_on_the_first_amendment.html.

[4] Keefe v. Adams, 840 F.3d 523, 525 (8th Cir. 2016).

[5] Id. at 526.

[6] Id. at 526–27.

[7] Id. at 527–28.

[8] Id. at 528–29.

[9] Id. at 526, 529.

[10] Id. at 529.

[11] Id. at 529–30.

[12] Id. at 530.

[13] Id. at 531 (quoting Hazelwood Sch. Dist. v. Kuhlmeier, 484 U.S. 260, 273 (1988)).

[14] Id.

[15] Id. at 532.

[16] Id.

[17] Id. at 533.

[18] Id.

[19] Id. at 533.

[20] Regents of University of Michigan v. Ewing, 474 U.S. 214, 222 (1985).

[21] Keefe, 840 F.3d at 533–34.

[22] Cnty. of Sacramento v. Lewis, 523 U.S. 833, 847 n.8 (1998) (quotation omitted).

[23] Keefe, 840 F.3d at 534.

[24] Id.

[25] 419 U.S. 565, 581 (1975).

[26] Keefe, 840 F.3d at 535.

[27] Id.

By: Kristina Wilson

On Monday, March 20, 2017, the Fourth Circuit issued a published opinion in the civil case Grutzmacher v. Howard County. The Fourth Circuit affirmed the District Court for the District of Maryland’s grant of summary judgment in favor of the defendant, holding that the defendant’s termination of the plaintiffs did not violate the plaintiffs’ First Amendment free speech rights. The plaintiffs raised two arguments on appeal.

Facts and Procedural History

Prior to initiating this action, the plaintiffs worked for the defendant, the Howard County, Maryland Department of Fire and Rescue Services. In 2011, the defendant started drafting a Social Media Policy (“the Policy”) in response to a volunteer firefighter’s inflammatory and racially discriminatory social media posts that attracted negative media attention. The Policy prohibited employees from posting any statements that might be perceived as discriminatory, harassing, or defamatory, or that would impugn the defendant’s credibility. Additionally, in 2012, the defendant promulgated a Code of Conduct (“the Code”) that prohibited disrespectful conduct toward authority figures or the chain of command established by the defendant. Finally, the Code required employees to conduct themselves in a manner that reflected favorably on the defendant.

On January 20, 2013, one of the plaintiffs advocated killing “liberals” on his Facebook page while on duty for the defendant. The defendant asked the plaintiff to review the Policy and remove any postings that did not conform. Although the plaintiff maintained that he was in compliance with the Policy, he removed the January 20th posting. On January 23, 2013, the plaintiff posted a series of statements that accused the defendant of stifling his First Amendment rights. On February 17, 2013, the plaintiff also “liked” a Facebook post by a coworker that was captioned “For you, chief” and displayed a photo of an obscene gesture. Shortly thereafter, the defendant served the plaintiff with charges of dismissal and afforded the plaintiff an opportunity for a preliminary hearing on March 8, 2013. On March 14, 2013, the defendant terminated the plaintiff.

At the district court, the plaintiff argued that the defendant fired him in retaliation for his exercise of his First Amendment free speech rights and that the Policy and Code were facially unconstitutional for restricting employees’ free speech. The district court granted the defendant’s motion for summary judgment on the retaliation claims, holding that the plaintiff’s January 20th posts and “likes” were capable of disrupting the defendant’s ability to perform its duties and thus did not constitute protected speech. Similarly, the January 23rd post and February 17th “like” were not protected speech because they did not implicate a matter of public concern. In June of 2015, the defendant revised its Policy and Code to eliminate all the challenged provisions. As a result, the district court dismissed the plaintiff’s facial challenge as moot.

The Plaintiff’s Free Speech Rights Did Not Outweigh the Defendant’s Interest

In evaluating the plaintiff’s First Amendment retaliation claim, the Fourth Circuit applied the three-prong test from McVey v. Stacy, 157 F.3d 271 (4th Cir. 1998). Under McVey, a plaintiff must show the following: (i) that he was a public employee speaking on a matter of public concern, (ii) that his interest in speaking about a matter of public concern outweighed the government’s interest in providing effective and efficient services to the public, and (iii) that such speech was a “substantial factor” in the plaintiff’s termination. Id. at 277–78.

The first prong is satisfied when a plaintiff demonstrates that his speech involved an issue of social, political, or other interest to a community. Urofsky v. Gilmore, 216 F.3d 401, 406 (4th Cir. 2000) (en banc). To determine whether the issue was social, political, or of interest to a community, courts examine the speech’s content, context, and form in view of the entire record. Id. The Fourth Circuit concluded that at least some of the content of the plaintiff’s posts and “likes” involved matters of public concern because the public has an interest in the opinions of public employees. Although not all of the postings were of public concern, the Fourth Circuit advocated examining the entirety of the speech in context and therefore proceeded to the second prong of the McVey analysis.

The McVey Factors Weighed More Heavily in Favor of the Defendant

The Fourth Circuit next balanced the plaintiff’s interest in speaking about matters of public concern with the government’s interest in providing efficient and effective public services. The Fourth Circuit used the McVey multifactor test to weigh the following considerations: whether a public employee’s speech (1) impaired the maintenance of discipline by supervisors; (2) impaired harmony among coworkers; (3) damaged close personal relationships; (4) impeded the performance of the public employee’s duties; (5) interfered with the operation of the institution; (6) undermined the mission of the institution; (7) was communicated to the public or to coworkers in private; (8) conflicted with the responsibilities of the employee within the institution; and (9) abused the authority and public accountability that the employee’s role entailed. McVey, 157 F.3d at 278.

The Fourth Circuit held that all of the factors weighed in favor of the defendant. The first factor was satisfied because the plaintiff was a battalion chief, a leadership position, and allowing the plaintiff to violate the Policy and Code without repercussions would encourage others to engage in similar violations. The second and third factors weighed in the defendant’s favor because several minority firefighters issued complaints and refused to work with the plaintiff after the posts. Similarly, the fourth factor weighed in the government’s favor because of the plaintiff’s responsibilities as a leader. The plaintiff’s leadership duties depended on his subordinates taking him seriously and looking to him as an example. By violating the policies he was supposed to uphold, the plaintiff failed to act as a leader and carry out his duties as battalion chief. Finally, the plaintiff’s actions also “undermined community trust” by advocating violence against certain groups of people. Community trust and preventing violence are central to the defendant’s mission because the defendant’s function is to protect the community. Therefore, although the plaintiff’s speech did involve some matters of public concern, those matters were not of sufficient gravity to outweigh the nine factors of the McVey multifactor test. Thus, the government’s interest in effectively providing public services outweighed the plaintiff’s interest in speech about matters of public concern.

The District Court’s Dismissal of the Facial Challenge on Mootness Grounds Was Proper

While the defendant repealed all the challenged sections of the Policy and Code, a party’s voluntary repeal of provisions can moot an action only if the wrongful behavior can reasonably be expected not to recur. The Fourth Circuit affirmed the district court’s dismissal of the facial challenge for mootness because the current Fire Chief issued a sworn affidavit asserting that the defendant will not revert to the former Policy or Code. Additionally, the defendant’s counsel declared at oral argument that the defendant has no hint of an intent to return to the former guidelines. The Fourth Circuit held that these formal declarations were sufficient to meet the defendant’s mootness burden.

Conclusion

The Fourth Circuit affirmed both the district court’s grant of summary judgment and its grant of a motion to dismiss on mootness grounds.