Daria Brown

The ever-growing prevalence of deepfake technology presents significant concerns surrounding privacy, democracy, and the ability of public figures to safeguard their reputations.[1] To complicate matters further, deepfake content creators can easily cloak themselves in anonymity.[2] Victims seeking to have deepfake content removed from social media therefore cannot do so unless the platforms remove the content at their request. At present, these platforms have no legal obligation to do so because, under Section 230 of the Communications Decency Act of 1996 (“CDA”),[3] operators of social media platforms are not liable for content posted by third parties. As Chief Judge Wilkinson of the Fourth Circuit Court of Appeals explained in Zeran v. America Online, Inc. just one year after the CDA was enacted, “§ 230 precludes courts from entertaining claims that would place a computer service provider in a publisher’s role. Thus, lawsuits seeking to hold a service provider liable for its exercise of a publisher’s traditional editorial functions—such as deciding whether to publish, withdraw, postpone or alter content—are barred.”[4] As a result, victims cannot hold anyone accountable for the creation, dissemination, and spread of deepfake content featuring their image and/or voice.

In light of this growing threat, Congress must act. The simplest resolution would be for Congress to enact a federal statute prohibiting creators of deepfake content from using another person’s image, likeness, and/or voice without that person’s consent. Such a law could be modeled after state right of publicity statutes, which generally protect “the inherent right of every human being to control the commercial use of his or her identity.”[5] Further, Congress should designate the statute as an intellectual property law, allowing it to fit squarely within the CDA’s immunity carve-out in § 230(e)(2) for “any law pertaining to intellectual property.”[6] In so doing, Congress would create an avenue for victims to seek a remedy directly from deepfake content creators or, alternatively, from operators of social media platforms that refuse to take down the illegal content.

What Is a Deepfake?

On March 22, 2024, the Arizona Agenda (“Agenda”), a local newsletter that reports on state political matters, released a video portraying Kari Lake (“Lake”), a Republican Senate candidate from Arizona.[7] Although Lake is often the subject of the Agenda’s criticism, in the video she can be seen praising the Agenda for its “hard-hitting real news” and urging viewers to subscribe.[8] Except it is not Lake at all. The video is the product of technology that learned Lake’s facial expressions and movements by extracting information from millions of publicly available data points and superimposing that compiled likeness onto another person’s body,[9] making it appear as if Lake herself were speaking.

Deepfake technology has been used to create digitally manipulated videos in a variety of contexts since 2017, but its roots lie in pornography.[10] The first major deepfake creations were obscene videos that spread on social media, depicting female celebrities’ faces on pornographic actresses’ bodies.[11] Between 2018 and 2019, the number of deepfakes available online doubled,[12] and in 2020 the number of deepfake videos grew to six times the 2019 figure.[13]

Examples of deepfakes range from the amusing to the abhorrent. On the more amusing end of the spectrum is the use of the technology to “bring back” Peter Cushing’s Grand Moff Tarkin and Carrie Fisher’s Princess Leia in Rogue One: A Star Wars Story.[14] On the other end are the videos depicting celebrities in pornography, as well as content depicting activists or political candidates delivering messages they do not actually support, misinforming their audiences. For example, a doctored image of gun control activist and Parkland high school shooting survivor Emma González ripping apart the U.S. Constitution went viral in 2018.[15] In reality, she was tearing a gun range target in half.[16] In sum, deepfake content, especially content created without the consent of those depicted, is cause for concern in numerous areas of modern life. Given the risk of manipulation and misinformation in the upcoming presidential election, the time to address the threat of deepfakes is now.

How Can the Problem Be Fixed?

Without congressional intervention, victims will remain unable to seek effective legal recourse for content depicting them doing or saying things they never did or said.[17] At present, only a handful of laws address deepfakes, including the National Defense Authorization Act for Fiscal Year 2020 and the Identifying Outputs of Generative Adversarial Networks Act.[18] Both target deepfakes aimed at election interference, but no federal law affords victims of deepfake content an avenue for recovery.[19] The main obstacle is the prevalence of anonymous posters; a law that allowed recovery only from the posters of deepfake content would therefore be insufficient.

One way to address this issue would be for Congress to enact a federal law protecting the right of publicity against violations committed through deepfake technology and to designate that law as one pertaining to intellectual property. This would not only give victims who know the identity of the content’s original creator an avenue to recover from that person directly, but would also allow victims to pursue legal action against social media platforms that fail to remove the illegal content. That is why the intellectual property designation is crucial: without it, social media platforms are not liable for content posted to their platforms by third parties. Because this problem is likely to keep growing as the technology evolves, the proposed solution is unlikely to be the only necessary step. However, as with any marathon, this one must be run one step at a time.


[1] Alyssa Ivancevich, Deepfake Reckoning: Adapting Modern First Amendment Doctrine to Protect Against the Threat Posed to Democracy, 49 Hastings Const. L.Q. 61, 63 (2022).

[2] Elizabeth Caldera, “Reject the Evidence of Your Eyes and Ears”: Deepfakes and the Law of Virtual Replicants, 50 Seton Hall L. Rev. 177, 191 (2019).

[3] 47 U.S.C. § 230.

[4] Zeran v. Am. Online, Inc., 129 F.3d 327, 330 (4th Cir. 1997).

[5] Joshua Dubnow, Ensuring Innovation as the Internet Matures: Competing Interpretations of the Intellectual Property Exception to the Communications Decency Act Immunity, 9 Nw. J. Tech. & Intell. Prop. 297, 298 (2010).

[6] 47 U.S.C. § 230(e)(2).

[7] Hank Stephenson, Kari Lake Does Us a Solid, Ariz. Agenda (Mar. 22, 2024), https://arizonaagenda.substack.com/p/kari-lake-does-us-a-solid.

[8] Id.

[9] Lindsey Wilkerson, Still Waters Run Deep(Fakes): The Rising Concerns of “Deepfake” Technology and Its Influence on Democracy and the First Amendment, 86 Mo. L. Rev. 407, 409 (2021).

[10] Id.

[11] Id.

[12] Id.

[13] Natalie Lussier, Nonconsensual Deepfakes: Detecting and Regulating This Rising Threat to Privacy, 58 Idaho L. Rev. 352, 354 (2022).

[14] Corey Chichizola, Rogue One Deepfake Makes Star Wars’ Leia and Grand Moff Tarkin Look Even More Lifelike, Cinema Blend (Dec. 9, 2020), https://www.cinemablend.com/news/2559935/rogue-one-deepfake-makes-star-wars-leia-and-grand-moff-tarkin-look-even-more-lifelike.

[15] Alex Horton, A Fake Photo of Emma González Went Viral on the Far Right, Where Parkland Teens Are Villains, Wash. Post (Mar. 26, 2018, 7:19 AM), https://www.washingtonpost.com/news/the-intersect/wp/2018/03/25/a-fake-photo-of-emma-gonzalez-went-viral-on-the-far-right-where-parkland-teens-are-villains/.

[16] Id.

[17] Caldera, supra note 2, at 191.

[18] Lussier, supra note 13, at 367.

[19] Id.