By Daria Brown

The ever-growing prevalence of deepfake technology raises significant concerns for privacy, democracy, and the ability of public figures to safeguard their reputations.[1] To complicate matters further, deepfake content creators can easily cloak themselves in anonymity.[2] Victims who want deepfake content removed from social media are therefore left dependent on the platforms’ willingness to take it down. At present, the platforms have no legal obligation to do so: under Section 230 of the Communications Decency Act of 1996 (“CDA”),[3] operators of social media platforms are not liable for content posted by third parties. As Chief Judge Wilkinson of the Fourth Circuit Court of Appeals explained in Zeran v. America Online, Inc. just one year after the CDA was enacted, “§ 230 precludes courts from entertaining claims that would place a computer service provider in a publisher’s role. Thus, lawsuits seeking to hold a service provider liable for its exercise of a publisher’s traditional editorial functions—such as deciding whether to publish, withdraw, postpone or alter content—are barred.”[4] As a result, victims cannot hold anyone accountable for the creation and dissemination of deepfake content featuring their image and/or voice.

In light of this growing threat, Congress must act. The simplest resolution would be for Congress to enact a federal statute prohibiting creators of deepfake content from using another person’s image, likeness, and/or voice without consent. This law could be modeled on state right of publicity statutes, which generally protect “the inherent right of every human being to control the commercial use of his or her identity.”[5] Further, Congress should designate the statute as an intellectual property law, allowing it to fit squarely within the CDA’s immunity carve-out in § 230(e)(2) for “any law pertaining to intellectual property.”[6] In so doing, Congress would create an avenue for victims to seek a remedy directly from deepfake content creators or, alternatively, from operators of social media platforms that refuse to take down the illegal content.

What is a Deepfake?

On March 22, 2024, the Arizona Agenda (“Agenda”), a local newsletter that reports on state political matters, released a video portraying Kari Lake (“Lake”), a Republican Senate candidate from Arizona.[7] Despite often being the subject of the Agenda’s denigration, in the video Lake can be seen praising the Agenda for its “hard-hitting real news” and urging viewers to subscribe.[8] Except it isn’t Lake. The video is the product of technology that learned Lake’s facial expressions and movements from millions of publicly available data points and superimposed that compiled information onto another person’s body,[9] making it appear as if Lake herself were speaking.

Deepfake technology has been used to create digitally manipulated videos in a variety of contexts since 2017, but its roots are found in pornography.[10] The first major deepfake creations were obscene videos that began spreading on social media, depicting female celebrities’ faces superimposed on pornographic actresses’ bodies.[11] Between 2018 and 2019, the number of deepfakes available online doubled.[12] In 2020, the number of deepfake videos increased to six times that of 2019.[13]

Examples of deepfakes range from amusing to abhorrent. On the more amusing end of the spectrum is the use of the technology to “bring back” Peter Cushing’s Grand Moff Tarkin and Carrie Fisher’s Princess Leia in Rogue One: A Star Wars Story.[14] On the other end of the spectrum are videos like the ones depicting celebrities in pornography, and others depicting activists or political candidates delivering messages that they do not actually support, misinforming their supporters. For example, a video surfaced in 2018 of gun control activist and Parkland high school shooting survivor Emma González ripping apart the U.S. Constitution.[15] In reality, she was tearing a gun range target in half.[16] In sum, deepfake content, especially content created without the consent of those depicted, is cause for concern in numerous areas of modern life. Given the potential for manipulation and misinformation in the upcoming presidential election, the time to address the threat of deepfakes is now.

How Can the Problem Be Fixed?

Without Congressional intervention, victims will remain unable to seek effective legal recourse for content depicting them doing or saying things they never did.[17] At present, there are only a handful of laws addressing deepfakes, including the National Defense Authorization Act for Fiscal Year 2020 and the Identifying Outputs of Generative Adversarial Networks Act.[18] Both target deepfakes aimed at election interference; no federal law gives victims of deepfake content an avenue for recovery.[19] The main obstacle is the prevalence of anonymous posters. Thus, a law that merely allowed recovery from the posters of deepfake content would be insufficient.

One way to address this issue would be for Congress to enact a federal law protecting a right of publicity violated through the use of deepfake technology and to designate that law as one pertaining to intellectual property. This would not only give victims who know the identity of the content’s original creator an avenue to recover from that creator directly, but would also allow victims to pursue legal action against social media platforms that fail to remove the illegal content. That is why the intellectual property designation is crucial: without it, social media platforms are not liable for content posted to their platforms by third parties. Because this problem is likely to keep growing as the technology evolves, the proposed solution is unlikely to be the only necessary step. But as with any marathon, this one must be run one step at a time.


[1] Alyssa Ivancevich, Deepfake Reckoning: Adapting Modern First Amendment Doctrine to Protect Against the Threat Posed to Democracy, 49 Hastings Const. L.Q. 61, 63 (2022).

[2] Elizabeth Caldera, “Reject the Evidence of Your Eyes and Ears”: Deepfakes and the Law of Virtual Replicants, 50 Seton Hall L. Rev. 177, 191 (2019).

[3] 47 U.S.C. § 230.

[4] Zeran v. Am. Online, Inc., 129 F.3d 327, 330 (4th Cir. 1997).

[5] Joshua Dubnow, Ensuring Innovation as the Internet Matures: Competing Interpretations of the Intellectual Property Exception to the Communications Decency Act Immunity, 9 Nw. J. Tech. & Intell. Prop. 297, 298 (2010).

[6] 47 U.S.C. § 230(e)(2).

[7] Hank Stephenson, Kari Lake Does Us a Solid, Ariz. Agenda (Mar. 22, 2024), https://arizonaagenda.substack.com/p/kari-lake-does-us-a-solid.

[8] Id.

[9] Lindsey Wilkerson, Still Waters Run Deep(Fakes): The Rising Concerns of “Deepfake” Technology and Its Influence on Democracy and the First Amendment, 86 Mo. L. Rev. 407, 409 (2021).

[10] Id.

[11] Id.

[12] Id.

[13] Natalie Lussier, Nonconsensual Deepfakes: Detecting and Regulating This Rising Threat to Privacy, 58 Idaho L. Rev. 352, 354 (2022).

[14] Corey Chichizola, Rogue One Deepfake Makes Star Wars’ Leia and Grand Moff Tarkin Look Even More Lifelike, CinemaBlend (Dec. 9, 2020), https://www.cinemablend.com/news/2559935/rogue-one-deepfake-makes-star-wars-leia-and-grand-moff-tarkin-look-even-more-lifelike.

[15] Alex Horton, A Fake Photo of Emma González Went Viral on the Far Right, Where Parkland Teens Are Villains, Wash. Post (Mar. 26, 2018, 7:19 AM), https://www.washingtonpost.com/news/the-intersect/wp/2018/03/25/a-fake-photo-of-emma-gonzalez-went-viral-on-the-far-right-where-parkland-teens-are-villains/.

[16] Id.

[17] Caldera, supra note 2, at 191.

[18] Lussier, supra note 13, at 367.

[19] Id.

By Tom Budzyn

On February 8, 2024, the Federal Communications Commission (“FCC”) issued a unanimous declaratory ruling providing agency guidance on the applicability of the Telephone Consumer Protection Act (“TCPA”) to unwanted and illegal robocalls that use artificial intelligence.[1] In this ruling, the FCC stated its view that unwanted spam and robocalls employing artificial intelligence violate existing consumer protections.[2] The FCC’s analysis focused on protecting consumers from the novel and unpredictable threats posed by artificial intelligence.[3] The ruling may be a harbinger of things to come, as other agencies (and various tribunals) are forced to consider the applicability of older consumer protection laws to the unique challenge of artificial intelligence.[4] Because federal agencies are often consumers’ first line of defense against predation,[5] the onus is on them to react to the dangers posed by artificial intelligence.

The FCC considered the TCPA, passed in 1991, which prohibits the use of “artificial” or “prerecorded” voices to call any residential phone line unless the recipient has previously consented to receiving such a call.[6] This blanket prohibition applies unless there is an applicable statutory exception or the call is otherwise exempted by an FCC rule or order.[7] However, the statute does not define what an “artificial” or “prerecorded” voice is.[8] Thus, on November 16, 2023, the FCC solicited comments from the public on the applicability of the TCPA to artificial intelligence in response to the technology’s rapid and ongoing development.[9] In its preliminary inquiry, the FCC noted that some artificial intelligence-based technologies, such as voice cloning,[10] facially appear to violate the TCPA.[11]

Following this initial inquiry, the FCC confirmed its original belief that phone calls made using artificial intelligence-generated voices without the prior consent of the recipient violate the TCPA.[12] In doing so, the FCC looked to the rationale underlying the TCPA and its immediate applicability to artificial intelligence.[13] As a consumer protection statute, the TCPA safeguards phone users from deceptive, misleading, and harassing phone calls.[14] Artificial intelligence, and the almost limitless technological possibilities it offers,[15] presents a uniquely dangerous threat to consumers. While most phone users today are well-equipped to recognize and deal with robocalls or unwanted advertisements, they are far less prepared for the shock of hearing the panicked voice of a loved one asking for help.[16] Pointing to these severe dangers, the FCC found that the TCPA must extend to artificial intelligence to adequately protect consumers.[17]

As a result, the FCC contemplates future enforcement of the TCPA against callers who use artificial intelligence technology without the prior consent of call recipients.[18] The threat of enforcement looms large: twenty-six state attorneys general wrote to the FCC in support of the decision, and, more impressively, there is almost unanimous accord among the state attorneys general in their understanding of this law.[19]

It is worth noting that the FCC’s ruling is possibly not legally binding.[20] The ruling serves to explain the agency’s interpretation of the TCPA and, as such, is not necessarily binding on the agency itself.[21] Moreover, the possible downfall of Chevron would mean that the FCC’s interpretation of the TCPA would likely be afforded little, if any, deference.[22] Legal technicalities notwithstanding, the FCC’s common-sense declaratory ruling states the obvious: unsolicited phone calls using artificial intelligence-generated voices are covered by the TCPA’s prohibition on “artificial” or “prerecorded” voices.[23] If there was any doubt before that callers should avoid using artificial intelligence without the consent of call recipients, it is gone now.

Perhaps the most interesting part of the FCC’s ruling is its straightforward application of the facts to the law. Other federal agencies will certainly be asked to perform similar analyses as artificial intelligence becomes ever more ubiquitous. In the TCPA context, the analysis is straightforward; it is much less so under other consumer protection statutes.[24] For example, 15 U.S.C. § 45 authorizes the Federal Trade Commission (“FTC”) to prevent “persons, partnerships, or corporations” from using unfair methods of competition affecting commerce or unfair or deceptive acts affecting commerce.[25] Unsurprisingly, “person” is not defined by the statute,[26] as the law was originally enacted in 1914.[27] If the statute remains in its current form, it could exclude artificial intelligence from one of the most obvious consumer protections in the modern United States. While artificial intelligence has not been recognized as a person in other contexts,[28] it should be recognized as such where it can do as much harm as, if not more than, a person could.

This statute is only one of many traditional consumer protection statutes that, as written, may not adequately protect consumers from the dangers of artificial intelligence.[29] While amending the law is certainly possible, legislative gridlock and inherent delays place greater importance on agencies responding proactively to artificial intelligence developments. The FCC’s ruling is a step in the right direction, a sign that agencies will not wait for artificial intelligence to run rampant before seeking to rein it in. Hopefully, other agencies will follow suit and issue similar guidance, using existing laws to protect consumers from new threats.


[1] F.C.C., CG Docket No. 23-362, Declaratory Ruling (2024) [hereinafter F.C.C. Ruling].

[2] Id.

[3] Id.

[4] Fed. Trade Comm’n, FTC Chair Khan and Officials from DOJ, CFPB, and EEOC Release Joint Statement on AI (2023), https://www.ftc.gov/news-events/news/press-releases/2023/04/ftc-chair-khan-officials-doj-cfpb-eeoc-release-joint-statement-ai.

[5] See, e.g., J. Harvie Wilkinson III, Assessing the Administrative State, 32 J.L. & Pol. 239 (2017) (discussing the modern administrative state and its goals, including stabilizing financial institutions, making homes affordable, and protecting the rights of employees to unionize).

[6] Telephone Consumer Protection Act of 1991, 47 U.S.C. § 227.

[7] Id.

[8] Id.

[9] F.C.C. Ruling, supra note 1.

[10] See Fed. Trade Comm’n, Preventing the Harms of AI-enabled Voice Cloning (2023), https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2023/11/preventing-harms-ai-enabled-voice-cloning.

[11] F.C.C. Ruling, supra note 1.

[12] Id.

[13] Id.

[14] See Telephone Consumer Protection Act of 1991, 47 U.S.C. § 227.

[15] See, e.g., Cade Metz, What’s the Future for AI?, N.Y. Times (Mar. 31, 2023).

[16] Ali Swenson & Will Weissert, New Hampshire Investigating Fake Biden Robocall Meant to Discourage Voters Ahead of Primary, Associated Press (Jan. 22, 2024), https://apnews.com/article/new-hampshire-primary-biden-ai-deepfake-robocall-f3469ceb6dd613079092287994663db5.

[17] F.C.C. Ruling, supra note 1.

[18] Id.

[19] Id.

[20] Azar v. Allina Health Servs., 139 S. Ct. 1804, 1811 (2019) (explaining that interpretive rules, which are exempt from notice and comment requirements under the Administrative Procedure Act, “merely advise” the public of the agency’s interpretation of a statute).

[21] Chang Chun Petrochemical Co. v. United States, 37 Ct. Int’l Trade 514, 529 (2013) (“Unlike a statute or regulations promulgated through notice and comment procedures, an agency’s policy is not binding on itself.”).

[22] See generally Caleb B. Childers, The Major Question Left for the Roberts Court: Will Chevron Survive?, 112 Ky. L.J. 373 (2023).

[23] F.C.C. Ruling, supra note 1.

[24] See 15 U.S.C. §§ 1601–1616 (consumer credit cost disclosure statute defines “person” as a “natural person” or “organization”).

[25] 15 U.S.C. § 45.

[26] Id.

[27] Id.  

[28] See Thaler v. Hirshfeld, 558 F. Supp. 3d 328 (E.D. Va. 2021) (affirming the United States Patent and Trademark Office’s finding that the term “individual” in the Patent Act refers only to natural persons, and thus artificial intelligence cannot be considered an inventor of patented technology).

[29] See, e.g., 15 U.S.C. §§ 1601–1616, supra note 24.

By Ivey Fidelibus

A few times within each generation, an invention is so novel and powerful that it changes not only a certain profession or specific task, but society itself.[1]  For example, consider the debut of Facebook in 2004 and the ensuing rise of social media[2] or Apple’s launch of the first iPhone in 2007.[3]  Now, a new technology has arisen that threatens to change the way we learn, research, communicate, and work.

ChatGPT is a “deep-learning software that can generate new text once trained on massive amounts of existing written material.”[4]  In plain words, ChatGPT engages in human-like conversation in response to user prompts.  It was released by OpenAI, a San Francisco AI research and deployment company whose self-proclaimed mission is to “ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.”[5]  For a few examples of ChatGPT’s functionality, I had ChatGPT respond to the following prompts, with answers ranging in novelty and accuracy:

“write a song in the style of dolly parton about the rule against perpetuities,”[6] “write a short judicial opinion in the style of chief justice john marshall about whether a hot dog is a sandwich,”[7] and “write a law school essay exam question for a decedents’ estates and trusts class about the rule against perpetuities”[8]

An avalanche of discourse has fallen describing how ChatGPT may cause, or contribute to, the demise of high school English classes,[9] college essays,[10] and democracy.[11]  ChatGPT will also inevitably affect the teaching and practice of law, given that the legal profession hinges on effective written communication, such as motions, briefs, research memoranda, and contracts.  As to law school, ChatGPT already seems both to aid and to hinder learning.[12]  As demonstrated above, ChatGPT can produce both questions to test student knowledge and plagiarized answers to student assessments.  However, educators may be able to fight AI-enabled plagiarism with artificial intelligence programs of their own that detect when text is written by chatbots.[13]

As to the practice of law, the future is completely uncertain, and questions remain unanswered about ChatGPT’s efficacy.[14]  Though the chatbot was able to pass certain subjects within the multiple-choice multistate section of the bar exam,[15] ChatGPT is, as of now, severely limited.  OpenAI describes some of the chatbot’s limitations as: “writ[ing] plausible-sounding but incorrect or nonsensical answers,” “often [being] excessively verbose and overus[ing] certain phrases,” and “guess[ing] what the user intended,” instead of asking “clarifying questions.”[16]  Another critical issue for lawyers is ChatGPT’s inability to “explain the source(s) of the information it provides.”[17]  Further, the chatbot “is not connected to the internet” and has “limited knowledge of world and events after 2021,”[18] so even if ChatGPT did cite its sources, an attorney would risk excluding the most recent precedents and could not fully Shepardize the sources cited.

Further, even if ChatGPT could produce sufficiently accurate, complex legal writing, attorneys must tread carefully to avoid a breach of confidentiality.  “The attorney-client privilege is recognized in every state and federal jurisdiction in the United States . . . with over five hundred years of recognition at common law.”[19]  This privilege is based in part on “the attorney’s moral duty to maintain confidentiality of the client relationship or respect for autonomy of the client through protection of the fiduciary nature of the attorney’s role.”[20]  Therefore, attorneys have the “duty to protect the confidentiality of the communications to preserve the privilege.”[21]  Attorneys face a range of consequences for violating the ethical requirements of confidentiality, including discipline through state disciplinary processes, malpractice claims by the client against the attorney, and waiver of the attorney-client privilege or work-product protection.[22]

Attorneys already risk breaches of confidentiality when using common Internet programs and websites like Google because of the rise of data monitoring and tracking.[23]  ChatGPT elevates these risks, in part because its users will likely need to input a greater volume and specificity of data to produce a well-informed, complex AI output.  Further, OpenAI openly admits that the company “review[s] conversations to improve [its] systems and to ensure the content complies with [its] policies and safety requirements” and that users’ “conversations may be reviewed by [its] AI trainers to improve [its] systems.”[24]  OpenAI also makes clear that specific prompts cannot be deleted from users’ histories, and the company pleads with users: “[p]lease don’t share any sensitive information in [their] conversations.”[25]

Therefore, even if ChatGPT produced perfect outputs, data-storing AI programs would still likely be unusable because they would require attorneys to input sensitive information into a nonconfidential forum.  It seems that ChatGPT, Esq., will have to wait.


[1] See James Fallows, The 50 Greatest Breakthroughs Since the Wheel, Atlantic (Nov. 2013), https://www.theatlantic.com/magazine/archive/2013/11/innovations-list/309536/.

[2] See Sarah Phillips, A Brief History of Facebook, Guardian (July 25, 2007), https://www.theguardian.com/technology/2007/jul/25/media.newmedia.

[3] See Rani Molla, How Apple’s iPhone Changed the World: 10 Years in 10 Charts, Vox (June 26, 2017), https://www.vox.com/2017/6/26/15821652/iphone-apple-10-year-anniversary-launch-mobile-stats-smart-phone-steve-jobs.

[4] Ian Bogost, ChatGPT Is Dumber Than You Think, Atlantic (Dec. 7, 2022), https://www.theatlantic.com/technology/archive/2022/12/chatgpt-openai-artificial-intelligence-writing-ethics/672386/.

[5] OpenAI, About, https://openai.com/about/ (last visited Jan. 18, 2023).

[6] “Verse 1:

In the land of property law, there’s a rule that you should know

It’s the rule against perpetuities, and it’s been around for centuries

It says that an interest in land can’t last forever, it’s got to end

Otherwise it’s a perpetuity, and that’s not how it should bend

 

Chorus:

So let the land be free, for future generations to see

Don’t tie it up for all time, let it breathe, let it shine

The rule against perpetuities, it’s there for a reason

To keep the land from being locked up, without any good reason

 

Verse 2:

Some folks might try to get around it, but they’ll find it’s not so easy

The rule’s been tested and true, it’s not something you can breezy

So if you’re thinking about a trust, or a lease that never ends

Just remember the rule against perpetuities, and how it bends

 

Chorus:

So let the land be free, for future generations to see

Don’t tie it up for all time, let it breathe, let it shine

The rule against perpetuities, it’s there for a reason

To keep the land from being locked up, without any good reason

 

Outro:

So let’s all respect the rule, and keep the land in motion

For future generations to enjoy, with no undue devotion.”

[7] “It is the opinion of this Court that a hot dog is not a sandwich. The common understanding of a sandwich, as defined by Merriam-Webster, is ‘two or more slices of bread or a split roll having a filling in between.’ A hot dog, however, is comprised of a bun and a sausage, with the option of various toppings. The bun and sausage are not ‘in between’ slices of bread, and therefore do not fit the definition of a sandwich. We find that a hot dog is a distinct and separate food item, and should not be classified as a sandwich.”

[8] “In the state of XYZ, a testator created a trust in her will for the purpose of providing income to her granddaughter for her lifetime. Upon the granddaughter’s death, the trust was to be distributed to the testator’s great-grandchildren. The trust instrument provided that the trust was to last for 21 years after the death of the granddaughter.

 

However, after the testator’s death, it was discovered that one of the great-grandchildren was not born at the time the will was executed and would not reach the age of majority for more than 21 years after the granddaughter’s death.

 

Analyze the validity of the trust under the rule against perpetuities. Would the trust be valid if the state of XYZ has the common law rule against perpetuities or the wait and see approach? Explain your reasoning and cite any relevant case law or statutes.”

[9] Daniel Herman, The End of High-School English, Atlantic (Dec. 9, 2022), https://www.theatlantic.com/technology/archive/2022/12/openai-chatgpt-writing-high-school-english-essay/672412/.

[10] Stephen Marche, The College Essay Is Dead, Atlantic (Dec. 6, 2022), https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/.

[11] Nathan E. Sanders & Bruce Schneier, How ChatGPT Hijacks Democracy, N.Y. Times (Jan. 15, 2023), https://www.nytimes.com/2023/01/15/opinion/ai-chatgpt-lobbying-democracy.html.

[12] Karen Sloan, Some Law Professors Fear ChatGPT’s Rise as Others See Opportunity, Reuters (Jan. 10, 2023, 7:19 PM), https://www.reuters.com/legal/legalindustry/some-law-professors-fear-chatgpts-rise-others-see-opportunity-2023-01-10/.

[13] Emma Bowman, A College Student Created an App that Can Tell Whether AI Wrote an Essay, NPR (Jan. 9, 2023, 5:01 AM), https://www.npr.org/2023/01/09/1147549845/gptzero-ai-chatgpt-edward-tian-plagiarism.

[14] Bailey Schulz, DoNotPay’s ‘First Robot Lawyer’ to Take on Speeding Tickets in Court Via AI. How It Works, USA Today (Jan. 10, 2023, 2 PM), https://www.usatoday.com/story/tech/2023/01/09/first-ai-robot-lawyer-donotpay/11018060002/; Jenna Greene, Will ChatGPT make lawyers obsolete? (Hint: be afraid), Reuters (Dec. 9, 2022, 2:33 PM), https://www.reuters.com/legal/transactional/will-chatgpt-make-lawyers-obsolete-hint-be-afraid-2022-12-09/.

[15] Michael J. Bommarito II & Daniel Martin Katz, GPT Takes the Bar Exam, SSRN (Dec. 29, 2022), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4314839.

[16] OpenAI, Blog, https://openai.com/blog/chatgpt/ (last visited Jan. 18, 2023).

[17] Oliver Jeffcott, ChatGPT and Legal Services, Lexology (Jan. 10, 2023), https://www.lexology.com/library/detail.aspx?g=dd045cf9-4dd7-4ff8-b17d-f4250a6e1a04.

[18] OpenAI, FAQ, https://help.openai.com/en/articles/6783457-chatgpt-faq (last visited Jan. 18, 2023).

[19] Anne Klinefelter, When to Research Is to Reveal: The Growing Threat to Attorney and Client Confidentiality from Online Tracking, 16 Va. J. L. & Tech. 1, 22 (2011).

[20] Id. at 23.

[21] Id.

[22] See id. at 34.

[23] See id. at 4-5.

[24] OpenAI, FAQ, https://help.openai.com/en/articles/6783457-chatgpt-faq (last visited Jan. 18, 2023).

[25] Id.