The Deeply Complicated Issues Surrounding Deepfakes

By Matthew Lowe

February 3, 2025

As generative AI technologies like OpenAI’s GPT models gain traction, transforming everything from legal education to corporate strategies, a shadow looms in the form of deepfakes. A portmanteau, “deepfake” combines “deep” from “deep learning” – a subset of machine learning involving neural networks trained on large datasets – and “fake.” These AI-generated illusions, once a curiosity in the realm of digital manipulation, now pose a serious threat, with the potential to disrupt elections and to exploit targeted populations through the creation of intimate deepfake images. The need for regulatory enhancements to effectively address deepfakes in these contexts is critical, as the technology’s misuse has the potential for far-reaching implications.

In the electoral arena, deepfakes threaten the integrity of information, necessitating disclosure requirements to maintain transparency. Conversely, the use of deepfakes in pornography often involves non-consensual elements, requiring outright bans and stringent enforcement to protect individuals’ rights and dignity. The distinction between these uses underscores the importance of crafting regulations that are both effective and context-sensitive.

Illustrating the disruptive power of deepfakes, a fabricated image of an explosion at the Pentagon in 2023 impacted financial markets.[1] Similarly, a deepfake audio threat against a Brooklyn couple in the dead of night, mimicking a loved one’s voice, highlights the deeply personal and psychological impact of this technology.[2] These examples, coupled with recent findings that “the mere possibility that AI content could be circulating is leading people to dismiss genuine images, video and audio as inauthentic,”[3] emphasize the urgency of developing nuanced legal responses.

The regulatory landscapes of states like California and New York offer insights into the varied approaches needed to tackle the multifaceted issues presented by deepfakes, reflecting the broader national efforts to balance innovation with ethical and legal considerations.

Election Implications

Concepts like integrity, veracity and accountability play crucial roles in the democratic process. However, deepfakes present a considerable threat by undermining that process and causing confusion among voters through the spread of disinformation. In July 2024, Elon Musk, CEO of the social media platform X, reposted on his platform an edited deepfake of one of Vice President Kamala Harris’s campaign ads.[4] In the video, the vice president’s voice is digitally altered to make it seem like she is saying President Joe Biden is senile, that she does not “know the first thing about running the country” and that, as a woman and a person of color, she is the “ultimate diversity hire.”[5] This incident came only a few months after a political consultant in New Hampshire faced a $6 million fine from the FCC, as well as a host of criminal charges – including 13 counts of voter suppression, a felony, and 13 counts of impersonating a candidate, a misdemeanor – across four New Hampshire counties for commissioning deepfake robocalls using President Biden’s AI-generated voice to discourage voting.[6] The New Hampshire attorney general stated, “I hope that our respective enforcement actions send a strong deterrent signal to anyone who might consider interfering with elections, whether through the use of artificial intelligence or otherwise.”[7]

In February 2024, the FCC ruled that AI-generated voices in robocalls are illegal, aiding in the issuance of the fine to the New Hampshire consultant and equipping state attorneys general nationwide to prosecute such tactics.[8] Furthermore, under the Telephone Consumer Protection Act, the FCC possesses not only civil enforcement authority to fine robocallers but also the ability to block calls from carriers facilitating illegal robocalls.[9] Additionally, the legislation allows individual consumers or organizations to sue robocallers in court.[10] State attorneys general also have their own enforcement tools, which may be tied to robocall definitions under the law.[11]

Some states have begun passing their own deepfake laws to secure the election process further. California, one of the most legislatively active in artificial intelligence, has enacted laws limiting how election-related deepfakes – including those targeting candidates and officials or questioning election outcomes – can circulate. The bill was designed to take immediate effect to address the 2024 election and effectively prohibit individuals and organizations from knowingly sharing certain deceptive election-related deepfakes without proper disclosures.[12] It is enforceable for 120 days before an election, similar to laws in other states, but uniquely remains enforceable for 60 days after,[13] which The New York Times recognized as “a sign that lawmakers are concerned about misinformation spreading as votes are being tabulated.”[14]

California is just one of over a dozen states with election-related deepfake laws, including New York. New York’s amended election law mandates that “[a] person, firm, association, corporation, campaign, committee, or organization that distributes or publishes any political communication that was produced by or includes materially deceptive media and has actual knowledge that it is materially deceptive shall be required to disclose this use.”[15] The law defines the term “materially deceptive media” as:

“any image, video, audio, text, or any technological representation of speech or conduct fully or partially created or modified that: (1) exhibits a high level of authenticity or convincing appearance that is visually or audibly indistinguishable from reality to a reasonable person; (2) depicts a scenario that did not actually occur or that has been altered in a significant way from how they actually occurred; and (3) is created by or with software, machine learning, artificial intelligence, or any other computer-generated or technological means, including adapting, modifying, manipulating, or altering a realistic depiction.”[16]

In short, the use of deepfakes to portray a false and/or significantly altered scenario requires a disclosure label in New York.

In the election context, regulators must navigate the delicate balance between protecting potentially vulnerable voters and upholding Americans’ First Amendment right to free speech. California, like New York, permits the use of deepfakes as long as they are disclosed in compliance with the requirements of the law. Despite that concession, however, Senior U.S. District Judge John A. Mendez recently blocked AB 2839, finding that “[m]ost of [the law] acts as a hammer instead of a scalpel” and calling it “a blunt tool” that “hinders humorous expression and unconstitutionally stifles the free and unfettered exchange of ideas.”[17] He carved out an exception for a “not unduly burdensome” portion of the law that requires verbal disclosure of digitally altered content in audio-only recordings.[18] This exception is necessary, considering deepfakes are much harder to detect in audio-only recordings. By contrast, where visual deepfakes are concerned, various sources have published volumes of guidance that helps individuals recognize when they are likely being duped by paying attention to things like the subjects’ lips, blinking patterns, skin texture, etc.[19]

The Pornography Problem

Freedom of speech is an important consideration as states look to act against election deception, but what happens when humor and/or parody is not the basis for an action – when the motivation is directly harmful to the average citizen?

There is a booming and deeply concerning market in which individuals can enlist the help of AI to generate “explicit nonconsensual deepfake content, often referred to as nonconsensual intimate image abuse.”[20] According to a Wired investigative report, “Across the internet, a slurry of ‘nudity’ and ‘undress’ websites sit alongside more sophisticated tools and Telegram bots, and are being used to target thousands of women and girls around the world – from Italy’s prime minister to school girls in South Korea.”[21] One of New York’s own congressional representatives, Alexandria Ocasio-Cortez, has been a victim of these kinds of websites, which can be especially harmful for survivors of sexual abuse like herself.[22] In an interview with Rolling Stone magazine, Ocasio-Cortez reflected that “[t]here are certain images that don’t leave a person, they can’t leave a person. . . . It’s not a question of mental strength or fortitude – this is about neuroscience and our biology.”[23] This is a sentiment widely accepted among mental health advocates, including Emma Pickering, the head of technology-facilitated abuse and economic empowerment at Refuge, the UK’s largest domestic abuse organization, who says, “These types of fake images can harm a person’s health and well-being by causing psychological trauma and feelings of humiliation, fear, embarrassment, and shame.”[24]

As already alluded to, whereas regulatory construction is arguably best served by a disclosure requirement approach in the election context, such an approach is not feasible when it comes to deepfake pornography. Disclosure cannot undo or in any way materially mitigate the creation and distribution of images that have the potential to cause such significant harm. Instead, this category of illicit deepfake activity can only be curbed by the combination of laws that expressly prohibit it and/or grant private rights of action for victims, as well as state prosecutors who are aggressive about penal enforcement. One example is California’s SB 926, which expands an existing law classifying as disorderly conduct the knowing distribution of intimate images or sexual content of another identifiable person without consent, when both parties understood the content was to remain private and the distribution causes the depicted person serious emotional distress.[25] Under this new bill, the previously existing prohibition now covers the intentional creation and distribution of realistic, computer-generated or digital images of intimate body parts or sexual acts involving identifiable individuals, if the images could reasonably be believed to be authentic and result in emotional distress.[26]

The city of San Francisco advanced this issue by filing an unprecedented lawsuit in August against the owners of 16 popular websites that allow users to generate nonconsensual nude images of women and girls. The lawsuit claims that the sites’ owners and operators are in violation of state and federal laws prohibiting deepfake pornography, revenge pornography and child pornography.[27] While this case is new and the outcome is pending, California has far greater leverage to succeed in the courts than it does in its deepfake election legal battles. This is because free speech is harder to argue when its practice constitutes harm, illegal activity and/or obscenity. In Miller v. California, the court established a framework for determining unprotected obscenity, which stated that the material, considered as a whole, must (1) appeal to the prurient interest in sex, (2) depict or describe specifically defined sexual conduct in “a patently offensive way” and (3) “lack serious literary, artistic, political, or scientific value.”[28] The website owners will also need to contend with the clear imbalance of any free speech claims against the violations of privacy and consent for subjects depicted in those obscene images. A slightly older California privacy law, AB 602 (“Depiction of individual using digital or electronic technology: sexually explicit material: cause of action”), creates a private cause of action for instances in which an individual is depicted in intimate images and/or has those images distributed by another person without having granted consent to do so.[29]

Fortunately, states are continuing to expand their existing laws, as California has, to stay current with technologies that can generate convincing and obscene deepfakes. New York’s S1042A “amends subdivision 1 and 2 of section 245.15 of the penal law to state that a person is guilty of unlawful dissemination or publication of an intimate image when they intentionally disseminate or publish a still or video image depicting a person with one or more intimate parts exposed or engaging in sexual conduct with another person, including images created or altered by digitization where such person may be reasonably identified.”[30] These laws are also unlike the first-of-their-kind election laws being passed, in that their spirit, even if captured in new text and seeking to encompass new technologies, has existed for quite some time. Studies show:

  • Deepfake pornography accounts for 98% of deepfake videos online, and 99% of all deepfake porn features women.
  • The total number of deepfake porn videos produced increased 464% from 2022 to 2023.
  • When asked about their reaction if someone close to them became a victim of deepfake porn, 73% of American males surveyed expressed a desire to report the incident to authorities and 68% indicated they would feel shocked and outraged by the violation of privacy.[31]

These stats demonstrate that this is a problem mostly impacting women. The Violence Against Women Act, originally passed in 1994 and amended numerous times over the years, was recently updated in 2022 to create, inter alia, “a federal civil cause of action for individuals whose intimate visual images are disclosed without their consent, allowing a victim to recover damages and legal fees; creating a new National Resource Center on Cybercrimes Against Individuals; and supporting state, tribal, and local government efforts to prevent and prosecute cybercrimes, including cyberstalking and the nonconsensual distribution of intimate images.”[32]

Conclusion

In consideration of what the future of deepfake regulations will look like, New York and California offer strong demonstrations. The authors of New York’s bill argue:

“In 2019, the legislature passed a law creating a crime for individuals who disseminate or publicize an intimate image of another person without such person’s consent. This monumental legislation addressed the growing need for updated laws that reflect advancements in technology. Now, the creation of “deepfakes” demonstrates a need to update the law again.”[33]

This captures the current state of regulatory developments nationwide as states seek to protect people from some of the more negative consequences of rapid AI growth and use. States will largely continue to expand on existing cybercrime, election and pornography laws to include coverage for deepfake capabilities, at least in the short term, rather than treat deepfakes altogether separately. Some states will seek outright bans when it comes to certain applications of the technology; others will continue to align with common AI regulations requiring disclosure and transparency and treat that as sufficient, depending on the context.

The federal government also has a potential role to play, aside from passing a comprehensive national AI law, in revising and expanding existing federal laws such as the Violence Against Women Act and passing new laws like Senator Ted Cruz’s proposed Take It Down Act. Take It Down would, inter alia, require websites to have procedures in place to remove nonconsensual intimate images within 48 hours of a valid request from a victim. Websites must also make reasonable efforts to remove copies of the images.[34] California recently enacted a law with a similar aim, SB 981, which requires social media platforms “to provide a mechanism that is reasonably accessible to a reporting user who is a California resident who has an account with the social media platform to report sexually explicit digital identity theft to the social media platform.”[35] “Identity theft” under this law refers to “an image or video created or altered through digitization that would appear to a reasonable person to be an image or video of any of the following: (i) An intimate body part of an identifiable person, (ii) An identifiable person engaged in an act of sexual intercourse, sodomy, oral copulation, or sexual penetration, or (iii) An identifiable person engaged in masturbation.”[36] A federal law, however, would be a step in the right direction toward ensuring all Americans are offered this kind of protection.

While deepfakes present a hot area for legislation and subsequent enforcement actions, social media platforms and other website operators can be proactive about addressing some of these issues ahead of impending legislation. However, this requires close alignment between written policies and accountability from stakeholders. In the case of Musk’s repost, it was done in arguable conflict with existing X policy, which expressly prohibits sharing “synthetic, manipulated or out-of-context media that may deceive or confuse people and lead to harm.”[37]

Finally, and perhaps most important, generative AI requires a certain level of humility on the part of all those seeking to use, regulate, develop and/or distribute it. While existing guidance on spotting deepfakes is somewhat helpful, New York Attorney General Letitia James sums it up best: “Deepfakes can leave clues showing they are fake, but the technology is getting better all the time and fakes are harder to spot. The absence of clues is not a guarantee that the content is real.”[38] Possible solutions include updated regulations, leaders who can commit to continuous education about new technologies and cross-collaborative efforts that include website operators and AI developers who can create and/or implement effective tools and policies to counteract the potential downsides of generative AI technology.


Matthew Lowe is a director and in-house counsel for a large IT service provider. He is a fellow of information privacy with the International Association of Privacy Professionals and lectures at the University of Massachusetts Amherst on data privacy, cyber law, and AI ethics. He is also a member of the New York State Bar Association’s Committee on Technology and the Legal Profession.

Endnotes

[1] Philip Marcelo, FACT FOCUS: Fake Image of Pentagon Explosion Briefly Sends Jitters Through Stock Market, AP News, May 23, 2023, https://apnews.com/article/pentagon-explosion-misinformation-stock-market-ai-96f534c790872fde67012ee81b5ed6a4.

[2] Charles Bethea, The Terrifying A.I. Scam That Uses Your Loved One’s Voice, New Yorker, March 7, 2024, https://www.newyorker.com/science/annals-of-artificial-intelligence/the-terrifying-ai-scam-that-uses-your-loved-ones-voice.

[3] Tiffany Hsu and Stuart A. Thompson, A.I. Muddies Israel-Hamas War in Unexpected Way, N.Y. Times, Oct. 28, 2023, https://www.nytimes.com/2023/10/28/business/media/ai-muddies-israel-hamas-war-in-unexpected-way.html.

[4] Ken Bensinger, Elon Musk Shares Manipulated Harris Video, in Seeming Violation of X’s Policies, N.Y. Times, July 27, 2024, https://www.nytimes.com/2024/07/27/us/politics/elon-musk-kamala-harris-deepfake.html.

[5] Id.

[6] Shannon Bond, A Political Consultant Faces Charges and Fines for Biden Deepfake Robocalls, NPR, May 23, 2024, https://www.npr.org/2024/05/23/nx-s1-4977582/fcc-ai-deepfake-robocall-biden-new-hampshire-political-operative.

[7] Id.

[8] FCC Makes AI-Generated Voices in Robocalls Illegal, FCC, https://www.fcc.gov/document/fcc-makes-ai-generated-voices-robocalls-illegal.

[9] Id.

[10] Id.

[11] Id.

[12] Text of AB 2839, https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240AB28.

[13] Id.

[14] Stuart A. Thompson, California Passes Election ‘Deepfake’ Laws, Forcing Social Media Companies to Take Action, N.Y. Times, Sept. 17, 2024, https://www.nytimes.com/2024/09/17/technology/california-deepfakes-law-social-media-newsom.html.

[15] Laws of New York – Chapter 17, Article 14, Title 1, §14-106 §§5(b)(i), https://www.nysenate.gov/legislation/laws/ELN/14-106.

[16] Id. at §14-106 §§5(a)(i).

[17] Tyler Katzenberger, Judge Blocks California Deepfakes Law That Sparked Musk-Newsom Row, Politico, Oct. 2, 2024, https://www.politico.com/news/2024/10/02/california-law-block-political-deepfakes-00182277.

[18] Id.

[19] Detect DeepFakes: How To Counteract Misinformation Created by AI, MIT, https://www.media.mit.edu/projects/detect-fakes/overview.

[20] Matt Burgess, Millions of People Are Using Abusive AI ‘Nudify’ Bots on Telegram, Wired, Oct. 15, 2024, https://www.wired.com/story/ai-deepfake-nudify-bots-telegram.

[21] Id.

[22] Lorena O’Neil, Fake Photos, Real Harm: AOC and the Fight Against AI Porn, Rolling Stone, Apr. 8, 2024, https://www.rollingstone.com/culture/culture-features/aoc-deepfake-ai-porn-personal-experience-defiance-act-1234998491.

[23] Id.

[24] Burgess, supra note 20.

[25] California SB 926, https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB926.

[26] Id.

[27] Press Release: City Attorney Sues Most-Visited Websites That Create Nonconsensual Deepfake Pornography, City Attorney of San Francisco, Aug. 15, 2024, https://www.sfcityattorney.org/2024/08/15/city-attorney-sues-most-visited-websites-that-create-nonconsensual-deepfake-pornography.

[28] Miller v. California, 413 U.S. 15 (1973).

[29] AB 602.

[30] S1042A, https://www.nysenate.gov/legislation/bills/2023/S1042/amendment/A.

[31] Testimony of Spencer Overton, U.S. House Committee on Oversight and Accountability, Nov. 8, 2023, https://oversight.house.gov/wp-content/uploads/2023/11/Overton-Testimony-on-Advances-in-Deepfake-Technology-11-8-23-1.pdf.

[32] 15 U.S.C. § 6851; Fact Sheet: Reauthorization of the Violence Against Women Act (VAWA), White House, Mar. 16, 2022, https://www.whitehouse.gov/briefing-room/statements-releases/2022/03/16/fact-sheet-reauthorization-of-the-violence-against-women-act-vawa.

[33] S1042A.

[34] Sen. Cruz Leads Colleagues in Unveiling Landmark Bill to Protect Victims of Deepfake Revenge Porn, U.S. Senate Committee on Commerce, Science, and Transportation, June 18, 2024, https://www.commerce.senate.gov/2024/6/sen-cruz-leads-colleagues-in-unveiling-landmark-bill-to-protect-victims-of-deepfake-revenge-porn.

[35] Text of SB 981.

[36] Id.

[37] Bensinger, supra note 4.

[38] Protecting New York Voters From AI-Generated Election Misinformation, Office of New York State Attorney General, https://ag.ny.gov/publications/protecting-new-york-voters-ai-generated-election-misinformation.
