
Copyright Law in the Age of AI: Navigating Authorship, Infringement, and Creative Rights

By Sarah (Sarang) Lim

June 20, 2025


Introduction

In April 2023, music streaming platforms witnessed an unprecedented phenomenon: a song titled “Heart on My Sleeve” went viral, seemingly featuring a collaboration between superstars Drake and The Weeknd.[1] The track amassed millions of plays before listeners discovered the shocking truth that neither artist had participated in its creation. Instead, an anonymous producer known as “Ghostwriter” had used artificial intelligence to mimic their distinctive vocal styles and artistic sensibilities.[2]

This “Fake Drake” incident represents just one example of how AI has begun reshaping the creative landscape. Today’s AI systems can generate music, visual art, literature, and films with increasing sophistication, blurring the line between human and machine creativity. Such developments have thrown traditional copyright frameworks into disarray.[3] Our copyright system – conceived long before computers could compose songs or paint pictures – now struggles to address fundamental questions, such as: Can AI-generated works receive copyright protection? Does training AI on copyrighted materials constitute infringement? How should we balance technological innovation with the rights of human creators?[4]

Recent months have seen multiple lawsuits by authors, musicians, and visual artists against AI developers. “Game of Thrones” author George R.R. Martin and other prominent writers have sued OpenAI for allegedly training its models on their novels without permission.[5] Visual artists have filed similar complaints against image generators like Stable Diffusion and Midjourney.[6] These disputes exist in a legal gray zone because current copyright law does not clearly address either AI-generated outputs or the training processes that make them possible.

This article examines the collision between AI and copyright through both doctrinal and practical lenses. Part II explores existing copyright principles – particularly human authorship requirements and fair use doctrine – as they relate to AI. Part III analyzes key cases and regulatory developments, from U.S. Copyright Office decisions to emerging legislation, like Tennessee’s “ELVIS Act.” Part IV discusses the core legal controversies these developments raise and explores potential solutions, from redefining authorship to creating new compensation mechanisms for creators. Part V concludes by arguing for thoughtful copyright reforms that balance technological progress with the protection of human creative labor.

Legal Background: Authorship and Infringement in the AI Context

The “Human Authorship” Principle in Copyright Law

The foundation of U.S. copyright law rests on a premise now being challenged: that authors are human. Both constitutional and statutory language have consistently been interpreted to exclude non-human creators from copyright protection.[7] The U.S. Copyright Office has long maintained that works must “owe their origin to a human agent” to qualify for registration.[8]

Courts have repeatedly reinforced this principle. In the seminal 19th-century case Burrow-Giles Lithographic Co. v. Sarony, the Supreme Court described an author as “he to whom anything owes its origin” and emphasized that copyright protects the fruits of intellectual labor performed by a human “person.”[9] More recently, the Ninth Circuit ruled in the “monkey selfie” case that animals cannot qualify as authors under copyright law, reasoning that terms like “authors” in the Copyright Act “necessarily exclude animals.”[10]

This human authorship requirement creates an immediate dilemma for AI-generated works. If an AI system autonomously produces a poem, painting, or song without meaningful human input, current doctrine suggests that such output would be uncopyrightable – effectively placing it in the public domain. The Copyright Office confirmed this interpretation in March 2023, stating that when AI “generates material autonomously without human involvement,” the resulting work is ineligible for registration.[11]

A notable illustration of this principle emerged when computer scientist Stephen Thaler attempted to register an image created by his AI system (which he named the “Creativity Machine”). The Copyright Office rejected his application because no human had authored the work.[12] When Thaler challenged this decision, the federal court in Thaler v. Perlmutter (2023) upheld the office’s determination, confirming that only human beings – not AI or algorithms – can qualify as authors of copyrightable works.[13] In January 2025, the U.S. Copyright Office clarified that copyright protection extends only to works with meaningful human involvement. AI-generated outputs can receive protection only when a human creator significantly shapes or contributes to the final expressive content, rather than merely providing a simple prompt.[14]

Not all jurisdictions take such a strict approach. The United Kingdom explicitly recognizes “computer-generated” works as copyrightable and assigns authorship to “the person by whom the arrangements necessary for the creation of the work are undertaken.”[15] This more accommodating stance comes with a trade-off, however: such works receive a shorter term of protection, representing a pragmatic compromise.

AI Training Data: Fair Use vs. Infringement

While the output of AI raises questions about authorship, the input side presents equally challenging questions about infringement. Generative AI systems develop their capabilities by analyzing vast datasets of existing works, many protected by copyright.[16] This training process has sparked heated legal debates.

The central question is whether using copyrighted works as AI training data without permission constitutes fair use or copyright infringement. AI developers contend that training represents a transformative analytical use – more akin to a reader learning from books than a competitor duplicating content.[17] They often draw analogies to search engines, noting that courts have previously held that Google’s scanning of millions of books to enable text search qualified as fair use because the purpose was transformative and did not substitute for the books themselves.

Creators and rights holders increasingly challenge this characterization. They argue that AI training exceeds the boundaries of fair use, particularly when the resulting output competes with or effectively replicates elements of the original works. Visual artists have sued companies behind image generators like Stable Diffusion, alleging that scraping millions of artworks from the internet constitutes piracy rather than fair use.[18] Authors, including George R.R. Martin, have joined class actions against OpenAI for using their books as training data without permission.[19]

An important early precedent emerged in Thomson Reuters v. Ross Intelligence, decided in early 2025. In this dispute, an AI legal research startup had used content from Westlaw’s copyrighted database of case summaries to train its algorithm after Thomson Reuters refused to license the data.[20] The court rejected the AI company’s fair use defense, with Judge Stephanos Bibas finding that Ross’s use was commercial and insufficiently transformative, and that it directly harmed Thomson Reuters’ market by creating a competing product.[21] This pivotal ruling suggests that when AI training faithfully reproduces core expressive elements of protected works (even if only internally within the model), courts may view it as infringement, especially if the AI’s output serves as a market substitute for the original.[22]

The law in this area remains unsettled. Fair use analysis is inherently fact-specific, and AI training processes vary widely. Recognizing the growing significance of these questions, the U.S. Copyright Office announced in 2023 that it would study how copyright law should apply to both AI training and AI-generated outputs.[23] Meanwhile, some AI developers have begun voluntarily licensing data from content owners, striking deals with image and music rightsholders to secure training material with appropriate compensation.[24] In May 2025, the Copyright Office also addressed whether using copyrighted materials to train AI models constitutes fair use. According to its guidance, training AI systems on copyrighted works generally counts as transformative and potentially permissible fair use, especially when the use serves research or analytic purposes and does not substitute for the original works. However, the office warned that unauthorized mass commercial use of copyrighted materials for AI training falls clearly outside the scope of fair use, particularly when done through unauthorized scraping or other illicit methods.[25]

Key Cases and Examples: AI, Copyright, and Controversy

Copyright Office Decisions on AI-Generated Works

The U.S. Copyright Office has become an early testing ground for questions about AI authorship. As previously noted, the office has consistently maintained that purely AI-generated works cannot receive copyright protection due to the absence of human authorship.

The landmark case in this area involved Stephen Thaler’s “A Recent Entrance to Paradise” image. Thaler explicitly listed his AI system as the author and acknowledged the piece was created “autonomously by machine” without human creative input.[26] Both the Copyright Office Review Board and subsequently the federal court in Thaler v. Perlmutter (D.D.C. 2023) rejected the registration, marking the first judicial endorsement of the office’s position that AI works require human authorship to qualify for copyright protection.[27]

A more nuanced scenario emerged with Kristina Kashtanova’s graphic novel “Zarya of the Dawn.” Kashtanova wrote the text and crafted the overall narrative but generated the illustrations using the AI tool Midjourney. After initially granting registration for the entire work, the Copyright Office learned about the AI-created imagery and conducted a review. In February 2023, it decided that while the overall graphic novel could remain registered – recognizing Kashtanova’s creative selection and arrangement – the individual AI-generated images within it were excluded from protection.[28] In other words, Kashtanova could claim copyright in her written text and the way she curated the visual storytelling, but not in the Midjourney-produced artwork itself.[29]

Following these cases, the Copyright Office issued formal guidance in March 2023 requiring applicants to disclose any AI-generated content in their submissions and clarifying that registrations would be limited to the human-authored portions.[30] The office warned that omitting such disclosure – for example, attempting to register an AI-created painting as one’s own work – could result in cancellation of the registration.

AI in Music: Deepfakes and Disputes in the Entertainment Industry

The music industry provides particularly striking examples of AI-related copyright challenges. When the AI-generated track “Heart on My Sleeve” went viral, it was not simply remixing or sampling existing recordings; it created an entirely new composition that sounded remarkably like the artists being imitated.[31]

Universal Music Group, representing the actual artists, quickly had the song removed from major platforms, citing copyright violations and potential infringement of the artists’ publicity rights.[32] However, the legal basis for removal highlighted emerging complexities. Since the melody and lyrics were original and therefore not copied from any existing Drake or Weeknd track, traditional copyright claims were not straightforward. The AI had been trained on these artists’ voices and musical styles, but the result was an unauthorized sound-alike rather than direct copying.

Industry lawyers noted the “messy questions” around authorship and infringement in such cases. The AI output did not copy any particular recording, but it did mimic distinctive elements like vocal timbre, potentially implicating rights of publicity or trademark rather than traditional copyright.[33] Similar disputes have arisen with AI-generated “deepfake” tracks imitating other artists, from Kanye West to Frank Sinatra.

One notable legislative response is Tennessee’s Ensuring Likeness, Voice, and Image Security (ELVIS) Act of 2024. Nicknamed after Elvis Presley, this first-of-its-kind law specifically targets AI in entertainment by making a musician’s voice a protected attribute under Tennessee’s publicity rights statute.[34] The ELVIS Act prohibits unauthorized digital replication of an artist’s voice and likeness through AI,[35] effectively giving performers (and their estates) a legal tool to prevent others from cloning their voices without consent. Federal lawmakers have introduced similar concepts in Congress with the proposed NO FAKES Act, which would create nationwide protection against unauthorized digital replicas of individuals’ personas.[36]

Other Notable Disputes and Legislative Trends

Beyond individual cases, broader legislative attention has focused on the friction between AI and creative industries. One significant proposal at the federal level is the Generative AI Copyright Disclosure Act, introduced in Congress in April 2024.[37] As of May 2025, the bill remained under consideration, having not yet advanced beyond committee referral. It addresses transparency in AI training datasets by requiring developers to disclose, in filings with the Copyright Office, which copyrighted works they used to train their models.[38] While the bill does not prohibit the use of copyrighted material per se, it mandates public disclosure, enabling creators to identify when their work has been incorporated into AI systems and potentially seek compensation.[39]

Internationally, similar regulatory approaches are taking shape. The European Union’s AI Act and recent copyright directives seek to balance innovation with intellectual property protection. The EU’s 2019 Directive on Copyright introduced specific exceptions allowing text and data mining of works for AI research while crucially permitting rightsholders to opt out of commercial AI mining – suggesting that consent or compensation should factor into many commercial AI applications.[40]

Meanwhile, litigation like Getty Images v. Stability AI (filed in both the U.S. and UK) accuses AI firms of wholesale copying of millions of photographs to train image generators. Industry coalitions like the Human Artistry Campaign (representing entertainment unions and guilds) have published principles calling for any use of artists’ work in AI to be authorized and fairly compensated.[41]

Legal Debates and Proposed Solutions

Should AI-Generated Works Receive Copyright Protection?

The threshold question is whether works created by AI – with minimal or no human creative input – should qualify for copyright protection. Under current U.S. doctrine, the answer is no: AI creations lacking human authorship remain unprotected.

Some commentators defend this position on principle. They argue that copyright’s fundamental purpose is to incentivize human creativity, and an AI system – lacking consciousness or personal incentives – does not require moral or economic rights in the same way that human authors do. From this perspective, granting copyright to AI works might actually undermine copyright’s goals by rewarding technology companies at the expense of human creators.

However, compelling arguments also exist for adjusting the law. One practical concern is that completely denying copyright to AI outputs could discourage investment and creativity in AI applications. As one industry observer noted, current law leaves companies “vulnerable when it comes to leveraging generative AI” because the outputs remain “potentially not under IP protection” at all.[42]

Several potential solutions have emerged. One approach would require a minimum threshold of human involvement or creative input as a condition for copyright – essentially formalizing the Copyright Office’s stance. This would encourage treating AI as an assisting tool, with the human user recognized as the author based on their selection, arrangement, or refinement of the output.

A more innovative approach would create a new sui generis right or special shorter-term copyright for AI-generated works. Lawmakers might determine that AI-generated content deserves a distinct legal status – perhaps protection for a briefer period or with more limited rights – to prevent immediate wholesale copying while ensuring these works enter the public domain sooner than human creations.

Currently, the momentum in U.S. law strongly favors maintaining the human authorship requirement. Recent judicial decisions and Copyright Office policies have reaffirmed this principle.[43] In the near term, a more realistic solution involves clarifying the threshold of human input necessary for AI-assisted works to qualify for copyright.

Clarifying Fair Use for AI Training and Data

The question of when and how copyrighted materials can legitimately serve as AI training data demands urgent attention. Several approaches could address this impasse.

First, Congress could enact specific exceptions or safe harbors for text and data mining and AI training. Such legislation might allow AI training on copyrighted works for certain purposes, provided the use does not reproduce substantial expressive portions and the resulting AI outputs do not directly compete with the original works. This exception could include an opt-out mechanism allowing rightsholders to explicitly exclude their content from training datasets, similar to the EU’s approach.[44]

Second, the courts could develop clearer rules through case-by-case adjudication of ongoing lawsuits. If more decisions emerge like the Thomson Reuters case, a judicial consensus might form that commercial, competitive uses of copyrighted data for AI training fall outside fair use, while non-commercial or genuinely transformative applications might receive protection.

Third, industry self-regulation and licensing models offer a market-based solution already gaining traction. If major content owners and AI firms negotiate agreements (as some have – for example, Shutterstock’s arrangement to license its image and music libraries to OpenAI),[45] these deals can provide templates for broader adoption. The Copyright Office has likewise recommended caution regarding immediate legislative intervention, noting that voluntary licensing arrangements are already emerging between AI developers and content creators. It considers new statutory measures premature, suggesting instead that licensing solutions be allowed to evolve in the marketplace. However, the office left open the possibility of exploring alternative frameworks, such as extended collective licensing, if clear market gaps remain unresolved.[46] An organized collective licensing system might then emerge, similar to how radio stations pay blanket license fees for music.

A particularly promising approach comes from legal scholars Frank Pasquale and Haochen Sun’s “Consent and Compensation” framework. Their proposal would give creators the right to opt out of having their works used in AI training, while implementing a compensation system ensuring that creators who do not opt out receive fair payment when commercial AI systems use their work.[47] This middle-ground solution allows AI training to proceed but with transparency and appropriate remuneration.

The proposed Generative AI Copyright Disclosure Act aligns with this transparency objective.[48] Once disclosure becomes mandatory, implementing opt-out rights or licensing requirements becomes more feasible, as creators can identify when their work has been incorporated into AI systems.

Ensuring Fair Compensation and Collaboration Models

Beyond legal classifications of infringement or authorship lies a fundamental policy question: how can we ensure human creators are not disadvantaged in an AI-driven creative economy? Addressing this challenge requires innovative compensation and collaboration models.

One approach involves royalty or levy systems. For instance, AI music generators that emulate existing artists could operate under a royalty framework similar to how streaming services pay for music usage. If an AI-generated song incorporates elements clearly derived from a particular artist’s style, that artist might receive a share of revenue. Some startups have begun developing technologies to detect whether AI outputs show influence from particular training data, which could support such attribution.[49]

Another approach involves contractual AI-human co-creation frameworks. In film production, studios might establish that when AI script-generation tools assist writers, those whose works trained the system receive co-writing credit and profit participation. These industry-level solutions could preempt conflicts by aligning incentives and positioning AI as a collaborative tool rather than a replacement for human creativity.

Legal reforms could strengthen these collaborative models. Enhanced attribution rights represent one possibility: if an AI work substantially draws from a particular artist’s style or approach, that artist might have the right to be credited. Competition and labor law perspectives also warrant consideration. The Writers Guild of America and SAG-AFTRA have already secured provisions on AI use in recent contract negotiations, protecting writers and actors from replacement or unconsented use of their images.[50]

Conclusion

The rapid advancement of AI technology has exposed significant gaps in our copyright system.[51] Traditional doctrines now confront technologies capable of learning from and mimicking human expression in unprecedented ways.

A balanced approach to reform should include: (1) maintaining human authorship as the cornerstone of copyright, with AI enhancing rather than replacing human creativity; (2) establishing clear standards for when AI training constitutes permissible use versus infringement; (3) implementing supplementary protections, like the ELVIS Act,[52] where copyright falls short; and (4) developing equitable revenue-sharing models between AI developers and human creators.

The advent of AI parallels previous technological revolutions in creative industries. Rather than merely reacting to crises, we have the opportunity to proactively shape copyright’s response to AI. Through thoughtful reforms that both protect human creators and encourage innovation, we can ensure that AI’s integration into entertainment becomes a story of collaboration rather than disruption.[53]


Sarah Lim is a J.D. candidate at New York Law School with a strong background in biotechnology, having taught advanced biology and chemistry for over a decade. Her research focuses on the intersection of intellectual property law and the entertainment industry, exploring how emerging technologies impact creative rights and artist protections. She is particularly interested in how scientific innovation shapes legal frameworks within sports and entertainment law. This article appears in an upcoming issue of the Entertainment, Arts and Sports Law Journal. For more information, please visit NYSBA.ORG/EASL.

Endnotes:

[1] Matthew D. Kohel & Francelina M. Perdomo Klukosky, The ELVIS Act: Tennessee Law Addresses AI’s Impact on the Music Industry, Saul Ewing LLP (Apr. 16, 2024), https://www.saul.com/insights/article/elvis-act-tennessee-law-addresses-ais-impact-music-industry

[2] Id.

[3] Harry Borovick, Copyright Frameworks For AI‑Generated Content Need Urgent Scrutiny, MinuteHack (July 4, 2024), https://minutehack.com/opinions/copyright-frameworks-for-ai-generated-content-need-urgent-scrutiny.

[4] Id.

[5] Id.

[6] Id.

[7] Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence, 88 Fed. Reg. 16,190 (Mar. 16, 2023).

[8] Id.

[9] Id. (citing Burrow-Giles Lithographic Co. v. Sarony, 111 U.S. 53, 58 (1884)).

[10] Id. at 16,191 (citing Naruto v. Slater, 888 F.3d 418, 426 (9th Cir. 2018)).

[11] Id. at 16,192.

[12] Only Humans Can Be Authors of Copyrightable Works, Reinhart Law (Sept. 7, 2023), https://www.reinhartlaw.com/news-insights/only-humans-can-be-authors-of-copyrightable-works

[13] Id. See also Thaler v. Perlmutter, No. 22-cv-01564 (BAH), 2023 WL 5333236 (D.D.C. Aug. 18, 2023).

[14] U.S. Copyright Off., Copyright and Artificial Intelligence Part 2: Copyrightability 3, 18-19 (Jan. 2025), https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-2-Copyrightability-Report.pdf.

[15] Jonathan Coote & Don McCombie, AI-Generated Music and Copyright, Clifford Chance (Apr. 27, 2023), https://www.cliffordchance.com/insights/resources/blogs/talking-tech/en/articles/2023/04/ai-generated-music-and-copyright.html.

[16] Copyright Registration Guidance, supra note 7, at 16,192.

[17] Michael J. Word & Diego F. Freire, Delaware District Court Ruling Raises Critical Fair Use Challenges for AI Companies: Implications for Copyright Compliance and Competitive Practices, Dykema (Feb. 17, 2025), https://www.dykema.com/news-insights/delaware-district-court-ruling-raises-critical-fair-use-challenges-for-ai-companies-implications-for-copyright-compliance-and-competitive-practices.html.

[18] Borovick, supra note 3, at 4.

[19] Id.

[20] Samantha G. Rothaus, Danielle Zolot & Andrew Richman, Court Rules AI Training on Copyrighted Works Is Not Fair Use – What It Means for Generative AI, Mondaq (Mar. 5, 2025), https://www.mondaq.com/unitedstates/copyright/1593212/.

[21] Id. at 2.

[22] Id. at 3.

[23] Copyright Registration Guidance, supra note 7, at 16,193.

[24] Press Release, Shutterstock, Shutterstock and OpenAI Partner to Provide AI Image Generation for All Shutterstock Customers (Oct. 25, 2022), https://investor.shutterstock.com/news-releases/news-release-details/shutterstock-partners-openai-and-leads-way-bring-ai-generated.

[25] U.S. Copyright Off., Copyright and Artificial Intelligence Part 3: Generative AI Training 4, 27-29 (May 2025), https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-3-Generative-AI-Training-Report-Pre-Publication-Version.pdf.

[26] Only Humans Can Be Authors, supra note 12.

[27] Id. at 2.

[28] Copyright Registration Guidance, supra note 7, at 16,192.

[29] Id.

[30] Id. at 16,194.

[31] Kohel, supra note 1, at 2.

[32] Coote & McCombie, supra note 15, at 4.

[33] Nilay Patel, AI Drake Just Set an Impossible Legal Trap for Google, The Verge (Apr. 19, 2023), https://www.theverge.com/2023/4/19/23689879.

[34] Kohel, supra note 1, at 3.

[35] Id.

[36] Sarah Bro, NO FAKES Act Would Create Individual Property Right to Control Digital Replicas, IP Update (Aug. 8, 2024), https://www.ipupdate.com/2024/08/no-fakes-act-would-create-individual-property-right-to-control-digital-replicas/.

[37] Sarkis Yeretsian, Generative AI Copyright Disclosure Act Introduced to Protect Creators, Lewis Brisbois (April 11, 2024), https://lewisbrisbois.com/newsroom/legal-alerts/generative-ai-copyright-disclosure-act-introduced-to-protect-creators.

[38] Id. at 2.

[39] Id.

[40] Thomas Fischl & Andreas Splittgerber, Text and Data Mining in the EU: A Practical Guide to Article 3 and 4 Exceptions, Reed Smith LLP (Feb. 5, 2024), https://www.reedsmith.com/en/perspectives/ai-in-entertainment-and-media/2024/02/text-and-data-mining-in-eu; Paul Keller, EU Copyright Directive: Text and Data Mining Exceptions, Lexology (Apr. 2, 2019), https://www.lexology.com/library/detail.aspx?g=fe18824c-d241-4090-a5f5-9cade6effbfe.

[41] Kelvin Chan & Matt O’Brien, Getty Images and Stability AI Face Off in British Copyright Trial That Will Test AI Industry, Associated Press (June 9, 2025), https://apnews.com/article/580ba200a3296c87207983f04cda4680; see also Robert Booth, London AI Firm Says Getty Copyright Case Poses ‘Overt Threat’ to Industry, The Guardian (June 9, 2025), https://www.theguardian.com/technology/2025/jun/09/stability-ai-getty-lawsuit-copyright; Press Release, Recording Industry Association of America, Human Artistry Campaign Launches, Announces AI Principles (Mar. 16, 2023), https://www.riaa.com/news/human-artistry-campaign-launches-announces-ai-principles/.

[42] See Borovick, supra note 3, at 3.

[43] Only Humans Can Be Authors, supra note 12, at 2; Copyright Registration Guidance, supra note 7, at 16,191.

[44] Fischl & Splittgerber, supra note 40, at 2.

[45] Shutterstock Support, Shutterstock AI and Computer Vision Contributor FAQ (last visited May 27, 2025), https://support.submit.shutterstock.com/s/article/Shutterstock-ai-and-Computer-Vision-Contributor-FAQ?language=en_US.

[46] U.S. Copyright Off., Copyright and Artificial Intelligence Part 3: Generative AI Training 28-29 (May 2025), https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-3-Generative-AI-Training-Report-Pre-Publication-Version.pdf.

[47] Frank A. Pasquale & Haochen Sun, Consent and Compensation: Resolving Generative AI’s Copyright Crisis, 110 Va. L. Rev. Online 207, 294 (2024).

[48] Yeretsian, supra note 37, at 3.

[49] Grace Mary Power, Curate a Story: AI Detection Matters, Medium (Apr. 3, 2025), https://medium.com/curation-matters/ai-detection-matters-c229dc8b3d37.

[50] Press Release, Writers Guild of America, WGA and AMPTP Reach Tentative Agreement (Sept. 24, 2023).

[51] Borovick, supra note 3, at 2.

[52] Kohel, supra note 1, at 3.

[53] Copyright Registration Guidance, supra note 7, at 16,194; Yeretsian, supra note 37, at 3.
