Going Meta: An Article on Artificial Intelligence by Artificial Intelligence
Note from the author: The article that you are about to read was crafted with the assistance of artificial intelligence (AI). While a human organized, edited and fact-checked the content, AI was used to generate most of the copy. For further insight into the process, please refer to the end of the article.
From music to movies, books to art, artificial intelligence has revolutionized entire industries as well as the way we create and consume media. However, with great power comes great responsibility, and the impact of AI on copyright, authorship and entertainment has raised several legal and ethical questions.
One example of this is the use of AI to generate music. In 2017, Amper Music (acquired by Shutterstock in 2020) released an AI-powered music composition tool that allows users to create original music by setting certain parameters, such as genre, tempo and mood, without having to pay royalties. While this tool has democratized music creation, it has also raised questions about who owns the copyright to the music. Does the AI own the copyright, or does it belong to the person who programmed it?
This question becomes even more complicated when the AI is trained (i.e., fed a large dataset to learn patterns) on copyrighted work, potentially leading to issues of infringement. In March 2023, Universal Music Group (UMG) urged streaming platforms, including Spotify and Apple, to take proactive steps to block AI services from using UMG-controlled music to train models and create output that emulates the lyrics, vocals or style of its musicians. Getty Images took it one step further by filing a lawsuit against Stability AI, alleging that its model is trained on Getty materials and has generated images containing ghostly versions of Getty’s watermark. OpenAI, the company behind the very software used to generate this article, is in the midst of a class-action lawsuit brought by prominent authors arguing that training the model on copyrighted materials without the authors’ permission is, in and of itself, copyright infringement.
Another example of AI’s impact on copyright is the use of AI to create deepfakes, which are manipulated videos that use AI to superimpose one person’s face onto another’s body. While deepfakes can be used for harmless fun, they also have the potential to be used for malicious purposes, such as creating fake news or revenge porn. This raises questions about who owns the copyright to the deepfake, as well as who is responsible for any potential legal consequences that may arise, such as defamation and right of publicity claims.
In the arts and entertainment industry, AI is being used to create new works of art. For example, in 2018, a portrait generated by an AI sold at Christie’s auction house for $432,500. Now, platforms like Midjourney and DALL-E allow anyone to create art using text-based prompts. While the use of AI as an artistic tool has the potential to democratize the process of making art, it also raises questions about authorship and ownership. If a work of art is created entirely by an AI, who owns the copyright?
These examples highlight the complex legal and ethical issues surrounding the use of AI in copyright, authorship and entertainment. As AI continues to develop and become more widespread, it is important to consider these issues and ensure that our legal and ethical frameworks are equipped to handle them.
AI models are only as good as their training data, and companies like Stability AI and OpenAI currently face lawsuits regarding the datasets used to train their software. Large language models such as ChatGPT often undergo training using extensive collections of books and articles to enable the model to recognize patterns, learn rules, and generate new content. However, AI companies typically do not disclose the specific sources of their training data, raising concerns that copyrighted materials may be used without consent or compensation to the original authors.
This situation draws parallels to the Google v. Oracle case, where the court determined that Google’s copying of portions of the Java Application Programming Interfaces without a license constituted fair use, and to the Google Books case, where the court also found that Google’s mass digitization of books for the purpose of creating a searchable index and providing snippets fell within the boundaries of fair use. In both cases, the courts recognized that Google’s usage of the copyrighted materials was transformative and served a different purpose than originally intended. Moreover, the courts recognized the value of innovation, echoing the Copyright Clause of the U.S. Constitution. AI companies have often relied on these rulings to justify their practices, although the recent interpretation of “transformativeness” by the Supreme Court in the Warhol Foundation v. Goldsmith case could potentially impact the application and implications of the Google cases.
Authorship in Flux
AI algorithms can be used to generate original works, or “output,” such as music and art, which may be eligible for copyright protection. However, the question of who owns the copyright in AI-generated works is a complex one. Under U.S. copyright law, copyright protection is granted to original works of authorship that are fixed in a tangible medium of expression, such as a book, painting or musical composition. In 2018, the Ninth Circuit ruled that copyright protection applies only to works created by a human author.
However, in the case of works created by a human using AI, the Copyright Office has stated that it will consider whether a work generated by an AI system qualifies as a work of authorship that is eligible for copyright protection, and if so, who is the author of the work. If the AI was merely a tool, and the human author made significant creative decisions in the creation of the work, then the work may be eligible for copyright protection. However, if the AI was the creator of the work, without significant human input, then the work may not be eligible for copyright protection.
Assuming that an AI-generated work is protected by copyright, the AI platform or software used to generate the work may also have a contractual right to the copyright in the output. For example, Midjourney’s terms of service provide that content generated by free users is owned by the platform and made available under a Creative Commons license that does not allow for commercial use. Overall, the determination of authorship and eligibility for copyright protection in works generated using AI will depend on the specific circumstances of each case and will likely involve a careful analysis of the level of human involvement in the creation of the work and existing contractual obligations.
Beyond its impact on creativity and the concept of authorship under copyright law, AI is also useful in the enforcement of copyright and the detection of infringement. With the rise of digital media, monitoring and enforcing copyright has become increasingly difficult, in part because the internet makes it easy to reproduce content on a worldwide scale while remaining anonymous. AI algorithms, however, can scan the internet for instances of copyright infringement and flag potential violations for review by human copyright enforcers. This has increased the detection and enforcement of copyright violations, but it has also raised concerns about the accuracy and fairness of the enforcement process: AI algorithms may flag non-infringing content as potentially infringing, leading to false positives and unwarranted enforcement actions. Some have argued that AI-driven enforcement systems, such as YouTube’s Content ID and Spotify’s infringement detection system, which operate largely without human interaction, can be overly broad and stifling to creative expression. This is particularly true where original compositions are erroneously removed from the platforms, or where copyrighted material is used for transformative purposes, such as parody or criticism, that would be protectable under the fair use doctrine set forth in the U.S. Copyright Act. Copyright holders and enforcement agencies should therefore exercise caution and maintain human oversight when using AI algorithms for enforcement purposes, to ensure that the rights of content creators and users are properly protected.
In addition, there is the question of accountability. If an AI algorithm generates false or defamatory content, who is responsible for it: the programmer, the owner of the computer, the user of the algorithm or the AI itself? AI systems are not people and presumably cannot be sued for libel or slander. And if the U.S. Copyright Office does not deem a human to be the author of an AI-generated work for copyright purposes, can that same person nonetheless be held liable for publishing false or defamatory content as part of the work? In the U.S., a person or entity may be held responsible for defamatory statements made in the content they publish, even if they did not know the statements were false or defamatory at the time of publication, as well as for statements made with actual malice, i.e., where the publisher knew the statement was false or acted with reckless disregard for the truth.
As an example of where liabilities may arise, deepfakes involve using AI to create or manipulate audio, video or images to make it appear as though someone said or did something they did not. These techniques can be used to superimpose one person’s face onto another’s body, make a person’s voice say something they did not say or create completely fabricated content that appears real. Deepfakes have been a cause for concern because they can be used to spread misinformation, defame individuals and cause harm. Their creation and distribution can raise criminal charges related to identity theft, harassment, extortion or fraud; civil liability in connection with invasion of privacy, defamation or emotional distress; and copyright infringement claims.
From a social and ethical perspective, the use of AI in art creation raises questions about the role of human artists in the art world. If AI systems are able to generate art that is indistinguishable from human-made art, what will be the role of human artists in the art world? Will AI-generated art replace human-made art, or will there be a need for both in the art world? If AI systems are used to create art, what will be the value of human creativity and effort? Will AI-generated art be seen as less valuable or less authentic than human-made art? Taking music as an example, AI-generated music can be created quickly and cheaply without the need for human musicians and composers, which could lead to a decrease in demand for their services and a loss of revenue. However, it is worth noting that AI is not yet advanced enough to create truly original music that rivals the creativity and artistry of human-generated music. Human musicians and producers still have the ability to bring unique perspectives and emotions to their work that AI cannot replicate.
Additionally, there may be concerns about the potential for AI-generated content to perpetuate biases and stereotypes if the algorithms used to generate content are trained on biased data, such as racial or gender bias or misinformation and disinformation. In some cases, AI may not fully understand the nuances of a topic or may misinterpret data, or invent sources or facts, thereby leading to inaccuracies in the resulting article. The lack of human oversight and context can result in AI-generated content that is biased or contains factual errors. There is also a risk that AI-generated content could be used for malicious purposes, such as spreading propaganda or fake news. With the ability to quickly generate large volumes of content, AI could be used to inundate social media and news channels with false information, making it difficult for human journalists and fact-checkers to keep up.
As AI becomes more sophisticated and widespread in the arts, music, entertainment and media industries, it is important to develop clear legal frameworks and ethical guidelines to govern its use, protect against potential harm, and ensure that it is beneficial for all. Ultimately, humanity can strive to use AI in a way that enhances creativity, innovation and fairness, while also protecting the rights and interests of individuals and society as a whole.
Author’s Note: In the process of writing this article, I asked OpenAI’s chatbot ChatGPT various questions, such as “How does the United States Copyright Office decide whether a work that was generated using artificial intelligence is protected by copyright and who is the author?,” “What is the legal citation for the monkey selfie case?” and “Please explain in a few paragraphs what First Amendment issues and questions arise when using artificial intelligence to write articles.” I also used ChatGPT to help me come up with titles for headers, identify areas of improvement within my drafts, including this Author’s Note, and rewrite sentences or paragraphs. The algorithm was useful in providing research and generating explanatory (albeit not particularly creative and often repetitive) content that can be used as a baseline before copyediting, reorganizing and fact-checking. However, some of the specific examples that I queried were erroneous or just plain false, including citations to nonexistent jurisprudence or articles attributed to very real writers, a phenomenon known as “hallucinating.” When asked to explain the reasons for such inaccuracies, ChatGPT responded:
“While AI technology has come a long way in recent years, inaccuracies still remain a common issue in AI-generated content. AI algorithms rely on large datasets and patterns to generate articles, and this can lead to errors in the output. In some cases, AI may not fully understand the nuances of a topic or may misinterpret data, leading to inaccuracies in the resulting article. As AI technology continues to evolve, it will be important to address these issues and ensure that AI-generated content is accurate and reliable.”
 Getty Images (US), Inc. v. Stability AI, Inc., No. 1:23-cv-00135 (D. Del. filed Feb. 3, 2023).
 Silverman v. OpenAI, Inc., No. 4:23-cv-03416 (N.D. Cal. filed July 7, 2023).
 See Louise Carron, Welcome to the Machine: Artificial Intelligence and the Visual Arts, Center for Art Law, Nov. 26, 2018, https://itsartlaw.org/welcome-to-the-machine-law-artificial-intelligence-and-the-visual-arts.
 Google LLC v. Oracle America, Inc., 593 U.S. ___ (2021); Authors Guild, Inc. v. Google, Inc., 804 F.3d 202 (2d Cir. 2015).
 Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith, 598 U.S. ___ (2023).
 Burrow-Giles Lithographic Co. v. Sarony, 111 U.S. 53 (1884).
 Naruto v. Slater, 888 F.3d 418 (9th Cir. 2018).
 Robert J. Kasunic, U.S. Copyright Off., Zarya of the Dawn (Registration No. VAu001480196) (2023).
 Midjourney Inc., Terms of Service, https://docs.midjourney.com/docs/terms-of-service (last accessed July 18, 2023).
 17 U.S.C. § 107.
Louise Carron is an associate at Klaris Law, where she advises content creators, start-ups and non-profits across creative industries on transactional matters, copyright and fair use, First Amendment, web3, NFTs and the metaverse, in addition to prepublication review of podcasts, documentaries and unscripted shows, news articles and book manuscripts. This article appeared in EASL Journal (2023, v. 34, no. 2), a publication of the Entertainment, Arts and Sports Law Section. For more information, please see NYSBA.ORG/EASL.