George Mason University Antonin Scalia Law School

Panel 5A: Generative AI & Human Authorship (C-IP2 2023 Annual Fall Conference)

The following post comes from Jake L. Bryant, a student in the Intellectual Property Law LL.M. program at Scalia Law and a Research Assistant at C-IP2.

On October 12th and 13th, the Center for Intellectual Property x Innovation Policy (C-IP2) hosted its 2023 Annual Fall Conference, this year titled First Sale: The Role of IP Rights in Markets. One topic that attracted significant attention was the role of copyright law in generative artificial intelligence. A discussion on Generative AI & Human Authorship was highlighted in one of the key copyright panels of the event. The discussion included a number of distinguished speakers: John Tehranian, the Paul W. Wildman Chair and Professor of Law at Southwestern Law School; Van Lindberg, a partner at Taylor English Duma LLP specializing in IP law; Molly Torsen Stech, General Counsel for the International Association of Scientific, Technical, and Medical Publishers and an adjunct professor at American University School of Law; and Keith Kupferschmid, CEO of the Copyright Alliance. The panel was moderated by Sandra Aistars, a professor at the Antonin Scalia Law School at George Mason University and the Senior Fellow for Copyright Research & Policy at C-IP2. Speakers addressed how copyright law fits with generative AI technology.

According to Tehranian, the copyright issues raised by generative AI are not new but are based on law that has been developing for decades, if not centuries. Notably, the Copyright Act of 1976 does not define the word “author.” Cases like the Ninth Circuit’s Naruto v. Slater (2018) and the D.C. District Court’s Thaler v. Perlmutter (2023), as well as guidelines from the Copyright Office, have each analogized to earlier case law to hold that only human beings can be authors for copyright purposes. Nevertheless, whether human AI developers and prompt engineers can be authors of the outputs of generative AI models remains an open question in determining AI’s place within copyright law.

Approaches vary in shaping AI’s place in copyright jurisprudence, and, as the panelists acknowledged, no definitive right answer has been established. Generative AI has seen IP scholars and practitioners return to the old forge of jurisprudence, one where the exchange of opposing ideas sharpens the tools necessary to develop a viable solution for protecting the rights of all copyright interests involved. Protecting creative expression while leaving room for innovation was the guiding star for each panelist as they addressed the rights of AI developers, existing copyright owners, and any rights to be found for users of AI systems. As Tehranian stated, one should not be quick to deem existing copyright law and its protections inadequate for new technologies. Among other interests, the discussion addressed the importance of hearing the voices of the creators whose rights would be affected by new developments. Touching on seminal cases like Burrow-Giles Lithographic Co. v. Sarony (1884) and Andy Warhol Foundation for the Visual Arts v. Goldsmith (2023), the panelists discussed a host of issues, including the role of authorship related to photographers and prompt engineers, subject rights in photographs and other visual works, and the application of the fair use doctrine to the use of copyrightable works in training AI models.

Kupferschmid discussed the ingestion process in training artificial intelligence and the effects on different industries, staking out five key principles. First, he stated that the rights of creatives and copyright owners must be respected in formulating new legislation. Second, longstanding copyright laws must not be cast aside to subsidize new AI technologies. Third, the ingestion of copyrighted works by AI systems implicates the right of reproduction described in 17 U.S.C. § 106. Fourth, Kupferschmid argued that the ingestion of copyrighted materials is not categorically fair use. Rather, he contended that fair use analysis requires a fact-intensive inquiry and will likely show that ingestion by AI is rarely fair use. Finally, he posited that AI developers must obtain a license from copyright owners of works used to train their models. Kupferschmid also asserted that the ability of copyright owners to license their works to AI developers is a market that would be usurped by deeming AI ingestion a fair use.

Lindberg also acknowledged that fair use analysis requires a fact-intensive inquiry but contended that the ingestion of copyrighted works in training AI systems is likely to be, and should be, considered a fair use. While a copy is created in the ingestion of a work by an AI, Lindberg analogized the training process of AI systems to a hypothetical in which a person takes a book and creates a statistical table calculating the number of nouns, verbs, adjectives, and other parts of speech and the probability of their ordering. He claimed that this is both transformative and outside the scope of the copyright owner’s market. Lindberg likewise suggested that, in most cases, there is no translation from any specific ingested material to the outputs generated by a given prompt. Thus, there is no likelihood of substantial similarity between works ingested and outputs created by using an AI system. Kupferschmid replied that Lindberg’s description of the data used in training the AI is the essence of copyrightable expression: the words chosen by the author, and the order in which they are placed. That an AI system translates this function into computer code makes it no less protectable expression than if a human were to translate an author’s protected work from English into French. Lindberg partially conceded the point but contended that any substantial similarity that resulted in outputs would occur as a result of overtraining or overfitting AI models, a result that most proponents of generative AI do not seek to encourage and one that he conceded is unlikely to fall within the scope of fair use. The panelists cited the Books3 data set, which has been used to train various large language AI models, as an example of a problematic training set that could result in a variety of undesirable outcomes.

Tehranian agreed with Lindberg, stating that existing precedent could deem AI training a fair use. Acknowledging that the recent Supreme Court case Andy Warhol Foundation for the Visual Arts v. Goldsmith cut back on the weight afforded to certain transformative uses in fair use determinations, he noted that the Court did not reduce the weight of trans-purpose uses, where the copyrighted material is not used to create a new work but is instead used for a purpose beyond the scope of an author’s market. While Tehranian stated that he did not necessarily agree that ingestion during AI training should be fair use, he concluded that the existing law creates a likelihood that it will be so.

The panel also discussed the NO FAKES Act, introduced that week by senators from both major parties. See Chris Coons et al., Draft Copy of the NO FAKES Act of 2023, Chris Coons (Nov. 28, 2023), https://www.coons.senate.gov/imo/media/doc/no_fakes_act_draft_text.pdf. Tehranian noted that this proposed legislation would help protect against unauthorized uses of a person’s name, image, or likeness by creating a federal right of publicity, explaining that federal trademark law and state rights of publicity are currently inadequately equipped to handle these issues clearly and consistently.

Stech agreed with each of the five points described by Kupferschmid. Specifically, she argued that the quality of data ingested by AI weighs against a finding of fair use. She also argued in favor of granting copyright over images to the subjects of photographs. She stated that “there are two humans contributing creativity in a photograph,” and that photographers may not be the only authors of photographs that include a human subject. Professor Aistars reminded the panel of a case involving model Emily Ratajkowski, who posted on social media a photograph taken of her by paparazzi in which she had covered her face with a bouquet of flowers. She was then sued for copyright infringement by the photographer. Stech, Tehranian, and Aistars all suggested that this serves as an example of a case where subjects may deserve some rights in photographs taken of them.

Abstract questions surrounding the meaning and value of art and creation continue to force copyright law to tread carefully in providing legal protection to creative expression without becoming a deterministic judge of artistic value. Whether prompt engineers will be considered authors of AI-generated works, whether the ingestion of copyrighted material in training AI models is fair use, and whether the subjects of visual works are entitled to some rights in the images taken of them are all questions at the forefront of IP law in the 21st Century. How Congress and higher courts will address them is not yet known, leaving open the discussion for creatives and lawyers alike to help discern the proper scope of protection for generative AI, its outputs, and the visual arts. As the panelists acknowledged, predictions for the state of policymaking regarding AI are unclear, but there is one certainty. Protecting the rights of artists and their creative expressions must be the driving force behind the application of copyright law to works generated with new technologies.


Additional Resources: