By: Aidan Testa

Among the recent explosion of AI tools, a subset has emerged that generates images.  These tools, for example Stable Diffusion and Midjourney, take written prompts from users and return images.  This technology is naturally exciting, allowing users to produce images in a variety of art styles with nothing more than the written word.  One could construe it as cutting out the middleman of commissioned artwork: where a commissioner would normally describe what they want to a visual artist, who would then produce the work over time, these tools allow for a quicker – and cheaper – return.[1]  Exciting though it may be, this technology comes with significant legal concerns in the field of copyright, both for the images themselves and for the datasets used to train generative AI.

Who Owns the Output

The first key legal question raised by these models is: who owns the image the model produces?  It may be intuitive to say the user owns the image.  After all, the image is generated in response to their prompting, for them.  Yet this is not always the case, and in fact is likely not to be the case in most instances, at least under United States law.

Underpinning copyright law in the United States is the requirement of human authorship.  In short, if a human is not the author of a work, the work cannot be copyrighted.  An author under the Copyright Act is a person “to whom anything owes its origin; originator; maker; one who completes a work of science or literature.”[2]  Essentially, a person can be an author only if they are human and determine the expressive elements of a creative work’s output.[3]  Works produced by non-humans, for example animals or, in this case, machines, cannot benefit from copyright.[4]

This conclusion stems from the underlying functionality of generative AI tools.  In its response to a copyright application for “Zarya of the Dawn,” a comic book that used AI-generated images, the Copyright Office denied the author copyright in the images in her comic.[5]  The Office differentiated between generative AI and other tools by stating that “Midjourney generates images in an unpredictable way.”[6]  It further noted that Midjourney starts with “visual noise” and that “there is no guarantee that a particular prompt will generate any particular visual output.”[7]  Since prompts do not sufficiently control the result and the images are unpredictable, the claimant did not meet the requirements of authorship.  The machine, not the claimant, did the work.

The claimant’s application was not totally denied, however.  Her writing and her arrangement of the generated images were both granted copyright – only the images themselves lacked the necessary authorship.[8]  Where AI-generated images are arranged in a sufficiently creative way, that arrangement can be copyrighted.  This protects only the “human-authored aspects that are independent of and do not affect the copyright status of the AI-generated material.”[9]  The images, even if arranged creatively, are not themselves protected by that arrangement.

This does not mean that no image generated by an AI tool can ever be copyrightable.  Some AI models may have a different structure, rendering the reasoning used in Re Zarya of the Dawn inapplicable.  The answer may also change depending on one’s jurisdiction, as some countries may enact laws that allow model users to claim copyright.  As it stands, however, it appears that AI-generated images are not copyrightable, meaning that users who generate and make use of an image cannot protect that image with copyright law.  Nor, by the same rules, can the owners of the generative AI itself.

The Datasets

A perhaps larger concern about generative AI is the copyright status of its training datasets.  Many of these generators are ‘trained’ on publicly available images found on the web.[10]  Publicly available in this context does not necessarily mean released under a public license, merely accessible to internet users, for example through a Google search.  The images may be copyrighted, with no consent given (or in some cases sought) for their use in training AI models.[11]

The lack of consent in the use of training data has been a cause of alarm among many whose works are included in these datasets, leading to a class action lawsuit in the US District Court for the Northern District of California.  The complaint alleges, among other things, that the use of these copyrighted works in the dataset amounts to copyright violations, DMCA violations, right of publicity violations, and unlawful competition.[12]  An important issue in this case is the ability of generative AI to return images “in the style of” a particular artist, siphoning commissions and therefore causing monetary damage.[13]

Much of this argument hinges on the technology itself.  The complaint describes the way the technology produces images as “derivative,” and notes that its sourcing does not involve consulting the original owners.  This suggests at least a reasonable likelihood that violations will be found, or at minimum enough for the case to go to trial.  The result of these proceedings, and the subsequent place of the technology, remains to be seen.

Conclusion

Generative AI is a powerful, interesting tool that has taken the online world by storm.  It represents, on the one hand, an expansion of creative tools and, on the other, a potential to harm existing creative industries.  The copyright issues around the tools and their training have yet to be definitively resolved.  In resolving them, courts must contend not just with the application of copyright law but with the availability of data and, in a way, with what can and cannot count as artistry.

Disclaimer: The information provided in this article is for general informational purposes only and is not intended to be legal advice. The content provided does not create an attorney-client relationship, and nothing in this article should be considered a substitute for professional legal advice. The information is based on general principles of law and may not reflect the most current legal developments or interpretations in your jurisdiction. Laws and regulations vary by jurisdiction, and the application and impact of laws can vary widely based on the specific facts and circumstances involved. You should consult with a qualified legal professional for advice regarding your specific situation.


[1] This piece does not cast judgment as to who does and does not count as an artist.  For the purposes of this article, a “visual artist” denotes a human who creates a piece of visual artwork through means other than generative AI; this term is used because it is a commonplace way to describe someone in that profession.  “User” will denote a person who enters a prompt for a generative AI to then return an image.  This is primarily for clarity and to align with the US Copyright Act’s treatment of authorship and copyrightability.

[2] Copyright Office statement of policy, citing Burrow-Giles Lithographic Co v Sarony, 111 US 53 (1884) at 58.

[3] Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence, 88 FR 16190 (2023) (to be codified at 37 CFR 202) <https://www.federalregister.gov/documents/2023/03/16/2023-05321/copyright-registration-guidance-works-containing-material-generated-by-artificial-intelligence> [Policy Statement]

[4] See for example the “monkey selfie” case of Naruto v Slater, 888 F(3d) 418 (9th Cir 2018).

[5] Re Zarya of the Dawn (Registration # VAu001480196), United States Copyright Office, 2023, at 8, online: <https://www.copyright.gov/docs/zarya-of-the-dawn.pdf>.

[6] Ibid, at 9.  Midjourney is specified because it was the tool used by the claimant.

[7] Ibid, at 9-10.

[8] Ibid.

[9] Policy Statement, supra note 3.

[10] See for example Stable Diffusion, which uses a general crawl of the internet to find its images.  “Stable Diffusion” (2023), online <https://stablediffusionweb.com/#faq> [Stable Diffusion FAQ].

[11] According to Stable Diffusion, the dataset they used had “no opt-in or opt-out.”  See Stable Diffusion FAQ, supra note 10.

[12] Andersen et al v Stability AI Ltd et al (Dist Ct ND Cal 2023) (Complaint for Class Action at 10-11).

[13] Ibid, at 2.