
AI Legal Questions

With the rapid advancement of artificial intelligence adoption, particularly generative AI, many legal questions loom. Some of those questions will be addressed by AI-specific laws while others will be decided by case law. (Read about some of the key AI legal cases in this AI Lawsuits Registry.) Here are some of the key questions at issue in these AI legal cases.

Does training a model on copyrighted material require a license?

Generative AI relies on models that are trained on vast amounts of data, and such training generally entails making interim copies of the training materials as part of the training process. This training enables the algorithms to “learn” patterns and statistical relationships between elements (e.g., for images, things like size, shape, proportion, color, and relative position), which enables AI models to gain an understanding, for example, of what makes a cat “catlike.” This information can then be used to create a new picture of a cat. One of the questions likely to be addressed in these cases is whether such interim copying requires a license.

Proponents of these tools often argue that such interim copies constitute fair use because they are made for the purpose of extracting and gaining an understanding of unprotected elements of the training materials (e.g., factual and statistical information), rather than to copy protected expression. Fair use requires examining the four factors set forth in Section 107 of the Copyright Act and applying them to the specific facts involved. Although there is not yet any case law applying fair use to the process of training machine learning models, some point to cases in other areas (such as reverse engineering of video games) that have held that interim copying of a work to gain an understanding of its unprotected elements is fair use. Others, however, argue that fair use should not apply to generative AI tools. They argue that these tools are used to generate works of a similar nature to the works used to train the model, and that this type of use is therefore not sufficiently “transformative” (a key fair use factor).
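To make the mechanics concrete, here is a deliberately simplified sketch (plain Python with NumPy and random stand-in data, not any party’s actual training pipeline) of why training entails interim copying: the works are loaded into memory as pixel arrays, only aggregate statistical parameters are extracted, and the copies are then discarded.

```python
# A minimal sketch of why training entails interim copying. The "works" here
# are random stand-in arrays; a real diffusion model is far more complex, but
# the shape of the argument is the same: copies in, parameters out.
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for downloaded training images (hypothetical 8x8 grayscale works).
training_images = [rng.random((8, 8)) for _ in range(1000)]  # interim copies

# "Training": extract statistical relationships (here, just the per-pixel mean
# and covariance of the flattened images) rather than storing the works.
flat = np.stack([img.ravel() for img in training_images])
mean = flat.mean(axis=0)            # learned parameter
cov = np.cov(flat, rowvar=False)    # learned parameter

del training_images, flat           # interim copies discarded after training

# Generation samples from the learned distribution; no stored image is copied.
new_image = rng.multivariate_normal(mean, cov).reshape(8, 8)
print(new_image.shape)  # (8, 8)
```

The legal question is whether those short-lived in-memory copies, made to extract statistical information rather than to reproduce protected expression, require a license.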

Does the output generated by a generative AI tool infringe the copyright in the materials on which the model was trained?

These cases also raise infringement claims in connection with the generation of images or other output that results from the use of generative AI tools. Questions that may arise here include whether such output constitutes a derivative work of, or infringes the reproduction right in, the training data. Courts may consider factors such as whether any alleged similarities between the output and the training data are merely due to similarities in unprotected elements or amount to substantial similarity in protected expression, and whether the use of specific content to train a model could be considered a de minimis use. If AI output is found to be infringing, there may also be questions as to who is liable for such infringement.
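As a rough illustration of the kind of quantitative comparison an expert might offer, the toy sketch below scores pixel-level similarity between a generated image and a training work. The data is synthetic and the metric is illustrative only; the legal test of substantial similarity turns on protected expression, not raw numbers, so a score like this could inform, but never decide, the question.

```python
# A toy similarity comparison between a "generated" image and a training work.
# Cosine similarity over raw pixels is a crude proxy; real analyses would use
# perceptual hashes or learned embeddings, and the legal test is not numeric.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
training_work = rng.random((8, 8))                     # hypothetical source image
generated = training_work + 0.05 * rng.random((8, 8))  # near-copy output
unrelated = rng.random((8, 8))                         # independent work

print(cosine_similarity(generated, training_work))  # high (near-duplicate)
print(cosine_similarity(unrelated, training_work))  # lower (independent work)
```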

Does generative AI violate restrictions on removing, altering, or falsifying copyright management information?

Section 1202 of the Digital Millennium Copyright Act (DMCA) imposes restrictions on the alteration or removal of copyright management information (CMI) and on the provision, distribution, or importation of “false” CMI. CMI is defined in Section 1202(c) and includes, among other things, the copyright notice, the title and other information identifying a work, the name of and other identifying information about the creators and copyright owners of the work, and information regarding the terms for use of the work. Section 1202(a) prohibits providing, distributing, or importing for distribution false CMI if it is done “knowingly” and with “the intent to induce, enable, facilitate, or conceal infringement.” Getty Images’ Complaint alleges that Stability AI provides false CMI, pointing to examples of output from the Stable Diffusion tool containing modified versions of the Getty Images watermark. This raises questions such as (1) whether generated output that includes someone else’s watermark constitutes false CMI under Section 1202, (2) whether Stability AI is “providing” the false CMI or whether it is an unintended result of an automated process initiated by the user, and (3) what is required to prove the requisite knowledge and intent.

Section 1202(b) of the DMCA prohibits (1) intentionally removing or altering any CMI, (2) distributing CMI that one knows to have been altered or removed, or (3) distributing or publicly performing copies of works knowing that the CMI has been removed or altered, provided that in each case the defendant must also be shown to have known, or had reason to know, that its actions would “induce, enable, facilitate, or conceal an infringement.” Both the Copilot and Getty Images lawsuits raise claims for violation of Section 1202(b). The Getty Images suit alleges the defendants intentionally removed or altered CMI in the form of watermarks and metadata associated with images Stability AI allegedly copied from the Getty Images website. One issue these cases may address is the level of proof necessary to establish that the removal or alteration was an “intentional” act and that the defendants knew or had reason to know that their actions would induce, enable, facilitate, or conceal an infringement.
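Much CMI travels as embedded image metadata, and the sketch below shows how easily an automated pipeline can drop it. It uses Pillow, a common Python imaging library, which discards EXIF fields such as the Artist and Copyright tags on save unless they are explicitly passed back; this is an illustration of the mechanism, not any defendant’s actual code, and the names and values are invented.

```python
# A minimal sketch of where CMI often lives (EXIF metadata) and how a naive
# processing step can silently strip it. Filenames and values are hypothetical.
from PIL import Image

img = Image.new("RGB", (64, 64), "white")
exif = img.getexif()
exif[315] = "Jane Artist"              # EXIF Artist tag (CMI: creator name)
exif[33432] = "(c) 2024 Jane Artist"   # EXIF Copyright tag (CMI: notice)
img.save("original.jpg", exif=exif.tobytes())

# A naive processing step: open, transform, and re-save without the EXIF.
processed = Image.open("original.jpg").resize((32, 32))
processed.save("processed.jpg")        # CMI silently dropped here

print(dict(Image.open("original.jpg").getexif()))   # includes tags 315 / 33432
print(dict(Image.open("processed.jpg").getexif()))  # {} -- metadata removed
```

Whether such stripping is “intentional,” and whether the operator knew or had reason to know it would conceal infringement, is exactly what Section 1202(b) claims will have to prove.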

Does generating work in the “style” of a particular artist violate that artist’s right of publicity?

Right of publicity law varies considerably from state to state, but generally speaking, it prohibits the commercial use of an individual’s name, image, voice, signature, or likeness (and in certain states it extends to broader aspects of “identity” or “persona”). Some states have specific right of publicity statutes while others rely on common law rights of publicity (and some states, like California, have both). In all states, rights of publicity must be balanced against First Amendment-protected speech, especially where the use is in connection with an expressive work. Right of publicity statutes often have specific carveouts for certain types of expressive works, and courts have developed various tests to balance these competing interests. The Andersen class-action suit raises both statutory and common law right of publicity claims under California law. First, the Complaint alleges that the defendants “used Plaintiffs’ names and advertised their AI tool’s ability to copy or generate work in the artistic style that the plaintiffs popularized in order to sell Defendants’ products and services.” Based on this initial pleading, this appears to be a traditional right of publicity claim based on use of the artists’ names in advertising the defendants’ products and services. The Complaint also focuses on the user’s ability to use a text prompt to request that the generated images be “in the style of” a specific artist, and this claim appears to be based, at least in part, on the alleged use of artistic “style” (which is not mentioned expressly in the California statute). The common law claims appear to argue that the plaintiffs’ artistic “identities” extend to their body of work and their specific artistic styles, and that the plaintiffs’ identity is used every time art is generated that reflects their “style.” Although California common law has recognized a somewhat broad definition of “identity” (including impersonations of a professional singer’s distinctive voice[2]), there is not yet case law on whether the California common law right of publicity protects an artist’s “style” based solely on the use of the artist’s artwork itself.

Does the incorporation of a trademark in generated output constitute trademark infringement or give rise to a dilution claim?

The Getty Images Complaint alleges that Stability AI has infringed several of Getty Images’ registered and unregistered trademarks in its generation of images and that such use is likely to cause confusion that Getty Images has granted Stability AI the right to use its marks or that Getty Images sponsored, endorsed, or is otherwise associated, affiliated, or connected with Stability AI and its synthetic images. The Complaint also brings a claim for federal trademark dilution under 15 U.S. Code § 1125(c) and alleges that Stability AI included a “Getty” watermark on generated images that lack the quality of images a customer would find on the Getty Images website. It alleges that in some cases, the watermark appeared in connection with low-quality, bizarre, and grotesque images. The Complaint argues that these uses cause dilution both by blurring (lessening the capacity of the plaintiff’s mark to identify and distinguish goods or services) and by tarnishment (harming the reputation of the mark through association with another mark).

How do open-source or creative commons license terms apply in connection with use for training AI models and distributing resulting output?

In the Copilot case, the plaintiffs claim that the defendants violated open-source license terms by (1) using materials governed by open-source licenses to train Copilot and republishing such materials without providing attribution, copyright notices, and a copy of the license terms; and (2) not making Copilot itself open source. This is a question of first impression.
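For context, the toy sketch below (an illustration, not a real license-compliance tool) shows the gap the plaintiffs allege: permissive licenses such as the MIT license condition redistribution on retaining the copyright notice and license text, which verbatim model output typically omits.

```python
# A toy check for the attribution that MIT-style licenses require when code is
# redistributed. The marker strings and example snippet are illustrative only.
REQUIRED_MIT_ELEMENTS = [
    "Copyright (c)",                 # the copyright notice
    "Permission is hereby granted",  # opening of the MIT license text itself
]

def carries_mit_attribution(reproduced_snippet: str) -> bool:
    """Return True if the reproduced code retains the MIT-required notices."""
    return all(marker in reproduced_snippet for marker in REQUIRED_MIT_ELEMENTS)

verbatim_output = "def quicksort(xs): ..."  # licensed code, notices stripped
print(carries_mit_attribution(verbatim_output))  # False -- the alleged gap
```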

What amount and type of human involvement is sufficient to register (copyright or trademark) AI-generated content?

The precise degree of human involvement necessary for copyright or trademark registration of AI-generated content remains a subject of legal uncertainty, as courts have yet to establish clear parameters. The Copyright Office’s guidance does offer some insight: it suggests that a work must fundamentally be of human creation, with technology serving as a tool rather than the creator. The interpretation of this guidance is contested, as it raises questions about the extent to which the human element must be present. Some read this language to mean that human authorship must predominate, while others argue for applying it consistently with the originality requirement (current copyright law mandates only a minimal level of creativity for originality, suggesting that even limited human contribution could be sufficient). In this evolving landscape, even the crafting of detailed text prompts or the setting of parameters for AI programs might meet the threshold of authorship. The key, it seems, is that the human input must contribute to the expressive content of the work, not merely supply an idea. The ongoing legal discourse will ultimately shape the requirements for registering AI-generated works, and it is a space that legal experts and AI developers are watching closely.
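The sketch below illustrates the kinds of human-specified inputs at issue. The request fields and the commented-out generate_image call are hypothetical stand-ins, not any real API; the open question is whether choices like these contribute enough expression to constitute authorship.

```python
# A sketch of the human-specified inputs at issue: a detailed prompt plus
# generation parameters. All names here are hypothetical illustrations.
request = {
    "prompt": (
        "a rain-soaked city street at dusk, viewed from a third-floor window, "
        "one red umbrella in a gray crowd, muted palette, heavy film grain"
    ),
    "negative_prompt": "text, watermark, oversaturation",
    "seed": 1234,            # fixes the sampling path for reproducibility
    "guidance_scale": 7.5,   # how strictly output follows the prompt
    "steps": 40,             # number of denoising iterations
}
# image = generate_image(**request)  # hypothetical call, shown for shape only
print(len(request["prompt"].split()))  # crude proxy for the prompt's detail
```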

If an AI-generated work can be copyrighted, to whom does the copyright belong? Who is the author?

In the event that AI-generated content is deemed eligible for copyright, the pivotal question arises: Who claims the mantle of authorship and ownership? Is it the developers who crafted the AI platform, the entities responsible for training the AI's models, or the users who command the AI to produce the work? The determination of authorship, and consequently ownership, hinges on who contributed the human ingenuity deemed necessary for copyright. This is not a one-size-fits-all answer and will likely be assessed on a case-by-case basis, with nuances specific to each scenario dictating the outcome.

Do other countries recognize copyrights for AI-generated works?

In the evolving field of artificial intelligence, the question of who (or what) can claim copyright ownership over AI-generated works varies across the globe. While the United States, along with several other countries such as Australia, Brazil, Colombia, Germany, Mexico, and Spain, maintains a traditional stance requiring human authorship for copyright, a number of jurisdictions including the UK, Hong Kong, Ireland, India, New Zealand, and South Africa offer a contrasting view. These countries acknowledge copyright for computer-generated works, attributing authorship to the individuals who orchestrate the creation process. These laws often exclude “moral rights,” such as the right to be recognized as the author and the right to object to derogatory treatment of the work. As debates continue, the US has shown signs of reevaluation. Senators Thom Tillis and Chris Coons have prompted action, suggesting the formation of a national commission to deliberate potential adaptations of copyright law to better encompass AI innovations. The Copyright Office, acknowledging these discussions, has indicated plans to solicit public input on the matter later this year, a move reminiscent of the US Patent and Trademark Office's 2020 invitation for public commentary on AI and intellectual property policy. Such developments signal an increasing willingness to reconsider current frameworks in response to the rapid advancement of AI technologies.

Special thanks to Perkins Coie LLP, The Fashion Law, and K&L Gates, whose websites and posts were leveraged to build this list of questions.

