Who Owns AI Outputs? Human Authorship, Registration, and the Future of AI Copyright Law
As generative AI tools like ChatGPT, Midjourney, DALL-E, and Stable Diffusion become central to content creation workflows, the question of who owns AI-generated outputs has become one of the most pressing legal issues of our time. This comprehensive FAQ covers the current state of U.S. copyright law as applied to AI-generated and AI-assisted works, landmark cases, Copyright Office guidance, fair use implications for training data, and practical strategies for protecting your AI-created content in 2026.
Under current U.S. copyright law, purely AI-generated content cannot be copyrighted. The U.S. Copyright Office has consistently held that copyright protection requires human authorship. In its March 2023 guidance on AI-generated works, the Copyright Office reaffirmed that works created entirely by artificial intelligence without meaningful human creative input are not eligible for copyright registration. This position stems from decades of case law establishing that copyright vests only in works created by human beings.
The critical distinction is between AI-generated content (where the AI autonomously produces the output) and AI-assisted content (where a human uses AI as a tool while exercising creative control). If you simply type a prompt into an AI system and receive an output, that output alone is unlikely to qualify for copyright protection. However, if you substantially modify, arrange, curate, or build upon AI outputs with your own creative expression, those human-authored elements may be protectable. The Copyright Office evaluates registrations on a case-by-case basis, examining whether a human author exercised sufficient creative control over the expressive elements of the work.
Thaler v. Perlmutter is the landmark case confirming that an AI system cannot be named as the author on a copyright registration. Stephen Thaler attempted to register a visual artwork titled "A Recent Entrance to Paradise," generated entirely by his AI system, the Creativity Machine, naming the machine as the sole author. The Copyright Office refused registration, and both the U.S. District Court for the District of Columbia (2023) and the U.S. Court of Appeals for the D.C. Circuit (2025) upheld that refusal. (Thaler v. Vidal (Fed. Cir. 2022), sometimes confused with this case, is the same plaintiff's unsuccessful attempt to name his DABUS system as the inventor on patent applications.)
The court held that the Copyright Act requires a human author, relying on the plain text of the statute and Supreme Court precedent establishing that copyright is designed to incentivize human creativity. The court noted that while the Copyright Act does not define "author" explicitly, centuries of settled understanding, reinforced by the Supreme Court in Burrow-Giles Lithographic Co. v. Sarony (1884), establish that authorship is inherently a human endeavor. Thaler's argument that AI-generated works should receive copyright protection with ownership vesting in the AI's creator or operator was rejected. The ruling does not address situations where humans use AI as a tool while contributing their own creative expression, leaving that question for future cases.
The Zarya of the Dawn case is a landmark Copyright Office decision illustrating how AI-assisted works are evaluated for registration. Kris Kashtanova created a graphic novel called "Zarya of the Dawn" using Midjourney to generate the images and then arranged, selected, and combined them with their own written text. Initially, the Copyright Office registered the entire work without knowing about the AI involvement. Upon learning that Midjourney generated the images, the Office initiated a review.
In February 2023, the Copyright Office issued a revised registration that granted copyright protection to the text written by Kashtanova and the overall selection and arrangement of the text and images, but denied copyright protection to the individual AI-generated images themselves. The Office concluded that because Midjourney generates images through an unpredictable process where users cannot precisely control the output through text prompts alone, the individual images did not reflect sufficient human authorship. However, the creative choices in selecting which images to use, arranging them in a specific sequence, and coordinating them with original text constituted copyrightable expression.
This decision established a practical framework for AI-assisted works: while raw AI-generated images may not be copyrightable on their own, a human's creative compilation, selection, and arrangement of those outputs alongside original content can receive protection. Creators working with AI should document their creative process to demonstrate the human-authored elements of their work.
On March 16, 2023, the U.S. Copyright Office issued official guidance titled "Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence" (88 Fed. Reg. 16190). This guidance establishes the framework for how the Office evaluates copyright applications for works that include AI-generated content. The key principles are: copyright protection requires human authorship, and works generated entirely by AI without human creative control are not registrable. Works that combine human-authored elements with AI-generated material may be registered, but applicants must disclose the use of AI and identify which portions are human-authored.
The guidance further explains that the Office evaluates whether AI contributions are the result of "mechanical reproduction" or reflect genuine human creative expression. When an AI technology receives solely a prompt from a human and produces complex works in response, the traditional elements of authorship are determined and executed by the technology, not the human user. Applicants must exclude AI-generated content from their registration claims and use the "Limitation of Claim" section to note non-human-authored material. The guidance also requires applicants to update pending and previously registered works to disclose AI-generated material, creating a mandatory duty of transparency in copyright filings. Failure to comply may result in cancellation of registration.
Ownership of AI tool outputs depends on a combination of the platform's terms of service and copyright law. Most major AI platforms assign output rights to the user in their terms of service. OpenAI's Terms of Use state that users own their ChatGPT and DALL-E outputs, subject to compliance with the terms. Midjourney grants paid subscribers ownership of generated images, with certain restrictions for free-tier users. Anthropic's terms similarly assign output rights to users. However, there is a critical distinction between contractual ownership and copyright ownership.
A platform can contractually assign you rights to the output, but that does not mean the output qualifies for copyright protection under federal law. If the output is purely AI-generated without sufficient human creative contribution, it may not be copyrightable at all, meaning you might "own" it contractually but have no ability to prevent others from copying it. This creates a practical gap: you can use and commercialize the output per the platform's terms, but you may lack the legal tools to enforce exclusivity against third parties who independently obtain or reproduce similar content. For maximum protection, use AI outputs as starting points and add substantial human creative expression through editing, modification, selection, and arrangement to establish a stronger copyright claim in the final work.
Whether using copyrighted works to train AI models constitutes fair use is one of the most actively litigated questions in copyright law. Multiple major lawsuits are pending, including cases brought by The New York Times, Getty Images, visual artists, and authors against companies like OpenAI, Microsoft, Stability AI, and Meta. No appellate court has definitively resolved this question as of early 2026. The fair use analysis under 17 U.S.C. Section 107 considers four factors.
First, the purpose and character of the use, including whether it is transformative. AI companies argue that training is transformative because the model learns abstract patterns rather than copying specific works. Second, the nature of the copyrighted works used. Training datasets often include highly creative works such as novels, photographs, and artworks, which weighs against fair use. Third, the amount and substantiality of the portion used. AI training typically ingests entire works, which generally disfavors fair use. Fourth, the effect on the market for the original works. If AI outputs compete with or substitute for the originals, this factor weighs heavily against fair use.
The Supreme Court's 2023 decision in Andy Warhol Foundation v. Goldsmith narrowed the transformative use doctrine by holding that commercial use of a copyrighted work in the same market as the original weighs against fair use, even if the new work adds some creative expression. This precedent potentially makes it harder for AI companies to claim fair use when their models produce outputs that compete with the original training data. The outcome of these pending cases will fundamentally shape the AI industry's legal and economic landscape.
Naruto v. Slater (9th Cir. 2018), commonly known as the "monkey selfie" case, is frequently cited in AI copyright discussions because it addresses whether non-human entities can assert copyright. In this case, a crested macaque named Naruto took selfie photographs using wildlife photographer David Slater's camera. PETA sued on the monkey's behalf, claiming Naruto owned the copyright in the photographs. The Ninth Circuit held that animals lack statutory standing to bring infringement claims under the Copyright Act, affirming the lower court's conclusion that the Act does not extend authorship to non-human creators.
While the case specifically addressed animal authorship, the reasoning extends naturally to AI systems. The court's analysis reinforced the principle that the Copyright Act's protections are limited to human authors, a principle the Copyright Office has explicitly applied to AI-generated works. Critics note that the analogy between animals and AI is imperfect: AI systems are tools created and directed by humans, whereas the monkey acted independently and unpredictably. Nevertheless, Naruto v. Slater remains a key precedent supporting the position that non-human-generated works fall outside the scope of copyright protection. Courts in AI authorship cases, including Thaler v. Perlmutter, have cited this reasoning approvingly.
The distinction between AI-generated and AI-assisted works is the single most important concept in AI copyright law, and the Copyright Office treats them very differently. AI-generated works are those where the artificial intelligence system determines the expressive elements of the output autonomously. The human's contribution is limited to providing a prompt or instruction, and the AI makes the creative decisions about composition, style, arrangement, and expression. These works are generally not copyrightable because they lack human authorship in the expressive elements.
AI-assisted works, by contrast, are those where a human author uses AI as a tool, much like a camera or a paintbrush, while retaining meaningful creative control over the final expression. The human makes creative decisions about selection, arrangement, modification, and expression, using the AI to execute or assist with those decisions. These works may qualify for copyright protection to the extent of the human-authored contributions.
The Copyright Office applies a spectrum analysis. At one end, a simple text prompt yielding a complete AI image is likely not copyrightable. At the other end, an artist who uses AI to generate raw elements and then substantially edits, arranges, layers, and transforms them into a final work likely has a valid copyright claim in the human-authored portions. The more creative control a human exercises over the final output, the stronger the copyright claim. Documentation of the creative process is essential to demonstrate where human authorship begins and AI generation ends.
Yes, you can register a copyright for AI-assisted works, but you must follow specific disclosure requirements established by the Copyright Office. When filing a registration application for a work that contains AI-generated material, you must: identify the AI-generated content in the application and exclude it from the claim of authorship using the "Limitation of Claim" or "Material Excluded" section; describe the human-authored elements that you are claiming, such as text, selection, arrangement, or modifications; and provide the name of the AI tool used if the AI contributions are more than de minimis.
If you previously registered a work without disclosing AI involvement, you are expected to file a supplementary registration to correct the record. The Copyright Office has stated that it may cancel registrations obtained without proper AI disclosure, which is significant because a valid registration is a prerequisite for filing copyright infringement lawsuits in federal court under 17 U.S.C. Section 411(a). Failure to disclose constitutes a material misrepresentation that can void the registration entirely.
Practically speaking, you should maintain detailed records of your creative process when working with AI tools, documenting which elements you created or substantially modified, which AI tool you used, what prompts you provided, and how you selected and arranged the outputs. This documentation will be essential if your copyright is ever challenged or if you need to enforce it against infringers in federal litigation.
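The record-keeping practice described above can be operationalized with something as simple as an append-only provenance log. The sketch below is purely illustrative, not a format the Copyright Office prescribes; every field name, the JSON Lines layout, and the `log_ai_assisted_work` helper are assumptions chosen for readability:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical log file; one JSON object per line (JSON Lines).
LOG_FILE = Path("ai_provenance_log.jsonl")

def log_ai_assisted_work(asset_id, ai_tool, prompts, human_contributions):
    """Append one provenance record for an AI-assisted asset.

    Field names are illustrative; adapt them to whatever your
    registration counsel asks you to document.
    """
    record = {
        "asset_id": asset_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "ai_tool": ai_tool,                          # e.g. "Midjourney"
        "prompts": prompts,                          # exact prompt text used
        "human_contributions": human_contributions,  # edits, selection, arrangement
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

entry = log_ai_assisted_work(
    asset_id="cover-art-004",
    ai_tool="Midjourney",
    prompts=["city skyline at dusk, watercolor style"],
    human_contributions=[
        "cropped and recolored in Photoshop",
        "composited with hand-drawn foreground",
    ],
)
```

A timestamped log like this, kept from the start of a project, makes it far easier to answer the Copyright Office's disclosure questions and to draw the line between claimed and excluded material if a registration is later challenged.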
The work-for-hire doctrine under 17 U.S.C. Section 201(b) presents complex questions when AI is involved in content creation. Under the traditional doctrine, when an employee creates a work within the scope of employment, or when a specially commissioned work falls into one of nine statutory categories and is subject to a written agreement, the employer or commissioning party is considered the author and copyright owner. The threshold question is whether AI-assisted output contains sufficient human authorship to trigger the doctrine at all.
When an employee uses AI tools as part of their job duties, the work-for-hire analysis focuses on the human employee's contributions. If the employee exercises sufficient creative control over AI-assisted outputs through selection, arrangement, editing, and modification, the resulting human-authored elements may qualify as works made for hire, with the employer owning the copyright in those elements. However, if the employee simply prompts an AI and submits the raw output, the employer may have a contractual right to the material but no enforceable copyright in the uncopyrightable AI-generated portions.
For independent contractors using AI tools, the situation is even more nuanced. The commissioned work must fit within one of the nine statutory categories enumerated in Section 101, and the human contractor must contribute copyrightable authorship. Companies should update their employment agreements and contractor agreements to address AI tool usage explicitly, including provisions about mandatory disclosure of AI involvement, ownership of AI-assisted outputs, and requirements for meaningful human creative contribution to all deliverables.
Using AI-generated content commercially without copyright protection carries several significant business risks. First, there is no exclusivity: because uncopyrightable AI outputs are effectively in the public domain, competitors can freely copy, reproduce, and distribute the same or substantially similar content without legal consequence. You cannot send cease-and-desist letters or file infringement lawsuits over purely AI-generated content. This is particularly problematic for businesses that rely on unique visual branding, original written content, or creative marketing materials.
Second, there is infringement liability from AI outputs. If an AI generates content that is substantially similar to a copyrighted work in its training set, the user who publishes that content could face infringement claims from the original rights holder. Several AI platforms include indemnification provisions in their enterprise plans, but consumer-tier users typically bear this risk themselves. Third, trade secret and confidentiality risks arise when proprietary business information is included in AI prompts, as submitted data may be used to train future models.
To mitigate these risks, businesses should layer additional protections: register trademarks for brand elements and logos, use trade secret protections for proprietary methods and prompting techniques, add substantial human creative input to all commercial outputs to establish copyrightability, maintain detailed records of the creative process, and always review AI outputs before publication for potential similarity to existing copyrighted works. Enterprise users should negotiate indemnification and data protection provisions in their AI platform agreements.
International approaches to AI copyright vary significantly, creating a fragmented global landscape for businesses operating across borders. The United Kingdom has a unique provision under Section 9(3) of the Copyright, Designs and Patents Act 1988 (CDPA) that grants copyright in "computer-generated works" to "the person who made the arrangements necessary for the creation of the work." This provision predates modern generative AI and could in principle cover its outputs, though it has not been definitively tested against current generative systems. The UK Intellectual Property Office considered but ultimately shelved proposals to modify this framework in 2023.
The European Union's AI Act focuses primarily on transparency and risk management rather than copyright ownership, but the EU Copyright Directive (Article 4) provides a text and data mining exception that affects AI training legality. China has seen courts grant copyright protection to AI-generated content in certain cases, with a Beijing court ruling in late 2023 that AI-generated images could receive copyright protection when a human exercised aesthetic choices through detailed prompting and selection. Japan has a notably broad text and data mining exception under Article 30-4 of its Copyright Act that permits using copyrighted works for AI training regardless of commercial purpose, making it one of the most permissive jurisdictions for AI development.
Australia, Canada, and India generally follow the human authorship requirement similar to the United States, though none has addressed the issue with the specificity of the U.S. Copyright Office's 2023 guidance. For businesses operating internationally, this patchwork means that AI-generated content may enjoy different legal protections depending on jurisdiction, requiring careful legal strategy when commercializing AI outputs globally.