AI-Generated Content and Copyright: Evolving Legal Boundaries in English Law
October 17, 2024
The rapid advancement of Artificial Intelligence (AI) in creative fields is presenting significant challenges to copyright law, particularly in the context of English law. As AI systems generate text, images, music, and code, questions arise about the originality of these outputs and the ownership of any resulting copyright. This article explores these complexities, examining recent legal cases that illustrate the evolving landscape of copyright and AI-generated content.
AI-Generated Content: Originality Under Scrutiny
Under English law, copyright protection is granted to original works, as set out in the Copyright, Designs and Patents Act 1988 (CDPA). Originality, as defined by case law, requires that a work be the author's own intellectual creation and not merely copied from another source. However, AI-generated content is produced by analysing vast datasets and generating outputs based on patterns identified within that data. This mode of production raises the question of whether such outputs can meet the threshold required for copyright protection.
Case law, such as Green v Broadcasting Corporation of New Zealand [1989], emphasises that originality requires not just labour, but creativity and intellectual effort. Applying this standard to AI-generated works is problematic since these outputs are inherently derivative, lacking the conscious creativity that copyright law seeks to protect. AI systems rearrange existing knowledge rather than create something entirely novel, challenging the idea that their outputs can be considered original under English law.
Ownership of AI-Generated Works: A Legal Quandary
Even if AI-generated content were deemed original, determining ownership is a complex issue. The CDPA 1988, in Section 9(3), states that the author of a computer-generated work is the person who made the necessary arrangements for its creation. However, this provision is ambiguous, particularly in scenarios where AI operates with minimal human intervention.
The case of Thaler v Comptroller-General of Patents, Designs and Trade Marks [2021], although focused on patents, sheds light on the issue of AI and ownership. The Court of Appeal ruled that an AI cannot be listed as an inventor on a patent application, highlighting the requirement for legal personhood in such matters, and the Supreme Court upheld that position in December 2023, confirming that an inventor must be a natural person. This reasoning suggests that courts may be hesitant to attribute authorship or ownership of creative works to non-human entities, further complicating the status of AI-generated content under copyright law.
Recent AI Copyright Disputes: A Global Perspective
Recent legal disputes involving AI and copyright highlight the urgency of these issues. Notable cases include:
Authors v. OpenAI
In Tremblay v. OpenAI, Inc., a California federal court largely sided with OpenAI, dismissing most of the claims brought by a group of authors who alleged that OpenAI infringed their copyrights by using their works to train ChatGPT. The court dismissed the claim of vicarious copyright infringement because the plaintiffs failed to show that ChatGPT's outputs were substantially similar to their works. Claims under the Digital Millennium Copyright Act (DMCA) were also dismissed for lack of evidence that OpenAI removed or altered copyright management information (CMI) with intent to conceal or induce infringement.
The court also dismissed claims of negligence, unjust enrichment, and violations of California's unfair competition law (UCL), except for a claim under the "unfair" prong, which might still be dismissed later due to potential copyright pre-emption. This ruling is consistent with other recent cases involving generative AI, where courts have similarly dismissed copyright and related claims due to insufficient allegations of substantial similarity and failure to demonstrate CMI removal. Although the plaintiffs were granted leave to amend, it is uncertain if they will replead the dismissed claims.
Getty Images v. Stability AI
Getty Images Inc. has filed a lawsuit against Stability AI, the developer of the AI model Stable Diffusion, alleging that the company used millions of Getty's copyrighted images without permission to train its AI. Getty claims this unauthorised use resulted in AI-generated images that replicate its copyrighted works and incorporate its trademarks, constituting copyright and trademark infringement as well as passing off and database right infringement. Stability AI sought to have the claims struck out or dismissed by way of summary judgment, arguing they lacked merit, but the court ruled that the case should proceed to trial. This landmark case highlights the evolving legal challenges at the intersection of copyright law and AI technology, with potential implications for both AI development and intellectual property rights.
Artists v. Stability AI, Midjourney, and DeviantArt
In a parallel class action in California, a group of visual artists has sued Stability AI, Midjourney, and DeviantArt, alleging that their copyrighted works were used without consent to train Stable Diffusion and the image-generation tools built on it, and that those tools can produce outputs imitating their styles. The court initially dismissed most of the claims with leave to amend while allowing the core copyright infringement claim against Stability AI to continue, and it has since permitted key copyright claims in the amended complaint to proceed. Like the Getty litigation, the case tests whether training generative models on copyrighted images without a licence amounts to infringement.
Programmers v. GitHub
In a notable lawsuit against GitHub, Microsoft, and OpenAI, a California judge has dismissed most of the 22 claims brought by developers who accused these companies of copyright infringement through GitHub Copilot. The developers alleged that Copilot, an AI tool, used code from their repositories without proper attribution. The judge found that the code in question was not sufficiently similar to the original works and dismissed claims related to copyright violations and the DMCA. Only two claims remain: one for open-source license violations and another for breach of contract. These remaining claims suggest potential issues with how Copilot handled open-source code and contractual agreements. The case underscores the ongoing legal challenges at the intersection of AI technology and intellectual property.
Universal Music Group v. Anthropic
On October 18, 2023, Universal Music Publishing Group, Concord Music Group, and ABKCO filed a lawsuit against Anthropic in Tennessee federal court. The publishers accuse Anthropic’s AI chatbot, Claude, of unlawfully using copyrighted song lyrics to train its model. They argue that Claude’s outputs replicate these copyrighted works, including lyrics from the song “American Pie,” without permission.
Anthropic's defence includes claims of safeguards against full lyric reproduction, an argument that responsibility for any infringing output rests with users, and the contention that its use of copyrighted material falls within the "fair use" doctrine. With U.S. copyright law not yet squarely addressing AI-specific issues, international developments may influence the case: debates in jurisdictions such as India over whether AI training on copyrighted works should require authorisation, and the EU's recently adopted AI Act, may help shape emerging legal standards.
The European Union's AI Act, which entered into force in August 2024 and whose obligations apply in stages, primarily regulates AI systems according to their risk level, setting requirements for transparency, accountability, and data governance rather than creating a standalone copyright regime for training data.
However, the Act does impose copyright-relevant duties on providers of general-purpose AI models: they must put in place a policy to comply with EU copyright law, including respecting rights holders' text-and-data-mining opt-outs, and publish a sufficiently detailed summary of the content used to train their models.
These transparency and accountability obligations may indirectly influence how copyrighted materials are used in AI training by encouraging clearer practices and better compliance with intellectual property laws. More specific rules on the use of copyrighted content, however, remain a matter for copyright legislation itself.
The case highlights the challenges of applying existing copyright laws to AI and could influence future legal frameworks regarding AI-generated content.
Tech Firms' Responses to Copyright Challenges
In response to these legal complexities, some tech firms have proactively taken steps to mitigate risks. For example, Microsoft has announced it will assume legal responsibility if customers are sued for copyright breaches while using its AI Copilot platform. Similarly, Google has pledged to defend users of its AI tools on Cloud and Workspace platforms against copyright claims. These actions demonstrate a growing awareness of the legal risks associated with AI-generated content and a commitment to protecting users from potential litigation.
The Future of Copyright and AI-Generated Content
The intersection of AI and copyright law in the English jurisdiction is rapidly evolving and complex. As AI technology continues to advance, the legal frameworks governing originality and ownership are coming under increasing strain. The cases reviewed, spanning disputes over AI-generated content and copyright infringement, reveal a growing tension between technological innovation and traditional intellectual property protections.
The current legal standards, which require originality and human authorship, struggle to accommodate the unique characteristics of AI-generated works. While recent rulings provide some clarity, such as the dismissal of claims in the GitHub Copilot case and the progression of Getty Images’ lawsuit against Stability AI, significant ambiguities remain. These legal battles highlight the need for updated legislation and judicial interpretations that address the specific challenges posed by AI.
In this context, international developments, such as the European Union's AI Act, may offer valuable guidance, but they are not yet sufficient to fully address the issue. As tech companies take proactive measures to mitigate risks, the legal landscape will likely continue to shift. The resolution of these cases will be pivotal in shaping the future of AI and copyright law, potentially leading to a more nuanced understanding of how to balance innovation with the protection of intellectual property.