Copyright in the Age of Generative AI – Part I

How generative AI is forcing a global reckoning with authorship and ownership

As we stand on the cusp of the generative AI era, we face fundamental questions about the nature of creativity and the future of human expression.

Consider the case of AI-generated art. In 2018, artificial intelligence entered the rarefied world of fine art when a portrait created by an AI algorithm sold at Christie’s for $432,500. Today, tools like DALL·E and Midjourney can generate stunning visual artwork from text prompts—democratizing image creation while raising urgent questions about the long-term impact of AI on artists and creative industries.

Critics argue that generative systems, no matter how sophisticated, merely simulate human creativity rather than originate it. They contend that the essence of art lies in human experience, emotion, and intention—qualities that, they argue, a machine cannot authentically possess.

The issue, however, is less about creativity itself and more about who—if anyone—can claim ownership when a machine does at least part of the work. Can a creation be protected if no human authored it? Can the legal concept of copyright survive the rise of generative AI?

Thaler, Copyright, and Human Authorship

In Thaler v. Perlmutter (2023), Dr. Stephen Thaler attempted to register copyright for an AI-generated image, asserting that creative works produced by AI should qualify for protection. The U.S. District Court for the District of Columbia upheld the U.S. Copyright Office’s ruling that AI-generated works are not eligible for copyright without at least some level of human input.

The ruling reaffirmed the current U.S. position on AI and copyright: human authorship remains the touchstone of protection. While providing near-term clarity for rights holders, this principle may prove increasingly difficult to sustain as generative models advance and human-machine collaboration deepens across creative industries.

In February 2023, the U.S. Copyright Office sought to clarify its evolving stance on AI copyright policy, stating that creative works produced with the assistance of AI technology may still qualify for protection—provided there is sufficient human authorship.

Meanwhile, ongoing litigation in Andersen v. Stability AI and related cases shows that courts are now grappling with the flip side of this legal equation: whether the use of copyrighted materials to train generative models constitutes fair use. These cases test the boundaries of AI fair use litigation and raise fundamental questions about data acquisition practices in machine learning systems.

Following its February guidance, the Copyright Office issued a formal Notice of Inquiry in August 2023, inviting broad public comment on a range of issues, including whether AI-generated works qualify for copyright protection, what level of human involvement is required for authorship, and who bears liability when AI systems produce infringing content. In effect, this marks the agency’s shift from issuing standard procedural guidance to reexamining the foundations of copyright law.

“Fair Use” Under Fire

Litigation over training data for generative models—exemplified by Andersen v. Stability AI—has gathered momentum. In October 2023, Judge Orrick issued a mixed order: several claims were dismissed, but the core allegation—direct copyright infringement—was allowed to proceed against Stability AI. The court deemed it plausible that copyrighted images were used to train the model, leaving the fair-use defense—a doctrine on which developers have relied, but whose contours remain unsettled—to be tested at later stages.

The stakes escalated in December 2023, when The New York Times filed a landmark lawsuit against OpenAI and Microsoft. Alleging that millions of its articles were used without permission to train ChatGPT and other large language models, the Times introduced one of the most consequential legal challenges to AI data scraping. Unlike earlier suits by individual artists or writers, this case is backed by a media heavyweight with extensive resources and a comprehensive archive of copyrighted content, substantially raising the legal and reputational stakes.

The Evolving International Response

On March 13, 2024, the European Union took a decisive step when the European Parliament passed the EU AI Act, the world’s first comprehensive legal framework for artificial intelligence. While not a copyright law, it imposes substantial transparency requirements on developers of “general-purpose AI models,” including the disclosure of detailed summaries of all copyrighted data used in training.

This approach—centered on documentation, risk management, and transparency—stands in contrast to the piecemeal, litigation-driven development now unfolding in the U.S. The divergence has already given rise to thorny compliance issues facing companies that develop or deploy AI, including regulatory uncertainty, jurisdictional arbitrage, and forum-shopping. For AI companies operating globally, this scenario complicates product development and forces the adoption of fragmented and inconsistent strategies across markets.

Regional splits expose the debate’s deeper conceptual fault lines. As generative AI tools become increasingly sophisticated—and as the line between human creativity and machine assistance blurs—pressure will mount on legislators and courts to revisit foundational assumptions about authorship in the digital age. Where is the line between an AI tool and an AI author? And can the act of “training” an AI on the vast expanse of human culture ever be considered “fair”?

Costs vs. Benefits

Whether the impact of generative AI on a particular creator or industry is positive or negative will depend largely on where they fall along the cost-benefit curve. On one hand, AI has democratized creativity, making powerful tools accessible to amateurs and hobbyists. On the other, these same systems threaten to disrupt established creative sectors and displace human workers across publishing, visual arts, music, and journalism.

As we navigate this new and uncertain era of AI-assisted creativity, it’s increasingly clear that we need robust legal, ethical, and philosophical frameworks to govern how generative AI is built and used. Harnessing AI in creative fields must go hand in hand with safeguarding creators’ rights—through licensing, attribution, and enforceable limits on appropriation. Without such protections, the economic foundations of creative industries risk erosion—and even eventual collapse.

The U.S. Copyright Office has already taken steps toward defining a legal framework for AI creativity, ruling that AI-generated content is not eligible for protection without at least some measure of human input. But this is only the beginning of what will be a long and complex process. With each new iteration of generative systems, legal and regulatory institutions will face increasing pressure to respond.

In sum, the accelerating rise of generative AI has forced us to confront a simple, two-pronged question: In an age of intelligent machines, how do we define and reward human creativity? Our responses—whether in courtrooms, legislatures, studios, or boardrooms—will influence not only copyright law and content quality, but also our understanding of authorship, originality, and what it means to be human in the 21st century.
