The publisher of Rolling Stone and The Hollywood Reporter just filed the first major U.S. lawsuit to challenge Google’s “AI Overviews” on antitrust grounds rather than copyright infringement. This marks a notable shift in how media companies confront AI’s threat to their business model in U.S. courts.
Penske Media alleges that Google’s “AI Overviews”—summary boxes that appear atop search results—force publishers to either surrender their content for AI training and extraction or face reduced visibility in search rankings. These summaries then intercept the clicks and ad revenue that would otherwise have gone to the publishers, undermining their ad-dependent business model.
The case was filed in the U.S. District Court for the District of Columbia (D.D.C.) before Judge Amit Mehta—the same judge who presided over United States v. Google LLC, the antitrust suit the DOJ and a coalition of state attorneys general brought against Google in 2020, alleging the company illegally maintained a monopoly in the search-engine and search-advertising markets. In August 2024, Judge Mehta ruled that Google is an illegal search monopolist, finding that it violated Section 2 of the Sherman Act.
Since publishers have largely failed in U.S. courts to protect their IP on copyright-infringement grounds, Penske instead grounded its claims in antitrust law, framing Google’s conduct as an abuse of monopoly power to impose lopsided deals on publishers.
Strategic Legal Positioning
At its core, Penske’s lawsuit alleges that AI Overviews amounts to “illegal reciprocal dealing by a monopolist”—a concept rooted in classic U.S. antitrust doctrine that prohibits firms with monopoly power from coercing counterparties into unfair exchanges. In other words, Google exploits its search dominance to force publishers to either submit content to its AI summaries or face diminished search rankings and revenue losses.
Penske’s approach avoids publishers’ losing track record of fighting platforms such as Google on copyright grounds, where U.S. courts grant broad fair use protections and deny copyright protection for factual information or abstract ideas (see 17 U.S.C. § 102(b)). The reframing of AI Overviews as anticompetitive conduct by a monopolist would make it unlawful regardless of copyright infringement.
European Legal Counterpoint
European publishers operate under more robust legal protections than their U.S. counterparts. The EU Digital Single Market Directive grants publishers a “neighboring right”—a copyright-adjacent claim—over news snippets that platforms display, allowing them to demand licensing fees. The directive also permits publishers to block text-and-data mining (TDM) of their content when used for commercial AI training, giving them leverage to negotiate compensation or simply refuse access.
Italian publishers have already petitioned AGCOM to investigate AI Overviews as “traffic killers.” Brussels regulators can also deploy the Digital Services Act and Digital Markets Act to determine whether AI Overviews suppress referral traffic while favoring Google’s properties. The EU AI Act adds another enforcement lever, mandating transparency for general-purpose AI models, including training-data disclosure and risk documentation that regulators can audit.
Three Regulatory Pathways
Global technology companies face mounting pressure from regulators and courts through three distinct enforcement channels, each offering different tools to reshape how AI systems handle publisher content:
Antitrust remedies represent the most direct governmental intervention. Courts could order specific design modifications to search results and AI summaries—mandating that source links appear prominently above AI-generated text, restricting how much news content gets summarized for certain queries, or requiring platforms to drive a minimum percentage of users to click through to original articles. These structural remedies would treat AI summaries as potentially anticompetitive features rather than neutral innovations. See also the DOJ’s 2023–2025 antitrust proceedings against Google and FTC guidance on AI competition and consumer protection.
Licensing frameworks offer a market-based solution through collective bargaining between publishers and AI companies. This approach mirrors the “neighboring rights” deals that European publishers have secured with Google and Meta (formerly Facebook), where media companies receive compensation when platforms display their headlines and excerpts. While U.S. courts lack authority to impose mandatory licensing fees, litigation pressure from lawsuits by Penske, Chegg, and others could push AI companies toward voluntary payment agreements to avoid prolonged legal battles and potentially adverse judgments.
Compliance-by-design frameworks would require AI developers to build and maintain systematic technical controls throughout their development and deployment processes. These companies would need to implement auditable systems that track which content gets used for model training, ensure proper attribution in AI outputs, and prevent models from memorizing and regurgitating copyrighted text verbatim. This regulatory approach could create an “AI safe harbor” regime similar to the Digital Millennium Copyright Act’s protections for internet platforms—offering limited immunity to companies that follow prescribed technical standards and takedown procedures, while exposing non-compliant operators to enhanced liability.
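What such an auditable provenance system might look like in practice is, of course, not specified by any statute or by the litigation itself. The following is a minimal, hypothetical Python sketch of one building block: fingerprinting each document admitted to a training corpus and recording its source and license status in an audit trail that a regulator could later inspect. All names (`admit_document`, the license labels, the example URL) are illustrative assumptions, not any company’s actual system.

```python
import datetime
import hashlib
import json

# Hypothetical training-data provenance log: every document admitted to a
# training corpus is fingerprinted (SHA-256) and recorded with its source
# and license status, so an auditor can later verify what was used.

def fingerprint(text: str) -> str:
    """Stable content fingerprint for a document."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

audit_log: list[dict] = []

def admit_document(text: str, source_url: str, license_status: str) -> dict:
    """Record a document in the audit trail before it enters training.

    license_status is an illustrative label, e.g. "licensed",
    "opted_out", or "public_domain".
    """
    entry = {
        "sha256": fingerprint(text),
        "source": source_url,
        "license": license_status,
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    audit_log.append(entry)
    return entry

# A compliant pipeline would admit only documents whose license permits training.
doc = admit_document("Example article text.", "https://example.com/story", "licensed")
print(json.dumps(doc, indent=2))
```

A real deployment would add tamper-evident storage, output-attribution hooks, and memorization checks, but the core idea is the same: the record is created before training, not reconstructed afterward.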
Regulatory Divergence Across Jurisdictions
All of this means that technology executives operating globally should prepare for dramatically different enforcement approaches across major jurisdictions, each leveraging distinct legal tools and timelines, and together creating complex, overlapping compliance burdens.
United States enforcement will center primarily on antitrust rather than copyright claims. The Penske litigation exemplifies this strategic pivot—publishers have learned that copyright battles over AI training face an uphill climb in U.S. courts, where fair use doctrine remains expansive and factual information enjoys no protection. Antitrust cases, by contrast, can proceed without resolving thorny questions about transformative use or substantial similarity. However, these competition cases will likely unfold over years, given the complexity of proving market harm and the need for extensive economic analysis of traffic diversion and consumer welfare effects.
European enforcement offers regulators a more coordinated and potentially faster path to intervention. Brussels can deploy multiple legal frameworks simultaneously: the Digital Single Market Directive (2019/790) provides neighboring rights and text-and-data-mining opt-outs; the Digital Services Act and Digital Markets Act deliver competition-focused remedies for Very Large Online Platforms (VLOPs); and the EU AI Act imposes transparency obligations on general-purpose AI models that regulators can audit to trace how training data flows into AI summaries. This multi-pronged approach becomes particularly potent if traffic-impact studies—now being commissioned by several EU member states—demonstrate quantifiable harm to news publishers from AI-powered search features. Unlike the U.S., where individual companies must prove damages in court, European regulators can act on industry-wide data showing systematic referral loss.
United Kingdom regulation will likely chart a middle course balancing AI innovation with creator protections. The government has signaled possible expansion of text-and-data-mining exceptions beyond the current narrow research-focused carve-out, which could give AI developers greater certainty for training activities. Compared with the U.S., the UK’s fair dealing framework offers publishers stronger protections against commercial exploitation of their content. The pending Getty Images v. Stability AI case in UK courts will prove significant, as it may clarify AI companies’ secondary liability when their systems produce outputs that allegedly infringe copyright. The ruling could determine whether AI-generated content that closely mimics existing works qualifies as permissible “pastiche” or crosses into infringement territory—a decision that will influence how AI developers design output-filtering and attribution systems.
These jurisdictional differences create strategic complexity for global platforms, which must engineer AI products that satisfy divergent regulatory expectations while maintaining operational coherence across markets.
Shifting Business Models in the Age of AI Licensing
The zero-cost content model that powered early AI development is collapsing under litigation pressure and regulatory scrutiny, forcing a transition toward paid licensing frameworks. Technology companies must now budget for dual licensing obligations that cover both historical content used in AI model training and ongoing payments for real-time extraction used in AI summaries. At the same time, they must redesign AI interfaces to drive users back to original sources through visible attribution and traffic-directing features. These costs upend the assumption that has underpinned AI development to date: free access to high-quality training material.
Publishers confront equally complex adaptation requirements, even as they gain potential new revenue streams. They must implement machine-readable opt-out protocols (such as updated robots.txt exclusions), organize collective bargaining structures to overcome individual negotiating weaknesses, and develop comprehensive traffic measurement systems to document financial harm from AI extraction. The licensing shift favors large media companies over smaller ones, as major outlets are positioned to secure favorable early agreements while smaller publishers struggle with compliance costs and limited leverage—potentially accelerating media-industry consolidation.
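The machine-readable opt-out protocols mentioned above are typically expressed through robots.txt. As a hedged illustration, the sketch below uses Python’s standard-library `urllib.robotparser` to check a hypothetical policy that excludes AI-training crawlers while leaving ordinary search indexing open. `Google-Extended` and `GPTBot` are real user-agent tokens that Google and OpenAI have said their AI crawlers honor; the site and the generic `SearchBot` agent are invented for the example, and honoring robots.txt remains voluntary on the crawler’s side.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical publisher robots.txt: block AI-training crawlers
# site-wide, allow everything else.
ROBOTS_TXT = """\
User-agent: Google-Extended
Disallow: /

User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# AI-training crawlers are excluded site-wide...
print(parser.can_fetch("Google-Extended", "https://example.com/article"))  # False
print(parser.can_fetch("GPTBot", "https://example.com/article"))           # False
# ...while a generic search crawler can still index the site.
print(parser.can_fetch("SearchBot", "https://example.com/article"))        # True
```

Note the asymmetry the Penske complaint targets: a publisher can opt out of dedicated AI-training crawlers this way, but content fetched for search indexing can still surface in AI summaries, which is precisely the tying of search visibility to AI extraction at issue.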
For both sides, the window for voluntary agreements is narrowing as legal challenges multiply. Companies that establish comprehensive licensing partnerships before litigation advances will likely shape how courts and regulators define industry standards.
Two Legal Futures for AI-Powered Search
A Penske victory would establish hybrid remedies combining court-ordered antitrust constraints—such as mandated source attribution or click-through requirements—with negotiated access agreements between publishers and AI companies. This could create a regulatory framework that courts in other jurisdictions might emulate, similar to the way the EU’s Digital Markets Act influenced platform governance debates globally.
A Google victory would remove immediate legal barriers to the expansion of AI Overviews, shifting bargaining power decisively toward tech platforms and accelerating publisher consolidation as smaller outlets struggle to compete. Meanwhile, many surviving publishers may begin producing “AI-native” content—formats optimized to be summarized rather than read directly—mirroring the industry’s earlier adaptation to Google News and Facebook’s Instant Articles.
The dynamics at play transcend this single case. Internet search is being restructured at the AI layer, where summary interfaces increasingly mediate between users and sources. As these new rules crystallize across major markets, early strategic positioning will prove far more cost-effective than reactive litigation.
Looking Ahead: Building Legal Architecture for the AI Information Age
The Penske litigation marks a pivotal moment in which AI systems collide with legal frameworks built for the pre-digital era. Judge Amit Mehta’s ruling in the federal antitrust case against Google established the company’s search monopoly as legal fact, providing Penske with precedent that earlier copyright plaintiffs lacked. As courts and regulators across major jurisdictions grapple with how AI systems monetize content through summary interfaces, the collision between technological innovation and established IP protections exposes fundamental tensions that markets alone cannot resolve.
Solutions will likely remain jurisdictionally fragmented. In the U.S., antitrust doctrine defines the battlefield; in Europe, neighboring rights and AI transparency regimes dominate; and in the U.K., evolving text-and-data-mining exceptions shape how innovation and authorship intersect. Each system imposes its own compliance burden, leaving global platforms to navigate a patchwork of enforcement models while preserving product coherence.
The early movers will shape the rules. Companies that proactively establish licensing partnerships with content owners, implement transparent attribution systems to preserve traffic, and design user interfaces that encourage click-throughs will likely influence how courts and regulators define acceptable practice. Those that wait for legal clarity risk operating under frameworks defined by competitors and enforcers rather than by industry collaboration.
The Penske case may resolve specific claims about Google’s conduct, but the broader questions it raises—how AI systems should compensate creators, what constitutes fair use, and whether AI-generated summaries unlawfully divert economic value—will determine digital publishing’s trajectory. The window for strategic positioning is narrowing as litigation accelerates and regulatory scrutiny intensifies. The rules written today will govern the next generation of AI-powered information services.