Meta Faces $359 Million Lawsuit Over Alleged Torrenting of Adult Content for AI Training
Bottom Line: Adult film producer Strike 3 Holdings has sued Meta for $359 million, alleging the tech giant torrented over 2,300 adult videos since 2018 to train AI models while using "stealth networks" to hide its activities—raising serious questions about corporate accountability in the age of AI.
In what could be one of the most explosive copyright battles in AI history, adult entertainment company Strike 3 Holdings has filed a federal lawsuit accusing Meta of systematically pirating pornographic content through BitTorrent networks to train its artificial intelligence models. The case, filed July 23, 2025, in California federal court, alleges that Meta downloaded and distributed at least 2,396 copyrighted adult films since 2018; if the infringement is found willful, statutory damages could exceed $359 million.
The "Tit-for-Tat" Strategy
The lawsuit centers on a particularly damaging allegation: that Meta didn't merely download pirated content but actively distributed it using BitTorrent's "tit-for-tat" mechanism, a protocol feature that rewards users who share content with faster download speeds from other peers. According to the complaint, "Meta made the deliberate choice to seed Plaintiffs' motion pictures in order to capitalize on faster download speeds so it could infringe other content faster."
Strike 3 Holdings claims Meta specifically targeted highly popular adult content—"often within the most infringed files on BitTorrent websites" on "the very same day the motion pictures are released"—to exploit this system. By seeding these popular files, Meta allegedly gained priority access when downloading massive amounts of other data for training its AI models, including Movie Gen and LLaMA.
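The reward mechanism the complaint describes can be illustrated with a toy version of BitTorrent's choking policy, in which each peer periodically grants its limited upload slots to whichever peers have been uploading to it fastest. This is a simplified sketch, not any real client's implementation (real clients add optimistic unchoking, timed rounds, and other refinements), and the peer names and rates below are hypothetical:

```python
# Toy sketch of BitTorrent's "tit-for-tat" choking policy: upload slots go
# to the peers that have uploaded to us fastest, so generous seeders are
# rewarded with faster downloads in return.

def select_unchoked(upload_rates_from_peers, slots=4):
    """Pick which peers to upload to this round.

    upload_rates_from_peers: dict mapping peer id -> bytes/sec recently
    received from that peer. Returns the ids of the top `slots` uploaders.
    """
    ranked = sorted(upload_rates_from_peers.items(),
                    key=lambda kv: kv[1], reverse=True)
    return [peer for peer, _rate in ranked[:slots]]

# Hypothetical peers and rates: a node that seeds popular files generously
# climbs other peers' rankings, gaining the download-speed advantage the
# complaint says Meta exploited.
rates = {
    "peer_a": 900_000,
    "peer_b": 50_000,
    "peer_c": 400_000,
    "peer_d": 10_000,
    "peer_e": 700_000,
}
print(select_unchoked(rates))  # ['peer_a', 'peer_e', 'peer_c', 'peer_b']
```

Under this policy, a node that uploads nothing is eventually "choked" by its peers, which is why the complaint argues that seeding (not just downloading) was a deliberate strategy.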
Corporate Infrastructure and Stealth Networks
Perhaps most concerning from a privacy and accountability perspective are allegations that Meta went to extraordinary lengths to conceal its torrenting activities. The lawsuit alleges that Meta attempted to "conceal its BitTorrent activities" through the use of "six Virtual Private Clouds," which constituted a "stealth network" of "hidden IP addresses."
The complaint documents 47 Facebook-owned IP addresses that allegedly engaged in copyright infringement, verified through MaxMind's IP-geolocation database. Beyond those corporate addresses, the complaint describes deliberate concealment rather than accidental infringement: the Virtual Private Clouds were allegedly set up in partnership with a "major third-party data center provider," and Strike 3 says it traced "at least one residential IP address of a Meta employee" to infringing downloads, suggesting Meta may have directed employees to torrent from home to obscure data trails.
Context: Part of a Larger Pattern
This lawsuit doesn't exist in isolation. Earlier in 2025, court filings in a separate case revealed that Meta staff torrented nearly 82TB of pirated books for AI training, and internal messages showed researchers expressing concerns about the practice. One senior AI researcher wrote, "I don't think we should use pirated material. I really need to draw a line here."
Internal communications also showed broader employee discomfort. Meta research engineer Nikolay Bashlykov noted that "torrenting from a corporate laptop doesn't feel right."
Despite these internal concerns, the documents suggest Meta employees discussed using VPNs to mask the company's IP address and create anonymity when downloading torrented material.
Privacy Implications and Broader Concerns
The allegations raise troubling questions about corporate behavior in the AI training data collection process:
Corporate Accountability: If proven, the lawsuit suggests Meta may have knowingly engaged in large-scale copyright infringement while taking active steps to conceal these activities from detection.
Employee Involvement: The allegation that Meta employees may have been directed to torrent from their home IP addresses suggests a concerning willingness to involve individual employees in potentially illegal activities to protect corporate interests.
Age Verification Failures: Strike 3 Holdings alleges Meta's tactics distributed its videos "to minors for free without age checks in states that now require them," undermining Strike 3's standing as an "ethical source" for adult content.
What Strike 3 Holdings Stands to Lose
Strike 3 Holdings, which operates adult brands including Vixen, Tushy, Blacked, and Deeper, argues that Meta's alleged training on its content could eliminate its competitive advantage. The producer fears the training may ultimately yield AI models that can create similar "Hollywood grade" films at lower cost: "By training so specifically on Plaintiffs' Works, Meta's AI Movie Gen may very well soon produce full length films with Plaintiffs' identical style and quality, which other real world adult studios cannot replicate."
The company characterizes its content as featuring "natural, human-centric imagery" featuring "parts of the body not found in regular videos" and "unique" forms of "human interactions and facial expressions"—precisely the type of content that could give AI video generators a competitive edge in adult entertainment.
The Legal Stakes
With statutory damages reaching up to $150,000 per willfully infringed work, and 2,396 films at issue, Meta faces potential damages exceeding $359 million in this case alone. The lawsuit seeks not only financial compensation but also permanent injunctive relief requiring Meta to delete any allegedly pirated content from its AI training datasets and existing models.
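The headline figure follows directly from the statutory ceiling multiplied across the works at issue, as this back-of-the-envelope check shows:

```python
# Statutory-damages ceiling: up to $150,000 per willfully infringed work
# (17 U.S.C. § 504(c)), multiplied across the 2,396 films in the complaint.
works = 2_396
per_work_max = 150_000
maximum_exposure = works * per_work_max
print(f"${maximum_exposure:,}")  # $359,400,000
```

A court could also award far less per work, so the $359 million figure is a ceiling, not an estimate of likely liability.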
Meta's Response
A Meta spokesperson told media outlets, "We're reviewing the complaint, but don't believe Strike's claims are accurate." The company has not yet filed a formal response to the lawsuit.
Industry Precedent
The lawsuit builds directly on legal groundwork laid by authors in the Kadrey v. Meta case, where Judge Vince Chhabria provided a roadmap for future litigants, suggesting that the most potent legal challenge to AI training is a theory of "market dilution." He stated, "If you are dramatically changing, you might even say obliterating, the market for that person's work… I just don't understand how that can be fair use."
What This Means for Privacy and AI Governance
This case represents more than just a copyright dispute—it's a test of whether major tech companies will be held to the same legal standards as individual users when it comes to digital piracy. The allegations of deliberate concealment, employee involvement, and systematic infringement at corporate scale suggest a troubling approach to data acquisition for AI training.
For privacy advocates and digital rights observers, the case highlights critical questions: How far will companies go to acquire training data? What safeguards exist to prevent corporate-scale piracy? And when AI models trained on allegedly pirated content generate revenue, who bears responsibility for the original infringement?
As AI continues to reshape industries, the Strike 3 v. Meta case may set crucial precedents for how courts balance fair use, market harm, and corporate accountability in the digital age. With discovery likely to reveal more details about Meta's internal processes and decision-making around training data acquisition, this case could provide unprecedented insight into how one of the world's largest tech companies approaches the legal and ethical boundaries of AI development.
The case is Strike 3 Holdings, LLC v. Meta Platforms, Inc., Case No. 4:25-cv-06213-KAW, filed in the U.S. District Court for the Northern District of California.