Meta Sued by U.S. Virgin Islands Over Scam Ads and Risks to Children
Breaking Legal Action Targets $16 Billion in Alleged Fraudulent Ad Revenue While Expanding Multistate Child Protection Effort
January 2, 2026
The U.S. Virgin Islands has filed a groundbreaking lawsuit against Meta Platforms Inc., marking the first action by an attorney general specifically targeting the tech giant's alleged profits from fraudulent advertising while simultaneously addressing longstanding child safety concerns on Facebook and Instagram.
Attorney General Gordon C. Rhea announced the filing on December 30, 2025, in the Superior Court of the Virgin Islands on St. Croix, alleging that Meta "knowingly and intentionally exposes its users to fraud and harm" to maximize user engagement and revenue. The case builds upon a November 2025 Reuters investigation that revealed Meta internally projected approximately $16 billion—roughly 10% of its 2024 revenue—would come from advertisements promoting scams, illegal gambling, and banned products.

The Dual Crisis: Fraud and Child Safety
The Virgin Islands lawsuit represents a significant expansion of ongoing multistate litigation against Meta. While 42 other state attorneys general have sued Meta over child safety issues since October 2023, the Virgin Islands complaint uniquely combines both fraud accountability and child protection claims under the territory's Consumer Protection Law and Consumer Fraud and Deceptive Business Practices Act.
Fraudulent Advertising Allegations
According to the complaint, Meta's platforms have become integral infrastructure for the global fraud economy—a finding consistent with previous investigations into Meta's calculated tolerance for fraudulent advertising. The lawsuit alleges that the company:
- Detected fraudulent advertisements but charged scammers premium rates to continue running them rather than removing the content
- Used algorithms and user data to help fraudsters target vulnerable populations, including the elderly
- Privately acknowledged that its platforms were responsible for approximately one-third of all successful scams in the United States
- Projected that roughly $16 billion of its 2024 revenue would derive from fraudulent ads
Internal Meta documents reviewed by Reuters in November 2025 revealed that the company displayed an estimated 15 billion "higher risk" scam ads daily. Another document indicated Meta earned approximately $7 billion in annualized revenue from such advertisements. For compliance professionals, this represents a fundamental vendor risk that requires immediate assessment.
The Reuters investigation exposed Meta's enforcement threshold: the company only banned advertisers when automated systems reached 95% certainty of fraudulent behavior. Accounts below that threshold faced higher advertising costs through a "penalty bids" system—effectively monetizing suspected fraud rather than eliminating it.
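The reported policy reduces to a few lines of decision logic. The sketch below is purely illustrative: the function name, score scale, and penalty multiplier are assumptions for the sake of the example, not details from Meta's actual systems or the Reuters documents.

```python
# Illustrative sketch of the enforcement policy described by Reuters:
# advertisers are banned only at >= 95% fraud certainty; below that,
# suspected fraudsters pay more via "penalty bids" but keep running ads.
# All names and the 1.5x surcharge are hypothetical.

BAN_THRESHOLD = 0.95      # reported certainty required before a ban
PENALTY_MULTIPLIER = 1.5  # assumed surcharge; actual rates unreported

def enforcement_action(fraud_score: float, base_bid: float) -> tuple[str, float]:
    """Return (action, effective_bid) for an advertiser's ad bid."""
    if fraud_score >= BAN_THRESHOLD:
        return ("ban", 0.0)
    if fraud_score > 0.0:
        # Suspected-but-unproven fraud: charged more, not removed.
        return ("penalty_bid", base_bid * PENALTY_MULTIPLIER)
    return ("allow", base_bid)

# A 90%-likely fraudulent advertiser keeps running ads at a premium:
print(enforcement_action(0.90, 10.0))  # ('penalty_bid', 15.0)
```

The point the sketch makes concrete: below the ban threshold, suspicion raises the price of advertising rather than ending it, which is why the article describes the system as monetizing suspected fraud.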
According to the complaint, Meta "knowingly profited from fraudulent advertising" and projected that about 10% of its 2024 revenue would come from fraudulent ads. The filing also alleges the company detected ads it believed were fraudulent but allowed them to remain online while charging the fraudsters more to advertise.
Child Safety and Mental Health Harms
The lawsuit expands upon allegations made by 42 state attorneys general in October 2023 regarding Meta's impact on youth mental health. The Virgin Islands complaint accuses Meta of:
- Adopting algorithms and platform designs that prevented teens from controlling their time on Meta's platforms, fostering addiction, problematic use, anxiety, depression, self-harm, and suicide
- Allowing its social media, messaging, and virtual reality platforms to become "breeding grounds for predators who groom, solicit, and sexually exploit children"
- Publicly touting platform safety while "consistently and intentionally" failing to implement written policies
- Violating the Children's Online Privacy Protection Act (COPPA) by collecting data from children under 13 without obtaining parental consent
The complaint notes that Meta's algorithm-driven features—including infinite scroll, constant notifications, and engagement mechanics—were specifically designed to maximize user attention and time spent on platforms, even when company research indicated these features could harm young users.
Meta's Response and Industry Context
Meta spokesperson Andy Stone dismissed the allegations as baseless, referring to previous company statements. "We aggressively fight fraud and scams because people on our platforms don't want this content, legitimate advertisers don't want it and we don't want it either," Stone told Reuters.
In a December 2025 statement, Meta said it had removed more than 134 million scam ads from its platforms during the year and supported law enforcement efforts to identify and arrest scammers. However, the company declined to provide alternative revenue figures when challenged on the $16 billion projection, which Stone described as "rough and overly-inclusive."
Regarding child safety allegations, Meta representatives have repeatedly stated the company has introduced more than 30 tools to support teens and their families. When similar lawsuits emerged in 2023, Meta expressed disappointment that attorneys general pursued litigation rather than continued dialogue about platform safety measures.

The Broader Legal Landscape
The Virgin Islands action joins an unprecedented wave of state-level enforcement against Meta's business practices:
Multistate Child Safety Litigation (October 2023)
A bipartisan coalition of 42 attorneys general filed coordinated federal and state lawsuits alleging Meta:
- Deliberately designed addictive features targeting children and teens
- Misled the public about platform safety and risks
- Violated COPPA by collecting children's data without parental consent
- Contributed to what the U.S. Surgeon General characterized as a "youth mental health crisis"
The federal lawsuit was filed in the U.S. District Court for the Northern District of California by 33 states, while nine additional attorneys general filed parallel suits in their respective state courts. The coalition included attorneys general from across the political spectrum, reflecting broad bipartisan concern about social media's impact on young users.
International Regulatory Pressure
Meta also faces escalating scrutiny from regulators and lawmakers at home and abroad, including historic enforcement actions under the EU's Digital Services Act:
- European families have filed lawsuits over alleged failures to protect minors
- U.K. regulators have issued damning findings about Meta's handling of scam advertisements
- The U.S. Securities and Exchange Commission is investigating Meta for running financial scam ads
- Congressional lawmakers have called for Federal Trade Commission action following the Reuters investigation
Revenue Calculations and Internal Decision-Making
The Reuters investigation revealed disturbing internal calculations at Meta. According to leaked documents:
Revenue Guardrails: In early 2025, Meta established a 0.15% revenue "guardrail" ($135 million) as the maximum amount it was willing to forgo to crack down on suspicious advertisers—even while earning $3.5 billion every six months from ads deemed to carry "higher legal risk."
Risk-Reward Analysis: Internal assessments reportedly concluded that revenue from risky ads would "almost certainly exceed the cost of any regulatory settlement involving scam ads," effectively treating regulatory fines as a business expense rather than a deterrent.
Enforcement Constraints: When enforcement staff proposed shutting down fraudulent accounts, internal documents showed Meta sought assurance that growth teams would not object "given the revenue impact."
"Scammiest Scammers": Despite identifying the worst offenders, Meta was reportedly slow to act. Some "High Value Accounts" accumulated over 500 policy violations without being permanently banned.
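The risk-reward analysis described above is, at bottom, expected-value arithmetic. The sketch below restates it using figures from the reporting (the $7 billion annualized risky-ad revenue and the $135 million guardrail); the settlement amount is a hypothetical placeholder chosen only to illustrate the comparison.

```python
# Back-of-the-envelope version of the reported internal risk-reward analysis.
# Revenue and guardrail figures come from the Reuters reporting; the
# settlement estimate is a hypothetical placeholder for illustration only.

RISKY_AD_REVENUE = 7_000_000_000       # reported annualized revenue from risky ads
ENFORCEMENT_GUARDRAIL = 135_000_000    # reported cap on revenue Meta would forgo
HYPOTHETICAL_SETTLEMENT = 1_000_000_000  # assumed one-time regulatory settlement

def risky_ads_still_profitable(revenue: int, guardrail: int, settlement: int) -> bool:
    """True if risky-ad revenue, net of a settlement, still dwarfs the
    amount the company was reportedly willing to forgo on enforcement."""
    return revenue - settlement > guardrail

print(risky_ads_still_profitable(RISKY_AD_REVENUE, ENFORCEMENT_GUARDRAIL,
                                 HYPOTHETICAL_SETTLEMENT))  # True
```

Even with a billion-dollar settlement assumed, the reported revenue stream exceeds the enforcement budget by more than an order of magnitude, which is the sense in which fines function as a business expense rather than a deterrent.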

Technical and Platform Design Issues
The lawsuit highlights specific platform features and algorithmic designs that allegedly facilitate both fraud and youth addiction:
Algorithmic Amplification
Meta's recommendation algorithms prioritize content that maximizes engagement, potentially exposing users—especially vulnerable populations—to harmful content including fraudulent advertisements and predatory interactions.
Infinite Scroll and Push Notifications
These features are designed to eliminate natural stopping points in user sessions, described by one developer as creating "behavioral cocaine" effects particularly harmful to developing adolescent brains.
Data Collection and Targeting
Meta's extensive data collection enables sophisticated microtargeting that allows fraudsters to identify and exploit vulnerable users, including elderly populations susceptible to financial scams.
Inadequate Verification Systems
The 95% certainty threshold for banning advertisers creates substantial room for fraudulent actors to operate with relative impunity, particularly sophisticated operations that can evade detection algorithms.
Implications for the Tech Industry
The Virgin Islands lawsuit represents several significant developments in tech regulation, coming at a time when AI-powered scams are driving record losses globally.
First Fraud-Focused AG Action
This marks the first time a state or territorial attorney general has specifically targeted a major platform's revenue from fraudulent advertising. If successful, it could establish a template for other jurisdictions to pursue similar claims.
Convergence of Safety Issues
By combining fraud accountability with child safety claims, the lawsuit treats platform harms holistically rather than as separate issues. This approach recognizes that common business practices—maximizing engagement at all costs—drive multiple types of user harm.
Consumer Protection Enforcement
The use of consumer protection statutes at the territorial level demonstrates that local jurisdictions can pursue accountability even when federal action remains limited. This could encourage other territories and municipalities to take similar action.
Discovery Process Potential
If the case proceeds, discovery could expose additional internal Meta documents regarding the company's knowledge of platform harms and decision-making around enforcement priorities. Previous litigation has revealed damaging internal communications, and the Virgin Islands case could produce similar disclosures.
The Financial Stakes
Meta's financial exposure extends beyond potential settlement costs. The company faces:
Civil Penalties: Violations of consumer protection laws can carry substantial per-violation penalties. With billions of fraudulent ad impressions, theoretical maximum penalties could be astronomical.
Disgorgement: The Virgin Islands seeks disgorgement of profits earned through unlawful advertising practices. The $16 billion revenue projection provides a potential benchmark for these calculations.
Injunctive Relief: Court-ordered changes to Meta's advertising verification systems, content moderation practices, and youth protection features could require significant technical investments and reduce advertising revenue from high-risk sources.
Reputational Impact: Ongoing revelations about Meta's internal decision-making regarding fraud and child safety continue to erode trust among users, advertisers, and regulators.
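The civil-penalty exposure described above is driven by simple multiplication of per-violation penalties against impression counts. A hedged sketch: the daily impression figure comes from the reporting, but the per-violation dollar amount and the assumption that each impression counts as a separate violation are hypothetical, chosen only to show why the theoretical maximum is called astronomical.

```python
# Per-violation penalty arithmetic behind the "astronomical" exposure claim.
# The daily impression count comes from the Reuters reporting; the
# per-violation dollar amount is a hypothetical statutory figure.

DAILY_RISKY_AD_IMPRESSIONS = 15_000_000_000  # reported "higher risk" scam ads/day
HYPOTHETICAL_PENALTY_PER_VIOLATION = 1_000   # assumed statutory penalty (USD)

def theoretical_max_penalty(impressions_per_day: int, days: int,
                            penalty_per_violation: int) -> int:
    """Upper bound if every impression counted as a separate violation."""
    return impressions_per_day * days * penalty_per_violation

# One year of impressions, each treated as a violation:
total = theoretical_max_penalty(DAILY_RISKY_AD_IMPRESSIONS, 365,
                                HYPOTHETICAL_PENALTY_PER_VIOLATION)
print(f"${total:,}")  # $5,475,000,000,000,000
```

No court would impose a figure in the quadrillions, of course; the exercise shows why the realistic fight is over how violations are counted, not over the per-violation amount.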
Meta's Defense Strategy
Based on public statements and responses to similar litigation, Meta's likely defense arguments include:
Scale Challenge: Meta processes billions of advertisements daily across multiple platforms, making perfect enforcement practically impossible despite significant investments in safety systems.
Evolving Threats: Fraudsters constantly develop new techniques to evade detection, requiring ongoing adaptation of enforcement systems rather than achieving complete elimination of bad actors.
User Reports: Meta has stated that user reports about scam content declined in 2024, suggesting improvements in enforcement effectiveness even as internal projections suggested otherwise.
Competitive Position: The company may argue that fraud exists across all major advertising platforms and that Meta's enforcement efforts compare favorably to competitors.
Youth Tools: Meta points to the 30+ tools introduced to support teens and families as evidence of good-faith efforts to address child safety concerns.
Expert Perspectives
The lawsuit and underlying Reuters investigation have drawn strong reactions from former Meta employees and industry observers:
Rob Leathern, former Meta business integrity executive, told Fortune that the findings expose "a stark tension between revenue growth and consumer harm." He characterized the documented approach as "disappointing" and noted that generative AI is making fraud easier to execute while platforms have not been sufficiently transparent about using AI tools to fight abuse.
Child safety advocates have long criticized Meta's approach. "Meta saw American kids as a 'valuable and untapped market'—nameless factors on a bottom line to maximize profits," Connecticut Attorney General William Tong stated when announcing the 2023 multistate lawsuit. "Their abusive practices have unleashed a youth mental health catastrophe."
What Comes Next
The Virgin Islands case faces a lengthy litigation process with several key stages:
Discovery Phase: Both sides will exchange documents and testimony. Meta may seek to seal certain internal documents as confidential business information, while the Virgin Islands will push for public disclosure.
Motion Practice: Meta will likely file motions to dismiss some or all claims. Success could narrow the case significantly, while failure would strengthen the Virgin Islands' position for settlement negotiations.
Potential Settlement: Most multistate actions against tech companies settle before trial. A settlement could include financial payments, operational changes, and ongoing monitoring—similar to privacy settlements involving other tech giants.
Precedent Setting: Regardless of outcome, the case establishes fraud-focused consumer protection claims as a viable avenue for platform accountability, potentially inspiring similar actions by other jurisdictions.
Implications for Users and Businesses
The lawsuit highlights persistent challenges for platform users:
Individual Users: Should maintain heightened skepticism of social media advertisements, particularly those promising investment opportunities, health products, or deals that seem too good to be true. Our comprehensive guide to social media scams and protection strategies provides detailed guidance for staying safe.
Businesses: Legitimate advertisers may face higher costs and stricter verification requirements as platforms respond to regulatory pressure, creating both compliance burdens and competitive advantages over fraudulent operators.
Elderly Populations: Face particular vulnerability to sophisticated scams that leverage Meta's targeting capabilities to identify and exploit senior citizens.
Parents: Continue navigating a challenging environment where platforms designed to be addictive actively compete for their children's attention, potentially undermining parental oversight and family relationships.
The Bigger Picture
The Virgin Islands lawsuit sits at the intersection of multiple critical issues in technology governance:
Platform Accountability
As digital platforms have become essential infrastructure for commerce and communication, questions about their responsibility for facilitating harm have intensified. The case asks whether platforms can knowingly profit from user harm or whether they bear responsibility for preventing foreseeable harms even when doing so reduces revenue.
Algorithmic Amplification
Meta's systems don't merely host content—they actively amplify it through recommendation algorithms optimized for engagement. This raises questions about when amplification crosses the line from neutral platform operation to active participation in harmful activity.
Youth Protection in Digital Spaces
With social media use nearly universal among American teens, protecting young users has become a paramount concern. The debate extends beyond Meta to fundamental questions about age verification, parental control, and whether certain business models are compatible with child safety.
Fraud in the Digital Economy
As more commerce and communication moves online, fraud has followed. The case highlights whether major platforms can be held accountable not just for hosting fraud, but for systems that enable fraudsters to efficiently target victims.
Conclusion
The U.S. Virgin Islands lawsuit against Meta represents a watershed moment in platform accountability. By explicitly connecting Meta's revenue models to both fraud proliferation and child safety harms, the case challenges the fundamental premise that platforms can maximize engagement and revenue without responsibility for the consequences.
The $16 billion figure—representing Meta's own projection of revenue from fraudulent advertising—provides stark evidence of how platform business incentives can diverge from user safety. Combined with evidence that Meta established specific revenue guardrails limiting enforcement against high-spending fraudulent advertisers, the case paints a picture of systematic choices that prioritized profit over protection.
For the cybersecurity community, the case underscores the extent to which major platforms have become critical infrastructure for fraud operations. The allegation that Meta's platforms are responsible for roughly one-third of successful U.S. scams suggests that effective fraud prevention requires not just user education and law enforcement action, but fundamental changes to how platforms approach content moderation and advertiser verification.
As this case proceeds through the courts alongside the 42-state child safety litigation, Meta faces a crossroads. The company can continue defending its current practices and risk increasingly severe regulatory and legal consequences, or it can fundamentally reassess how it balances growth objectives against user protection. The outcome will likely influence not just Meta's operations, but the broader tech industry's approach to platform governance and accountability.

The stakes extend beyond Meta's bottom line to fundamental questions about digital platform responsibility in the 21st century. Can platforms that profit from attention and engagement meaningfully protect users from harm? What obligation do platforms have to prevent foreseeable criminal activity when prevention reduces revenue? And how can democratic societies protect vulnerable populations—especially children—while preserving the benefits of digital communication and commerce?
These questions will continue to shape technology policy debates long after the Virgin Islands case reaches resolution.
Related Reading from the CISO Marketplace Network
ScamWatchHQ Coverage:
- Meta's China Ad Fraud: When Platform Economics Trump User Safety - Deep dive into Reuters investigation revealing Meta's calculated tolerance for $3 billion in fraudulent Chinese advertising
- The 2025 Global Scam Landscape: A Year of AI-Powered Deception, Record Losses, and Human Trafficking - Comprehensive analysis showing Meta platforms drove 38% of reported crypto scam leads
- Social Media Scams: Protecting Yourself in the Digital Age - Practical protection strategies for users navigating fraudulent social media content
ComplianceHub.wiki Analysis:
- Meta's China Ad Fraud: The Compliance Nightmare Every CISO and GRC Professional Needs to Understand - Risk management implications for organizations with Meta advertising spend
- Europe Flexes Its Regulatory Muscle: Meta and TikTok Face Historic DSA Enforcement Action - EU enforcement actions with potential fines up to $9.87 billion
- A Deep Dive into Meta's World: Privacy, Power, and the Fight for Control - Comprehensive overview of Meta's data practices and regulatory challenges
This article is part of ongoing coverage of platform accountability and cybersecurity issues. As the Virgin Islands litigation and related cases develop, additional details about Meta's internal decision-making and the broader implications for platform governance will likely emerge through the discovery process.
For the latest cybersecurity news, regulatory developments, and expert analysis, visit ScamWatchHQ, ComplianceHub.wiki, Breached, and MyPrivacy.blog.