Briefing on the 2025 Global Digital Privacy, AI, and Human Rights Landscape


Executive Summary

The digital landscape in 2025 is defined by a complex interplay of fragmented regulation, aggressive enforcement, and new technological threats to privacy and human rights. In the absence of a unifying federal framework, the United States is characterized by an expanding and increasingly complex patchwork of state privacy laws, each with varying thresholds, consumer rights, and enforcement mechanisms. Concurrently, the global approach to regulating artificial intelligence is diverging, pitting the European Union's comprehensive, binding AI Act against the U.S.'s state-led, sector-specific model, creating significant geopolitical and compliance friction for multinational corporations.

California remains a key driver of U.S. regulation, with its privacy agency and Attorney General pursuing aggressive enforcement actions that penalize not just policy failures but functional gaps in user experience and technical implementation. The state has also finalized first-in-the-nation rules for automated decision-making, risk assessments, and cybersecurity audits. This regulatory momentum is matched by a surge in digital privacy litigation, where courts are struggling to apply decades-old statutes like the California Invasion of Privacy Act (CIPA) and the federal Video Privacy Protection Act (VPPA) to modern online tracking technologies, resulting in contradictory rulings and deepening circuit splits.

Globally, the push for digital control is intensifying. The rise of national digital identity programs, data localization mandates, and the EU's Digital Services Act (DSA) poses significant risks to human rights, including exclusion from essential services, state-sponsored surveillance, and global censorship. A U.S. House Judiciary Committee investigation reveals the DSA is being used to compel platforms to censor constitutionally protected political speech, humor, and satire worldwide. This is underpinned by the pervasive business model of "surveillance capitalism," where the extraction of personal data for behavioral prediction is now being amplified by AI-driven "Answer Engines," threatening to replace open inquiry with a single, optimized "truth" and creating systemic risks of manipulation and what scholars term "epistemic chaos."

I. The Expanding U.S. Data Privacy Framework

In 2025, the U.S. continues to operate without a federal privacy law, resulting in a fragmented and complex regulatory landscape dominated by state-level legislation. By the end of the year, twenty states are expected to have comprehensive privacy laws in effect, with over a dozen more actively considering similar bills. While these laws share core principles like transparency and data minimization, they vary significantly in applicability, consumer rights, and enforcement.

A. The Patchwork of State Laws

Businesses must navigate a growing number of distinct state privacy regimes. By the end of 2025, comprehensive privacy laws will have taken effect in eight additional states: Delaware, Iowa, Maryland, Minnesota, Nebraska, New Hampshire, New Jersey, and Tennessee. This adds to the existing mosaic, forcing companies to develop scalable, principle-based privacy programs that can adapt to a shifting set of obligations. A unique provision in Tennessee's law offers an affirmative defense to enforcement actions for businesses that maintain a privacy program conforming to the NIST Privacy Framework, incentivizing proactive compliance.

B. Diverging Consumer Rights and Definitions of Sensitive Data

Most state laws grant a core set of consumer rights, including the ability to access, delete, correct, and obtain copies of personal data, as well as opt out of targeted advertising, data sales, and certain profiling. However, variations exist:

  • Iowa's law is more limited, lacking the right to correct inaccurate data or opt out of targeted advertising and profiling.
  • Minnesota's law is more expansive, allowing individuals to understand the basis of profiling decisions and request lists of third parties that have received their data.

All state laws impose heightened restrictions on "sensitive information," and several new 2025 laws expand this category to include:

  • National origin (Delaware, Maryland, New Jersey)
  • Transgender or non-binary status (Delaware, Maryland, New Jersey)
  • Biometric data (Maryland, Tennessee)
  • Certain financial account information (New Jersey)

Maryland’s law is particularly stringent, broadly defining "consumer health data" and prohibiting its processing unless strictly necessary for a consumer-requested service, even with consent. Many new laws also mandate Data Protection Impact Assessments (DPIAs) for processing sensitive data or other "high-risk" activities.
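
To see how these divergences play out operationally, consider the minimal sketch below, which flags likely DPIA triggers for a given processing activity. The category lists are abbreviated from the statutes discussed above, and the field and function names are our own illustrative assumptions, not legal logic.

```python
from dataclasses import dataclass

# Illustrative only: abbreviated expanded sensitive-data categories drawn from
# the 2025 state laws discussed above. Real classification needs counsel review.
EXPANDED_SENSITIVE = {
    "DE": {"national_origin", "transgender_nonbinary_status"},
    "MD": {"national_origin", "transgender_nonbinary_status", "biometric"},
    "NJ": {"national_origin", "transgender_nonbinary_status", "financial_account"},
    "TN": {"biometric"},
}

@dataclass
class ProcessingActivity:
    state: str                 # two-letter state code
    data_categories: set[str]  # categories of personal data processed
    high_risk: bool = False    # e.g., targeted advertising or profiling

def requires_dpia(activity: ProcessingActivity) -> bool:
    """Flag activities that likely trigger a Data Protection Impact Assessment."""
    expanded = EXPANDED_SENSITIVE.get(activity.state, set())
    touches_sensitive = bool(activity.data_categories & expanded)
    return activity.high_risk or touches_sensitive
```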

C. Varying Applicability Thresholds

Determining which laws apply to a business requires careful analysis of thresholds that differ by state. Most states use volume-based criteria, while others use revenue or a combination.

  • California: Gross annual revenue > $25M; OR processes data of 100,000+ residents; OR derives 50%+ of revenue from selling data.
  • Texas & Nebraska: Broadly apply to any business that is not a "small business" under SBA definitions, with no numerical data thresholds.
  • Montana: Processes data of 50,000+ residents; OR 25,000+ residents if >25% of revenue comes from data sales.
  • Maryland, New Hampshire, Delaware & Rhode Island: Thresholds begin at 35,000 residents, with revenue qualifiers in Delaware and Rhode Island.
  • Florida: Applies only to for-profit companies with >$1B global revenue and specific tech-related operations (e.g., smart speaker services, large app stores).
  • Other states (CO, CT, IN, IA, KY, MN, NJ, OR, UT, VA): Typically apply to businesses processing data of 100,000+ residents, OR deriving a percentage of revenue from selling the data of 25,000+ residents.
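
Because most of these tests are mechanical, a compliance team can encode them as a first-pass screen. The sketch below implements two of the thresholds above under simplified assumptions; the names are illustrative, and a real analysis must also account for exemptions (and, for Texas and Nebraska, the non-numeric SBA "small business" test).

```python
from dataclasses import dataclass

@dataclass
class BusinessProfile:
    gross_revenue_usd: float
    residents_processed: int       # state residents whose data is processed
    pct_revenue_from_sales: float  # share of revenue from selling personal data

def ccpa_applies(b: BusinessProfile) -> bool:
    """California: any one of three alternative thresholds (simplified)."""
    return (
        b.gross_revenue_usd > 25_000_000
        or b.residents_processed >= 100_000
        or b.pct_revenue_from_sales >= 0.5
    )

def montana_applies(b: BusinessProfile) -> bool:
    """Montana: volume test, lowered when data sales dominate revenue."""
    if b.residents_processed >= 50_000:
        return True
    return b.residents_processed >= 25_000 and b.pct_revenue_from_sales > 0.25
```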

II. The Global Race to Regulate Artificial Intelligence

As AI moves to the center of business strategy, a global regulatory divergence has emerged, pitting the EU's comprehensive legal framework against the U.S.'s more fragmented, state-led approach.

A. The European Union's AI Act: A Comprehensive, Risk-Based Approach

The EU AI Act, which entered into force in August 2024, is the world's first comprehensive, binding legal framework for AI. It classifies AI systems by risk level—unacceptable, high, limited, and minimal—and imposes extensive obligations accordingly.

  • High-Risk AI Systems: Must undergo pre-market conformity assessments, maintain detailed technical documentation, and register in a public EU database.
  • General-Purpose AI (GPAI) Models: Face transparency, copyright, and cybersecurity obligations. Models exceeding certain scale thresholds (e.g., >10,000 EU business users) face stricter requirements.

B. The United States' State-Led Regulatory Model

In the absence of a federal AI law, U.S. states are creating a patchwork of regulations.

  • Colorado: The Colorado AI Act (SB 24-205), effective in 2026, is the first comprehensive U.S. framework for "high-risk" AI, imposing duties of reasonable care, impact assessments, and notice on developers and deployers.
  • California: Has enacted multiple laws, including AB 2885 to create a uniform definition of AI, AB 2013 requiring disclosure of training data, and AB 3030 mandating disclaimers for generative AI in healthcare communications.
  • Utah: The AI Policy Act (SB 149) requires disclosure when consumers interact with generative AI and sets guardrails for professional licensing.

C. Global AI Governance and Geopolitical Tensions

Other nations are advancing AI governance through a mix of regulation and standards, including China (mandating registration and labeling), Brazil (poised to pass a GDPR-style law), and the U.K. (favoring a principles-based approach). This global divergence, particularly between the U.S. and EU, has created geopolitical friction. U.S. officials have warned that the EU's upcoming GPAI Code of Practice could disproportionately burden American firms.

D. A Compliance Playbook for Businesses

To mitigate risk in this complex environment, companies should:

  1. Inventory AI Systems: Identify all AI tools in use, especially in high-risk sectors like HR, healthcare, and finance (a minimal inventory record is sketched after this list).
  2. Conduct Risk Assessments: Use frameworks like NIST’s AI RMF or the EU’s conformity checklist to assess training data, bias, and explainability.
  3. Build Cross-Functional Governance: Coordinate legal, compliance, technical, and product teams, and assign clear ownership for AI risk.
  4. Plan for EU Market Entry: Determine if EU-facing AI systems require local representation, registration, or conformity assessments.
  5. Audit Communications: Ensure public statements about AI capabilities and safety align with internal documentation to avoid "AI-washing."
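
Step 1 is often the hardest to operationalize at scale. A minimal sketch of an inventory record and a conformity-review screen follows; the schema, risk tiers, and helper function are illustrative assumptions loosely modeled on the EU AI Act's risk ladder, not a prescribed format.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    # Loosely mirrors the EU AI Act's risk ladder; the mapping is our assumption.
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    business_unit: str                 # e.g., "HR", "healthcare", "finance"
    vendor: str | None                 # None for internally built systems
    makes_significant_decisions: bool  # hiring, lending, housing, etc.
    eu_facing: bool                    # triggers EU AI Act analysis if True
    risk_tier: RiskTier = RiskTier.MINIMAL
    assessments: list[str] = field(default_factory=list)  # completed reviews

def needs_conformity_review(record: AISystemRecord) -> bool:
    """High-risk, EU-facing systems warrant a pre-market conformity check."""
    return record.eu_facing and record.risk_tier is RiskTier.HIGH
```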

III. California: A Regulatory Bellwether

California continues to lead U.S. privacy regulation through aggressive enforcement of the California Consumer Privacy Act (CCPA) and the adoption of new, detailed rules governing high-risk data processing activities.

A. Aggressive CCPA Enforcement: Key Lessons from Recent Actions

Enforcement actions in 2025 by the California Privacy Protection Agency (CPPA) and the Attorney General (AG) against American Honda, Todd Snyder, and Healthline Media reveal key regulatory priorities. Common violations include:

  • Oververification: Unlawfully requiring identity verification for opt-out and sensitive personal information limitation requests.
  • Poor UX and Dark Patterns: Designing cookie consent banners that make it harder to opt out than to opt in.
  • Technical Failures: Using broken or non-functional cookie banners and opt-out tools.
  • Ignoring GPC Signals: Failing to properly process Global Privacy Control (GPC) signals, including across known user profiles (a minimal server-side check is sketched after this list).
  • Missing Vendor Contracts: Disclosing personal data to ad-tech partners without CCPA-mandated contractual provisions.
  • Purpose Limitation Violations: The AG's $1.55 million settlement with Healthline, the largest to date, was the first major enforcement of the purpose limitation rule. The AG found that sharing article titles suggesting medical conditions with third parties for advertising went beyond what a reasonable consumer would expect, even if disclosed in a privacy policy.
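
Of these, GPC has the most precise technical surface: participating browsers send a `Sec-GPC: 1` request header and expose a `navigator.globalPrivacyControl` property to scripts. A minimal server-side sketch follows; the handler and the `record_opt_out` helper are hypothetical placeholders, and a production system would need to map the signal to known user profiles, as the enforcement actions above require.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def record_opt_out(user_id: str) -> None:
    """Hypothetical placeholder: persist the opt-out across known profiles."""
    print(f"opt-out recorded for {user_id}")

class GPCAwareHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The GPC spec defines the Sec-GPC request header; "1" signals opt-out.
        if self.headers.get("Sec-GPC") == "1":
            record_opt_out(user_id="anonymous")  # map to a real identity in practice
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), GPCAwareHandler).serve_forever()
```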

B. Finalized Regulations on ADMT, Risk Assessments, and Cybersecurity Audits

In July 2025, the CPPA adopted final regulations on Automated Decision-Making Technology (ADMT), risk assessments, and cybersecurity audits, which may become operative as early as December 2025.

  • ADMT Scope Narrowed: The rules now apply only when technology "replaces or substantially replaces human decision making" for "significant decisions" affecting finance, employment, housing, education, or healthcare. Opt-out rights are focused on these core use cases.
  • Cybersecurity Audits Scaled: Businesses processing large volumes of data must conduct annual audits starting between 2028 and 2030, depending on revenue. Auditors must be structurally independent, and a certification of completion must be submitted to the CPPA.
  • Risk Assessments Streamlined: Businesses are no longer required to submit full risk assessments to the CPPA. Instead, starting in 2028, they must retain the assessment and file only a certification and a brief summary.

IV. Digital Privacy Litigation: Applying Old Laws to New Tech

Courts and legislatures are grappling with how decades-old privacy statutes apply to modern digital tracking, leading to legal uncertainty and a push for legislative reform.

A. The California Invasion of Privacy Act (CIPA): A Judiciary Divided

Courts in California remain split on whether common website tracking technologies like pixels and session replay tools violate CIPA's wiretapping and pen register provisions.

  • Some courts have allowed claims to proceed, reasoning that IP tracking that relays location data or TikTok scripts that capture user data can function like a pen register or trap-and-trace device.
  • Other courts have rejected such claims, holding that the statute was intended for telephone surveillance and does not extend to routine internet communications.
  • The Ninth Circuit has issued conflicting signals, reversing a dismissal in Mikulsky v. Bloomingdale’s by finding session replay could capture the "contents" of communications, while affirming dismissals in other cases. A concurrence by Judge Bybee in Gutierrez v. Converse questioned whether CIPA was ever intended to cover internet communications at all.

B. The Video Privacy Protection Act (VPPA): A Deepening Circuit Split

Federal appellate courts have adopted divergent interpretations of the VPPA, creating a circuit split on who qualifies as a "consumer" and what constitutes "personally identifiable information" (PII).

  • Narrow Interpretation (2nd, 3rd, 6th, 9th Circuits): In Solomon v. Flipps Media, the Second Circuit ruled that sending a Facebook ID and a video title to Meta does not constitute PII under an "ordinary person" standard. The Sixth Circuit in Salazar v. Paramount Global narrowed the definition of "consumer," holding a newsletter subscription was insufficient.
  • Broad Interpretation (7th Circuit): In Gardner v. Me-TV, the Seventh Circuit expanded VPPA liability, holding that users who created free accounts in exchange for personal data (email, zip code) qualified as "subscribers." The court noted that in the digital economy, "data can be worth more than money."

C. Legislative Response: California's SB 690

In response to the surge in CIPA litigation, the California legislature introduced SB 690, a bill to exempt the use of tracking technologies that serve a "commercial business purpose" from liability. The bill passed the Senate unanimously but has been designated a two-year bill, delaying potential enactment until 2026 and likely prompting a new wave of lawsuits before any new limitations take effect.

V. Regulation of Sensitive and Specialized Data Categories

Regulators are imposing stricter rules on specific types of data, including children's information, health data, and cross-border transfers.

A. Children's Privacy: Federal COPPA Updates and State AADC Laws

  • COPPA Rule (Federal, under 13): The FTC has implemented significant updates, including requiring separate verifiable parental consent for disclosing children's data to third parties for advertising, imposing strict data retention limits, expanding the definition of "personal information" to include biometric identifiers, and mandating a written information security program.
  • Age Appropriate Design Code (AADC) Laws (State, under 18): States like Vermont and Nebraska have adopted AADC laws, which are more stringent than COPPA. They focus on platform design, requiring high privacy settings by default, data minimization, and prohibitions on dark patterns and harmful profiling (one way such defaults might be initialized is sketched after this list).
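
To make "high privacy settings by default" concrete, here is a minimal sketch of how a platform might initialize settings for users under 18. The field names, defaults, and retention period are illustrative assumptions, not statutory requirements.

```python
from dataclasses import dataclass

@dataclass
class ProfileSettings:
    # Field names and defaults are our illustrative assumptions.
    profile_public: bool
    precise_geolocation: bool
    personalized_ads: bool
    data_retention_days: int

def default_settings(age: int) -> ProfileSettings:
    """AADC-style defaults: the most protective options preselected for minors."""
    if age < 18:
        return ProfileSettings(
            profile_public=False,
            precise_geolocation=False,
            personalized_ads=False,
            data_retention_days=30,  # retain only as long as needed
        )
    return ProfileSettings(
        profile_public=True,
        precise_geolocation=False,
        personalized_ads=True,
        data_retention_days=365,
    )
```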

B. Health Data Privacy: HIPAA Developments and Regulatory Uncertainty

  • Reproductive Health Privacy Rule: A HIPAA Final Rule issued in April 2024 to strengthen privacy for reproductive health information was vacated by a federal court in Texas in July 2025. The rule is currently not enforceable nationwide, pending appeal.
  • HIPAA Security Rule NPRM: In December 2024, HHS proposed significant amendments to the Security Rule, including mandatory multi-factor authentication (MFA), encryption of ePHI, annual security evaluations, and a 24-hour breach notification requirement for business associates. A Final Rule has not yet been issued.

C. Cross-Border Data Transfers: Instability in the EU-U.S. Corridor

While the EU-U.S. Data Privacy Framework (DPF) remains in place, its long-term stability is in question. EU regulators have increased scrutiny, evidenced by a €290 million fine against Uber by the Dutch DPA for unlawful data transfers. Simultaneously, the U.S. is creating "reverse pressure" with the Department of Justice’s "Bulk Data Rule" (effective April 2025), which restricts onward transfers of sensitive personal data from the U.S. to "countries of concern" like China and Russia.
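
The Bulk Data Rule's core logic lends itself to a simple pre-transfer screen, sketched below for the two countries named above. The function and category names are illustrative assumptions; the actual rule turns on volume thresholds, covered data categories, and transaction types not modeled here.

```python
# Illustrative pre-transfer screen; not the DOJ rule's actual legal test.
COUNTRIES_OF_CONCERN = {"CN", "RU"}  # China and Russia, per the text above
SENSITIVE_CATEGORIES = {"biometric", "genomic", "geolocation", "health", "financial"}

def transfer_restricted(destination: str, data_category: str) -> bool:
    """Flag onward transfers of sensitive U.S. personal data to countries of concern."""
    return destination in COUNTRIES_OF_CONCERN and data_category in SENSITIVE_CATEGORIES
```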

VI. Human Rights in the Digital Age: Identity, Censorship, and Surveillance

The global expansion of digital systems for identity, content moderation, and data governance is creating profound challenges for fundamental human rights.

A. National Digital Identity Programs: Progress vs. Exclusion

While proponents advocate for digital ID to enhance access to services, critics and civil society organizations warn of significant human rights risks.

  • Risks of Exclusion: Mandatory digital ID systems have led to the denial of essential services. In Uganda, a pregnant woman was turned away from medical care for lacking the required card, while in India, an elderly man was denied food rations because a biometric reader failed.
  • Centralization and Surveillance: Centralized national ID programs, especially those linked to biometrics, create a "single point of failure" for cybersecurity and pose severe risks to privacy and freedom of expression by enabling tracking and control.
  • Case Studies:
    • India's Aadhaar: Has been plagued by implementation failures causing exclusion from welfare programs, significant data breaches, and concerns over pervasive surveillance through authentication logs.
    • Estonia: Its advanced, non-biometric ID system still suffered a major cryptographic flaw, demonstrating that even sophisticated systems carry large-scale risks.
    • Tunisia: Civil society successfully campaigned to amend a draft biometric ID law, forcing the government to abolish a planned centralized national database.

B. Data Localization: A Global Threat to Online Freedoms

Governments worldwide are enacting data localization laws, requiring user data to be stored within national borders. While justified as a means to protect privacy and security, Freedom House research shows these laws are contributing to a global decline in internet freedom.

  • Censorship: In Vietnam, the government uses its data localization law to compel social media platforms to remove "illegal" speech, including criticism of the government.
  • Security Risks: Forcing data to be stored in countries with weaker security infrastructure makes it more vulnerable to breaches and misuse.

C. The Foreign Censorship Threat: The EU's Digital Services Act (DSA)

An interim staff report from the U.S. House Judiciary Committee, released in July 2025, concludes that the EU's Digital Services Act functions as a global censorship regime that infringes on American free speech.

  • Global Reach: The DSA requires Very Large Online Platforms (VLOPs)—most of which are American—to mitigate "systemic risks" like "disinformation" and "hate speech." Because platforms typically use global terms of service, this forces them to apply EU censorship standards worldwide.
  • Broad Definitions of "Illegal" Speech: A confidential EU Commission workshop in May 2025 used a scenario that labeled the common political phrase "we need to take back our country" as "illegal hate speech." The workshop also targeted memes and satire for censorship.
  • Targeting of Conservative Speech: Takedown requests from EU member states show a pattern of targeting conservative viewpoints. Polish authorities flagged a TikTok post stating "electric cars are neither an ecological nor an economical solution," while French and German authorities targeted posts critical of immigration policies.

VII. The Economic Model of the Digital Age: Surveillance Capitalism

The dominant business model of the digital economy, termed "surveillance capitalism" by scholar Shoshana Zuboff, is built on the extraction and monetization of personal data, which poses fundamental threats to individual autonomy and democracy.

A. The Business of Behavioral Prediction

Surveillance capitalism claims human experience as a free source of raw material. Behavioral data, once considered "data exhaust," is captured, claimed as private property, and fabricated into "prediction products" that are sold in "human futures markets." This model has expanded beyond online search to encompass data from cars, homes, and even a child's experience in a Google Classroom. This practice challenges core data protection principles like data minimization and purpose limitation.

B. From Epistemic Inequality to Epistemic Chaos

This economic logic has created an "epistemic coup" — a takeover of knowledge and the power it confers.

  • Epistemic Inequality: A vast and growing abyss between what companies and governments can know about an individual and what that individual can know for themselves.
  • Instrumentarian Power: The use of this knowledge to tune, manipulate, and modify behavior at scale, often subliminally, to achieve commercial or political goals, as seen in the Cambridge Analytica scandal.
  • Epistemic Chaos: The inevitable result of algorithmic systems engineered to maximize engagement. Because inflammatory and corrupt information often generates the most engagement, it is amplified, splintering shared reality and weakening democratic institutions.

C. The Next Frontier: Answer Engine Optimization (AEO) and the Collapse of Reality

The shift from search engines that provide links to AI-driven "answer engines" that synthesize a single response creates a new, more dangerous paradigm.

  • AEO (Answer Engine Optimization): The practice of tailoring content to shape the single answer an AI generates. This moves the battlefield from ranking in a list of sources to defining the singular "truth."
  • Censorship by Design: As the inputs, weights, and reasoning of AI models are often proprietary and opaque, AEO allows truth to be "pre-filtered in darkness," making verification impossible.
  • The Liar's Dividend: The proliferation of synthetic media allows bad actors to dismiss authentic evidence as fake, evading accountability. When a verified video can be dismissed as "AI," as a recent political incident demonstrated, reality itself becomes contestable. This creates a world where truth is optional, and accountability collapses.
