New York Governor Signs Sweeping AI Legislation While Vetoing Health Privacy Bill
Analysis: Empire State positions itself as second major AI regulatory hub, but health data privacy advocates face setback
December 23, 2025 – New York has become the nation's second state to comprehensively regulate artificial intelligence frontier models, following California's lead, even as its governor rejected a controversial health data privacy framework that industry groups warned would be unworkable.
The Legislative Package: Four AI Bills Signed
Governor Kathy Hochul's signing of four AI-related bills represents a significant expansion of New York's technology regulatory framework, building on three earlier AI laws enacted in 2024 covering algorithmic pricing disclosure, AI chatbots, and rental pricing algorithms.
RAISE Act: Frontier Model Regulation
The centerpiece legislation, the Responsible AI Safety and Education (RAISE) Act (S6953B), underwent substantial chapter amendments negotiated between the governor's office and the legislature before signing. While deliberately aligned with California's Transparency in Frontier AI Act (SB 53), New York's framework reportedly extends beyond California's requirements in key areas.
The timing carries particular significance. Governor Hochul signed the RAISE Act just days after President Trump issued an executive order attempting to preempt state AI regulations and establish federal primacy in the space. Her public statement directly challenged federal inaction: "This law builds on California's recently adopted framework, creating a unified benchmark among the country's leading tech states as the federal government lags behind."
What this means for organizations: Companies operating AI frontier models must now navigate dual compliance requirements in the nation's two largest technology markets. The intentional alignment between New York and California creates a de facto national standard, even absent federal legislation.
Synthetic Performer Disclosure Requirements
The Disclosure of Synthetic Performers bill (S8420) addresses the growing challenge of AI-generated digital personas in advertising. The law requires conspicuous disclosure when advertisements use "synthetic performers" – digitally created assets generated through AI or algorithms that create the impression of human performance without being recognizably based on any actual person.
Multiple exemptions apply, though the specific carve-outs weren't detailed in the governor's announcement. This legislation responds to rising concerns about deepfakes and synthetic media in commercial contexts, particularly as generative AI capabilities become increasingly sophisticated.
Compliance consideration: Marketing teams and advertising agencies must implement new disclosure protocols for AI-generated content, with particular attention to what constitutes "conspicuous" notification under the statute.
Digital Replica Protections
The Use of Digital Replicas legislation (S8391) expands New York's existing right of publicity laws to specifically address AI-generated reproductions of deceased individuals. The law prohibits using digital replicas of a deceased person's voice or likeness in audiovisual works without prior consent from the estate or authorized representatives.
This provision sits at a complex intersection of intellectual property, privacy, and emerging AI capabilities. As generative AI makes it increasingly trivial to recreate deceased performers' voices and appearances, New York joins the growing list of jurisdictions establishing legal frameworks for posthumous digital rights.
LOADinG Act Expansion
The amendment to the Legislative Oversight of Automated Decision-making in Government (LOADinG) Act (S7599C) strengthens transparency requirements for government use of AI systems. Originally passed in 2024 with significant narrowing amendments, this year's expansion mandates that government agencies both disclose automated decision-making tools and conduct impact assessments on those systems.
Notably, this bill appears to have been signed without additional chapter amendments, suggesting broader legislative consensus on government AI transparency than existed during the original LOADinG Act negotiations.
Public sector implications: State and local agencies must prepare for enhanced disclosure obligations and impact assessment protocols, potentially affecting procurement decisions for AI-powered systems.
The Notable Veto: Health Information Privacy Act
Governor Hochul's veto of the New York Health Information Privacy Act (S929) represents a significant victory for technology companies and healthcare organizations that lobbied intensely against the legislation.
The bill sought to establish comprehensive controls over consumer health information processing but drew widespread criticism from industry groups, which characterized it as "unworkable." While specific technical objections weren't detailed in available reporting, similar health privacy bills in other states have faced pushback over:
- Overly broad definitions of "health data" capturing non-sensitive information
- Compliance timelines considered unrealistic for complex healthcare systems
- Conflicts with existing HIPAA frameworks creating legal uncertainty
- Implementation costs potentially limiting healthcare technology innovation
The broader context: This veto contrasts sharply with growing momentum for health data privacy legislation nationally. Several states have enacted or are considering similar frameworks, creating a patchwork of requirements that organizations collecting health information must navigate.
The veto suggests New York may revisit health privacy legislation with more industry input on technical feasibility, rather than abandoning the policy goal entirely.
Federal Preemption Battle Looming
The collision between New York's new AI laws and President Trump's recent executive order sets up a potential constitutional confrontation over federal versus state regulatory authority in emerging technology.
The executive order seeks to:
- Preempt existing state AI regulations
- Block new state AI legislation
- Establish a federal AI regulatory framework
However, Governor Hochul's defiant response signals that major technology states have no intention of ceding regulatory authority. With both California and New York now having established AI frontier model frameworks, companies face the reality of state-level compliance regardless of federal preemption attempts.
Legal uncertainty ahead: The enforceability of the executive order's preemption provisions remains untested. Organizations should prepare for potential litigation that could take years to resolve while maintaining compliance with existing state requirements.
What Organizations Should Do Now
For companies developing or deploying frontier AI models:
- Conduct dual compliance assessment – Map requirements under both California and New York frameworks to identify overlapping and unique obligations
- Establish disclosure protocols – Implement systems for tracking and documenting synthetic performer usage in advertising materials
- Review digital rights policies – Ensure contracts and licensing agreements address digital replica rights, particularly for deceased individuals
- Monitor federal developments – Track preemption litigation and potential federal AI legislation that could supersede state requirements
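As a purely illustrative aid for the dual compliance assessment above, the mapping exercise can be sketched as a simple data structure. Everything here is hypothetical: the obligation names, the `Obligation` class, and the bucket labels are not drawn from the RAISE Act or SB 53 text.

```python
# Hypothetical sketch of a dual-compliance obligation map.
# Obligation names are illustrative only, not taken from either statute.

from dataclasses import dataclass


@dataclass
class Obligation:
    name: str
    applies_in: set[str]  # jurisdictions, e.g. {"CA", "NY"}


def classify(obligations: list[Obligation]) -> dict[str, list[str]]:
    """Split obligations into overlapping vs. jurisdiction-unique buckets."""
    buckets: dict[str, list[str]] = {"overlapping": [], "CA-only": [], "NY-only": []}
    for ob in obligations:
        if ob.applies_in >= {"CA", "NY"}:
            buckets["overlapping"].append(ob.name)
        elif ob.applies_in == {"CA"}:
            buckets["CA-only"].append(ob.name)
        elif ob.applies_in == {"NY"}:
            buckets["NY-only"].append(ob.name)
    return buckets


# Example usage with made-up obligations:
catalog = [
    Obligation("safety-protocol disclosure", {"CA", "NY"}),
    Obligation("incident reporting", {"CA", "NY"}),
    Obligation("hypothetical NY-specific audit", {"NY"}),
]
print(classify(catalog))
```

The point of the exercise is the three-way split: overlapping obligations can share one compliance workstream, while jurisdiction-unique items need dedicated owners.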
For government agencies and contractors:
- Inventory automated decision systems – Catalog all AI and algorithmic tools used in government functions
- Develop impact assessment frameworks – Establish protocols for evaluating automated decision-making systems before deployment
- Prepare disclosure materials – Create public-facing documentation of AI system usage
For healthcare and wellness organizations:
- Continue privacy program development – Don't interpret the health privacy veto as ending state-level health data regulation
- Engage in legislative discussions – Participate in stakeholder processes as New York likely revisits health privacy legislation
- Monitor other state requirements – Maintain compliance with existing health privacy laws in other jurisdictions
The Bigger Picture
New York's legislative package represents a pragmatic approach to AI governance: aggressive regulation of frontier models and synthetic media while pulling back from potentially unworkable health privacy requirements after industry feedback.
The California-New York alignment creates de facto national standards for AI frontier models, effectively sidestepping federal inaction through coordinated state policy. For organizations in the AI ecosystem, this means preparing for a compliance landscape where the two largest tech markets set the baseline, regardless of what happens at the federal level.
The health privacy veto, meanwhile, illustrates the importance of early industry engagement in legislative processes. Organizations that provided specific, technical feedback on implementation challenges influenced the outcome, while those relying solely on broad opposition saw less impact.
As AI capabilities continue advancing rapidly, expect both New York and California to iterate on these frameworks, potentially creating competitive dynamics as each state positions itself as the preferred location for AI innovation under sensible regulation.
Related Reading
Federal AI Policy:
- Trump's AI Executive Order: A Federal Power Play Against State Regulations
- Navigating the AI Regulatory Maze: A Compliance Blueprint for Trustworthy AI
State AI Legislation:
- US State AI Laws 2025: Colorado, Texas & California Comparison
- California's 2025 Privacy and AI Legislative Landscape
- U.S. State Privacy and AI Laws: Critical Compliance Deadlines
Health Data Privacy:
- Navigating the Patchwork: State-Specific Healthcare Data Protection Laws
- HIPAA and HITECH: A Deep Dive into Protecting Health Information