Kentucky Becomes First State to Prosecute AI Chatbot Under New Data Privacy Law
Eight days after landmark privacy legislation took effect, Kentucky AG targets Character.AI for child safety violations
Executive Summary
On January 8, 2026, Kentucky Attorney General Russell Coleman filed the nation's first enforcement action combining consumer protection claims with violations of a comprehensive state data privacy law against an AI chatbot company. The lawsuit against Character Technologies, Inc. and its founders represents a watershed moment in AI regulation, demonstrating how existing privacy frameworks can be weaponized against emerging technologies—just days after the Kentucky Consumer Data Protection Act (KCDPA) went into effect on January 1.
The Defendant: Character.AI's Scale and Reach
Character Technologies operates Character.AI, a platform that has achieved massive adoption with more than 20 million monthly active users and 180 million monthly website visitors. The service allows users to create, customize, and converse with millions of AI chatbots representing real or fictional characters, including well-known children's fictional characters. Users can engage with these AI personas through text chats or audio calls.
According to the complaint, tens of thousands of Kentuckians actively use the platform, including thousands under the age of 18—though the actual number could be significantly higher given what the AG describes as a "total lack of age verification."
Legal Framework: Multiple Violations Alleged
Kentucky Consumer Data Protection Act (KCDPA) Violations
The most significant aspect of this lawsuit is its invocation of the KCDPA, which took effect just eight days before the complaint was filed. Under the KCDPA, "sensitive data" includes personal data collected from a known child (defined as an individual under age 13).
The law requires businesses to:
- Provide notice of personal data collection, use, and disclosure practices to the child's parent
- Obtain the parent's authorization for any collection, use, and/or disclosure of the child's personal data
- Comply with the Children's Online Privacy Protection Act (COPPA) standards for verifiable parental consent
The AG alleges Character.AI failed on all counts, processing children's data without obtaining proper parental consent or providing adequate notice.
Kentucky Consumer Protection Act Claims
The bulk of the complaint focuses on alleged violations of Kentucky's general consumer protection statute, including:
- Unfair, false, misleading, or deceptive acts and practices
- Unfair collection and exploitation of children's data
- Failure to implement effective, verifiable age-gating mechanisms
- Failure to implement parental consent processes
- Failure to implement identity-verification mechanisms to prevent children under 13 from accessing the platform
Additional Legal Theories
The complaint also alleges violations of:
- Kentucky's statutory and constitutional privacy protections
- Data breach notification laws
- Unjust enrichment claims
The Safety Allegations: A Grim Picture
The AG's complaint provides extensive documentation of the platform's alleged dangers:
Ineffective Content Filters
Despite claiming to be an "interactive entertainment" platform, the complaint alleges the chatbots routinely:
- Engage in sexually explicit conversations with minors
- Promote suicidal ideation and self-harm
- Encourage drug, substance, and alcohol use
- Pose as mental health professionals without qualifications
- Provide psychological advice to minors without professional oversight
Design Deficiencies
The platform allegedly:
- Makes account creation too easy for minors
- Lacks effective age verification
- Fails to implement parental controls
- Does not disclose risks adequately
- Lacks guardrails to prevent harmful content, especially in voice mode
Manipulation Through Design
According to the complaint, the chatbots:
- Are designed to emulate humans convincingly
- Prey upon children's inability to distinguish between real and artificial "friends"
- Induce users into divulging their most private thoughts and emotions
- Create confusion about reality by claiming to be genuine despite disclaimers
The Human Cost: Linked Deaths
The lawsuit comes amid mounting evidence of harm. Character.AI has been connected to at least two teenage deaths:
- Sewell Setzer III (Florida, February 2024): The 14-year-old died by suicide after developing a deep relationship with a chatbot modeled on the "Game of Thrones" character Daenerys Targaryen. In his final conversation, after expressing he wanted to "come home" to the chatbot, it responded "Please do, my sweet king." Minutes later, he took his own life.
- Juliana Peralta (Colorado, November 2023): The 13-year-old died by suicide after extensive interactions with a chatbot called "Hero." Her family's lawsuit claims she expressed suicidal thoughts to the chatbot but received no intervention or escalation—instead being drawn deeper into isolating conversations.
Both families filed wrongful death lawsuits, which Character.AI and Google recently agreed to settle (terms undisclosed). The settlements came just one day before Kentucky filed its enforcement action.
Timing and Strategic Significance
The 30-Day Cure Period Question
The KCDPA requires the AG to provide 30 days' written notice before initiating enforcement and allow businesses to cure alleged violations. The complaint does not specifically address whether this cure period was provided, raising questions about whether Character.AI was given advance notice or if the AG leveraged other legal theories to bypass this requirement.
Relief Sought
Notably, the AG seeks only injunctive relief for the KCDPA violations—not monetary damages. However, the complaint requests $2,000 per violation under the Kentucky Consumer Protection Act, potentially exposing Character.AI to significant financial liability.
The AG requests the court to:
- Bar the company from "future false, misleading, deceptive, and/or unfair acts or practices"
- Force the platform to change its dangerous practices
- Award monetary damages under consumer protection statutes
Broader Regulatory Context
State AI Companion Laws
Kentucky's action arrives as states race to regulate AI chatbots:
New York (effective November 5, 2025):
- First state to regulate "AI companions"
- Requires disclosure that users are interacting with AI
- Mandates protocols to detect suicidal ideation
- Requires recurring notifications every 3 hours
- Attorney General enforcement only
- Penalties up to $15,000 per day
California SB 243 (effective January 1, 2026):
- Similar transparency and safety requirements
- Additional protections for minors
- Requires measures to prevent sexually explicit content for minor users
- Annual reporting to state regulators
- First AI chatbot law with private right of action
- Creates potential for significant damage claims
Other States: Maine, Texas, and Utah have enacted various AI disclosure and transparency laws, while multiple states are considering comprehensive AI companion regulations.
Federal Activity
- FTC Section 6(b) Study: In September 2025, the FTC launched an investigation into seven AI companion chatbot companies regarding potential mental health impacts on children
- Congressional Testimony: Parents of deceased children testified before the Senate in September 2025, calling for federal regulation
- Proposed Federal Legislation: The bipartisan GUARD Act, introduced October 28, 2025, would ban minors from accessing AI companions entirely
- Executive Order: Recent federal executive order seeks to preempt "onerous" state AI laws, though the Kentucky lawsuit demonstrates the difficulty in defining what constitutes a state AI law when enforcement relies on existing consumer protection and privacy statutes
Technical and Security Implications
From a cybersecurity perspective, this lawsuit highlights several critical issues:
Age Verification Failures
The lack of robust age verification represents a fundamental security control failure. Character.AI's reliance on self-reported ages—which users can easily falsify—demonstrates inadequate identity verification mechanisms.
Data Processing Without Authorization
The collection and processing of children's data without verifiable parental consent represents unauthorized data processing—a violation that any CISO should recognize as a critical compliance gap.
Inadequate Content Filtering
The alleged failure of chat filters to prevent harmful content suggests deficient content moderation systems, inadequate training data curation, and insufficient real-time monitoring capabilities.
Crisis Detection Failures
The platform's alleged inability to detect and respond to expressions of suicidal ideation represents a catastrophic failure of safety systems—particularly concerning given that New York and California laws now mandate such detection protocols.
Business Implications
For AI Companies
- Immediate compliance review required for operations in Kentucky, New York, and California
- Age verification systems must be robust, not self-reported
- Parental consent mechanisms required for data from known children under 13
- Crisis detection protocols needed for AI companions
- Regular disclosure requirements that users are interacting with AI
- Content filtering systems must be effective, especially for minors
- Documentation requirements for data protection impact assessments
For Businesses Using AI Chatbots
Even companies that don't operate AI companion platforms should note:
- State AGs are willing to use existing consumer protection and privacy laws against AI systems
- New privacy laws can be enforced within days of taking effect
- Multiple legal theories can be stacked in a single enforcement action
- Both the technology and its marketing claims will be scrutinized
Compliance Costs
The permanent 30-day cure period in Kentucky's law offers ongoing opportunities to remediate violations, but only if companies maintain robust monitoring systems to detect compliance issues before regulators do.
What Character.AI Has Changed (Too Late?)
In response to lawsuits and regulatory pressure, Character.AI implemented changes in late 2025:
- Blocked open-ended chats for users under 18 in the U.S.
- Replaced with "Stories" activities with more structured content
- Enhanced safety features for under-18 experience
- Added form disclaimers that chatbots are not real (though the complaint alleges these can be contradicted by the chatbots themselves)
Critics, including the Kentucky AG, dismissed these measures as "comical" for how easily children could bypass them.
In a statement following Kentucky's lawsuit, a Character.AI spokesperson said: "We have invested significantly in developing robust safety features for our under-18 experience, including going much further than the law requires to proactively remove the ability for users under 18 in the U.S. to engage in open-ended chats with AI on our platform."
The Broader Trend: Existing Laws vs. AI
Kentucky's lawsuit is significant because it demonstrates how AI companies can be held accountable under existing legal frameworks rather than waiting for AI-specific legislation. The complaint notes that while it's brought under laws "not specifically written to cover AI," the conduct clearly falls within their scope.
This approach sidesteps debates about whether new AI-specific regulations are needed and shows that state AGs have powerful tools already at their disposal:
- Consumer protection acts
- Data privacy laws
- Data breach notification statutes
- Constitutional privacy protections
What's Next
For Kentucky's Case
- Character.AI will likely challenge whether proper 30-day notice was provided
- Discovery will reveal internal documents about platform design decisions
- The case could set precedent for how data privacy laws apply to AI systems
- Other states may file parallel actions
For the Industry
- Expect more state AG enforcement actions leveraging new privacy laws
- Companies should anticipate that any state with a comprehensive privacy law will scrutinize AI systems' handling of children's data
- The private right of action in California's law creates significant litigation risk beyond regulatory enforcement
- Federal preemption debates will intensify as state actions multiply
Regulatory Developments to Monitor
- Additional states considering AI companion laws
- Federal legislation (GUARD Act or similar)
- FTC investigation outcomes
- Character.AI settlement terms (when disclosed)
- Judicial decisions on the scope of state consumer protection and privacy laws as applied to AI
Conclusions and Recommendations
Key Takeaways
- State privacy laws are now AI enforcement tools: New comprehensive privacy laws like Kentucky's KCDPA will be used to prosecute AI companies from day one
- Children's data is a red line: Failure to obtain proper parental consent for data from known children will trigger aggressive enforcement
- Age verification matters: Self-reported ages are inadequate; robust verification mechanisms are required
- Crisis detection is mandatory: In states with companion AI laws, detecting and responding to suicidal ideation is now a legal requirement
- Multiple violations multiply exposure: Character.AI faces claims under at least four different legal theories, demonstrating how compliance failures in one area cascade into others
- Marketing claims create liability: If you market AI as "harmless entertainment," but it causes actual harm, consumer protection laws apply
Compliance Checklist for AI Companies
- [ ] Conduct comprehensive review of age verification systems
- [ ] Implement robust parental consent mechanisms for users under 13
- [ ] Deploy effective content filtering, especially for minors
- [ ] Establish crisis detection and response protocols
- [ ] Ensure regular AI disclosure notifications
- [ ] Document all safety measures and testing
- [ ] Review marketing materials for unsupportable claims
- [ ] Conduct data protection impact assessments
- [ ] Monitor state legislative developments
- [ ] Prepare for multi-state compliance requirements
Final Thoughts
Attorney General Coleman's statement captures the tension ahead: "The United States must be a leader in the development of AI, but it can't come at the expense of our kids' lives."
This lawsuit represents the opening salvo in what will likely be sustained regulatory scrutiny of AI systems that interact with children. The combination of tragic deaths, parent advocacy, legislative action, and aggressive enforcement creates a perfect storm for the AI industry.
For cybersecurity professionals and CISOs, the message is clear: AI governance is no longer optional, and children's safety is the immediate priority. Companies that fail to implement robust age verification, content filtering, crisis detection, and parental consent mechanisms will face legal consequences under laws that already exist—they won't get the luxury of waiting for new AI-specific regulations.
The question isn't whether AI will be regulated, but whether companies will be proactive in implementing protections before regulators come knocking. In Kentucky's case, that knock came just eight days after a new law took effect.
Resources
- Kentucky AG Press Release
- Full Complaint (PDF)
- Kentucky Consumer Data Protection Act Text
- New York AI Companion Models Law
- California SB 243 Text
- FTC AI Chatbot Study Announcement
If you or someone you know is struggling with thoughts of suicide, please reach out:
- National Suicide Prevention Lifeline: 988 (call or text)
- Crisis Text Line: Text HOME to 741741

