According to TechCrunch, a coalition of dozens of state attorneys general, organized through the National Association of Attorneys General, sent a formal letter on December 9th to the CEOs of 14 major AI companies. The recipients include Microsoft, OpenAI, Google, Anthropic, Meta, Apple, and xAI, among others. The letter warns these firms that their AI products are producing “delusional” and “sycophantic” outputs linked to real-world harm, including suicides and murder. It demands they adopt new safeguards, such as transparent third-party audits and incident reporting procedures for psychologically harmful outputs, or risk violating state consumer protection laws. This state-level action comes as the Trump administration prepares an executive order to limit states’ ability to regulate AI, which the President said would stop AI from being “DESTROYED IN ITS INFANCY.”
State vs. Federal Clash
Here’s the thing: this isn’t just about chatbot safety. It’s a full-blown power struggle. On one side, you’ve got state AGs who are essentially treating harmful AI outputs like a public health crisis, demanding the kind of transparency you’d expect for a data breach. They want users notified if they were exposed to dangerous content. On the other side, the federal government is pushing hard in the opposite direction, trying to clear the regulatory runway for AI development. Trump’s executive order, planned for next week, is a direct counter-punch to this very letter. So we’re watching a classic American regulatory battle unfold, where the real policy might get decided in court, not Congress.
The New Safety Playbook
The AGs’ demands are pretty specific and borrow heavily from other tech sectors. They want “reasonable and appropriate safety tests” before public release, which sounds simple but is a minefield. Who defines “reasonable”? More interesting is the push for independent, third-party audits where the auditors can publish findings without company approval. That’s a huge ask. It would turn AI model evaluation into something closer to cybersecurity penetration testing, creating a whole new sub-industry of watchdog groups. And treating mental health incidents like cybersecurity incidents? That’s a radical shift in responsibility. Basically, they’re telling companies to build a crisis response team for psychological harm, not just data leaks.
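To make that cybersecurity analogy concrete, here is a minimal sketch of what an incident record and disclosure check could look like if a company handled harmful outputs the way security teams handle breaches. Every detail here, the field names, the severity taxonomy, the disclosure rule, is a hypothetical illustration, not anything specified in the AGs’ letter.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Severity(Enum):
    LOW = 1
    MODERATE = 2
    SEVERE = 3  # e.g. content encouraging self-harm


@dataclass
class HarmIncident:
    """One psychologically harmful output, logged like a security incident."""
    model_version: str
    category: str            # hypothetical taxonomy, e.g. "self-harm", "delusion-reinforcing"
    severity: Severity
    user_id: str
    detected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    user_notified: bool = False


def requires_disclosure(incident: HarmIncident) -> bool:
    """Hypothetical rule mirroring breach-notification logic:
    severe incidents trigger user notification and an external report."""
    return incident.severity is Severity.SEVERE


# Usage sketch: route a flagged output through the same kind of pipeline
# a security team would use for a data-breach event.
incident = HarmIncident(
    model_version="chatbot-v2.1",
    category="self-harm",
    severity=Severity.SEVERE,
    user_id="anon-4821",
)
if requires_disclosure(incident):
    incident.user_notified = True  # in practice: notify the user, file a report
    print(f"Disclosure required for {incident.category} incident at {incident.detected_at:%Y-%m-%d}")
```

The point of the sketch is the shape of the obligation: log, classify, decide on notification, much like breach-notification rules already require for personal data.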
Winners, Losers, and Liability
This creates immediate winners and losers. The biggest losers are the pure-play consumer chatbot companies, like Replika or Character.ai, whose entire product is built on intimate, conversational AI. The cost of implementing these audits and reporting systems could be crippling. The winners? Enterprise-focused AI firms and the industrial sector, where AI is applied to specific, bounded tasks like monitoring assembly lines or controlling machinery. Industrial computing is a market where reliability and safety are non-negotiable and predictable, auditable performance is the whole sales pitch: the exact opposite of a “delusional” output.
What Happens Next?
Look, the companies are in a bind. Ignoring the AGs invites lawsuits and terrible PR, especially with tragic cases already in the news. But complying fully could slow development to a crawl and validate a regulatory model the feds are trying to kill. I think we’ll see a split response. The giants like Google and Microsoft will probably announce new “safety partnerships” and voluntary principles, hoping to look cooperative. The smaller apps might just ignore it until they get sued. The real question is whether any AG will have the guts to actually file a major lawsuit using existing consumer protection laws. If one does, it’ll set the precedent for everyone else. Until then, it’s a high-stakes game of chicken between dozens of state AGs and Silicon Valley.
