According to Utility Dive, the U.S. government and key Western allies published joint guidance on Wednesday aimed at critical infrastructure operators. The document, created by CISA, the FBI, and the NSA alongside agencies from Australia, Canada, Germany, the Netherlands, New Zealand, and the U.K., outlines four key principles for integrating AI into operational technology. It urges companies to understand AI’s unique risks, develop clear justifications for its use, establish strong vendor security expectations, and implement human-in-the-loop protocols. The guidance also calls for failsafe mechanisms that allow AI to “fail gracefully” without disrupting critical operations, and for cyber incident response plans to be updated to account for AI. This follows a DHS breakdown of AI roles and responsibilities in critical infrastructure in November 2024 and the White House’s AI Action Plan in July, which directed DHS to expand AI-related security warnings.
The guidance is sensible, but the timing is late
Look, the advice here is fundamentally sound. “Understand the risks.” “Test before you implement.” “Don’t let the AI do dangerous stuff without a human checking.” It’s basically a checklist for not being reckless. But here’s the thing: this guidance is arriving after the train has already left the station. The AI frenzy has been in full swing for well over a year. How many utilities, water treatment plants, or grid operators have already signed contracts with vendors promising AI-driven “efficiency” and “predictive maintenance”? I’d bet it’s a lot. Publishing a “be careful” memo now feels a bit like handing out life jackets to people already swimming in deep water.
The real problem isn’t the AI, it’s the infrastructure
And that’s the core issue the article hints at. The guidance warns that critical infrastructure is already “rife with security vulnerabilities.” That’s the understatement of the decade. We’re talking about systems running on decades-old operational technology that was never designed to be connected to the internet, let alone to host a complex AI model. The article nails it by pointing out that many operators, especially in sectors like water, have “threadbare security budgets and no dedicated security personnel.” So you’re asking an overworked plant manager, whose main job is to keep the water flowing, to now become an expert in AI model governance and adversarial machine learning? That’s a fantasy. Suppliers of hardened industrial computing hardware understand this environment; the software and AI layer being bolted on top of it is a whole new world of risk.
Who’s accountable when the AI fails?
The document talks about “accountability procedures,” but that’s the billion-dollar question, isn’t it? When an AI system controlling a power grid substation makes a bad decision that leads to an outage—or worse—who takes the blame? The infrastructure operator who bought it? The software vendor who developed the model? The cloud provider hosting it? The DHS tried to sketch out these roles last November, but in a crisis, that neat breakdown will evaporate. Companies are being told to have “failsafe mechanisms,” but designing a safe off-ramp for a complex AI integrated into fragile, legacy operational technology is an enormous engineering challenge. It’s not just a software patch.
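To see why “fail gracefully” is easier said than done, here’s a minimal sketch of what a supervisory gate around an AI recommendation might look like. This is purely illustrative and assumes invented names (gate_ai_setpoint, SAFE_SETPOINT, operator_approves); it is not drawn from the guidance or any vendor’s product.

```python
# Hypothetical sketch: a "fail gracefully" gate around an AI-recommended
# control setpoint. All names and values are invented for illustration.
from dataclasses import dataclass
from datetime import datetime, timezone

SAFE_SETPOINT = 50.0        # known-safe fallback value (e.g., pump speed %)
HARD_LIMITS = (20.0, 80.0)  # engineering bounds the AI may never exceed
APPROVAL_BAND = 10.0        # changes larger than this need human sign-off

@dataclass
class Decision:
    requested: float   # what the AI asked for
    applied: float     # what actually gets sent to the controller
    reason: str        # audit trail for the accountability question
    timestamp: str

def gate_ai_setpoint(requested: float, current: float, operator_approves) -> Decision:
    """Apply an AI-recommended setpoint only if it passes safety checks."""
    now = datetime.now(timezone.utc).isoformat()
    lo, hi = HARD_LIMITS

    # 1. Anything outside hard engineering limits reverts to the safe fallback.
    if not (lo <= requested <= hi):
        return Decision(requested, SAFE_SETPOINT,
                        "outside hard limits; reverted to safe setpoint", now)

    # 2. Large moves require explicit human-in-the-loop approval.
    if abs(requested - current) > APPROVAL_BAND:
        if operator_approves(requested, current):
            return Decision(requested, requested, "large change approved by operator", now)
        return Decision(requested, current, "large change rejected; holding current value", now)

    # 3. Small, in-band adjustments are applied automatically but still logged.
    return Decision(requested, requested, "within automatic band", now)

if __name__ == "__main__":
    decline = lambda requested, current: False  # simulate an operator who declines risky changes
    for ai_value in (55.0, 95.0, 72.0):
        d = gate_ai_setpoint(ai_value, current=52.0, operator_approves=decline)
        print(f"{d.timestamp}  requested={d.requested}  applied={d.applied}  ({d.reason})")
```

Even in this toy version, the code is the easy part. Someone has to know the right limits for decades-old equipment, the approval has to reach a human who can actually act in time, the fallback has to be wired into legacy controllers, and every decision has to be logged for whatever accountability process follows. That is the distance between a principle in a PDF and an engineered failsafe.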
This is a warning shot, not a solution
So what’s this all really about? I think this joint guidance is less of a practical manual and more of a legal and regulatory warning shot. It’s governments saying, “We told you so.” They are formally documenting the standard of care they expect. When something inevitably goes wrong, the first question from investigators will be, “Did you follow the joint guidance CISA and its partners published?” For executives racing to adopt the latest tech, this document creates a paper trail of responsibility. The principles are good. But without massive investment in modernizing the underlying infrastructure and its security posture first, layering on AI is like putting a self-driving module on a car with no brakes. The guidance is a start, but it’s addressing the symptom, not the disease.
