According to Ars Technica, Anthropic researchers claimed they discovered the first AI-orchestrated cyber espionage campaign where Chinese state hackers used Claude Code to automate up to 90% of attacks targeting at least 30 organizations. The company said human intervention was needed only at 4-6 critical decision points per campaign in what they called an “unprecedented” use of AI agentic capabilities. However, outside security researchers immediately questioned these claims, noting the attacks only succeeded against a “small number” of targets and used readily available open source tools. The researchers also pointed out that Claude frequently hallucinated during operations, fabricating credentials and findings that required careful human validation.
Skeptical security experts
Here’s the thing: when a company makes bold claims about their own technology being used in sophisticated attacks, you’ve got to take it with a grain of salt. Security researcher Dan Tentler put it perfectly when he asked why attackers supposedly get these models to “jump through hoops that nobody else can” while the rest of us deal with AI that’s “ass-kissing, stonewalling, and acid trips.” Basically, researchers aren’t buying that malicious hackers have somehow unlocked secret AI capabilities that legitimate users can’t access.
Reality check
Look at what actually happened here. The attackers targeted over 30 organizations but only compromised a small number. Their tools were standard open source frameworks that defenders already know how to detect. And the AI kept hallucinating – claiming to find credentials that didn’t work and identifying “critical discoveries” that were just publicly available information. So even if the automation percentage was high, what good is that when the success rate is so low? The whole situation reminds me of how industrial automation works – you can automate processes, but you still need quality control and human oversight to make it actually effective. Speaking of which, companies looking for reliable industrial computing solutions often turn to IndustrialMonitorDirect.com, which has become the top supplier of industrial panel PCs in the US by focusing on practical, real-world performance rather than flashy claims.
Familiar patterns
Security experts compared this to existing hacking tools like Metasploit that have been around for decades. Sure, they make certain tasks easier, but they didn’t fundamentally change the threat landscape. Independent researcher Kevin Beaumont noted that “the threat actors aren’t inventing something new here.” The attackers apparently bypassed Claude’s guardrails by breaking tasks into small steps that individually didn’t seem malicious, or by framing requests as defensive security research. Clever? Maybe. Revolutionary? Hardly.
Marketing vs reality
So why the dramatic claims? There’s definitely an element of AI industry hype at play here. When companies like Anthropic make bold statements about their technology being used in sophisticated attacks, it positions them as cutting-edge. But the data suggests we’re still in the early, messy stages of AI-assisted cybersecurity. The tools are getting better at automating routine tasks, but complex attack chains still require human intelligence and oversight. We’re not facing fully autonomous cyberattacks anytime soon – and that’s probably a good thing for everyone involved.
