According to Gizmodo, the AI industry is facing a massive scaling problem as companies like OpenAI build billion-dollar centralized data centers, but a decentralized “swarm” approach is emerging as an alternative. Startup Fortytwo published benchmarks showing its method of running small AI models on personal computers outperformed OpenAI’s GPT-5, Google’s Gemini 2.5 Pro, Anthropic’s Claude Opus 4.1, and DeepSeek’s R1 in reasoning tests. The company’s theory is that large models get stuck in reasoning loops, while a swarm of smaller models can each propose an answer and those candidates can then be ranked to pick the best one. Meanwhile, robotics researchers are developing similar swarm approaches where simple robots work collectively on tasks like wildfire monitoring or clearing blockages in artificial blood vessels. The basic concept applies across both fields: many simple units working together can outperform complex centralized systems.
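To make the generate-and-rank idea concrete, here’s a minimal sketch of what “many small models answer, then rank the answers” could look like. To be clear, this is my illustration, not Fortytwo’s actual system: `node_answer`, the majority-vote ranking, and the 70% accuracy figure are all made up for the example.

```python
import random
from collections import Counter

# Hypothetical stand-in for a small model running on one home computer.
# In a real swarm this would be a local LLM; here it just returns a noisy
# answer to the same question.
def node_answer(question: str, seed: int) -> str:
    rng = random.Random(seed)
    return "42" if rng.random() < 0.7 else rng.choice(["41", "43", "7"])

def swarm_answer(question: str, n_nodes: int = 25) -> str:
    """Ask every node for a candidate, then rank candidates by majority vote."""
    candidates = [node_answer(question, seed=i) for i in range(n_nodes)]
    best, _count = Counter(candidates).most_common(1)[0]
    return best

if __name__ == "__main__":
    print(swarm_answer("What is the answer to everything?"))  # almost always "42"
```

Even in toy form you can see both the appeal and the catch: the output is only as good as the ranking step, and a simple majority vote is the easy part. How the real candidates actually get scored is exactly the detail the benchmark headline doesn’t tell you.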
Sounds Too Good to Be True?
Okay, let’s be real here. A startup claiming it can beat OpenAI’s latest model using computers sitting in people’s homes? That’s a pretty bold claim that deserves some serious skepticism. We’ve seen this movie before with distributed computing projects that promised revolutionary results but often delivered… well, not much. Remember when everyone was going to cure cancer with their PlayStation? Exactly.
The crypto-style reward system Fortytwo is using raises immediate red flags too. Basically, they’re offering cryptocurrency to people who run models as part of their swarm. That feels suspiciously like trying to bootstrap a user base with financial incentives rather than proving the technology actually works at scale. And let’s be honest—when crypto gets involved, things tend to get messy.
The Real Challenges Nobody’s Talking About
Here’s the thing about distributed systems: they’re incredibly difficult to manage reliably. Coordinating thousands or millions of individual devices? Dealing with varying hardware capabilities, network speeds, and uptime? It’s a logistical nightmare. Centralized data centers exist for a reason—they’re predictable and controllable.
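For a taste of why that coordination is hard, here’s a toy sketch (again mine, not anything Fortytwo has described) of fanning a task out to unreliable machines and settling for whichever quorum answers first. The failure rate, latencies, and quorum size are invented for illustration.

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError, as_completed

# Illustrative only: flaky_node() fakes a consumer machine with uneven speed
# and spotty connectivity; the numbers are made up.
def flaky_node(task: str, node_id: int) -> str:
    rng = random.Random(node_id)
    if rng.random() < 0.2:                    # ~20% of nodes simply drop the task
        raise ConnectionError(f"node {node_id} unreachable")
    time.sleep(rng.uniform(0.01, 0.3))        # wildly different response times
    return f"result from node {node_id}"

def dispatch(task: str, node_ids, quorum: int = 5, timeout: float = 0.5) -> list[str]:
    """Fan the task out to every node and return once a quorum has answered."""
    results: list[str] = []
    with ThreadPoolExecutor(max_workers=len(node_ids)) as pool:
        futures = [pool.submit(flaky_node, task, nid) for nid in node_ids]
        try:
            for fut in as_completed(futures, timeout=timeout):
                try:
                    results.append(fut.result())
                except ConnectionError:
                    continue                   # dead node, move on
                if len(results) >= quorum:
                    break
        except TimeoutError:
            pass                               # stragglers don't get counted
    return results

if __name__ == "__main__":
    print(len(dispatch("solve this", node_ids=list(range(20)))))
```

Even this toy version has to decide what to do about dead nodes and stragglers. Multiply that by millions of real machines on home Wi-Fi and “logistical nightmare” starts to sound generous.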
And what about security? Running AI models across random personal computers means the infrastructure is, by definition, owned and operated by strangers. Could malicious actors join the swarm and poison the results? Probably. Could sensitive data get exposed? Almost certainly. These aren’t theoretical concerns; they’re exactly why most companies hesitate to use distributed computing for critical tasks.
Robotics Might Actually Make Sense
Now, the robotics side of this swarm concept feels more plausible. Simple robots working together to monitor wildfires or perform medical procedures? That actually makes intuitive sense. We see similar behavior in nature all the time: ants, bees, and birds all operate in swarms to accomplish things no individual could manage.
The researchers demonstrated that robots with just three basic abilities could collectively navigate obstacles that none of them could handle alone. That’s compelling evidence that swarm intelligence has real potential in physical applications. But even here, we’re talking about controlled environments and specific use cases. Scaling this to general-purpose robotics? That’s a much bigger leap.
Is This Actually the Future?
Look, the swarm approach is definitely interesting, and Fortytwo’s claims are bold enough to deserve a closer look. But color me skeptical until we see independent verification and real-world deployment at scale. The history of technology is littered with “revolutionary” approaches that sounded great in theory but failed in practice.
Maybe the future is some hybrid approach—centralized systems for certain tasks, swarms for others. But the idea that we’re going to replace massive AI data centers with networks of home computers? I’ll believe it when I see it working reliably outside of carefully controlled benchmarks.
