According to DCD, Google is launching its TPU AI chips into space through Project Suncatcher in partnership with Planet Labs, with the first two satellites scheduled for launch by early 2027. The company published a research paper detailing its vision for 81-satellite clusters forming 1km-radius orbital data centers using solar arrays and optical inter-satellite links. Google CEO Sundar Pichai acknowledged the project faces “complex engineering challenges” while revealing their Trillium-generation TPUs survived radiation testing simulating low-Earth orbit conditions. The initiative comes as multiple companies including SpaceX and Blue Origin announce similar space data center ambitions, with Jeff Bezos predicting gigawatt orbital data centers within a decade or more.
<h2 id="why-space">Why even bother with space?</h2>
Here’s the thing – everyone’s chasing the same limited resources on Earth. We’re running out of power, dealing with cooling challenges, and facing regulatory headaches. Space offers virtually unlimited solar power and a natural vacuum for radiative cooling. But is it worth the astronomical launch costs? Google’s paper suggests maybe – if SpaceX’s Starship can drive costs down to $200/kg by 2035, the economics start making sense.
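To see why that $200/kg figure matters, here’s a back-of-envelope sketch. The $200/kg target and the 81-satellite cluster size come from Google’s paper; the present-day price of roughly $1,500/kg and the 1,500 kg satellite mass are illustrative assumptions, not published Suncatcher numbers.

```python
# Back-of-envelope launch economics for one orbital cluster.
# The satellite mass and current $/kg figure are hypothetical
# assumptions for illustration only.

CURRENT_COST_PER_KG = 1500   # USD/kg, rough present-day heavy-lift estimate
TARGET_COST_PER_KG = 200     # USD/kg, the ~2035 target cited in the paper
SATELLITE_MASS_KG = 1500     # hypothetical mass per satellite
CLUSTER_SIZE = 81            # satellites per cluster, from the paper

def cluster_launch_cost(cost_per_kg: float) -> float:
    """Total launch cost for one 81-satellite cluster, in USD."""
    return cost_per_kg * SATELLITE_MASS_KG * CLUSTER_SIZE

today = cluster_launch_cost(CURRENT_COST_PER_KG)
target = cluster_launch_cost(TARGET_COST_PER_KG)
print(f"Today:  ${today / 1e6:.0f}M per cluster")
print(f"Target: ${target / 1e6:.0f}M per cluster ({today / target:.1f}x cheaper)")
```

Under these assumptions, launch costs per cluster drop from the high hundreds of millions into the tens of millions – the kind of shift that moves orbital compute from science project to plausible line item.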
<h2 id="technical-challenges">The massive technical challenges</h2>
Look, this isn’t just launching some servers and calling it a day. Google’s facing three huge problems: networking, radiation, and cooling. Their terrestrial data centers use custom optical connections pushing hundreds of gigabits per second between chips. In space, they’d need to maintain satellites in formations tighter than anything ever attempted – just hundreds of meters apart instead of kilometers. And radiation? Their TPUs survived proton beam testing, but the high-bandwidth memory showed vulnerability. Basically, they can probably handle inference workloads, but training AI models in space? That’s still questionable.
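The tight formation flying and the optical links are directly connected: a laser beam spreads with distance, so the closer the satellites, the more transmitted light actually lands on the receiver. This sketch uses standard far-field Gaussian-beam geometry; the 1550 nm wavelength, 1 cm beam waist, and 5 cm receiver aperture are generic telecom-style assumptions, not Suncatcher specifications.

```python
import math

# Why hundreds of meters beats kilometers for free-space optical links.
# All parameter values below are illustrative assumptions.

WAVELENGTH_M = 1550e-9   # telecom-band laser wavelength (assumption)
BEAM_WAIST_M = 0.01      # 1 cm transmit beam waist (assumption)
RX_RADIUS_M = 0.05       # 5 cm receiver aperture radius (assumption)

def spot_radius(distance_m: float) -> float:
    """Far-field Gaussian beam radius at the receiver."""
    divergence = WAVELENGTH_M / (math.pi * BEAM_WAIST_M)  # half-angle, rad
    return divergence * distance_m

def geometric_capture(distance_m: float) -> float:
    """Fraction of transmitted power landing on the receiver (capped at 1)."""
    return min(1.0, (RX_RADIUS_M / spot_radius(distance_m)) ** 2)

for d in (300, 3_000, 30_000):  # candidate inter-satellite spacings, meters
    print(f"{d:>6} m: spot {spot_radius(d) * 100:6.1f} cm, "
          f"capture {geometric_capture(d):.3f}")
```

The captured fraction falls off with the square of distance once the spot outgrows the receiver, which is the basic physics behind flying the swarm hundreds of meters apart rather than kilometers.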
<h2 id="space-computing-race">Everyone wants in on space computing</h2>
This isn’t just Google’s wild idea. Elon Musk says SpaceX “will be doing” space data centers. Jeff Bezos is talking gigawatt-scale orbital computing within a decade. There’s even a startup called Starcloud that just launched a satellite with an Nvidia H100 chip. The race is on, but Google’s taking a different approach – instead of massive single structures that need space assembly, they’re betting on swarms of smaller satellites working together. It’s basically the microservices architecture approach applied to orbital infrastructure.
<h2 id="what-this-means">What this actually means</h2>
If this works? We’re talking about being able to scale AI computing almost infinitely without worrying about terrestrial power grids or real estate. The Google research paper mentions “terawatts of compute capacity” fitting in low-Earth orbit. That’s insane scale. But here’s the real question: who’s going to pay for computing that happens 650 km above their heads? And what happens when these satellite clusters start interfering with astronomy or creating more space debris? The technical challenges are massive, but the regulatory and economic questions might be even bigger.
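For a sense of what “terawatts of compute capacity” implies physically, here’s a quick sanity check. The solar constant above the atmosphere (~1361 W/m²) is established physics; the 30% panel efficiency is a hypothetical high-end assumption.

```python
# Collector area implied by terawatt-scale orbital power.
# Solar constant is physical fact; panel efficiency is an assumption.

SOLAR_CONSTANT = 1361.0    # W/m^2 in low-Earth orbit, above the atmosphere
PANEL_EFFICIENCY = 0.30    # hypothetical high-end cell efficiency

def array_area_km2(power_watts: float) -> float:
    """Solar array area (km^2) needed to supply the given electrical power."""
    area_m2 = power_watts / (SOLAR_CONSTANT * PANEL_EFFICIENCY)
    return area_m2 / 1e6

print(f"1 TW -> {array_area_km2(1e12):,.0f} km^2 of panels")
```

Roughly 2,500 km² of panels per terawatt under these assumptions – thousands of satellite clusters, which is exactly why the debris and astronomy questions aren’t hypothetical.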
