According to Fortune, venture capitalist and Trump advisor David Sacks claims public distrust of AI isn’t genuine but manufactured by what he calls the “Doomer Industrial Complex.” Sacks points to research by tech scholar Nirit Weiss-Blatt showing that hundreds of groups promoting strict AI regulation or moratoriums are funded by a small circle of Effective Altruism donors, including Facebook’s Dustin Moskovitz, Skype’s Jaan Tallinn, Ethereum’s Vitalik Buterin, and convicted FTX founder Sam Bankman-Fried. These philanthropists have collectively poured over $1 billion into studying AI’s “existential risks,” with Moskovitz’s Open Philanthropy the largest donor. Sacks cites polling showing that 83% of Chinese respondents view AI’s benefits as outweighing its harms, compared to just 39% in the U.S., and argues this proves “propaganda money” has reshaped the American debate. The Effective Altruism movement, founded by Oxford philosophers William MacAskill and Toby Ord, encourages using data to prevent future catastrophes, including rogue AI.
The so-called doomer industrial complex
Here’s the thing about Sacks’ argument – it’s both compelling and wildly reductive. On one hand, he’s right that there’s serious money behind AI safety advocacy. When you trace the funding for many AI risk organizations, you do find connections to the Effective Altruism community and its wealthy backers. But calling it a “Doomer Industrial Complex” suggests way more coordination than actually exists.
Weiss-Blatt’s own research catalogs hundreds of separate entities in what she calls the “AI existential risk ecosystem,” ranging from university labs to nonprofits to individual bloggers. They may share similar concerns about AI safety, but they’re not taking orders from some central doomer headquarters. The reality is probably messier, and less sinister, than Sacks portrays.
What people actually fear about AI
And here’s where Sacks’ argument really falls apart for me. He’s suggesting that all this AI anxiety is manufactured by billionaires worried about sci-fi scenarios. But is that what regular people are actually concerned about?
Matthew Adelstein, a college student who writes about Effective Altruism, nailed it when he told Fortune that most people’s AI fears are much more immediate. They’re worried about cheating, bias, job loss – you know, the stuff that might actually affect their lives next month or next year. Not some theoretical existential risk decades down the line.
Think about it – when’s the last time you heard someone at a coffee shop worrying about AI causing human extinction? Probably never. But you’ve definitely heard people worried about AI taking their job or their kid using ChatGPT to write essays.
The Effective Altruism defense
Now, the Effective Altruism movement pushes back hard against Sacks’ characterization. Open Philanthropy, the main organization he targets, says it believes technology has “drastically improved human well-being” and that it’s simply trying to manage the risks while realizing AI’s “huge potential upsides.”
Adelstein makes a pretty reasonable point too: if there’s even a small chance that advanced AI could pose existential risks, shouldn’t we take that seriously, given what’s at stake? Even many AI developers themselves think there’s a non-zero chance of catastrophic outcomes. The fact that some wealthy people agree that’s worth studying doesn’t automatically make it a sinister plot.
He also had a great line pushing back on the charge that EA is a “cult”: “I’d like to see the cult that’s dedicated to doing altruism effectively and saving 50,000 lives a year. That would be some cult.” Fair point.
The political context matters
But we can’t ignore the political angle here. Sacks isn’t just some random commentator – he’s a Trump advisor with clear political and industry interests. His venture capital firm, Craft Ventures, invests in AI companies, and he’s long advocated for minimal regulation of technology.
His China comparison is particularly telling. Pointing to that 83% vs. 39% optimism gap between China and the U.S. ignores the rather important fact that China has, you know, a completely different political system and media environment. Maybe the difference isn’t just “propaganda money” but decades of different cultural and political context.
Basically, this whole debate feels like two extremes talking past each other. On one side, you have people like Sacks dismissing legitimate concerns as billionaire-funded panic. On the other, you have some Effective Altruists so focused on long-term existential risks that they might be missing the more immediate harms that actually worry people.
The truth probably lies somewhere in the middle. Yes, there’s serious money behind AI safety advocacy. No, it’s probably not some coordinated “industrial complex” brainwashing the public. And most importantly, the fears that real people have about AI – job displacement, privacy, bias – are completely legitimate regardless of what billionaires on either side are saying.
