If it wasn’t already clear, Elon Musk and Sam Altman hate each other.
Once cofounders of OpenAI, the two men are now locked in a vicious feud, playing out in all its theatrics before a judge and jury in a California courtroom. Musk is suing, alleging that Altman and OpenAI president Greg Brockman tricked him into forming and funding the organization as a nonprofit before they subsequently restructured it around a for-profit entity. OpenAI says Musk was well aware of those plans and frames the lawsuit as an attempt to derail a competitor.
I know this story all too well. I’ve been reporting on OpenAI since 2019, embedding within its office for three days shortly after Musk stepped away and Altman formally took up the CEO position. If there’s anything I’ve learned from my years of following this company and the AI industry, it’s that this world breeds bitter rivalries.
It’s no coincidence that most of OpenAI’s original founders left the company under acrimonious circumstances, nor that every tech billionaire has a largely identical AI company. The frenetic AI race is inseparable from the petty, clashing egos of the unfathomably rich, hellbent on dominating one another.
Indeed, if Musk were to win his bid, it could be devastating for OpenAI, especially as the company prepares this year for a potential initial public offering. Musk seeks $150bn in damages from OpenAI and one of its top investors, Microsoft. He also seeks to return OpenAI to a nonprofit, to remove Altman and Brockman as leaders of the for-profit, and to take Altman off the nonprofit board.
Yet to believe that the future of AI development will be determined by a personality contest misses the point. Yes, Brockman’s diary entries are revealing, as was former OpenAI chief technology officer Mira Murati’s testimony about Altman pitting executives against one another, confirming my earlier reporting.
But fixating on questions of whether Altman is untrustworthy, or whether Musk is even less so, distracts from a far deeper problem. If OpenAI lost its footing as the AI industry frontrunner, another barely distinguishable competitor – Musk’s xAI or otherwise – would simply replace it. That includes companies like Anthropic, which enjoy a better reputation yet engage in many of the same behaviors: compromising careful decision-making for speed, disregarding intellectual property, aggressively scaling their computing infrastructure to the detriment of communities.
Nothing about this trial or OpenAI’s financial structure will change the imperial drive of these companies to consolidate ever more data and capital, terraform the earth, exhaust and displace labor, and embed themselves deep within the state to gain leverage over its apparatuses of violence. We would still exist in a world in which a tiny few have the profound power to cast it in their image and dictate how billions of people live.
As much as Silicon Valley would want you to believe it, AI doesn’t necessitate imperial conquest, nor could broad-based benefit from the technology ever emerge from such a foundation. Before the industry made a hard pivot into developing extraordinarily resource-intensive AI models, a full breadth of other kinds of AI flourished: small, specialized systems for detecting cancer, for reviving disappearing languages, for forecasting extreme weather events, for accelerating drug discovery. So, too, did ideas for developing new AI technologies, including ones that didn’t need much data at all, and ones that required only mobile devices, not huge supercomputers, to train.
Even now with large language models, an abundance of research and examples such as DeepSeek already show that different methods can produce the same capabilities with a tiny fraction of the scale that AI companies use to justify their planet-consuming ambitions.
“Scaling is a cheap way of getting more performance, but it’s also a really imprecise way,” Sara Hooker, the former vice-president of research at Canadian AI firm Cohere, once told me. “We love it so much because it kind of fits predictable planning cycles. It’s easier to say ‘throw more compute at the problem’ than to design a new technique.”
But these myriad paths wither in the empires’ shadow. In the first quarter of last year, nearly half of all venture money went to just two companies: OpenAI and Anthropic. That’s the tip of the iceberg of a yearslong capital consolidation that has hollowed out academia and starved research counter to, or simply out of step with, the corporate agenda. From 2004 to 2020, the proportion of AI PhD graduates who chose to join industry jumped from 21% to 70%, according to a study by MIT researchers in Science. And it’s not just the diversity of AI development that’s suffering. In 2024, funding for climate tech plunged 40% as investors redirected their dollars in part to the brute-force scaling of the AI empires.
It doesn’t have to be this way. And over the past year, as I’ve traveled to dozens of cities around the US and globally, I’ve seen this realization dawning. People everywhere are picking up the mantle of collective resistance. Most visible and vibrant have been the data center protests popping up in communities across geographies and political divides. In New Mexico, I met with residents eager to educate themselves about the AI industry over potluck, to demand transparency and accountability for local projects, such as a massive multibillion-dollar OpenAI supercomputing campus being proposed in the state as part of the company’s $500bn Stargate computing infrastructure buildout.
At a gathering in New York, I listened as KeShaun Pearson, a leader in the fight in Memphis, Tennessee, against Musk’s Colossus supercomputers, gave a heartfelt reminder of the toll that the facility’s dozens of methane gas turbines were taking on his community. “Take two deep breaths,” he said to the audience. “That’s a human right” that was being taken from them. As of this month, Anthropic is using Colossus.
At the same event, Kitana Ananda, another community leader from Tucson, Arizona, mobilizing against Project Blue, an Amazon hyperscale AI facility, described the deep-seated feeling that she and her fellow residents shared: that they fought not only for their own community but for every community being steamrolled by the AI industry. And on a 114F day, as they packed into city hall in a show of force and watched the council vote 7-0 to pause the project in its current form, they whooped and cried with the elation that their victory was every community’s victory.
Workers are also striking across sectors and countries: in northern California, more than 2,000 healthcare professionals at Kaiser Permanente walked out over the threat of AI being used to automate their work or degrade patient outcomes. In Kenya, data workers and content moderators contracted by AI companies to train and clean up their models are organizing to bring international attention to their exploitation and demand better working conditions.
In more than 30 countries, cultural workers from voice actors to screenwriters to manga illustrators are mobilizing to denounce issues ranging from the training on their work to the use of AI systems to rip their likeness or replace them, according to the Worker Mobilizations around AI database, a research effort led by the Creative Labour & Critical Futures group at the University of Toronto.
Educators and scholars are pressuring their institutions. Victims and their families are suing. Tech workers themselves are campaigning. Group chats for more organizing abound. People are marching.
The upwelling of collective pushback appears to be forcing the AI industry to downsize its ambitions. Already, more than $150bn worth of infrastructure projects were blocked or stalled in 2025, according to Data Center Watch, an effort tracking the opposition by AI research firm 10a Labs. Investors are taking note and beginning to discount their projections of how much AI companies can deliver on their promises.
OpenAI shuttered its video-generation app Sora, once lauded by company executives as one of its most important products and a new frontier in AI development. As the Wall Street Journal reported, Sora’s demise ultimately stemmed from several intersecting problems shaped by grassroots action: flatlining usage, rocky public perception, tightening financials, and heavy constraints on computational resources.
Here’s the thing about empires. They don’t just seek to consume everything – they depend on it for their survival. In other words, the very thing that appears to give them paramount strength is their greatest vulnerability. When even a fraction of the resources they need is withheld, the giants begin to stumble. So if you’re wondering what will bring real accountability to the AI industry and a different vision of the technology’s development, look beyond the billionaire mudfight. The real work is happening everywhere else.









