Disaster Altruism and the Weekend Revolution at OpenAI
An underpowered board blows up the A.I. industry spearhead, Microsoft swoops in, employees revolt. What does this episode say about tech's relationship with decels and doomers?
On Friday afternoon, the board of the non-profit OpenAI, which sits atop a convoluted array of for-profit A.I. businesses and its best-known product, ChatGPT, abruptly fired CEO Sam Altman and removed his top lieutenant, Greg Brockman, from the board.
The thunderbolt sent shockwaves through Silicon Valley, with dozens of OpenAI employees scurrying for new jobs and new ventures. The board’s erratic actions stunned its major investors and partners, such as Microsoft, and left most in technology aghast. Had Altman engaged in some sort of horrible malfeasance or operational failure? No, was the quick answer.
When the board refused to explain the firings beyond a vague charge of poor communication, it looked more and more like amateur hour. OpenAI was close to completing a tender offer valuing the company at $86 billion, which would have delivered large payouts to employees. Now that gigantic investment was probably dead.
The blowback was so severe that by Sunday, the OpenAI board was considering stepping down and bringing Sam Altman back. Altman appeared at OpenAI headquarters with a “Guest” pass. The talks, however, fell apart, and by late Sunday night, OpenAI had hired yet another new CEO, Emmett Shear, who until nine months ago had been CEO of Twitch.
After much speculation that Altman and Brockman would launch a new A.I. company, Microsoft CEO Satya Nadella announced at 2 a.m. this morning that he had hired both Altman and Brockman to lead a new A.I. research division. Nadella also reiterated his support for OpenAI, which Microsoft has backed with more than $10 billion and which relies on Microsoft for most of its computing infrastructure.
But then…! Later Monday morning, around 8:55 Eastern U.S. time, some 500 OpenAI employees released a scathing letter to the OpenAI board, demanding its resignation and the reinstatement of Altman and Brockman, and threatening to join the duo at Microsoft. “We are unable to work for or with people who lack competence, judgment and care for our mission and employees,” they wrote to the board.
Among the 500 were, conspicuously, Mira Murati and Ilya Sutskever. Sutskever is still on the board, which on Friday had appointed Murati interim CEO. At 8:15 a.m. today, just before the longer employee letter came out, Sutskever wrote on X, “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI.” Within a couple hours, 700 out of the 770 employees had apparently signed the letter.
Enough with the minute-by-minute news. This thing could flip yet again as we write. So what about the bigger implications of this wild series of events?
OpenAI’s relatively inexperienced and potentially ideologically motivated board panicked. Over what, we still don’t know for sure, but it may have been worries that Altman was moving too fast. OpenAI’s governance structure and mission, which always looked complicated, conflicted, and misaligned at best, proved to be so. Elon Musk, Sam Altman, and others founded OpenAI in 2015 as a non-profit to research existential A.I. risks. Along the way, however, Altman and company wanted to build products and become an investible tech company. It seems they never resolved the conflicts. The board thought its job was to pump the brakes while Altman and nearly all 770 employees wanted to move full speed ahead.
An adult named Satya Nadella stepped in and may have not only partially saved OpenAI from itself but also improved Microsoft’s position. Dozens of OpenAI employees, including Altman and Brockman, may now continue their work at Microsoft, which already operates most of OpenAI’s compute infrastructure. To the extent OpenAI continues, Microsoft will participate in the upside. The damage to OpenAI, however, reduces the chances it will grow into a multi-trillion-dollar colossus, the outcome in which Microsoft’s large interest would have yielded the most. Elon Musk had been critical of OpenAI because he thought Microsoft effectively owned it. Once again, Elon looks prescient.
The wild weekend appears to have awakened more of Silicon Valley to the bizarre class of people who are policy-obsessed, panic-oriented, and fantasy-prone. They are known variously as “safetyists,” “decels,” “doomers,” or “effective altruists.” These groups do not overlap 100%, and not every board member could be so labeled. But they share relationships and philosophical foundations. Sam Bankman-Fried of FTX fame is a well-known adherent of effective altruism, or EA. Its adherents profess to be laser-focused on the most efficient ways to help the world, and many would count themselves technology enthusiasts. Increasingly, however, the common denominators among them appear to be detachment from reality, grandiosity, catastrophism, government rent seeking, and poor judgment. (Here’s one speculation about several EA board members, their links to EA funder Dustin Moskovitz, and his battle with Altman.)
Disaster Altruism
Climate, pandemics, and now A.I. Each of these potential disasters is an opportunity for good works to promote safety at scale, to save the world. Eventually, each becomes not just a philanthropy or policy arena but an industry. ESG, Covid, and A.I. safety are all gigantic stews of think tanks, consultants, boards, public-private partnerships, philanthropies, and academic research and advocacy.
But do they do more harm than good?
Bill Gates and dozens of foundations spent a decade or more obsessed with pandemics. They pumped in tens of billions of dollars. Governments and NGO partners performed dangerous research meant to predict and prevent a viral outbreak. OOPS!
When their experiments unleashed SARS2, they couldn’t wait to deploy their fancy countermeasures to keep the world safe. The pandemic they’d been trying to predict and prevent was exhilarating.
Lockdowns, school closures, mandates, censorship, trillions in inflationary spending, and the initial gain-of-function (GOF) research itself: this safetyism left the world far less healthy and far less safe.
Climate is of course the OG of global safetyism. A prospective geo apocalypse requires centralized prophylactic energy lockdowns, suppressing technologies that work in favor of those that don’t.
A.I. is the new catastrophic threat. We need global coordination, microchip supervision, White House limits on computing emissions, a non-profit board coup of the industry spearhead, and, if all else fails, air strikes on data centers.
In September, Shear, the new new OpenAI CEO, at least as of tonight, said he’s “in favor of slowing down” A.I. “If we’re at a speed of 10 right now, a pause is reducing to 0. I think we should aim for a 1-2 instead.”
Just what every company investor and employee wants to hear – slow your roll from a 10 to a 2!
Covid, FTX, and now OpenAI. Silicon Valley and the world now more fully understand the political movement pushing for government and disguised quasi-private control of the most important technologies, and the censorship required to mask and perpetuate these bad ideas.
Marc Andreessen offered the obvious alternative and antidote to this dismal scientism:
While it certainly was an excess of safetyism that led to the lockdowns, one could easily argue it was a sore lack of it that caused the pandemic to spread in the first place and then the vaccines to be massively deployed while ignoring every safety signal.
The "excess optimism kills" lens works just as well if not better than the "excess safetyism kills" lens to interpret what happened in this specific scenario. To each their bias and we'll balance each other out, I hope.
The Wall Street Journal reports that indeed Altman thought the board had been "taken over by people overly concerned with safety and influenced by effective altruism."
"The specter of effective altruism had loomed over the politics of the board and company in recent months...."
"Some of those fears centered on [Helen] Toner, who previously worked at Open Philanthropy" – an EA outfit – and who in October wrote a paper praising OpenAI competitor Anthropic's more conservative safetyist approach.