OpenAI Takes Its Mask Off
There’s a story about Sam Altman that has been repeated often enough to become Silicon Valley lore. In 2014, Paul Graham, a co-founder of the famed start-up accelerator Y Combinator and one of Altman’s biggest mentors, sat Altman down and asked if he wanted to take over the organization.
The decision was a peculiar one: Altman was only in his late 20s, and, at least on paper, his qualifications were middling. He had dropped out of Stanford to found a company that ultimately hadn’t panned out. After seven years, he’d sold it for roughly the same amount that his investors had put in. The experience had left Altman feeling so professionally adrift that he’d retreated to an ashram.

But Graham had always had intense convictions about Altman. “Within about three minutes of meeting him, I remember thinking, ‘Ah, so this is what Bill Gates must have been like when he was 19,’” Graham once wrote. Altman, too, excelled at making Graham and other powerful people in his orbit happy—a trait that one observer called Altman’s “greatest gift.” As Jessica Livingston, another YC co-founder, would tell The New Yorker in 2016, “There wasn’t a list of who should run YC and Sam at the top. It was just: Sam.” When Graham made the offer, Altman reportedly smiled uncontrollably, in a way that Graham had never seen before. “Sam is extremely good at becoming powerful,” Graham said in that same article.
The elements of this story—Altman’s uncanny ability to ascend and persuade people to cede power to him—have shown up throughout his career. After co-chairing OpenAI with Elon Musk, Altman sparred with him for the title of CEO; Altman won. And in the span of just a few hours yesterday, the public learned that Mira Murati, OpenAI’s chief technology officer and the most important leader at the company besides Altman, is departing along with two other crucial executives: Bob McGrew, the chief research officer, and Barret Zoph, a vice president of research who was instrumental in launching ChatGPT and GPT-4o, the “omni” model that, during its reveal, sounded uncannily like Scarlett Johansson. To top it off, Reuters, The Wall Street Journal, and Bloomberg reported that OpenAI is planning to depart from its nonprofit roots and become a for-profit enterprise that could be valued at $150 billion. Altman reportedly could receive 7 percent equity in the new arrangement—or the equivalent of $10.5 billion if the valuation pans out. (The Atlantic recently entered a corporate partnership with OpenAI.)
In a post on X yesterday, Altman said that the leadership departures were independent of one another and amicable, but that they were happening “all at once, so that we can work together for a smooth handover to the next generation of leadership.” Regarding OpenAI’s restructuring, a company spokesperson gave me a statement the company has given before: “We remain focused on building AI that benefits everyone, and as we’ve previously shared, we’re working with our board to ensure that we’re best positioned to succeed in our mission.” The company will continue to run a nonprofit, although it is unclear what function the nonprofit will serve.
I started reporting on OpenAI in 2019, around the time it first began producing noteworthy research. The company was founded as a nonprofit with a mission to ensure that AGI—a theoretical artificial general intelligence, or an AI that meets or exceeds human potential—would benefit “all of humanity.” At the time, OpenAI had just released GPT-2, the language model that would set OpenAI on a trajectory toward building ever-larger models and lead to its release of ChatGPT. In the six months following the release of GPT-2, OpenAI would make many more announcements, including Altman’s move into the CEO role, the addition of a for-profit arm technically overseen and governed by the nonprofit, and a new multiyear partnership with, and a $1 billion investment from, Microsoft. In August of that year, I embedded in OpenAI’s office for three days to profile the company. That was when I first noticed a growing divergence between OpenAI’s public facade, carefully built around a narrative of transparency, altruism, and collaboration, and how the company was run behind closed doors: obsessed with secrecy, profit-seeking, and competition.
I’ve continued to follow OpenAI closely ever since, and that rift has only grown—leading to repeated clashes within the company between groups who have vehemently sought to preserve their interpretation of OpenAI’s original nonprofit ethos and those who have aggressively pushed the company toward something that, in their view, better serves the mission (namely, launching products that get its technologies into the hands of more people). I am now writing a book about OpenAI, and have spoken with dozens of people within and connected to the company in the process.
In a way, all of the changes announced yesterday simply demonstrate to the public what has long been happening within the company. The nonprofit has, until now, remained in place. But all of the outside investment—billions of dollars from a range of tech companies and venture-capital firms—goes directly into the for-profit, which also employs the company’s staff. The board crisis at the end of last year, in which Altman was temporarily fired, was a major test of the balance of power between the two entities. Of course, the money won, and Altman ended up on top.
The departures of Murati and the other executives follow several leadership shake-ups since that crisis. Greg Brockman, a co-founder and OpenAI’s president, went on leave in August, and Ilya Sutskever, another co-founder and the company’s chief scientist, departed along with John Schulman, a founding research scientist, and many others. Notably, Sutskever and Murati had both approached the board with concerns about Altman’s behavior, which fed into the board’s decision to oust him, according to The New York Times. Both executives reportedly described a pattern of Altman manipulating the people around him to get what he wanted. And Altman, many people have told me, pretty consistently gets what he wants. (Through her lawyer, Murati denied this characterization of her actions to the Times.)
The departure of executives who were present at the time of the crisis suggests that Altman’s consolidation of power is nearing completion. Will this dramatically change what OpenAI is or how it operates? I don’t think so. For the first time, OpenAI’s public structure and leadership are simply honest reflections of what the company has been—in effect, the will of a single person. “Just: Sam.”