In brief
- Ilya Sutskever prepared a 52-page case against Sam Altman based almost entirely on unverified claims from one source—CTO Mira Murati
- OpenAI came within days of merging with competitor Anthropic during the crisis, with board member Helen Toner arguing that destroying the company could be “consistent with the mission”
- The board was “rushed” and “inexperienced,” according to Ilya himself, who had been planning Altman’s removal for at least a year while waiting for favorable board dynamics
Ilya Sutskever sat for nearly 10 hours of videotaped testimony in the Musk v. Altman lawsuit on October 1 of this year.
The co-founder who helped build ChatGPT and became infamous for voting to fire Sam Altman in November 2023 was finally under oath and compelled to answer. The 365-page transcript was released this week.
What it reveals is a portrait of brilliant scientists making catastrophic governance decisions, unverified allegations treated as facts, and ideological divides so deep that some board members preferred destroying OpenAI rather than letting it continue under Altman’s leadership.
The Musk v. Altman lawsuit centers on Elon Musk’s claim that OpenAI and its CEO, Altman, betrayed the company’s original nonprofit mission by turning its research into a for-profit venture aligned with Microsoft—raising high-stakes questions about who controls advanced AI models and whether they can be developed safely in the public interest.
For those following the OpenAI drama, the document is an eye-opening and damning read. It’s a case study in how things go wrong when technical genius meets organizational incompetence.
Here are the five most significant revelations.
1. The 52-page dossier the public hasn’t seen
Sutskever wrote an extensive case for removing Altman, complete with screenshots, and organized into a 52-page brief.
Sutskever testified that he explicitly said in the memo: “Sam exhibits a consistent pattern of lying, undermining his execs, and pitting his execs against one another.”
He sent the memo to independent directors using disappearing email technology “because I was worried that those memos will somehow leak.” The full brief has not been produced in discovery.
“The context for this document is that the independent board members asked me to prepare it. And I did. And I was pretty careful,” Sutskever testified, saying that portions of the memo exist in screenshots made by OpenAI CTO Mira Murati.
2. A year-long game of board chess
When asked how long he’d been considering firing Altman, Sutskever answered: “At least a year.”
Asked what dynamics he was waiting for, he said: “That the majority of the board is not obviously friendly with Sam.”
A CEO who controls board composition is functionally untouchable. Sutskever’s testimony shows he understood this perfectly and adjusted his strategy accordingly.
When board member departures created that opening, he moved. He was playing long-term board politics, despite how close he and Altman seemed publicly.
3. The weekend OpenAI almost disappeared
On Saturday, November 18, 2023—within 48 hours of Altman’s firing—there were active discussions about merging OpenAI with Anthropic.
Helen Toner, a former OpenAI board member, was “the most supportive” of this direction, according to Sutskever.
If the merger had happened, OpenAI would have ceased to exist as an independent entity.
“I don’t know whether it was Helen who reached out to Anthropic or whether Anthropic reached out to Helen,” Sutskever testified. “But they reached out with a proposal to be merged with OpenAI and take over its leadership.”
Sutskever said he was “very unhappy about it,” adding later that he “really did not want OpenAI to merge with Anthropic.”
4. “Destroying OpenAI could be consistent with the mission”
When OpenAI executives warned that the company would collapse without Altman, Toner responded that destroying OpenAI could be consistent with its safety mission.
This is the ideological heart of the crisis. Toner represented a strand of AI safety thinking that views rapid AI development as existentially dangerous—potentially more dangerous than no AI development at all.
“The executives—it was a meeting with the board members and the executive team—the executives told the board that, if Sam does not return, then OpenAI will be destroyed, and that’s inconsistent with OpenAI’s mission,” Sutskever testified. “And Helen Toner said something to the effect that it is consistent, but I think she said it even more directly than that.”
If you genuinely believed that OpenAI posed risks that outweighed its benefits, then an impending employee revolt was irrelevant. The statement helps explain why the board held firm even as 700+ employees threatened to leave.
5. Miscalculations: One source for everything, an inexperienced board, and cult-like workforce loyalty
Nearly everything in Sutskever’s 52-page memo came from one person: Mira Murati.
He didn’t verify claims with Brad Lightcap, Greg Brockman, or other executives mentioned in the complaints. He trusted Murati completely, and verification “didn’t occur to [him].”
“I fully believed the information that Mira was giving me,” Sutskever said. “In hindsight, I realize that I didn’t know it. But back then, I thought I knew it. But I knew it through secondhand knowledge.”
When asked about the board’s process, Sutskever was blunt about what went wrong.
“One thing I can say is that the process was rushed,” he testified. “I think it was rushed because the board was inexperienced.”
Sutskever also expected OpenAI employees to be indifferent to Altman’s removal.
When 700 of 770 employees signed a letter demanding Altman’s return and threatening to leave for Microsoft, he was genuinely surprised. He’d fundamentally miscalculated workforce loyalty and the board’s isolation from organizational reality.
“I had not expected them to cheer, but I had not expected them to feel strongly either way,” Sutskever said.
Source: https://decrypt.co/347349/inside-deposition-showed-openai-nearly-destroyed-itself


