Kyle Orland – Dec 5, 2023 9:31 pm UTC
Sam Altman, president of Y Combinator and co-chairman of OpenAI, seen here in July 2016.
Drew Angerer / Getty Images News
When Sam Altman was unexpectedly removed as CEO of OpenAI, only to be reinstated days later, the company's board publicly justified the move by stating Altman "was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities." In the days since, there has been some reporting on possible reasons for the attempted board coup, but little follow-up on what, specifically, Altman was supposedly less than "candid" about.
Now, in a comprehensive piece for The New Yorker, author Charles Duhigg, who was embedded inside OpenAI for months on a separate story, suggests that some board members found Altman "manipulative and conniving" and took particular issue with the way Altman allegedly tried to manipulate the board into firing fellow board member Helen Toner.
Board “manipulation” or “ham-fisted” maneuvering?
Toner, who serves as director of strategy and foundational research grants at Georgetown University's Center for Security and Emerging Technology, apparently drew Altman's unfavorable attention by co-writing a paper on different ways AI companies can "signal" their commitment to safety through "costly" words and actions. In the paper, Toner compares OpenAI's public launch of ChatGPT in late 2022 with Anthropic's "deliberate deci[sion] not to productize its technology in order to avoid stoking the flames of AI hype."
She also wrote that, “by delaying the release of [Anthropic chatbot] Claude until another company put out a similarly capable product, Anthropic was demonstrating its willingness to avoid exactly the kind of frenzied corner-cutting that the release of ChatGPT seemed to encourage.”
Although Toner reportedly apologized to the board for the paper, Duhigg writes that Altman nevertheless began approaching individual board members to advocate her removal. In those talks, Duhigg says, Altman "misrepresented" how other board members felt about the proposed removal, "playing them off against each other by lying about what other people thought," according to one source "familiar with the board's discussions." A separate "person familiar with Altman's perspective" suggests instead that Altman's actions were simply a "ham-fisted" attempt to remove Toner, not manipulation.
That telling would align with OpenAI COO Brad Lightcap's statement soon after the firing that the decision "was not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices. This was a breakdown in communication between Sam and the board." It may also explain why the board was unwilling to elaborate publicly on arcane matters of board politics for which there was little hard evidence.
At the same time, Duhigg’s piece also lends some credence to the idea that the OpenAI board felt it needed to be able to hold Altman “accountable” to fulfill its mission to “ensure AI benefits all of humanity,” as one unnamed source put it.