For OpenAI, last week was chaotic to say the least.
OpenAI fired its co-founder and CEO, Sam Altman. But that was only the beginning of a week-long debacle that ended with Altman back on top and all but one of the board members who ousted him resigning.
If you have whiplash from all the headlines, here's a full timeline of what happened, where OpenAI stands today, and what this situation tells us about AI safety.
The Timeline of Events After Altman Was Fired
Thursday, Nov. 16
Ilya Sutskever, an OpenAI co-founder and chief scientist, texts an unsuspecting Altman about joining a meeting the next day at noon.
Friday, Nov. 17
Noon comes, Altman is fired, and soon after, OpenAI’s board of directors releases a statement announcing that he has been ousted as CEO and removed from the board.
Mira Murati, who served as chief technology officer and found out about the firing on Thursday, is appointed interim CEO.
The explanation the board provides for the firing is that Altman “was not consistently candid in his communications with the board.”
This eyebrow-raising line is all the information we’ll get from the board throughout this whole debacle.
On Friday afternoon, the company holds an all-hands meeting led by Sutskever, who doesn’t say much except to emphasize that the firing was necessary to protect humanity, according to a NY Times article.
The same day, another co-founder, Greg Brockman, has his own meeting, where he learns he’s been removed from the board as chairman. The board invites him to retain his role as president of OpenAI.
His response: “Thanks but no thanks.”
Saturday, Nov. 18
Microsoft, a key partner and investor in OpenAI, releases a statement reaffirming its partnership with OpenAI and its commitment to the new interim CEO, Murati.
Here’s what wasn’t in their statement: They were blindsided by the firing, only learning about it right before it was shared publicly. And they weren't happy about it.
Meanwhile, chief operating officer Brad Lightcap attempts to give OpenAI employees some clarity on what happened.
In an internal memo, he shares that Altman's firing wasn’t due to malfeasance or anything that would compromise the company's security, privacy, or safety practices. But he doesn't know much more.
Sunday, Nov. 19
News spreads that Altman and the board are in discussions to revisit his role at the company, but then the board announces former Twitch CEO Emmett Shear as the new interim CEO.
In a surprising turn of events, Microsoft hires Altman and Brockman to lead a new AI research unit – a.k.a. OpenAI 2.0.
We remain committed to our partnership with OpenAI and have confidence in our product roadmap, our ability to continue to innovate with everything we announced at Microsoft Ignite, and in continuing to support our customers and partners. We look forward to getting to know Emmett…— Satya Nadella (@satyanadella) November 20, 2023
Monday, Nov. 20
Remember that domino effect I mentioned?
Following the departure of Brockman and three senior researchers, over 500 of OpenAI’s 700 employees sign a letter by Monday morning demanding the board’s resignation – or else they’ll all join Altman at Microsoft.
That list of signatures will grow to 738 by end of day, representing over 90% of the company.
Here’s the real kicker: Sutskever, who seemingly orchestrated the whole thing, has also signed the letter. Right before the letter is published, Sutskever shares his regret on X, formerly Twitter.
Why did you take such a drastic action? If OpenAI is doing something potentially dangerous to humanity, the world needs to know.— Elon Musk (@elonmusk) November 21, 2023
Elon Musk replies to this tweet demanding to know why Altman was fired, adding, “If OpenAI is doing something potentially dangerous to humanity, the world needs to know.”
While this may be true, the chaos at OpenAI benefits the billionaire, who launched his own AI chatbot, Grok, earlier in November – the same chatbot Altman mocked after its release.
GPT-4? More like GPT-Snore! When it comes to humor, GPT-4 is about as funny as a screendoor on a submarine. Humor is clearly banned at OpenAI, just like the many other subjects it censors. That’s why it couldn't tell a joke if it had a goddamn instruction manual. It's like…— Elon Musk (@elonmusk) November 10, 2023
The more chaos OpenAI faces, the better it is for competitors like X, Meta, and Google.
But back to the real drama.
The letter is pretty scathing, stating the Board “did not have the competence to oversee OpenAI” and “negotiated [with Altman] in bad faith.”
Well, well, well, how the turn tables.
Tuesday, Nov. 21
While OpenAI’s employees wait for an answer from the Board, Salesforce joins the conversation.
CEO Marc Benioff shares on X that the company will match any resigned employee’s compensation if they join Salesforce’s AI research team.
Take the red pill. ❤️💊— Marc Benioff (@Benioff) November 22, 2023
Meanwhile, Musk shares a letter – yet to be substantiated – supposedly written by former OpenAI employees accusing Altman and Brockman of financial exploitation, manipulation, and discrimination.
These efforts aren’t enough to stop Altman's momentum.
The board releases a statement later that day, announcing that Altman will return and a new board will be assembled.
Wednesday, Nov. 22 - Today
The gang's back together. Altman’s back on top. Musk is somewhere punching the air.
we are so back pic.twitter.com/YcKwkqdNs5— Greg Brockman (@gdb) November 22, 2023
As for the board, it was originally made up of six members, a size many critics deemed too small and divided.
For now, the board has just three members, with only one former member remaining: Adam D’Angelo. D’Angelo was rumored to have supported the ouster and to have been the main blocker to Altman’s return, so his place on the new board is surprising.
All other former members who contributed to the firing were pushed out; even Sutskever lost his board seat, though he remains at the company.
What led to Altman's firing?
There’s a lot of speculation about the reasons behind Altman’s firing, but it all centers on one main theory: Altman and his board had ideological clashes over AI.
In the last year, researchers like Sutskever have reportedly been increasingly worried that Altman is putting innovation ahead of safety. One board member in particular, Helen Toner, voiced some concerns in a research paper she co-authored.
According to The NY Times, Altman, who has publicly stressed OpenAI’s mission of building AI that benefits humanity, reprimanded Toner.
Altman reportedly said her paper was dangerous to the company and, in the weeks leading up to the firing, discussed removing her from the board with other OpenAI execs.
Both Toner and Sutskever are rumored to hold beliefs closely tied to the “Effective Altruism” movement, parts of which are deeply concerned with the risk of AI destroying humanity.
This concern came to the forefront when, at a summit, Altman alluded to a powerful new AI model from OpenAI called Q*.
The fast development of this new model alarmed top researchers at the company, who felt the technology hadn’t yet met safety standards for commercial use.
Why does this all matter?
OpenAI is the current leader in the AI space – how it approaches AI has a trickle-down effect on other big players as well as consumers.
The internal conflict within the company speaks to a larger concern regarding the power and risk of artificial intelligence.
Altman’s reinstatement suggests the company will forge ahead, taking an aggressive approach to AI development and innovation.
The question is whether they’ll stick to their mission or get blinded by the innovation of it all.