The San Francisco-based company said late Tuesday that it
“reached an agreement in principle” for co-founder Sam Altman to return as CEO
under a different board of directors.
The board will be led by former Salesforce co-CEO Bret
Taylor, who chaired Twitter’s board before Elon Musk took over the platform
last year. The other members will be former U.S. Treasury Secretary Larry
Summers and Quora CEO Adam D’Angelo.
OpenAI’s previous board, which included D’Angelo, had
refused to give specific reasons for firing Altman, leading to a weekend of
internal conflict at the company and growing outside pressure from the
startup’s investors.
The turmoil also accentuated the differences between Altman
— who’s become the face of generative AI’s rapid commercialization since
ChatGPT’s arrival a year ago — and board members who have expressed deep
reservations about the safety risks posed by AI as it gets more advanced.
“The OpenAI episode shows how fragile the AI ecosystem is
right now, including addressing AI’s risks,” said Johann Laux, an expert at the
Oxford Internet Institute focusing on human oversight of artificial
intelligence.
Before the board was replaced, venture capitalist Vinod
Khosla, a vocal Altman supporter whose firm is an OpenAI investor, wrote in an
opinion column at The Information that board members had set back the
“tremendous benefits” of AI by misapplying their “religion of ‘effective
altruism.’”
Some of OpenAI’s board members over the years have had ties
to effective altruism, the philanthropic social movement that prioritizes
donating to projects that will have the greatest impact on the largest number
of people, including humans in the future.
While many effective altruists believe AI could offer
powerful benefits, they also advocate for mitigating the technology’s potential
risks.
Helping to drive Altman’s return and the installation of a
new board was Microsoft, which has invested billions of dollars in OpenAI and
has rights to its existing technology.
After Altman’s dismissal, the software giant quickly moved
to hire him, as well as another OpenAI co-founder and former president, Greg
Brockman, who quit in protest after Altman’s removal. That emboldened hundreds of OpenAI
employees, who threatened to resign and signed a letter calling for the
board’s resignation and Altman’s return.
One of the four board members who participated in Altman’s
ouster, OpenAI co-founder and chief scientist Ilya Sutskever, later expressed
regret and joined the call for the board’s resignation.
While promising to welcome OpenAI’s fleeing workforce,
Microsoft CEO Satya Nadella also made clear in a series of interviews Monday
that he was open to the possibility of Altman returning to OpenAI as long as
the startup’s governance problems were solved.
“We are encouraged by the changes to the OpenAI board,”
Nadella posted on X late Tuesday. “We believe this is a first essential step on
a path to more stable, well-informed and effective governance.”
In his own post, Altman said that with the new board and
with Satya’s support, he was “looking forward to returning to OpenAI and
building on our strong partnership” with Microsoft.
Gone from the OpenAI board are its only two women: tech
entrepreneur Tasha McCauley and Helen Toner, a policy expert at Georgetown’s
Center for Security and Emerging Technology, both of whom have expressed
concerns about AI safety risks.
“And now, we all get some sleep,” Toner posted on X after
the announcement.
The leadership drama offers a glimpse into how big tech
companies are taking the lead in governing AI and its risks, while governments
scramble to catch up. The European Union is working to finalize the world’s
first comprehensive AI rules.
In the absence of regulations, “companies decide how a
technology is rolled out,” Laux said.
“Regulation and corporate governance sound very
technocratic, but in the end, it’s humans making decisions,” he said,
explaining that’s why it matters so much who’s on a company board or at a
regulatory body.
Co-founded by Altman as a nonprofit with a mission to safely
build AI that outperforms humans and benefits humanity, OpenAI later became a
for-profit business — but one still run by its nonprofit board of directors.
This was not OpenAI’s first experience with executive
turmoil. Past examples include a 2018 falling-out between board co-chairs
Altman and Musk that led to Musk’s exit, and a later exodus of top leaders who
started the competitor Anthropic.
It’s not clear yet if the board’s structure will change with
its new members.
Under the current structure, all profit beyond a certain cap
is supposed to go back to its mission of helping humanity. The board is also
tasked with deciding when AI systems have become so advanced that they are
better than humans “at most economically valuable work.” At that point,
Microsoft’s intellectual property licenses no longer apply.
“We are collaborating to figure out the details,” OpenAI
posted on social media. “Thank you so much for your patience through this.”
Nadella said Brockman, who was OpenAI’s board chairman until
Altman’s firing, also will have a key role to play in ensuring the group
“continues to thrive and build on its mission.”
As for OpenAI’s short-lived interim CEO Emmett Shear, the
second temporary leader in the days since Altman’s ouster, he posted on X that
he was “deeply pleased by this result” after about 72 “very intense hours of
work.”
“Coming into OpenAI, I wasn’t sure what the right path would
be,” wrote Shear, the former head of Twitch. “This was the pathway that
maximized safety alongside doing right by all stakeholders involved. I’m glad
to have been a part of the solution.”