Over the past year, Sam Altman led OpenAI to the tech industry’s adult table. Thanks to its wildly popular ChatGPT chatbot, the San Francisco startup was at the center of an artificial intelligence boom, and Mr. Altman, OpenAI’s chief executive, had become one of the most recognizable people in tech.
But this success caused tensions within the company. Ilya Sutskever, a respected AI researcher who co-founded OpenAI with Mr. Altman and nine others, grew increasingly concerned that OpenAI’s technology could be dangerous and that Mr. Altman was not paying enough attention to that risk, according to three people familiar with his thinking. Mr. Sutskever, a member of the company’s board of directors, also objected to what he saw as a diminished role at the company, according to two of the people.
This conflict between rapid growth and AI safety came to the fore on Friday afternoon, when Mr. Altman was ousted from his post by four of OpenAI’s six board members, led by Mr. Sutskever. The move shocked OpenAI employees and the rest of the tech industry, including Microsoft, which has invested $13 billion in the company. Some in the industry compared the split to the moment Steve Jobs was forced out of Apple in 1985.
The ouster of Mr. Altman, 38, drew attention to a long-standing divide in the AI community between those who believe AI is the greatest business opportunity of a generation and others who worry that moving too fast could be dangerous. And the ouster showed how a philosophical movement built around the fear of AI had become an essential part of tech culture.
Since the release of ChatGPT almost a year ago, artificial intelligence has captured the public imagination, with hopes that it could be used for important work like drug research or helping to teach children. But some AI scientists and policy leaders worry about its risks, such as jobs being automated away or autonomous warfare escaping human control.
The fear that AI researchers will build something dangerous is a fundamental part of OpenAI’s culture. Its founders believed that because they understood these risks, they were the right people to build it.
The OpenAI board did not give a specific reason for expelling Mr. Altman, other than saying in a blog post that it did not believe he was communicating honestly with them. OpenAI employees were told Saturday morning that his dismissal had nothing to do with “malfeasance or anything related to our financial, business, safety or security/privacy practices,” according to a message seen by The New York Times.
Greg Brockman, another co-founder and president of the company, resigned in protest Friday evening. So did OpenAI’s director of research. By Saturday morning, the company was in chaos, according to a half-dozen current and former employees, and its roughly 700 employees were struggling to understand why the board had made its decision.
“I’m sure you’re all feeling confusion, sadness, and maybe a little fear,” Brad Lightcap, OpenAI’s chief operating officer, said in a memo to OpenAI employees. “We are fully focused on dealing with this issue, seeking a solution and clarity, and getting back to work.”
Mr. Altman was invited to join a video board meeting at noon Friday in San Francisco. There, Mr. Sutskever, 37, read from a script that closely resembled the blog post the company published minutes later, according to a person familiar with the matter. The post said that Mr. Altman “was not always candid in his communications with the board, which hindered its ability to carry out its responsibilities.”
But in the hours that followed, OpenAI employees and others focused not only on what Mr. Altman might have done, but also on how the San Francisco startup is structured and on the extreme views about the dangers of AI that have been woven into the company’s work since it was established in 2015.
Mr. Sutskever and Mr. Altman could not be reached for comment on Saturday.
In recent weeks, Jakob Pachocki, who helped oversee GPT-4, the technology at the heart of ChatGPT, was promoted to director of research at the company. Previously subordinate to Mr. Sutskever, he was elevated to a position alongside him, according to two people familiar with the matter.
Mr. Pachocki left the company Friday evening, the sources said, shortly after Mr. Brockman. Earlier in the day, OpenAI said Mr. Brockman had been removed as chairman of the board and would report to the new interim chief executive, Mira Murati. Other allies of Mr. Altman, including two senior researchers, Szymon Sidor and Aleksander Madry, also left the company.
Mr. Brockman said in a post on X, formerly Twitter, that even though he was chairman of the board, he did not attend the board meeting at which Mr. Altman was ousted. That left Mr. Sutskever and three other board members: Adam D’Angelo, chief executive of the question-and-answer site Quora; Tasha McCauley, senior associate scientist at the RAND Corporation; and Helen Toner, director of strategy and basic research grants at the Center for Security and Emerging Technology at Georgetown University.
They could not be reached for comment Saturday.
Ms. McCauley and Ms. Toner have ties to the rationalist and effective altruist movements, a community deeply concerned that AI could one day destroy humanity. Today’s AI technology cannot destroy humanity. But this community believes that as the technology grows increasingly powerful, these dangers will arise.
In 2021, a researcher named Dario Amodei, who also has ties to this community, and around 15 other OpenAI employees left the company to start a new AI company called Anthropic.
Mr. Sutskever was increasingly aligned with these beliefs. Born in the Soviet Union, he spent his formative years in Israel and emigrated to Canada as a teenager. As an undergraduate at the University of Toronto, he helped create a breakthrough in an AI technology called neural networks.
In 2015, Mr. Sutskever left his job at Google and helped found OpenAI alongside Mr. Altman, Mr. Brockman and Tesla Chief Executive Elon Musk. They built the lab as a nonprofit, saying that unlike Google and other companies, it would not be driven by commercial incentives. They set out to build what’s called artificial general intelligence, or AGI, a machine that can do anything the brain can do.
Mr. Altman transformed OpenAI into a for-profit company in 2018 and negotiated a billion-dollar investment from Microsoft. Such huge sums of money are essential to creating technologies like GPT-4, launched earlier this year. Since its initial investment, Microsoft has invested an additional $12 billion into the company.
The company was still governed by a nonprofit board of directors. Investors like Microsoft receive profits from OpenAI, but their profits are capped. Any money over the cap is donated to the nonprofit.
Seeing the power of GPT-4, Mr. Sutskever helped create a new Super Alignment team within the company that would explore ways to ensure that future versions of the technology would do no harm.
Mr. Altman was open to these concerns, but he also wanted OpenAI to stay ahead of its much larger competitors. In late September, Mr. Altman traveled to the Middle East for a meeting with investors, according to two people familiar with the matter. He sought up to $1 billion in funding from SoftBank, the Japanese technology investor led by Masayoshi Son, for a possible OpenAI venture that would build a hardware device for running AI technologies like ChatGPT.
OpenAI is also in talks for “tender offer” financing that would allow employees to cash in company shares. The deal would value OpenAI at more than $80 billion, nearly triple its value about six months ago.
But the company’s success appears to have only heightened fears that something could go wrong with AI.
“It doesn’t seem at all implausible that we have computers – data centers – that are much smarter than humans,” Sutskever said during a Nov. 2 podcast. “What would such AI do? I don’t know.”
Kevin Roose and Tripp Mickle contributed reporting.