Ilya Sutskever, co-founder and chief scientist of OpenAI, speaks at Tel Aviv University in Tel Aviv, Israel, June 5, 2023.

Ilya Sutskever, a co-founder of OpenAI who left the artificial intelligence startup in May, has raised $1 billion for his new venture, Safe Superintelligence (SSI).

The company said in a post on X that its investors include Andreessen Horowitz, Sequoia Capital, DST Global and SV Angel, as well as NFDG, an investment partnership co-run by SSI executive Daniel Gross.

"We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product," Sutskever wrote on X in May to announce the new venture.

Sutskever served as OpenAI's chief scientist and co-led the Superalignment team alongside Jan Leike, who departed in May to join rival Anthropic.

Following their exits, OpenAI disbanded the team just a year after it was formed, and some of its members were reassigned to other groups within the company, a person familiar with the matter told CNBC at the time.

Leike wrote in a post on X at the time that OpenAI's "safety culture and processes have taken a backseat to shiny products."

Sutskever co-founded SSI with Daniel Gross, who previously led Apple's AI and search efforts, and Daniel Levy, a former OpenAI researcher. The company has offices in Palo Alto, California, and Tel Aviv, Israel.

"SSI is our mission, our name, and our entire product roadmap, because it is our sole focus," the company posted on X. "Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."

Sutskever was among the OpenAI board members who moved to briefly oust co-founder and CEO Sam Altman in November.

The board said Altman had not been "consistently candid in his communications with the board." The situation soon proved to be more complicated.

The Wall Street Journal and other media outlets reported that Sutskever was focused on ensuring that artificial intelligence would not harm humans, while others, including Altman, were more eager to push ahead with delivering new technology.

Nearly all of OpenAI's employees signed an open letter saying they would resign in response to the board's decision.

Shortly thereafter, Altman was reinstated. Following the abrupt ouster and his swift return, Sutskever publicly apologized for his role in the episode.

"I deeply regret my participation in the board's actions," Sutskever wrote in a post on X on Nov. 20. "I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company."