Ilya Sutskever, the renowned AI scientist and OpenAI co-founder, announced on Thursday that he will officially assume the role of CEO at Safe Superintelligence (SSI), the AI startup he co-founded in 2024. The change in leadership follows the departure of former CEO Daniel Gross, who left the company on June 29 amid intensifying competition for AI talent.
Sutskever made the announcement in a post on X (formerly Twitter), stating that Gross’ role had been “winding down,” and confirmed that Daniel Levy, another co-founder, will now serve as President of the company. The technical team will report directly to Sutskever as SSI sharpens its focus on its sole stated mission: building safe superintelligence.
Gross’ exit comes in the wake of a massive AI hiring spree led by Meta CEO Mark Zuckerberg, whose company recently invested more than $14 billion in Scale AI. That deal brought Scale’s founder, Alexandr Wang, along with top engineers, into Meta’s newly formed Meta Superintelligence Labs.
Though Gross was reportedly poached by Meta, his name was not included in Zuckerberg’s official announcement of new lab members. Still, his move underscores a broader trend: tech giants aggressively recruiting AI leaders to gain an edge in the superintelligence race.
Meta had also attempted to acquire Safe Superintelligence outright, according to earlier CNBC reports, but Sutskever firmly rebuffed those overtures and reaffirmed SSI’s commitment to remaining an independent organization.
Safe Superintelligence has garnered significant investor attention since its inception. In its latest fundraising round in April, the startup was reportedly valued at $32 billion, reflecting strong confidence in its technical roadmap and vision for safety-focused AI development.
Sutskever emphasized the company's commitment to responsible AI progress, stating:
“You might have heard rumors of companies looking to acquire us. We are flattered by their attention but are focused on seeing our work through. We have the compute, we have the team, and we know what to do. Together we will keep building safe superintelligence.”
His comments signal not just leadership stability but also a strategic refusal to be absorbed into a larger tech ecosystem, a stance increasingly rare in today’s acquisition-heavy AI landscape.
Sutskever brings a wealth of experience to his new role. Before launching SSI, he served as Chief Scientist at OpenAI, where he co-led the Superalignment team with Jan Leike, who later moved to rival firm Anthropic. During his tenure, OpenAI helped define the modern generative AI era with models like GPT and DALL·E.
Now at SSI, Sutskever is once again at the forefront of AI research, this time with an explicit focus on building safe, aligned superintelligence. With a strong technical foundation, a world-class team, and substantial backing, SSI is positioning itself as one of the few credible challengers to AI incumbents like OpenAI, Google DeepMind, Anthropic, and now Meta.
As the global AI arms race accelerates, the story of Safe Superintelligence under Sutskever’s leadership will be one to watch closely.