Safe Superintelligence (SSI), an AI startup co-founded by former OpenAI chief scientist Ilya Sutskever, has reached a $32 billion valuation after raising an additional $2 billion in funding. The company, founded in June 2024, has yet to release a product or disclose technical details about its approach to building ‘safe superintelligence.’ The funding round was led by Greenoaks Capital, with participation from Lightspeed Venture Partners and Andreessen Horowitz. The staggering valuation highlights investor confidence in AI safety as a priority, even for early-stage, product-less ventures.
From OpenAI to Safe Superintelligence
SSI’s founding followed Sutskever’s exit from OpenAI in May 2024, months after he took part in the November 2023 board effort to oust CEO Sam Altman. Alongside co-founders Daniel Gross (formerly of Apple) and Daniel Levy (a former OpenAI researcher), Sutskever aims to build AI systems in which safety is the foundational principle rather than an afterthought. The venture arrives amid escalating debate over AI risks, reflected in reports such as the 2025 International AI Safety Report on dangers posed by frontier AI models.
What Does $32B Say About AI’s Future Priorities?
The valuation raises questions: Why would investors commit billions to a company with no product? The answer lies in the growing emphasis on AI safety as a competitive and regulatory necessity.
Key Factor Behind SSI’s Valuation | Investor Implication
---|---
Founder credibility (Sutskever’s OpenAI legacy) | High confidence in leadership
Rising regulatory focus on AI safety | Governments pushing for stricter safeguards
Scarcity of ‘safety-first’ AI ventures | Few competing peers in the space
According to TechCrunch, SSI’s backers are betting on Sutskever’s ability to translate safety research into scalable systems—a challenge that even OpenAI has publicly acknowledged struggling with.
The funding surge also reflects a broader trend: capital is flowing toward AI safety initiatives as governments worldwide draft stricter regulations. The EU’s AI Act and U.S. executive orders on AI regulation have made compliance a priority, incentivizing startups to embed safety early.
Still, skepticism persists.