Former OpenAI chief scientist Ilya Sutskever has embarked on a new venture, announcing the founding of Safe Superintelligence Inc. (SSI) alongside Daniel Gross and Daniel Levy. The move marks a decisive shift in Sutskever’s career towards prioritizing AI safety.
SSI aims to develop advanced AI systems in which safety advances alongside capabilities. In his announcement, Sutskever emphasized a singular focus on “safe superintelligence,” describing it as the company’s sole goal and product, with no compromise on safety along the way.
The company’s approach contrasts with that of larger AI firms, which often balance innovation against commercial pressures. Sutskever’s departure from OpenAI earlier this year, amid internal conflict, shaped his resolve to pursue a more stable and undistracted path at SSI.
Drawing on lessons from OpenAI, where he co-led the Superalignment team, Sutskever underscores the intertwined challenges of safety and technological advancement. SSI’s business model, designed to insulate long-term goals from short-term commercial pressure, positions it uniquely in the AI landscape.
SSI, with offices in Palo Alto and Tel Aviv, is actively recruiting technical talent. Structured as a for-profit from the outset, the company intends to raise capital to fund its ambitious development goals.
Sutskever’s vision for SSI reflects a dedicated commitment to shaping the future of AI responsibly, setting a course focused squarely on safe and powerful AI.