AGI Strategies

Ilya Sutskever

OpenAI co-founder; now CEO of Safe Superintelligence Inc (SSI)

Co-led GPT-era scaling at OpenAI, took part in the November 2023 board ouster of Sam Altman over alleged safety concerns, then left in 2024 to found Safe Superintelligence Inc as a single-product lab focused explicitly on aligned superintelligence.

Current: CEO and Co-founder, Safe Superintelligence Inc (SSI)
Past: Co-founder and Chief Scientist, OpenAI

Profile

expertise

Frontier builder

Currently or recently led training, architecture, or safety work on a frontier model. Hands on the loss curve.

Co-author of AlexNet (2012). Co-founder and former Chief Scientist of OpenAI; co-led the Superalignment team. Co-founded Safe Superintelligence Inc. (2024). Hands-on technical lead on most major OpenAI training runs through GPT-4.

recognition

Household name

Name recognition outside the AI/CS community. Featured by mainstream press, a Wikipedia page in many languages, a published bestseller, or holds a position the lay public knows.

Featured by the NYT, The Atlantic, and podcasts. Central character in the November 2023 OpenAI board episode; name-recognised well beyond the field.

vintage

Deep-learning rise

Came up post-AlexNet. ImageNet, AlphaGo, transformer paper. DeepMind, Google Brain, FAIR establish the modern lab template.

Co-author of AlexNet (2012), the paper that opened the era, and of sequence-to-sequence learning (2014). His career is the deep-learning era.

Hand-classified. See the board for the criteria and the full grid.

Strategy positions

Existential primacy · endorses

Extinction/disempowerment risk overrides ordinary cost-benefit

Signatory to the Statement on AI Risk.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Race to aligned SI · endorses

Build aligned superintelligence first, before adversaries

Founded SSI on the explicit thesis that building safe superintelligence is one technical problem to be solved in a single push, insulated from commercial product pressure.

We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product.
Safe Superintelligence Inc. launch announcement · Safe Superintelligence Inc · 2024-06-19 · faithful paraphrase

Closest strategy neighbours

by Jaccard overlap

Other people whose strategy tags overlap with Ilya Sutskever's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
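The J values below are plain Jaccard similarity over strategy-tag sets: shared tags divided by the union of both people's tags. A minimal sketch of the computation (the tag names here are hypothetical placeholders, not the board's actual tag identifiers):

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |A ∩ B| / |A ∪ B|."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical tag sets: two tags for Sutskever, three for a neighbour.
sutskever = {"existential-primacy", "race-to-aligned-si"}
neighbour = {"existential-primacy", "race-to-aligned-si", "other-tag"}

print(round(jaccard(sutskever, neighbour), 2))  # 2 shared / 3 total -> 0.67
```

This matches the "shared 2 · J=0.67" entry: overlap counts tag identity only, so two people who *oppose* each other on the same tag still score as neighbours.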

  • Dario Amodei

    shared 2 · J=0.67

    CEO of Anthropic; 'Machines of Loving Grace' author

  • Alan Robock

    shared 1 · J=0.50

    Rutgers climate scientist; nuclear winter researcher

  • Alex Karp

    shared 1 · J=0.50

    CEO of Palantir

  • Alex Wang

    shared 1 · J=0.50

    Founder of Scale AI; data infrastructure for frontier models

  • Andy Jones

    shared 1 · J=0.50

    Anthropic researcher; scaling inference laws

  • Avital Balwit

    shared 1 · J=0.50

    Anthropic communications lead; public-facing AI safety voice

Record last updated 2026-04-24.