AGI Strategies

Roman Yampolskiy

University of Louisville professor; argues AI safety is impossible

AI-safety impossibilist on formal grounds: has published papers arguing alignment is undecidable and that superintelligent AI cannot be controlled. Holds the highest publicly stated p(doom) among serious researchers.

Current: Associate Professor of Computer Science, University of Louisville

Profile

expertise

Deep technical

Sustained peer-reviewed contribution to ML, alignment, interpretability, or safety techniques. Could review a frontier paper.

University of Louisville professor. Author of 'AI: Unexplainable, Unpredictable, Uncontrollable' (2024). Long publication record on AI safety, but more theoretical than applied to current frontier systems.

recognition

Field-leading

Widely known inside the AI and AI-safety community. Appears repeatedly in top venues, podcasts, or governance forums. Not a household name to outsiders.

Frequent podcast guest, including on Lex Fridman's show; recognised in safety circles. Less public visibility than the Bostrom/Yudkowsky tier.

vintage

Pre-deep-learning

Active before AlexNet. The existential-risk frame matures (FHI, OpenPhil, EA). Public AI commentary still rare; deep learning not yet dominant.

Active publishing on AI safety from ~2010. Pre-deep-learning frame; his arguments are system-agnostic.

Hand-classified. See the board for the criteria and the full grid.

p(doom)

“p(doom) ≈ 99.99%”
Tweet from Roman Yampolskiy · X/Twitter · 2024-03-13 · direct quote

Strategy positions

Abandon superintelligence · endorses

Reject superintelligence as a goal entirely; narrow AI only

Publicly argues humanity should not build superintelligence at all, on the grounds that control is technically impossible.

Closest strategy neighbours

by Jaccard overlap

Other people whose strategy tags overlap with Roman Yampolskiy's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.

  • Avi Loeb

    shared 1 · J=1.00

    Harvard astrophysicist; Galileo Project director

  • Bill Joy

    shared 1 · J=1.00

    Sun Microsystems co-founder; 'Why the Future Doesn't Need Us'

  • Hans Moravec

    shared 1 · J=1.00

    Robotics pioneer (1948–); 'Mind Children'

  • Samuel Butler

    shared 1 · J=1.00

    Victorian novelist; proto-AI-risk thinker (1835–1902)

  • Richard S. Sutton

    shared 1 · J=0.50

    RL pioneer; 2024 Turing Award recipient
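The J values above are Jaccard indices over strategy-tag sets: the number of shared tags divided by the number of distinct tags across both people. A minimal sketch in Python (the tag names are hypothetical placeholders, not the board's actual tags):

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard index |A ∩ B| / |A ∪ B|; defined as 0.0 for two empty sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Hypothetical tag sets, for illustration only.
p1 = {"abandon-superintelligence"}
p2 = {"abandon-superintelligence"}               # shared 1 of 1 distinct tag
p3 = {"abandon-superintelligence", "other-tag"}  # shared 1 of 2 distinct tags

print(jaccard(p1, p2))  # 1.0 — identical single-tag sets
print(jaccard(p1, p3))  # 0.5 — one shared tag, two distinct tags
```

This is why opposites can appear as close neighbours: two people who reference the same tag score J=1.00 regardless of whether they endorse or reject it.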

Record last updated 2026-04-24.