
Roman Yampolskiy
University of Louisville professor; argues AI safety is impossible
AI-safety impossibilist on formal grounds: has published papers arguing that alignment is undecidable and that superintelligent AI cannot be controlled. Has stated the highest p(doom) publicly declared by any serious researcher.
Profile
expertise
Deep technical
Sustained peer-reviewed contribution to ML, alignment, interpretability, or safety techniques. Could review a frontier paper.
University of Louisville professor. Author of 'AI: Unexplainable, Unpredictable, Uncontrollable' (2024). Long publication record on AI safety, though it is more theoretical than applied to current frontier systems.
recognition
Field-leading
Widely known inside the AI and AI-safety community. Appears repeatedly in top venues, podcasts, or governance forums. Not a household name to outsiders.
Frequent podcast guest (including Lex Fridman's show); recognised in safety circles. Less public visibility than the Bostrom/Yudkowsky tier.
vintage
Pre-deep-learning
Active before AlexNet. The existential-risk frame matures (FHI, OpenPhil, EA). Public AI commentary still rare; deep learning not yet dominant.
Has published on AI safety since ~2010. Pre-deep-learning frame; his arguments are system-agnostic.
Hand-classified. See the board for the criteria and the full grid.
p(doom)
- 100% · 2024-03-13
Definition used: Explicit Twitter statement.
Tweet from Roman Yampolskiy · X/Twitter
Strategy positions
Abandon superintelligence · endorses
Reject superintelligence as a goal entirely; narrow AI only.
Publicly argues humanity should not build superintelligence at all, on the grounds that control is technically impossible.
“p(doom) ≈ 99.99%”
Closest strategy neighbours
By Jaccard overlap. Other people whose strategy tags overlap with Roman Yampolskiy's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
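A minimal sketch of how such a neighbour ranking could be computed, assuming each person's strategy positions are stored as a set of tag strings; the tag names and people below are hypothetical placeholders, not taken from this record.

```python
def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard overlap: |A ∩ B| / |A ∪ B|, with 0.0 when both sets are empty."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Hypothetical tag sets for illustration only.
tags = {
    "Roman Yampolskiy": {"abandon-superintelligence"},
    "Person A": {"abandon-superintelligence", "pause"},
    "Person B": {"pause", "compute-governance"},
}

target = tags["Roman Yampolskiy"]
neighbours = sorted(
    ((name, jaccard(target, t)) for name, t in tags.items()
     if name != "Roman Yampolskiy"),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in neighbours:
    print(f"{name}: {score:.2f}")
```

Because the comparison is on tag identity alone, someone who references the same tag while opposing it would still score as a neighbour, which is the caveat noted above.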
Record last updated 2026-04-24.