person

Nate Soares
President of MIRI; co-author of 'If Anyone Builds It, Everyone Dies'
Runs the Machine Intelligence Research Institute. Co-authored, with Eliezer Yudkowsky, the 2025 NYT bestseller arguing that superhuman AI kills everyone under default conditions.
Profile
expertise
Deep technical
Sustained peer-reviewed contribution to ML, alignment, interpretability, or safety techniques. Could review a frontier paper.
President of MIRI. Long body of agent-foundations work; co-author of 'If Anyone Builds It, Everyone Dies' (2025) with Yudkowsky.
recognition
Field-leading
Widely known inside the AI and AI-safety community. Appears repeatedly in top venues, podcasts, or governance forums. Not a household name to outsiders.
Book launch generated mainstream coverage. Recognised in safety circles.
vintage
Pre-deep-learning
Active before AlexNet. The existential-risk frame matures (FHI, OpenPhil, EA). Public AI commentary still rare; deep learning not yet dominant.
At MIRI since 2014, now President. His frame inherits Yudkowsky-era priors; his work adapts the pre-deep-learning rationalist tradition forward.
Hand-classified. See the board for the criteria and the full grid.
Strategy positions
Pause (endorses)
Halt frontier training until alignment catches up
Argues the only sane response to current AI development is an unconditional global halt until alignment is solved.
Whatever external behaviors we train AIs to exhibit, we will almost certainly fail to give them internal drives that remain aligned with human well-being outside the training environment.
Closest strategy neighbours
by Jaccard overlap
Other people whose strategy tags overlap with Nate Soares's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
Record last updated 2026-04-24.