
Scott Alexander
Astral Codex Ten / Slate Star Codex blogger
Widely read rationalist-adjacent writer whose AI posts have been influential in the EA/rationalist community. Has staked out a moderate-doom position: takes AI risk seriously but argues against full Yudkowskian pessimism.
Profile
expertise
Commentator
Engages publicly on AI without specialised technical or domain credentials: writers, executives commenting outside their lane, public intellectuals.
Psychiatrist; writer of Astral Codex Ten (formerly Slate Star Codex). Significant influence on rationalist AI discourse but no formal AI training.
recognition
Field-leading
Widely known inside the AI and AI-safety community. Appears repeatedly in top venues, podcasts, or governance forums. Not a household name to outsiders.
Vast readership within rationalist/AI circles; controversial NYT profile (2021); little mainstream name recognition.
vintage
Pre-deep-learning
Active before AlexNet. The existential-risk frame matures (FHI, OpenPhil, EA). Public AI commentary still rare; deep learning not yet dominant.
On LessWrong from ~2009; SSC from 2013. His AI-risk frame is rationalist and pre-AlexNet; he engages new systems through that lens.
Hand-classified. See the board for the criteria and the full grid.
p(doom)
33% · 2023-03-14
Why I Am Not As Much Of A Doomer As Some People · Astral Codex Ten
Strategy positions
Existential primacy: mixed
Extinction/disempowerment risk overrides ordinary cost-benefit.
Treats AI risk as serious but rejects certainty-of-doom framing; tends to support alignment research plus governance but is skeptical of a full halt.
I think the probability that AI causes a catastrophe is about 33%. That's not the 95% or higher that some people say, but it's also much higher than the probabilities we accept for other risks.
Closest strategy neighbours
By Jaccard overlap. Other people whose strategy tags overlap with Scott Alexander's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags, as the sketch below illustrates.
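A minimal sketch of that overlap score, assuming each person's strategy tags are a plain set of strings; the tag names and both profiles below are illustrative, not the board's actual data or code.

```python
# Jaccard overlap between two tag sets: |A ∩ B| / |A ∪ B|.
# Scores identity of tags only, not agreement on them.

def jaccard(a: set[str], b: set[str]) -> float:
    """Return |a ∩ b| / |a ∪ b|; 0.0 when both sets are empty."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Hypothetical tag sets for illustration.
alexander = {"existential-primacy", "alignment-research", "governance"}
other = {"existential-primacy", "governance", "full-halt"}

print(jaccard(alexander, other))  # 0.5: 2 shared tags out of 4 total
```

Ranking candidates by this score and taking the top few gives the neighbour list; since the score ignores stance, someone who references the same tags while holding opposite positions on them can still rank as a close neighbour.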
Record last updated 2026-04-24.