AGI Strategies

person

Scott Alexander

Astral Codex Ten / Slate Star Codex blogger

Widely read rationalist-adjacent writer whose AI posts have been influential in the EA/rationalist community. Has staked out a moderate-doom position: takes AI risk seriously but argues against full Yudkowskian pessimism.

current: Author, Astral Codex Ten

Profile

expertise

Commentator

Engages publicly on AI without specialised technical or domain credentials: writers and executives commenting outside their lane, public intellectuals.

Psychiatrist; writer of Astral Codex Ten (formerly Slate Star Codex). Significant influence on rationalist AI discourse but no formal AI training.

recognition

Field-leading

Widely known inside the AI and AI-safety community. Appears repeatedly in top venues, podcasts, or governance forums. Not a household name to outsiders.

Vast readership within rationalist/AI circles; controversial NYT profile (2021); limited mainstream name recognition.

vintage

Pre-deep-learning

Active before AlexNet, when the existential-risk frame was maturing (FHI, OpenPhil, EA); public AI commentary was still rare and deep learning not yet dominant.

Active on LessWrong from ~2009 and Slate Star Codex from 2013. His AI-risk frame is rationalist and pre-AlexNet; he engages new systems through that lens.

Hand-classified. See the board for the criteria and the full grid.

p(doom): ~33% (per the 2023 post quoted below)

Strategy positions

Existential primacy · mixed

Extinction/disempowerment risk overrides ordinary cost-benefit

Treats AI risk as serious but rejects certainty-of-doom framing; tends to support alignment research plus governance but is skeptical of a full halt.

I think the probability that AI causes a catastrophe is about 33%. That's not the 95% or higher that some people say, but it's also much higher than the probabilities we accept for other risks.
blog · Why I Am Not As Much Of A Doomer As Some People · Astral Codex Ten · 2023-03-14 · faithful paraphrase

Closest strategy neighbours

by Jaccard overlap

Other people whose strategy tags overlap with Scott Alexander's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags. A short sketch of how the score is computed follows the list.

  • Alan Robock

    shared 1 · J=1.00

    Rutgers climate scientist; nuclear winter researcher

  • Andy Jones

    shared 1 · J=1.00

    Anthropic researcher; scaling inference laws

  • Avital Balwit

    shared 1 · J=1.00

    Anthropic communications lead; public-facing AI safety voice

  • Bill McKibben

    shared 1 · J=1.00

    Environmental writer; Middlebury scholar

  • Cade Metz

    shared 1 · J=1.00

    NYT AI reporter; Genius Makers author

  • Clay Graubard

    shared 1 · J=1.00

    Forecaster; RAND and Good Judgment contributor
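
The J values above are plain Jaccard similarity over strategy-tag sets: shared tags divided by total distinct tags, so a single shared tag with no others gives J = 1/1 = 1.00. A minimal sketch of that calculation, assuming each record exposes its tags as a set of identifiers (the `jaccard` helper and the tag names below are hypothetical, not the board's actual schema):

```python
def jaccard(tags_a: set[str], tags_b: set[str]) -> float:
    """Jaccard similarity over tag sets: |A ∩ B| / |A ∪ B|."""
    if not tags_a and not tags_b:
        return 0.0  # avoid division by zero when both sets are empty
    return len(tags_a & tags_b) / len(tags_a | tags_b)

# Hypothetical records: overlap is on tag identity only, so two people who
# both carry a single tag score J = 1.00 even if their stances differ.
alexander = {"existential-primacy"}
robock = {"existential-primacy"}
print(f"shared {len(alexander & robock)} · J={jaccard(alexander, robock):.2f}")
# -> shared 1 · J=1.00
```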

Record last updated 2026-04-24.