AGI Strategies

Scott Aaronson

UT Austin computer scientist; ex-OpenAI AI safety visitor

Quantum computing theorist at UT Austin. Took a two-year leave (2022–2024) to work on AI safety at OpenAI, developing watermarking for language-model outputs. Publicly skeptical of 'Yudkowskian' doom framings but engaged with alignment work.

Current: Schlumberger Centennial Chair of Computer Science, UT Austin
Past: Research scientist (safety, visiting), OpenAI

Profile

expertise

Deep technical

Sustained peer-reviewed contribution to ML, alignment, interpretability, or safety techniques. Could review a frontier paper.

UT Austin computer-science professor. Foundational work on quantum complexity. 2022–2024 stint at OpenAI on watermarking and cryptographic safety. Long-running Shtetl-Optimized blog covers AI rigorously.

recognition

Field-leading

Widely known inside the AI and AI-safety community. Appears repeatedly in top venues, podcasts, or governance forums. Not a household name to outsiders.

Major figure in theoretical computer science. Recognised broadly online; less mainstream press than the lab CEOs.

vintage

Pre-deep-learning

Active before AlexNet. The existential-risk frame matures (FHI, OpenPhil, EA). Public AI commentary still rare; deep learning not yet dominant.

MIT PhD, 2004. Foundational quantum-complexity work spans the 2000s; his AI engagement builds on these pre-deep-learning theoretical foundations.

Hand-classified. See the board for the criteria and the full grid.

Strategy positions

Alignment first · mixed

Solve technical alignment before capability thresholds close

Argues that alignment is genuinely hard and that doomers are not crazy, but that the productive response is more theoretical work and alignment-focused engineering rather than panic or a pause; works on practical alignment tools such as watermarking.

I'm now persuaded that the alignment problem is real, that there's no royal road to solving it, and that humanity is in a much worse position than we should be. I'm working on it because the alternative is shrugging.
Blog · "Why I'm joining OpenAI" · Shtetl-Optimized · 2022-06 · faithful paraphrase
AI safety is finally becoming a field where you can make clear, legible progress.
Podcast · "Scott Aaronson: Against AI Doomerism" · The Gradient · 2023 · faithful paraphrase

Closest strategy neighbours

By Jaccard overlap

Other people whose strategy tags overlap with Scott Aaronson's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
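The J values below can be read as Jaccard similarity over strategy-tag sets: the size of the intersection divided by the size of the union, so J=1.00 with one shared tag means both people carry exactly that one tag. A minimal sketch (the tag names are hypothetical, not the site's actual tag vocabulary):

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two tag sets: |A ∩ B| / |A ∪ B|."""
    if not a and not b:
        return 0.0  # avoid division by zero for two empty sets
    return len(a & b) / len(a | b)

# Both people carry only a single, identical tag, so overlap is total
# regardless of whether their stances on that tag agree.
aaronson_tags = {"alignment-first"}
neighbour_tags = {"alignment-first"}
print(jaccard(aaronson_tags, neighbour_tags))  # 1.0
```

This is why opposites can appear as close neighbours: the metric compares which tags are present, not the stance taken on them.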

  • Aaron Courville

    shared 1 · J=1.00

    Université de Montréal; Deep Learning textbook co-author

  • Adam Jermyn

    shared 1 · J=1.00

    Anthropic; previously astrophysics

  • Adam Kalai

    shared 1 · J=1.00

    Microsoft Research; AI fairness and safety

  • Agnes Callard

    shared 1 · J=1.00

    University of Chicago philosopher; aspiration theorist

  • Ajeya Cotra

    shared 1 · J=1.00

    Open Philanthropy researcher; 'biological anchors' forecaster

  • Alan Turing

    shared 1 · J=1.00

    Founder of theoretical computer science (1912–1954)

Record last updated 2026-04-25.