person

Scott Aaronson
UT Austin computer scientist; ex-OpenAI AI safety visitor
Quantum computing theorist at UT Austin. Took leave in 2022–2024 to work on OpenAI's safety team, developing watermarking technology. Publicly skeptical of 'Yudkowskian' doom framings but engaged with alignment work.
Profile
expertise
Deep technical
Sustained peer-reviewed contribution to ML, alignment, interpretability, or safety techniques. Could review a frontier paper.
UT Austin computer-science professor. Foundational work on quantum complexity. 2022–2024 stint at OpenAI on watermarking and cryptographic safety. Long-running Shtetl-Optimized blog covers AI rigorously.
recognition
Field-leading
Widely known inside the AI and AI-safety community. Appears repeatedly in top venues, podcasts, or governance forums. Not a household name to outsiders.
Major figure in theoretical computer science. Recognised broadly online; less mainstream press than the lab CEOs.
vintage
Pre-deep-learning
Active before AlexNet. The existential-risk frame matures (FHI, OpenPhil, EA). Public AI commentary still rare; deep learning not yet dominant.
MIT PhD 2004. Quantum-complexity foundational work spans 2000s. AI engagement on top of pre-deep-learning theoretical foundations.
Hand-classified. See the board for the criteria and the full grid.
Strategy positions
Alignment first (mixed)
Solve technical alignment before capability thresholds close. Argues alignment is genuinely hard and that doomers are not crazy, but that the productive response is more theoretical work and alignment-focused engineering rather than panic or a pause; works on practical alignment tools such as watermarking.
I'm now persuaded that the alignment problem is real, that there is no royal road to it, and that humanity is in a much worse position than we should be. I am working on it because the alternative is shrugging.
AI safety is finally becoming a field where you can make clear, legible progress.
Closest strategy neighbours
By Jaccard overlap. Other people whose strategy tags overlap with Scott Aaronson's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
Record last updated 2026-04-25.