person

Stuart Ritchie
Psychologist and science journalist; AI-risk skeptic
KCL psychologist and author of Science Fictions. Publicly skeptical of high-confidence existential risk framings, arguing the base-rate evidence for AI-caused extinction is thin.
Profile
expertise
External-domain expert
Recognised expert outside AI (philosophy, economics, biology, journalism) who weighs in on AI consequences from that vantage.
Psychologist. 'Science Fictions' (2020) on the replication crisis. Engages AI from a psychometrics/science-quality angle.
recognition
Established
Reliable, recognised voice within their specific subfield. Cited and invited but not central to general AI discourse.
Recognised in the academic-psychology and replication-crisis communities.
vintage
Post-ChatGPT
Entered the AI strategy debate in or after 2023; ChatGPT was already public when their voice became influential. Often shaped by the Pause letter, AISIs, and AI 2027.
'Science Fictions' (2020) was about the replication crisis; his AI commentary came post-ChatGPT, via Substack.
Hand-classified. See the board for the criteria and the full grid.
Strategy positions
AI skeptic — AGI risk narratives overstated; real harms are mundane and current.
Mixed — treats the existential-risk literature sympathetically but pushes back on specific numerical claims.
I take AI risk seriously, but I'm not sure the quantitative arguments for high p(doom) are as rigorous as they're presented to be.
Closest strategy neighbours
By Jaccard overlap. Other people whose strategy tags overlap with Stuart Ritchie's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
Record last updated 2026-04-24.