person

Richard Ngo
AI safety researcher; 'AGI safety from first principles'
Researcher who moved from DeepMind to OpenAI's governance team, then to independent work. Author of AGI safety from first principles (2020), one of the most cited consolidations of the technical case for AI risk.
Profile
expertise
Deep technical
Sustained peer-reviewed contribution to ML, alignment, interpretability, or safety techniques. Could review a frontier paper.
Former DeepMind / OpenAI safety researcher. AGI Safety Fundamentals course author. Active publisher on alignment theory.
recognition
Established
Reliable, recognised voice within their specific subfield. Cited and invited but not central to general AI discourse.
Universally read within the field; lower public profile.
vintage
Scaling era
Worldview formed during GPT-2/3, scaling laws, Anthropic's founding. Pre-ChatGPT but post-deep-learning. The 'scale is all you need' debate is live.
AGI Safety Fundamentals course, 2021. DeepMind/OpenAI safety roles from ~2018. Career anchors in the scaling era.
Hand-classified. See the board for the criteria and the full grid.
Strategy positions
Alignment first (endorses)
Solve technical alignment before capability thresholds close
Presents alignment as the most compelling lens for existential risk: by default, competent goal-directed systems pursue instrumentally convergent goals that drift away from human values.
The development of AGI may be one of the most consequential events in history, with the potential to either drastically increase or decrease the chances that humanity survives and flourishes.
Closest strategy neighbours
By Jaccard overlap
Other people whose strategy tags overlap with Richard Ngo's. Overlap is measured on tag identity, not stance, so people with opposing views can show up if they reference the same tags.
Record last updated 2026-04-24.