AGI Strategies

Stuart Armstrong

Aligned AI co-founder; ex-FHI; value-extrapolation approach

Philosopher and AI safety researcher who spent over a decade at the Future of Humanity Institute. Co-founded Aligned AI; his research centres on value extrapolation, the hypothesis that learning to extend human values to new contexts is both necessary and nearly sufficient for alignment.

current Co-founder and CEO, Aligned AI
past Senior Research Fellow, Future of Humanity Institute, Oxford

Profile

expertise

Deep technical

Sustained peer-reviewed contribution to ML, alignment, interpretability, or safety techniques. Could review a frontier paper.

Co-founded Aligned AI (now Aligned). Long FHI safety publication record on value learning, corrigibility, and utility indifference.

recognition

Established

Reliable, recognised voice within their specific subfield. Cited and invited but not central to general AI discourse.

Recognised within the alignment field; little mainstream profile.

vintage

Pre-deep-learning

Active before AlexNet. The existential-risk frame matures (FHI, OpenPhil, EA). Public AI commentary still rare; deep learning not yet dominant.

FHI from 2010. Long body of agent-foundations safety work pre-AlexNet; Aligned AI is a continuation of that frame.

Hand-classified. See the board for the criteria and the full grid.

Strategy positions

Alignment first · endorses

Solve technical alignment before capability thresholds close

Argues alignment is solvable through value-extrapolation techniques; publicly optimistic about the tractability of the problem after a decade of theoretical AI safety research.

“The challenge of getting AIs to follow human values not only must be solved, but can be solved, and will be solved.”
article · We're Aligned AI, we're aiming to align AI · EA Forum · 2022 · direct quote
Humans are weak agents in a strong sense: we can describe the world, but not always our values. Alignment has to work with that.
article · Aligned AI, Research overview · Aligned AI · 2022 · loose paraphrase

Closest strategy neighbours

by jaccard overlap

Other people whose strategy tags overlap with Stuart Armstrong's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
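The overlap scores below can be read off a standard Jaccard index over tag sets, J = |A ∩ B| / |A ∪ B|; when two people each carry a single identical tag, shared = 1 and J = 1.00. A minimal sketch, with hypothetical tag names (the site's actual tag vocabulary is not shown here):

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two tag sets; defined as 0.0 when both are empty."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical single-tag profiles: one shared tag, no others.
armstrong = {"alignment-first"}
neighbour = {"alignment-first"}
print(jaccard(armstrong, neighbour))  # → 1.0
```

Note that J=1.00 only says the tag *sets* coincide, not the stances taken on those tags, which is why opposing positions can appear as close neighbours.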

  • Aaron Courville

    shared 1 · J=1.00

    Université de Montréal; Deep Learning textbook co-author

  • Adam Jermyn

    shared 1 · J=1.00

    Anthropic; previously astrophysics

  • Adam Kalai

    shared 1 · J=1.00

    Microsoft Research; AI fairness and safety

  • Agnes Callard

    shared 1 · J=1.00

    University of Chicago philosopher; aspiration theorist

  • Ajeya Cotra

    shared 1 · J=1.00

    Open Philanthropy researcher; 'biological anchors' forecaster

  • Alan Turing

    shared 1 · J=1.00

    Founder of theoretical computer science (1912–1954)

Record last updated 2026-04-25.