AGI Strategies

Person

Nora Belrose

EleutherAI alumna; optimistic alignment researcher

Former EleutherAI researcher who has publicly challenged the alignment-pessimism consensus. Argues alignment is less difficult than assumed and that 'doom' reasoning is often circular.

Past: Former researcher, EleutherAI

Strategy positions

Alignment first (stance: mixed)

Solve technical alignment before capability thresholds close

Argues practical alignment progress is real and that doom-scenario reasoning is often philosophically loaded.

Doom arguments tend to hinge on underdefined intuitions about 'optimization pressure' that I don't think survive engagement with real systems.
Tweet: Nora Belrose on AI alignment · X/Twitter · 2024 · faithful paraphrase

Closest strategy neighbours

By Jaccard overlap

Other people whose strategy tags overlap with Nora Belrose's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.

  • Aaron Courville

    shared 1 · J=1.00

    Université de Montréal; Deep Learning textbook co-author

  • Adam Jermyn

    shared 1 · J=1.00

    Anthropic; previously astrophysics

  • Adam Kalai

    shared 1 · J=1.00

    Microsoft Research; AI fairness and safety

  • Agnes Callard

    shared 1 · J=1.00

    University of Chicago philosopher; aspiration theorist

  • Ajeya Cotra

    shared 1 · J=1.00

    Open Philanthropy researcher; 'biological anchors' forecaster

  • Alan Turing

    shared 1 · J=1.00

    Founder of theoretical computer science (1912–1954)
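The J values above are Jaccard similarities on tag sets: J(A, B) = |A ∩ B| / |A ∪ B|, computed on tag identity only. A minimal sketch of that computation (the tag names are hypothetical, since the underlying tags aren't shown on this page):

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity on tag identity: |A ∩ B| / |A ∪ B|."""
    if not a and not b:
        return 0.0  # avoid division by zero for two empty tag sets
    return len(a & b) / len(a | b)

# Hypothetical tag sets: if both people carry exactly one tag and it
# matches, shared = 1 and J = 1/1 = 1.00, which is why every entry
# above reads "shared 1 · J=1.00".
belrose = {"alignment-first"}
neighbour = {"alignment-first"}
print(jaccard(belrose, neighbour))  # → 1.0
```

Note that J=1.00 with only one shared tag just means both records have a single, identical tag; it says nothing about whether their stances agree.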

Record last updated 2026-04-24.