person
Nora Belrose
EleutherAI alumna; optimistic alignment researcher
Former EleutherAI researcher who has publicly challenged the alignment-pessimism consensus. Argues alignment is less difficult than assumed and that 'doom' reasoning is often circular.
Past: Researcher, EleutherAI
Strategy positions
Alignment first (mixed)
Solve technical alignment before capability thresholds close. Argues that practical alignment progress is real and that doom-scenario reasoning is often philosophically loaded.
Doom arguments tend to hinge on underdefined intuitions about 'optimization pressure' that I don't think survive engagement with real systems.
Closest strategy neighbours
By Jaccard overlap. Other people whose strategy tags overlap with Nora Belrose's. Overlap is measured on tag identity, not stance; opposites can show up if they reference the same tags.
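The neighbour ranking can be sketched as plain Jaccard overlap on tag sets; the tag names and the `jaccard` helper below are hypothetical illustrations, not the site's actual schema:

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard overlap of two tag sets: |A & B| / |A | B|."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical strategy-tag sets for illustration only.
# Overlap counts shared tag identity, not agreement in stance,
# so two people citing the same tags from opposite sides still match.
belrose_tags = {"alignment-first", "optimism", "anti-doom"}
other_tags = {"alignment-first", "doom-likely", "anti-doom"}
print(jaccard(belrose_tags, other_tags))  # 2 shared / 4 total = 0.5
```

Ranking candidates by this score and taking the top few would produce a "closest strategy neighbours" list like the one here.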
Record last updated 2026-04-24.