person
Nat McAleese
OpenAI researcher; ex-DeepMind reliability
AI reliability and alignment researcher at OpenAI; previously at DeepMind working on debate-style oversight and reward modelling.
Current: Researcher, OpenAI
Past: Research engineer, Google DeepMind
Strategy positions
Alignment first (endorses)
Solve technical alignment before capability thresholds close. Works on reward modelling and debate-style oversight; publicly engaged with alignment research.
Teaching language models to support answers with verified quotes is a concrete alignment sub-problem we can make progress on.
Closest strategy neighbours
By Jaccard overlap. Other people whose strategy tags overlap with Nat McAleese's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
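The neighbour ranking above can be sketched as a Jaccard overlap between tag sets: the size of the intersection divided by the size of the union. A minimal sketch follows; the tag names are hypothetical illustrations, not the site's actual tags.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard overlap between two tag sets: |A & B| / |A | B|."""
    if not a and not b:
        return 0.0  # convention: two empty sets have no overlap
    return len(a & b) / len(a | b)

# Hypothetical strategy tags for two people
mcaleese = {"alignment-first", "scalable-oversight", "reward-modelling"}
other = {"alignment-first", "pause-advocacy", "reward-modelling"}

print(jaccard(mcaleese, other))  # 2 shared tags of 4 distinct -> 0.5
```

Because the measure uses tag identity only, someone who *opposes* a position still shares its tag, which is why opposites can appear as neighbours.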
Record last updated 2026-04-25.