person
Daniel Dewey
Former AI risk program officer at Open Philanthropy
Helped shape Open Philanthropy's early AI risk grantmaking and now works on AI policy at the US AI Safety Institute. One of the original in-field alignment grantmakers.
Profile
expertise
Deep technical
Sustained peer-reviewed contribution to ML, alignment, interpretability, or safety techniques. Could review a frontier paper.
FHI / Open Phil safety researcher. Long publication record on value learning and ideal advisor problems. Less active publicly in recent years.
recognition
Established
Reliable, recognised voice within their specific subfield. Cited and invited but not central to general AI discourse.
Recognised inside the safety community.
vintage
Pre-deep-learning
Active before AlexNet. The existential-risk frame matures (FHI, Open Phil, EA). Public AI commentary still rare; deep learning not yet dominant.
FHI safety researcher from the late 2000s; later at Open Phil. His value-learning frame predates deep learning.
Hand-classified. See the board for the criteria and the full grid.
Strategy positions
Alignment first (endorses)
Solve technical alignment before capability thresholds close. Focused on funding alignment research and evaluations.
If we want AI to be broadly beneficial, we need to invest in alignment research well before systems are capable of world-changing impact.
Closest strategy neighbours
By Jaccard overlap. Other people whose strategy tags overlap with Daniel Dewey's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
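The neighbour ranking above is by Jaccard overlap of strategy-tag sets. A minimal sketch of that metric, with hypothetical tag names for illustration (the real tag vocabulary is not shown on this page):

```python
def jaccard(tags_a: set[str], tags_b: set[str]) -> float:
    """Jaccard similarity: |A ∩ B| / |A ∪ B|.

    Defined as 0.0 when both sets are empty. Compares tag identity
    only, not stance, so endorsers and opponents of the same tag
    still count as overlapping -- matching the note above.
    """
    if not tags_a and not tags_b:
        return 0.0
    return len(tags_a & tags_b) / len(tags_a | tags_b)

# Hypothetical tag sets for two profiles:
dewey_tags = {"alignment-first"}
other_tags = {"alignment-first", "compute-governance"}
print(jaccard(dewey_tags, other_tags))  # 0.5
```

Because only tag identity enters the computation, two people with opposite stances on the same tag score identically to two who agree.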
Record last updated 2026-04-24.