person

David Krueger
Mila / Université de Montréal professor (formerly Cambridge); AI extinction risk advocate
Computer scientist who moved from mainstream ML research to AI existential risk advocacy. Signatory to the Statement on AI Risk and a leading academic voice arguing that the field has drifted toward capability-first incentives.
Profile
expertise
Deep technical
Sustained peer-reviewed contribution to ML, alignment, interpretability, or safety techniques. Could review a frontier paper.
Université de Montréal / Mila AI safety researcher. Co-author of multiple safety surveys; coordinator of the 'Managing Extreme AI Risks' paper (2024).
recognition
Established
Reliable, recognised voice within their specific subfield. Cited and invited but not central to general AI discourse.
Recognised within the AI safety field.
vintage
Scaling era
Worldview formed during GPT-2/3, scaling laws, Anthropic's founding. Pre-ChatGPT but post-deep-learning. The 'scale is all you need' debate is live.
Mila PhD; Cambridge faculty 2021. Safety surveys and 'Managing Extreme AI Risks' (2024) anchor him in the scaling era.
Hand-classified. See the board for the criteria and the full grid.
Strategy positions
Governance first (endorses)
Lead with regulation, treaties, liability regimes.
Calls for binding international governance and argues that voluntary commitments from frontier labs are structurally insufficient.
Voluntary commitments from frontier labs are structurally unreliable. We need binding external constraints.
Closest strategy neighbours
By Jaccard overlap. Other people whose strategy tags overlap with David Krueger's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
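A minimal sketch of that neighbour ranking, assuming it is plain Jaccard similarity over unordered tag sets; the helper names and example tags below are hypothetical, not the board's actual code.

```python
# Hypothetical sketch: Jaccard overlap on strategy-tag identity, ignoring stance.

def jaccard(tags_a: set[str], tags_b: set[str]) -> float:
    """Jaccard similarity of two tag sets: |A ∩ B| / |A ∪ B|."""
    if not tags_a and not tags_b:
        return 0.0
    return len(tags_a & tags_b) / len(tags_a | tags_b)


def closest_neighbours(person: str, tags_by_person: dict[str, set[str]], k: int = 5):
    """Rank everyone else by tag overlap with `person`, highest first."""
    own = tags_by_person[person]
    scored = [
        (other, jaccard(own, tags))
        for other, tags in tags_by_person.items()
        if other != person
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]


# Illustrative data only; tag names are made up.
tags_by_person = {
    "David Krueger": {"governance-first"},
    "Example Person A": {"governance-first", "evals-first"},
    "Example Person B": {"open-source-first"},
}
print(closest_neighbours("David Krueger", tags_by_person))
```

Because the overlap is computed on tag identity alone, someone who opposes 'governance first' scores the same as someone who endorses it, which is why opposites can appear among the neighbours.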
Record last updated 2026-04-24.