person
Allan Dafoe
DeepMind Frontier Safety and Governance lead
Political scientist who directs Google DeepMind's Frontier Safety and Governance team. Author of foundational AI governance papers; frames AI governance as a strategic and political-economy problem.
Profile
expertise
Policy / meta
Specialises in AI policy, regulation, governance, philanthropy, or movement strategy. Reads the technical literature but does not produce it.
Director of Frontier Safety and Governance, Google DeepMind. Earlier founded the Centre for the Governance of AI (GovAI). Trained as a political scientist; applies that lens to AI governance.
recognition
Field-leading
Widely known inside the AI and AI-safety community. Appears repeatedly in top venues, podcasts, or governance forums. Not a household name to outsiders.
Recognised in AI-governance and policy circles; less visible in the mainstream press.
vintage
Scaling era
Worldview formed during GPT-2/3, scaling laws, Anthropic's founding. Pre-ChatGPT but post-deep-learning. The 'scale is all you need' debate is live.
GovAI founded 2018. Now DeepMind frontier safety/governance. Career anchored in scaling-era AI policy.
Hand-classified. See the board for the criteria and the full grid.
Strategy positions
Governance first (endorses)
Lead with regulation, treaties, liability regimes. Argues AI governance must reckon with strategic incentives: lab races, great-power competition, and institutional path dependence.
AI governance needs to be treated as a political-economy problem, not only a technical compliance problem.
Closest strategy neighbours
By Jaccard overlap: other people whose strategy tags overlap with Allan Dafoe's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
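The neighbour ranking above is by Jaccard overlap of tag sets. A minimal sketch of how that metric works, with hypothetical tag names (not the board's real tags):

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard overlap of two tag sets: |A ∩ B| / |A ∪ B|."""
    if not a and not b:
        return 0.0  # convention: two empty sets have zero overlap
    return len(a & b) / len(a | b)

# Hypothetical strategy-tag sets for illustration only:
dafoe_tags = {"governance-first", "international-coordination", "lab-policy"}
other_tags = {"governance-first", "lab-policy", "compute-governance"}
print(jaccard(dafoe_tags, other_tags))  # 2 shared / 4 total = 0.5
```

Because the metric compares tag identity only, two people who cite the same tags with opposite stances still score as close neighbours, which is why opposites can appear in the list.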
Record last updated 2026-04-24.