AGI Strategies

person

William MacAskill

Oxford philosopher; What We Owe The Future

Moral philosopher and co-founder of the effective altruism movement. Author of What We Owe The Future (2022), which frames AI risk as part of a longtermist moral agenda.

current Associate Professor of Philosophy, Oxford University; Founder, Forethought Foundation

Profile

expertise

Policy / meta

Specialises in AI policy, regulation, governance, philanthropy, or movement strategy. Reads the technical literature but does not produce it.

Oxford philosopher; co-founded GWWC, 80,000 Hours, EA movement infrastructure. 'What We Owe the Future' (2022). Not a technical AI contributor.

recognition

Field-leading

Widely known inside the AI and AI-safety community. Appears repeatedly in top venues, podcasts, or governance forums. Not a household name to outsiders.

Defining EA-movement figure; NYT-bestselling book; some mainstream press coverage.

vintage

Pre-deep-learning

Active before AlexNet. The existential-risk frame matures (FHI, OpenPhil, EA). Public AI commentary still rare; deep learning not yet dominant.

Founded GWWC 2009 and 80,000 Hours 2011. EA-movement intellectual frame is pre-deep-learning.

Hand-classified. See the board for the criteria and the full grid.

Strategy positions

Existential primacy · endorses

Extinction/disempowerment risk overrides ordinary cost-benefit

Argues preserving humanity's long-term potential is a primary moral imperative; AI risk is the most pressing longtermist concern.

We live at an unusual time in history: we have the power to influence the lives of beings who will exist for millions of generations.
book · What We Owe The Future · Basic Books · 2022-08-16 · faithful paraphrase

Closest strategy neighbours

by Jaccard overlap

Other people whose strategy tags overlap with William MacAskill's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.

  • Alan Robock

    shared 1 · J=1.00

    Rutgers climate scientist; nuclear winter researcher

  • Andy Jones

    shared 1 · J=1.00

    Anthropic researcher; scaling inference laws

  • Avital Balwit

    shared 1 · J=1.00

    Anthropic communications lead; public-facing AI safety voice

  • Bill McKibben

    shared 1 · J=1.00

    Environmental writer; Middlebury scholar

  • Cade Metz

    shared 1 · J=1.00

    NYT AI reporter; Genius Makers author

  • Clay Graubard

    shared 1 · J=1.00

    Forecaster; RAND and Good Judgment contributor

Record last updated 2026-04-24.