AGI Strategies

person

Katja Grace

Lead researcher at AI Impacts

Leads AI Impacts, the organisation that conducts periodic surveys of AI researcher opinion on timelines and risk. The results of these surveys are the single most cited data point for 'what AI researchers actually think'.

Current: Lead researcher, AI Impacts

Profile

expertise

Policy / meta

Specialises in AI policy, regulation, governance, philanthropy, or movement strategy. Reads the technical literature but does not produce it.

AI Impacts founder. Runs the influential 'Expert Survey on Progress in AI' (the timelines/p(doom) source behind many citations). Forecaster, not a technical researcher.

recognition

Established

Reliable, recognised voice within their specific subfield. Cited and invited but not central to general AI discourse.

The survey is widely cited; her own public profile is moderate.

vintage

Pre-deep-learning

Active before AlexNet. The existential-risk frame matures (FHI, OpenPhil, EA). Public AI commentary still rare; deep learning not yet dominant.

AI Impacts was founded in 2014 (seeded by MIRI). The Expert Survey methodology was set within the pre-deep-learning x-risk frame.

Hand-classified. See the board for the criteria and the full grid.

Strategy positions

Existential primacy · endorses

Extinction/disempowerment risk overrides ordinary cost-benefit

Has publicly argued that even conservative survey estimates put AI extinction probability above 5%, high enough for serious action.

The median respondent gave a 5% chance of AI causing an outcome as bad as human extinction. Five percent is not a reassuring number.
§ paper · 2023 AI Impacts Expert Survey on Progress in AI · AI Impacts · 2023-08 · faithful paraphrase

Closest strategy neighbours

by Jaccard overlap

Other people whose strategy tags overlap with Katja Grace's. Overlap is computed on tag identity, not stance, so people with opposing views can appear if they reference the same tags.
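The overlap score shown for each neighbour (e.g. "shared 1 · J=1.00") can be sketched as Jaccard similarity over tag sets. This is an illustrative sketch, not the site's actual implementation; the tag names below are hypothetical.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |A ∩ B| / |A ∪ B|. Empty sets score 0.0."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical tag sets: both people carry the same single tag,
# so shared = 1 and J = 1/1 = 1.00, matching the list entries below.
grace_tags = {"existential-primacy"}
neighbour_tags = {"existential-primacy"}
print(jaccard(grace_tags, neighbour_tags))  # → 1.0
```

Note that with only one tag in play, any neighbour sharing it scores a perfect J=1.00, which is why the list below shows many unrelated figures tied at the same score.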

  • Alan Robock

    shared 1 · J=1.00

    Rutgers climate scientist; nuclear winter researcher

  • Andy Jones

    shared 1 · J=1.00

Anthropic researcher; scaling laws for inference

  • Avital Balwit

    shared 1 · J=1.00

    Anthropic communications lead; public-facing AI safety voice

  • Bill McKibben

    shared 1 · J=1.00

    Environmental writer; Middlebury scholar

  • Cade Metz

    shared 1 · J=1.00

    NYT AI reporter; Genius Makers author

  • Clay Graubard

    shared 1 · J=1.00

    Forecaster; RAND and Good Judgment contributor

Record last updated 2026-04-24.