AGI Strategies

person

Laura Weidinger

Google DeepMind ethics and safety researcher

DeepMind researcher whose 'Taxonomy of Risks posed by Language Models' is widely cited as the canonical risk taxonomy for LLM deployment.

current Ethics and Safety researcher, Google DeepMind

Strategy positions

Evals-driven · endorses

Capability/risk evals gate deployment; evals are the load-bearing artefact

Argues systematic risk taxonomies are the foundation of practical evaluation and governance.

We cannot evaluate risks we haven't named. A shared taxonomy is the precondition of shared governance.
§ paper · Ethical and social risks of harm from Language Models · arXiv · 2021 · faithful paraphrase

Closest strategy neighbours

by Jaccard overlap

Other people whose strategy tags overlap with Laura Weidinger's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.

  • Aleksander Mądry

    shared 1 · J=1.00

    MIT; ex-OpenAI head of Preparedness

  • Alex Meinke

    shared 1 · J=1.00

    Apollo Research; deceptive alignment evaluations

  • Ali Rahimi

    shared 1 · J=1.00

    Google Brain ML researcher; 'Alchemy' speech

  • Anna Rogers

    shared 1 · J=1.00

    IT University of Copenhagen; LLM benchmarking critique

  • Arati Prabhakar

    shared 1 · J=1.00

    White House OSTP director (2022–2025)

  • Beth Barnes

    shared 1 · J=1.00

    Founder of METR; dangerous capability evaluations

Record last updated 2026-04-24.