person
Laura Weidinger
Google DeepMind ethics and safety researcher
DeepMind researcher whose 'Taxonomy of Risks Posed by Language Models' is widely cited as the canonical risk taxonomy for LLM deployment.
Current: Ethics and Safety researcher, Google DeepMind
Strategy positions
Evals-driven (endorses)
Capability/risk evals gate deployment; evals are the load-bearing artefact. Argues systematic risk taxonomies are the foundation of practical evaluation and governance.
We cannot evaluate risks we haven't named. A shared taxonomy is the precondition of shared governance.
Closest strategy neighbours
By Jaccard overlap. Other people whose strategy tags overlap with Laura Weidinger's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
Record last updated 2026-04-24.