person

Victoria Krakovna
Google DeepMind AI safety researcher; FLI co-founder
Co-founded the Future of Life Institute and leads AI safety research at Google DeepMind. Maintains the specification-gaming reference list that has become the canonical source of failure examples.
Profile
expertise
Deep technical
Sustained peer-reviewed contribution to ML, alignment, interpretability, or safety techniques. Could review a frontier paper.
Google DeepMind safety researcher. Co-founder Future of Life Institute. Long publication record on specification gaming, side effects, AI safety taxonomy.
recognition
Established
Reliable, recognised voice within their specific subfield. Cited and invited but not central to general AI discourse.
Recognised voice in the alignment community.
vintage
Deep-learning rise
Came up post-AlexNet. ImageNet, AlphaGo, transformer paper. DeepMind, Google Brain, FAIR establish the modern lab template.
DeepMind safety team from 2017; FLI earlier. Career bridges pre-deep-learning x-risk work into deep-learning-era safety.
Hand-classified. See the board for the criteria and the full grid.
Strategy positions
Alignment first (endorses)
Solve technical alignment before capability thresholds close.
Documents specification-gaming failures as empirical evidence that goal-directed AI does not always do what we mean.
The specification-gaming reference list is a catalogue of failures, and a reminder that we cannot rely on getting objectives right by default.
Closest strategy neighbours
by Jaccard overlap
Other people whose strategy tags overlap with Victoria Krakovna's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
Record last updated 2026-04-24.