
Neel Nanda
Mechanistic interpretability team lead at Google DeepMind
Mechanistic interpretability researcher with a strong pedagogical focus who runs one of the largest interpretability research teams. Publishes extensively on how to do mech interp research and trains the next generation of researchers.
Profile
expertise
Deep technical
Sustained peer-reviewed contribution to ML, alignment, interpretability, or safety techniques. Could review a frontier paper.
Leads mechanistic-interpretability work at Google DeepMind. Author of the TransformerLens library; maintains a large public corpus of interpretability tutorials and papers.
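For context on the library mentioned above: TransformerLens wraps pretrained transformers with hooks that expose every intermediate activation, which is what makes the tutorial workflow reproducible. A minimal usage sketch, assuming the library is installed; the model and cache key shown are standard but chosen here purely for illustration:

```python
from transformer_lens import HookedTransformer

# Load a small pretrained model, wrapped so all internals are hookable.
model = HookedTransformer.from_pretrained("gpt2")

# One forward pass that also caches every intermediate activation.
logits, cache = model.run_with_cache("Mechanistic interpretability")

# Read layer-0 attention patterns out of the cache.
attn_patterns = cache["blocks.0.attn.hook_pattern"]
print(attn_patterns.shape)  # (batch, n_heads, query_pos, key_pos)
```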
recognition
Established
Reliable, recognised voice within their specific subfield. Cited and invited but not central to general AI discourse.
Recognised name in interpretability circles; less prominent than Chris Olah or Anthropic leadership.
vintage
Scaling era
Worldview formed during GPT-2/3, scaling laws, Anthropic's founding. Pre-ChatGPT but post-deep-learning. The 'scale is all you need' debate is live.
Publishing on interpretability since ~2021; built TransformerLens during the scaling era. His priors are post-GPT-2.
Hand-classified. See the board for the criteria and the full grid.
Strategy positions
Interpretability bet
endorses
Mechanistic interpretability is necessary and sufficient to know models are safe.
Advocates mechanistic interpretability as a scalable safety tool; also writes accessible tutorials to grow the research field.
"Interpretability is, I think, the most promising general-purpose alignment approach."
Closest strategy neighbours
by Jaccard overlap
Other people whose strategy tags overlap with Neel Nanda's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
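The neighbour list is ranked by Jaccard similarity over tag sets. A minimal sketch, assuming each person's strategy positions are stored as a set of tag-name strings; the function and the example tags are illustrative, not the site's actual pipeline:

```python
def jaccard_overlap(tags_a: set[str], tags_b: set[str]) -> float:
    """Jaccard similarity: |intersection| / |union| of the two tag sets."""
    if not tags_a or not tags_b:
        return 0.0
    return len(tags_a & tags_b) / len(tags_a | tags_b)

# Overlap is on tag identity only, so someone who rejects a tag that
# Neel Nanda endorses still counts as a neighbour.
nanda = {"Interpretability bet"}
other = {"Interpretability bet", "Pause advocacy"}  # illustrative tags
print(jaccard_overlap(nanda, other))  # 0.5
```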
Record last updated 2026-04-24.