person
Percy Liang
Stanford CRFM director; HELM benchmark author
Director of Stanford's Center for Research on Foundation Models. Leads the HELM benchmarking effort and argues transparency and open evaluation are preconditions of trustworthy AI.
Profile
expertise
Deep technical
Sustained peer-reviewed contribution to ML, alignment, interpretability, or safety techniques. Could review a frontier paper.
Stanford professor, director of CRFM. HELM benchmark, foundation-model definition (with Bommasani). Long publication record across NLP and ML.
recognition
Field-leading
Widely known inside the AI and AI-safety community. Appears repeatedly in top venues, podcasts, or governance forums. Not a household name to outsiders.
Recognised within academic AI circles; limited mainstream press coverage.
vintage
Deep-learning rise
Came up post-AlexNet. ImageNet, AlphaGo, transformer paper. DeepMind, Google Brain, FAIR establish the modern lab template.
Stanford from 2012. CRFM founded 2021. Career centres on deep-learning-era benchmarking and foundation-model work.
Hand-classified. See the board for the criteria and the full grid.
Strategy positions
Evals-driven: endorses
Capability/risk evals gate deployment; evals are the load-bearing artefact. Argues rigorous, public benchmarking is the infrastructure that lets governance judgments be made at all.
Transparency is not a nice-to-have. It is the precondition for any serious AI governance.
Closest strategy neighbours
By Jaccard overlap. Other people whose strategy tags overlap with Percy Liang's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
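A minimal sketch of how such a neighbour list could be computed. The function names and the profile schema (name mapped to a set of strategy tags) are illustrative assumptions, not the site's actual code.

def jaccard(a: set[str], b: set[str]) -> float:
    # Jaccard overlap: |A intersect B| / |A union B|. Two empty tag sets score 0.0.
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def strategy_neighbours(target: set[str], profiles: dict[str, set[str]], k: int = 5):
    # Rank other profiles by tag overlap with the target's strategy tags.
    # Overlap is on tag identity only, so opposed stances on a shared tag still match.
    scored = [(name, jaccard(target, tags)) for name, tags in profiles.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

For example, strategy_neighbours({"evals-driven", "transparency"}, profiles) would return the k profiles whose tag sets share the largest proportion of tags with that pair, regardless of whether those profiles endorse or oppose them.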
Record last updated 2026-04-24.