person
Yi Zeng
Chinese Academy of Sciences; Brain-inspired Cognitive AI Lab director
One of the most senior Chinese AI researchers to publicly sign the Statement on AI Risk. Argues that international coordination on AI ethics and global risk is possible, including with China.
Profile
expertise
Deep technical
Sustained peer-reviewed contribution to ML, alignment, interpretability, or safety techniques. Could review a frontier paper.
Director of Brain-inspired Cognitive AI Lab, Chinese Academy of Sciences. Active publisher on AI ethics and brain-inspired computing. Significant role in Chinese AI governance.
recognition
Field-leading
Widely known inside the AI and AI-safety community. Appears repeatedly in top venues, podcasts, or governance forums. Not a household name to outsiders.
Major figure in Chinese AI policy; less visible in the Western press.
vintage
Deep-learning rise
Came up post-AlexNet. ImageNet, AlphaGo, transformer paper. DeepMind, Google Brain, FAIR establish the modern lab template.
The Brain-inspired Cognitive AI Lab at CAS was established in 2015. His career maps onto China's AI rise during the deep-learning era.
Hand-classified. See the board for the criteria and the full grid.
Strategy positions
Existential primacy: endorses
Extinction/disempowerment risk overrides ordinary cost-benefit. Signatory to the Statement on AI Risk.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
International treaty: endorses
Arms-control-style treaty on frontier training or deployment. Participant in US-China track II dialogues on AI safety.
AI safety should not be an area of geopolitical competition; it is a global public good.
Closest strategy neighbours
By Jaccard overlap. Other people whose strategy tags overlap with Yi Zeng's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
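The neighbour ranking above is by Jaccard overlap on tag sets. A minimal sketch of that computation, with hypothetical tag names invented for illustration (the board's actual tag vocabulary is not shown here):

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two tag sets: |A ∩ B| / |A ∪ B|.

    Returns 1.0 for two empty sets by convention. Note this compares
    tag identity only, not stance, which is why people with opposite
    positions on the same tags can rank as close neighbours.
    """
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical strategy-tag sets (illustrative names, not the board's real tags)
zeng_tags = {"existential-primacy", "international-treaty"}
other_tags = {"international-treaty", "compute-governance"}

print(jaccard(zeng_tags, other_tags))  # 1 shared tag / 3 distinct tags
```

Ranking neighbours is then just sorting candidate people by this score in descending order.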
Record last updated 2026-04-24.