person
Vincent Conitzer
CMU professor; cooperative AI researcher
Game theorist and computer scientist who has argued that multi-agent cooperation and mechanism-design approaches should be central to AI safety. Runs the Foundations of Cooperative AI Lab at CMU.
Profile
expertise
Deep technical
Sustained peer-reviewed contribution to ML, alignment, interpretability, or safety techniques. Could review a frontier paper.
Carnegie Mellon and Oxford. Game-theoretic AI safety; mechanism design; multi-agent ethics. Active publisher.
recognition
Established
Reliable, recognised voice within their specific subfield. Cited and invited but not central to general AI discourse.
Recognised in academic AI; little press coverage.
vintage
Pre-deep-learning
Active before AlexNet. The existential-risk frame matures (FHI, OpenPhil, EA). Public AI commentary still rare; deep learning not yet dominant.
Carnegie Mellon professor; multi-agent and game-theoretic AI work spans the 2000s. Frame predates deep learning.
Hand-classified. See the board for the criteria and the full grid.
Strategy positions
Cooperative AI (endorses)
Invest in AI-AI and AI-human cooperation capacities. Argues cooperation-capacity research is a distinct and underfunded AI safety agenda.
Making AI cooperate, with us, and with itself, is an alignment problem of its own.
Closest strategy neighbours
By Jaccard overlap. Other people whose strategy tags overlap with Vincent Conitzer's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
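The overlap metric above can be sketched as plain Jaccard similarity on tag sets: the size of the intersection divided by the size of the union. A minimal illustration (the tag names below are hypothetical, not the site's actual tags):

```python
def jaccard(tags_a, tags_b):
    """Jaccard overlap between two tag sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(tags_a), set(tags_b)
    if not a and not b:
        return 0.0  # convention: no tags on either side means no overlap
    return len(a & b) / len(a | b)

# Hypothetical tag sets, for illustration only.
conitzer = {"cooperative-ai", "mechanism-design", "multi-agent"}
other = {"cooperative-ai", "interpretability"}
print(jaccard(conitzer, other))  # 1 shared tag out of 4 distinct -> 0.25
```

Because the metric compares tag identity only, two people endorsing opposite stances on the same strategy tag still score a nonzero overlap, which is why opposites can appear as neighbours.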
Record last updated 2026-04-24.