person
Andrew Critch
Berkeley AI safety researcher; ARCHES framework
AI safety researcher focusing on multi-multi alignment (coordination between multiple AI systems and multiple human stakeholders) as the distinctive AI x-safety problem.
Profile
expertise
Deep technical
Sustained peer-reviewed contribution to ML, alignment, interpretability, or safety techniques. Could review a frontier paper.
UC Berkeley CHAI research scientist. Long publication record on multi-agent AI safety; coined 'multi/multi alignment' framing.
recognition
Established
Reliable, recognised voice within their specific subfield. Cited and invited but not central to general AI discourse.
Recognised inside the alignment community.
vintage
Pre-deep-learning
Active before AlexNet. The existential-risk frame matures (FHI, OpenPhil, EA). Public AI commentary still rare; deep learning not yet dominant.
At MIRI/CHAI since the early 2010s; the multi-agent safety frame predates the deep-learning era.
Hand-classified. See the board for the criteria and the full grid.
Strategy positions
Cooperative AI: endorses
Invest in AI-AI and AI-human cooperation capacities. Argues 'multi-multi delegation', coordinating many AIs with many stakeholders, is the distinctive existential AI problem.
Multi-multi delegation should be the focus of AI safety research. Many stakeholders, many AIs, coordinating under uncertainty.
Closest strategy neighbours
By Jaccard overlap. Other people whose strategy tags overlap with Andrew Critch's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
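A minimal sketch of how such tag overlap might be computed, assuming each person's strategy tags are a plain set of strings; the tag names below are hypothetical illustrations, not the board's actual tags:

```python
def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard overlap between two tag sets: |A intersect B| / |A union B|.

    Defined as 0.0 when both sets are empty, to avoid division by zero.
    Note this compares tag identity only, not the stance taken on each tag.
    """
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)


# Hypothetical tag sets for illustration.
critch_tags = {"cooperative-ai", "multi-multi-delegation", "coordination"}
other_tags = {"cooperative-ai", "coordination", "compute-governance"}

print(jaccard(critch_tags, other_tags))  # 2 shared / 4 total = 0.5
```

Because the measure ignores stance, two people who both tag "cooperative-ai" score as neighbours even if one endorses the strategy and the other opposes it, which is exactly the caveat noted above.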
Record last updated 2026-04-24.