strategy tag
Cooperative AI
Invest in AI–AI and AI–human cooperation capacities
stated endorsers: 6 (no opposers yet)
profiled endorsers: 2 (248 on the board total)
endorser p(doom): no estimates on record
quotes by endorsers: 6 (just for this tag)
People on the record (6)
Andrew Critch
Berkeley AI safety researcher; ARCHES framework
Argues that 'multi-multi delegation' (coordinating many AIs with many stakeholders) is the distinctive existential AI problem.
Multi-multi delegation should be the focus of AI safety research. Many stakeholders, many AIs, coordinating under uncertainty.
Igor Mordatch
Google DeepMind; multi-agent and embodied AI
Argues multi-agent emergent communication is a fundamental research direction; the protocols agents invent to cooperate in simulated environments illuminate what AI–AI coordination at scale will look like.
Agents in simulated multi-agent environments can develop their own communication protocols when given the right incentives. The protocols are crude but the qualitative pattern, communication as an emergent solution to cooperation, generalizes.

Jakob Foerster
Oxford FLAIR lab; multi-agent RL
Argues that the social dynamics among learning agents (cooperation, communication, opponent shaping) are first-order alignment problems, not afterthoughts to single-agent training.
Treating each agent as if it learns in isolation misses the central question: what equilibria do learning algorithms select when they meet each other?
Kate Sills
AI economic systems and multi-agent markets researcher
Works on incentive-compatible market mechanisms for agent systems; stakes out a pragmatic middle ground between doom and accel.
The multi-agent economy is already emerging. We need primitives for incentive-compatible cooperation between AIs.
Lewis Hammond
Cooperative AI Foundation co-director
Argues investment in AI–AI and AI–human cooperation capacities is a structural safety bet; orients the Cooperative AI Foundation toward this research agenda.
If many capable AI systems are going to coexist, the question of whether they cooperate or defect with each other and with humans is at least as important as whether any single one is aligned.
Vincent Conitzer
CMU professor; cooperative AI researcher
Argues cooperation-capacity research is a distinct and underfunded AI safety agenda.
Making AI cooperate, with us, and with itself, is an alignment problem of its own.