AGI Strategies


Ubuntu relational AI

Individualist alignment misses the relational dimension most moral traditions treat as primary. "I am because we are": AI's ethical status is constituted by its relationships, not by internal properties.

Mechanism

Evaluate AI through mutual constitution with affected communities: community consent, ongoing dialogue, and mutual accountability, rather than unilateral alignment of the model.

If it succeeds: what binds next

Communities are in ongoing dialogue with the AI systems that affect them. The binding problem becomes scaling relational ethics beyond small, stable communities, and deciding who represents communities whose members disagree.

A strategy that produces a worse next problem than the one it solved has not done durable work.

Load-bearing commitments

Worldview positions this strategy quietly assumes. If a claim fails empirically or philosophically, the strategy loses its target or its premise.

Values

Ethical status is constituted by relationships, not by internal properties.

Fails if communities and AI systems cannot sustain the required dialogue at scale; the frame then collapses to individualist alignment under another name.

Humans

Community is a first-class actor with standing to constitute AI's ethical status.

Fails if deployment infrastructure ignores community as an actor; Ubuntu then reduces to user-centric design.

Coordinates

Primary lever: Culture (Cultivate)
Acts on: population culture
Coercion: consent
Actor in control: humans
Time horizon: horizon-neutral
Legitimacy source: extra-institutional

Conflicts, grouped by mechanism (0)

No strict conflicts catalogued. This strategy pulls a lever that nothing else pulls in the opposite direction.

Complements, grouped by mechanism (5)

Cross-side bridge

one AI-side, one world-side

One acts on the model, the other on institutions or culture. The bridge hedges against both artefact-level and substrate-level failure.

Confucian role ethics · Dharma conformity · Plural AI ethic

Adjacent bet

different levers, loosely coupled

Different levers, different directions of action. They reinforce only via the general principle that covering more bets dominates covering fewer.

AI welfare as safety

Same-side diversification

same side, different lever

Both act on the same side (AI or world) but pull distinct levers. They cover several failure modes on that side while leaving the other side uncovered.

Democratic mandate

Same-lever twins (2)

Both use the same lever in the same direction. Usually redundant inside a portfolio: each dollar or effort unit only buys one lever pull, even if two strategies are named.

Consumer refusal (twin) · Research community norms (twin)

Axis position

What the strategy acts on: Population / culture
Coercion level: Consent
Actor in control: Humans as principals
Time horizon: Horizon-neutral
Legitimacy source: Extra-institutional

Source note: Ubuntu relational AI strategy.md