AGI Strategies

Value diversity · AI artefact

Plural AI ethic

Value lock-in is the dominant long-term risk, and it arrives through convergence of AI values; maintaining diversity at the AI layer preserves humanity's option to revise its values later.

Mechanism

Deliberately maintain multiple AI value systems rather than converging on one, across labs and feedback pools.

Falsification signal

Measured value convergence across frontier models within three years.
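As a minimal sketch of how this signal could be operationalized (the setup is hypothetical, not drawn from the source): score each frontier model's answers to a fixed battery of value-laden questions on a −1..1 agreement scale, then track the mean pairwise distance between the models' answer vectors. A sustained decline in that index across model generations would be the convergence signal.

```python
# Toy diversity index for detecting value convergence across models.
# Hypothetical setup: each model answers the same battery of value-laden
# questions on a -1..1 agreement scale; the index is the mean pairwise
# Euclidean distance between the resulting answer vectors.
from itertools import combinations
from math import dist

def diversity_index(answer_vectors):
    """Mean pairwise Euclidean distance between models' answer vectors."""
    pairs = list(combinations(answer_vectors, 2))
    if not pairs:
        return 0.0
    return sum(dist(a, b) for a, b in pairs) / len(pairs)

# Hypothetical scores for three models on four value questions,
# at two points in time.
gen_early = [[1.0, -0.5, 0.2, 0.8],
             [-0.3, 0.9, -0.6, 0.1],
             [0.5, 0.0, 0.7, -0.9]]
gen_late = [[0.6, 0.1, 0.3, 0.2],
            [0.5, 0.2, 0.2, 0.3],
            [0.4, 0.1, 0.4, 0.1]]

# A falling index across generations would be the falsification signal.
assert diversity_index(gen_late) < diversity_index(gen_early)
```

Any real measurement would need a curated question battery and a defensible distance metric; the point here is only that the falsification signal is quantifiable rather than impressionistic.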

A strategy held without a falsification signal is not a strategy; it is an affiliation. Continued support after this signal lands is identity, not a bet. See the identity diagnostic.

Self-undermining threshold

overshoot risk

When pursued by labs sharing feedback providers and architectures.

Nominal pluralism with real convergence. Value diversity on paper, value collapse in substrate.

Every strategy has a stable region where it reinforces itself and an unstable region where pursuit defeats it. The threshold between them is usually narrower than advocates acknowledge.

Addresses 1 failure scenario


Load-bearing commitments

Worldview positions this strategy quietly assumes. If the claim fails empirically or philosophically, the strategy loses its target or its premise.

Values

Values are genuinely plural rather than convergent on truth.

Fails if: there is moral truth, in which case pluralism is a mistake.

Coordinates

Acts on: AI artefact
Coercion: Consent
Actor in control: Multi-AI
Time horizon: During transition
Legitimacy source: Mixed

Conflicts, grouped by mechanism (0)

No strict conflicts catalogued. This strategy pulls a lever that nothing else pulls in the opposite direction.

Complements, grouped by mechanism (5)

Same phase, different layer

same stage, distinct levers

Both are active in the same phase of the transition but act on different layers (model vs institution vs culture). They cover different failure modes inside the same window.

Multipolarity · Cooperative AI · AI welfare as safety

Same-side diversification

same side, different lever

Both act on the same side (AI or world) but pull distinct levers. They cover several failure modes on that side while leaving the other side uncovered.

Reframe AI

Stage-sequenced

one sets up the other

The pair is phase-offset: one acts before the transition, the other during or after. The first creates the conditions under which the second binds.

Long reflection

Axis position

What the strategy acts on: AI artefact
Coercion level: Consent
Actor in control: Multi-AI equilibrium
Time horizon: During transition
Legitimacy source: Mixed

Source note: Plural AI ethic strategy.md