AGI Strategies

Control mechanism · frame rejection

Reframe AI

The dominant alignment frame produces the wrong problem statement; switching frames either dissolves the problem or recasts it as tractable.

Mechanism

Replace the alignment-of-AI-to-principal frame with role-based safety, AI-as-partner, or cooperative AI framings.

Load-bearing commitments

Worldview positions this strategy quietly assumes. If the claim fails empirically or philosophically, the strategy loses its target or its premise.

AI nature

The dominant principal-oriented framing is itself the problem.

Fails if: the principal frame is in fact adequate; reframing then becomes a strategic distraction.

Coordinates

Acts on: frame rejection
Coercion: consent
Actor in control: humans as principals
Time horizon: horizon-neutral
Legitimacy source: technical

Conflicts, grouped by mechanism (0)

No strict conflicts catalogued. This strategy pulls a lever that nothing else pulls in the opposite direction.

Complements, grouped by mechanism (4)

Shared authority

same legitimacy source

Different levers, same legitimacy source (democratic, state, technical, market). The pair hangs together under one kind of authority; it stands or falls with that authority.

AI welfare as safety · Cooperative AI

Same-lever reinforce

same lever, same pull, different mechanism

Both strategies pull the same lever in the same direction by different means. They stack: doing both amplifies the pull, at the cost of double-counting in portfolio audits.

Dharma conformity

Adjacent bet

different levers, loosely coupled

Different levers, different directions of action. They reinforce only via the general principle that covering more bets dominates covering fewer.

AI as sovereign entity

Same-lever twins (1)

Both use the same lever in the same direction. Usually redundant inside a portfolio: each dollar or unit of effort buys only one lever pull, even if two strategies are named.

Confucian role ethics · twin


Source note: Reframe AI strategy.md