AGI Strategies

Control mechanism · frame rejection

Dharma conformity

Alignment frames AI as a tool for an external principal; a dharma frame treats AI as a type of entity whose safety consists in conformity to the fitting functions of its type.

Mechanism

Define a type for each AI system, specify fitting behaviour for each type, and build evaluation and training around type fit rather than principal approval.
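The mechanism above can be sketched as a minimal type-fit evaluation. This is an illustrative sketch, not an implementation from the source: the type names, fitting functions, and behaviour representation are all hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AIType:
    """A type of AI system with the behaviours that fit it."""
    name: str
    # Fitting functions: predicates over an observed behaviour record
    # that a system of this type is expected to satisfy.
    fitting_functions: dict[str, Callable[[dict], bool]] = field(default_factory=dict)

def type_fit(ai_type: AIType, behaviour: dict) -> float:
    """Score as the fraction of fitting functions the behaviour satisfies,
    rather than asking whether an external principal approves of it."""
    if not ai_type.fitting_functions:
        return 1.0
    passed = sum(fn(behaviour) for fn in ai_type.fitting_functions.values())
    return passed / len(ai_type.fitting_functions)

# Hypothetical example type: a tutor system.
tutor = AIType("tutor", {
    "answers_questions": lambda b: b.get("answered", False),
    "grades_fairly": lambda b: not b.get("unfair_grade", False),
})

print(type_fit(tutor, {"answered": True, "unfair_grade": False}))  # 1.0
```

Evaluation and training would then optimise `type_fit` per type, which is the substitution the strategy proposes: type fit replaces principal approval as the target.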

Coordinates

Acts on: frame rejection
Coercion: consent
Actor in control: humans as principals
Time horizon: horizon-neutral
Legitimacy source: religious

Conflicts, grouped by mechanism (0)

No strict conflicts catalogued. This strategy pulls a lever that nothing else pulls in the opposite direction.

Complements, grouped by mechanism (4)

Same-side diversification

same side, different lever

Both act on the same side (AI or world) but pull distinct levers. They cover several failure modes on that side while leaving the other side uncovered.

Plural AI ethic · Safe by construction AI

Same-lever reinforce

same lever, same pull, different mechanism

Both strategies pull the same lever in the same direction by different means. They stack: doing both amplifies the pull, at the cost of double-counting in portfolio audits.

Reframe AI

Adjacent bet

different levers, loosely coupled

Different levers, different directions of action. They reinforce only via the general principle that covering more bets dominates covering fewer.

AI welfare as safety

Same-lever twins (1)

Both use the same lever in the same direction. Usually redundant inside a portfolio: each dollar or effort unit only buys one lever pull, even if two strategies are named.

Confucian role ethics (twin)


Source note: Dharma conformity alternative to alignment.md