AGI Strategies

Control mechanism · frame rejection

Confucian role ethics

Western alignment assumes isolable preferences that can be learned and matched; role ethics instead evaluates behaviour by its fit with position and relationship, yielding a less brittle, more context-sensitive standard.

Mechanism

Evaluate AI against whether it performs its role well, where role is defined by relationship to principal, society, and ecosystem, rather than against objective-function alignment.

If it succeeds: what binds next

AI systems are evaluated by role fit. The binding problem becomes who defines roles in a world where AI itself is restructuring the social positions role ethics depends on.

A strategy that produces a worse next problem than the one it solved has not done durable work.

Load-bearing commitments

Worldview positions this strategy quietly assumes. If a claim fails empirically or philosophically, the strategy loses its target or its premise.

Values

Ethics operate through fit with position and relationship rather than optimisation of preferences.

Fails if: preferences are the load-bearing unit of ethics; role fit then becomes window-dressing on preference optimisation.

Humans

Social roles are stable enough to specify fitting behaviour for AI-related positions.

Fails if: AI itself destabilises social structure, leaving the role framework without a referent.

Coordinates

Acts on: frame rejection
Coercion: consent
Actor in control: humans
Time horizon: horizon neutral
Legitimacy source: religious

Conflicts, grouped by mechanism (0)

No strict conflicts catalogued. This strategy pulls a lever that nothing else pulls in the opposite direction.

Complements, grouped by mechanism (4)

Same-lever reinforce

same lever, same pull, different mechanism

Both strategies pull the same lever in the same direction by different means. They stack: doing both amplifies the pull, at the cost of double-counting in portfolio audits.

Dharma conformity · Reframe AI

Same-side diversification

same side, different lever

Both act on the same side (AI or world) but pull distinct levers. They cover several failure modes on that side while leaving the other side uncovered.

Plural AI ethic

Cross-side bridge

one AI-side, one world-side

One acts on the model, the other on institutions or culture. The bridge hedges against both artefact-level and substrate-level failure.

Ubuntu relational AI

Axis position

What the strategy acts on: Frame rejection
Coercion level: Consent
Actor in control: Humans as principals
Time horizon: Horizon-neutral
Legitimacy source: Religious

Source note: Confucian role ethics strategy.md