AGI Strategies

Control mechanism · AI artefact

AI containment

Useful AI does not require unrestricted actuation; strong capability in a contained system is better than limited capability uncontained.

Mechanism

Restrict AI to sandboxes, oracle / tool-AI paradigms, question-answering modes, and air-gapped deployments.

Addresses 1 failure scenario


Coordinates

Acts on: AI artefact
Coercion: friction
Actor in control: humans
Time horizon: during transition
Legitimacy source: technical

Conflicts, grouped by mechanism (0)

No strict conflicts catalogued. This strategy pulls a lever that nothing else pulls in the opposite direction.

Complements, grouped by mechanism (4)

Stage-sequenced

one sets up the other

The pair is phase-offset: one acts before the transition, the other during or after. The first creates the conditions under which the second binds.

Decouple reasoning from action · Embodiment requirement

Same-side diversification

same side, different lever

Both act on the same side (AI or world) but pull distinct levers. They cover several failure modes on that side while leaving the other side uncovered.

Rate limited AI

Same-lever reinforce

same lever, same pull, different mechanism

Both strategies pull the same lever in the same direction by different means. They stack: doing both amplifies the pull, at the cost of double-counting in portfolio audits.

Safe by construction AI

Same-lever twins (4)

Both use the same lever in the same direction. Usually redundant inside a portfolio: each dollar or effort unit only buys one lever pull, even if two strategies are named.

AI for safety · Alignment first · Counter AI AI · Interpretability first

Axis position

What the strategy acts on: AI artefact
Coercion level: Friction
Actor in control: Humans as principals
Time horizon: During transition
Legitimacy source: Technical

Source note: AI containment as strategy.md