AGI Strategies
Concentration · institutional

Centralised AI project

Merging frontier development into a single state-funded project reduces failure modes and absorbs race pressure by making that project the only game in town.

Mechanism

Consolidate talent and compute into a single CERN-for-AI or national Manhattan-style project with a defined principal.

If it succeeds: what binds next

One project owns frontier capability. The project's principal now holds concentrated authority. The question of how to legitimate that authority was deferred.

A strategy that produces a worse next problem than the one it solved has not done durable work.

Self-undermining threshold

overshoot risk

When one state's consolidation triggers mirror projects.

Rivals build their own centralised projects rather than accept one. The supposed benefit (a single actor is easier to govern) dissolves, while the concentration-risk cost multiplies.

Every strategy has a stable region where it reinforces itself and an unstable region where pursuit defeats it. The threshold between them is usually narrower than advocates acknowledge.

People on the record


  • Leopold Aschenbrenner

    Deep ML / safety technical · Known across the AI/safety field

  • Sam Hammond

    Foundation for American Innovation senior economist

  • Samo Burja

    Bismarck Analysis founder; civilizational decline theorist

Coordinates

Acts on: institutional
Coercion: state coercion
Actor in control: humans
Time horizon: during transition
Legitimacy source: state

Conflicts, grouped by mechanism

Lever opposition

same lever, opposite pull

The pair's primary lever is the same; they pull it in opposite directions. A portfolio containing both is internally incoherent on that lever.

Distributed builders · Multipolarity · Antitrust primacy

Frame opposition

incompatible premises

The strategies accept different premises about what AI is or what the binding problem is. They conflict not on lever choice but on the frame that makes lever choice sensible.

Open source maximalism

Complements, grouped by mechanism

Stage-sequenced

one sets up the other

The pair is phase-offset: one acts before the transition, the other during or after. The first creates the conditions under which the second binds.

Alignment first · Compute governance

Same-lever reinforce

same lever, same pull, different mechanism

Both strategies pull the same lever in the same direction by different means. They stack: doing both amplifies the pull, at the cost of double-counting in portfolio audits.

Military primacy

Same phase, different layer

same stage, distinct levers

Both are active in the same phase of the transition but act on different layers (model vs institution vs culture). They cover different failure modes inside the same window.

Race to aligned superintelligence

Same-lever twins

Both use the same lever in the same direction. Usually redundant inside a portfolio: each dollar or unit of effort buys only one lever pull, even if two strategies are named.

Public AI

Source note: Centralised AI project strategy.md