AGI Strategies

Institutional capacity · institutional

Governance first

Institutional capacity is the binding constraint; without it, no technical success prevents misuse, capture, or concentration of power.

Mechanism

Build licensing, liability, audits, independent evaluation, and international coordination to supervise AI before it becomes ungovernable.

What this name has meant

vintage drift

The name is stable; the content has shifted. A reader acting on the label without asking which vintage is meant risks arguing with a position nobody currently holds.

2020

Meant passing substantive AI legislation.

2026

Often means standard-setting at safety institutes plus international declarations, which is closer to voluntary restraint with state endorsement.

If it succeeds: what binds next

Functional regulatory infrastructure exists. The regulator must now make substantive decisions under the same empirical uncertainty that justified creating it in the first place.

A strategy that produces a worse next problem than the one it solved has not done durable work.

Falsification signal

Enacted regulations cover less than 20% of frontier compute by some date, or institutional capture moves faster than capacity building.

A strategy held without a falsification signal is not strategy; it is affiliation. Continued support after this signal lands is identity, not bet. See the identity diagnostic.
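
A minimal sketch of how the compute-coverage half of this signal could be tracked. Only the 20% threshold comes from the signal above; the jurisdiction names, compute shares, and check date are placeholder assumptions, not measurements.

```python
# Hypothetical sketch of the compute-coverage check in the falsification
# signal above. The 20% threshold is from the signal; jurisdictions, shares,
# and the check date are placeholder assumptions.
from dataclasses import dataclass

@dataclass
class Jurisdiction:
    name: str
    frontier_compute_share: float  # fraction of global frontier compute, 0..1
    regulation_enacted: bool       # substantive rules in force, not declarations

def covered_fraction(jurisdictions: list[Jurisdiction]) -> float:
    """Share of frontier compute under enacted regulation."""
    return sum(j.frontier_compute_share for j in jurisdictions if j.regulation_enacted)

def signal_fired(jurisdictions: list[Jurisdiction], threshold: float = 0.20) -> bool:
    """True if enacted regulation covers less than the threshold at the check date."""
    return covered_fraction(jurisdictions) < threshold

# Placeholder data, for illustration only.
sample = [
    Jurisdiction("A", 0.15, True),
    Jurisdiction("B", 0.50, False),
    Jurisdiction("C", 0.35, False),
]
print(signal_fired(sample))  # True: only 15% of frontier compute is covered
```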

Self-undermining threshold

overshoot risk

When pursued through national regulation without international coordination.

Uncoordinated regulation produces regulatory capture opportunities in each jurisdiction. Captured regulators then accelerate the concentration the strategy was supposed to prevent.

Every strategy has a stable region where it reinforces itself and an unstable region where pursuit defeats it. The threshold between them is usually narrower than advocates acknowledge.

Historical analogue

Aviation · FAA / equivalent

Every strategy inherits a plausible ceiling from its precedent. The analogue conditions the realistic reach.

Produced

Industry-wide standard practice; airworthiness oversight.

Did not produce

Timely coverage of emerging categories (drones); insulation from certification capture.

Addresses 2 failure scenarios


People on the record

252

Profiled figures appear first, with their tier in small caps. Each face links to the person and their full quote record. Tag: governance-first.

expertise mix · 53 profiled

Builds frontier systems · 5
Deep ML / safety technical · 10
Applied or adjacent technical · 2
Governance, policy, strategy · 28
Expert in another field · 8
Public-square commentator · 0

recognition mix

Mass-public recognition · 28
Known across the AI/safety field · 18
Recognised inside subfield · 7
Newer or less central voice · 0

A strategy whose endorsement skews to commentators or external-domain experts is in a different epistemic state from one endorsed mostly by frontier-builders. The mix is read carefully across both axes; see the board for criteria. Counts are over the 53 profiled people on this strategy (199 unprofiled excluded).
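
A minimal sketch of how the two mixes above could be tallied from profile records; the tier labels mirror this page, while the entries shown are an illustrative subset rather than the full 53-person dataset.

```python
# Hypothetical sketch: tallying the expertise and recognition mixes from
# profile records. Tier labels mirror the page; the entries shown are a
# placeholder subset, not the full dataset.
from collections import Counter

profiles = [
    # (name, expertise tier, recognition tier)
    ("Abeba Birhane", "Deep ML / safety technical", "Known across the AI/safety field"),
    ("Allan Dafoe", "Governance, policy, strategy", "Known across the AI/safety field"),
    ("Demis Hassabis", "Builds frontier systems", "Mass-public recognition"),
    # ... remaining profiled people tagged governance-first
]

expertise_mix = Counter(expertise for _, expertise, _ in profiles)
recognition_mix = Counter(recognition for _, _, recognition in profiles)

for tier, count in expertise_mix.most_common():
    print(f"{tier}: {count}")
for tier, count in recognition_mix.most_common():
    print(f"{tier}: {count}")
```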

  • Abeba Birhane
    Deep ML / safety technical · Known across the AI/safety field

  • Allan Dafoe
    Governance, policy, strategy · Known across the AI/safety field

  • Alondra Nelson
    Governance, policy, strategy · Known across the AI/safety field

  • Amartya Sen
    Expert in another field · Mass-public recognition

  • Amy Zegart
    Governance, policy, strategy · Known across the AI/safety field

  • Andrew Yang
    Governance, policy, strategy · Mass-public recognition

  • Bret Taylor
    Governance, policy, strategy · Mass-public recognition

  • Carl Benedikt Frey
    Expert in another field · Known across the AI/safety field

  • Cathy O'Neil
    Applied or adjacent technical · Mass-public recognition

  • Chuck Schumer
    Governance, policy, strategy · Mass-public recognition

  • Daron Acemoglu
    Expert in another field · Mass-public recognition

  • David Krueger
    Deep ML / safety technical · Recognised inside subfield

  • Demis Hassabis
    Builds frontier systems · Mass-public recognition

  • Edward Felten
    Deep ML / safety technical · Known across the AI/safety field

  • Evan Williams
    Governance, policy, strategy · Mass-public recognition

  • Frank Pasquale
    Governance, policy, strategy · Known across the AI/safety field

  • Gary Marcus
    Deep ML / safety technical · Mass-public recognition

  • Gillian Hadfield
    Governance, policy, strategy · Known across the AI/safety field

  • Helen Toner
    Governance, policy, strategy · Known across the AI/safety field

  • Holden Karnofsky
    Governance, policy, strategy · Known across the AI/safety field

  • Jack Clark
    Governance, policy, strategy · Known across the AI/safety field

  • Jason Matheny
    Governance, policy, strategy · Known across the AI/safety field

  • Jeff Dean
    Builds frontier systems · Known across the AI/safety field

  • Jen Easterly
    Governance, policy, strategy · Known across the AI/safety field

  • Joe Biden
    Governance, policy, strategy · Mass-public recognition

  • Joseph Stiglitz
    Expert in another field · Mass-public recognition

  • Joy Buolamwini
    Deep ML / safety technical · Mass-public recognition

  • Kamala Harris
    Governance, policy, strategy · Mass-public recognition

  • Kara Swisher
    Governance, policy, strategy · Mass-public recognition

  • Kate Darling
    Expert in another field · Known across the AI/safety field

  • Luciano Floridi
    Governance, policy, strategy · Known across the AI/safety field

  • MacKenzie Scott
    Governance, policy, strategy · Mass-public recognition

  • Margaret Mitchell
    Deep ML / safety technical · Known across the AI/safety field

  • Maria Ressa
    Expert in another field · Mass-public recognition

  • Mireille Hildebrandt
    Governance, policy, strategy · Recognised inside subfield

  • Mustafa Suleyman
    Builds frontier systems · Mass-public recognition

216 more on the record. See the full tag page: governance-first

Coordinates

Acts on · institutional
Coercion · state coercion
Actor in control · humans
Time horizon · pre-transition
Legitimacy source · state

Conflicts, grouped by mechanism

0

No strict conflicts catalogued. This strategy pulls a lever that nothing else pulls in the opposite direction.

Complements, grouped by mechanism

5

Same-lever reinforce

same lever, same pull, different mechanism

Both strategies pull the same lever in the same direction by different means. They stack: doing both amplifies the pull, at the cost of double-counting in portfolio audits.

Liability driven safety · International AI agency

Cross-side bridge

one AI-side, one world-side

One acts on the model, the other on institutions or culture. The bridge hedges against both artefact-level and substrate-level failure.

Alignment first

Same-side diversification

same side, different lever

Both act on the same side (AI or world) but pull distinct levers. They cover several failure modes on that side while leaving the other side uncovered.

Resilience first

Same phase, different layer

same stage, distinct levers

Both are active in the same phase of the transition but act on different layers (model vs institution vs culture). They cover different failure modes inside the same window.

Compute governance

Same-lever twins

7

Both use the same lever in the same direction. Usually redundant inside a portfolio: each dollar or effort unit only buys one lever pull, even if two strategies are named.

Academic firewalling (twin) · AI worker collective action (twin) · Arms control treaty (twin) · Criminal liability (twin) · Insurance mandate (twin) · Regulated utility (twin) · Scientific accumulation (twin)
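
One way to read these groupings is as a function of each strategy's coordinates. Below is a minimal sketch, with invented coordinate values and a simplified rule set (the phase-based category is omitted), of how a pair could be sorted into the categories used on this page.

```python
# Hypothetical sketch: sorting a strategy pair into the relationship categories
# used above from coordinate-style fields. Coordinate values are invented
# placeholders; category names mirror the page (phase is omitted for brevity).
from dataclasses import dataclass

@dataclass
class Strategy:
    name: str
    side: str       # "AI" (acts on the model) or "world" (institutions, culture)
    lever: str      # e.g. "state coercion", "model training"
    direction: str  # "restrain" or "accelerate"
    mechanism: str  # how the lever is pulled, e.g. "licensing", "liability"

def relationship(a: Strategy, b: Strategy) -> str:
    if a.lever == b.lever and a.direction != b.direction:
        return "conflict"                   # same lever, opposite pulls
    if a.lever == b.lever and a.mechanism == b.mechanism:
        return "same-lever twin"            # usually redundant in a portfolio
    if a.lever == b.lever:
        return "same-lever reinforce"       # same pull, different mechanism
    if a.side != b.side:
        return "cross-side bridge"          # one AI-side, one world-side
    return "same-side diversification"      # same side, distinct levers

governance_first = Strategy("Governance first", "world", "state coercion",
                            "restrain", "licensing and audits")
alignment_first = Strategy("Alignment first", "AI", "model training",
                           "restrain", "alignment research")
print(relationship(governance_first, alignment_first))  # cross-side bridge
```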

Axis position

What the strategy acts on · Institutional
Coercion level · State coercion
Actor in control · Humans as principals
Time horizon · Pre-transition
Legitimacy source · State

Source note: Governance first strategy.md