AGI Strategies

Information flow · institutional

Open source maximalism

Concentration risk dominates misuse risk; open weights are the only mechanism that prevents a safety coup by a closed lab with captured regulators.

Mechanism

Require open weights and open source at the frontier, letting any sufficiently resourced actor replicate or audit systems.

If it succeeds: what binds next

Everyone has frontier weights. The problem becomes whose defence stands up to whose offence; the offence-defence balance becomes the binding constraint.

A strategy that produces a worse next problem than the one it solved has not done durable work.

Falsification signal

An openly released model produces a verified harm in a domain where defender access does not bound the risk.

A strategy held without a falsification signal is not strategy; it is affiliation. Continued support after this signal lands is identity, not bet. See the identity diagnostic.

Self-undermining threshold

overshoot risk

When capabilities exceed defender throughput.

The offence-defence symmetry holds only where defender access bounds the risk. Outside that domain open release is a one-way ratchet.

Every strategy has a stable region where it reinforces itself and an unstable region where pursuit defeats it. The threshold between them is usually narrower than advocates acknowledge.

People on the record · 37

Profiled figures appear first, with their tier in small caps. Each face links to the person and their full quote record. Tag: open-source-maximalism.

expertise mix · 8 profiled

Builds frontier systems · 3
Deep ML / safety technical · 3
Applied or adjacent technical · 0
Governance, policy, strategy · 1
Expert in another field · 0
Public-square commentator · 1

recognition mix

Mass-public recognition · 4
Known across the AI/safety field · 4
Recognised inside subfield · 0
Newer or less central voice · 0

A strategy whose endorsement skews to commentators or external-domain experts is in a different epistemic state from one endorsed mostly by frontier-builders. Read the mix across both axes; see the board for criteria. Counts cover the 8 profiled people on this strategy (29 unprofiled excluded).

  • Andrew Ng

    Deep ML / safety technical · Mass-public recognition

  • Emad Mostaque

    Public-square commentator · Known across the AI/safety field

  • Jeremy Howard

    Deep ML / safety technical · Known across the AI/safety field

  • Joëlle Pineau

    Deep ML / safety technical · Known across the AI/safety field

  • Mark Zuckerberg

    Governance, policy, strategy · Mass-public recognition

  • Stella Biderman

    Builds frontier systems · Known across the AI/safety field

  • Tim Berners-Lee

    Builds frontier systems · Mass-public recognition

  • Yann LeCun

    Builds frontier systems · Mass-public recognition

  • Ada Rose Cannon

    W3C web standards advocate; AR/VR engineer

  • Ali Farhadi

    Allen Institute for AI CEO

  • Ali Ghodsi

    Databricks co-founder and CEO

  • Anjney Midha

    Andreessen Horowitz general partner; AI investor

  • Arthur Mensch

    CEO of Mistral AI; French frontier-model founder

  • Ce Zhang

    ETH Zürich → University of Chicago; ML systems

  • Clément Delangue

    CEO of Hugging Face; open-source AI advocate

  • Colin Raffel

    UofT; Hugging Face; T5 author

  • Illia Polosukhin

    NEAR Protocol co-founder; Transformer co-author

  • Lewis Tunstall

    Hugging Face; LLM post-training

  • Liang Wenfeng

    Founder of DeepSeek; Chinese frontier AI

  • Luis Ceze

    OctoML CEO; UW computer architecture

  • Martin Casado

    Andreessen Horowitz general partner; infrastructure investor

  • Matei Zaharia

    Databricks CTO and co-founder; Apache Spark creator

  • Mike Lewis

    Meta FAIR; BART, Llama 2 lead

  • Nathan Lambert

    Allen Institute for AI; 'Interconnects' newsletter

  • Nick Clegg

    Former Meta President of Global Affairs (2018–2025)

  • Nigel Shadbolt

    Oxford / Open Data Institute co-founder

  • Omar Khattab

    Stanford / Databricks; DSPy creator

  • Pavel Durov

    Telegram founder; arrested in France 2024

  • Peter Wang

    Co-founder of Anaconda; scientific Python and AI

  • Robin Rombach

    Black Forest Labs co-founder; Stable Diffusion lead

  • Sasha Rush

    Cornell Tech professor; HuggingFace research scientist

  • Sebastian Raschka

    Lightning AI; ML educator and author

  • Soumith Chintala

    PyTorch creator; Meta AI

  • Tianqi Chen

    CMU professor; XGBoost and TVM creator

  • Tim Dettmers

    Efficient-training and quantization researcher

  • Vukosi Marivate

    Univ Pretoria; African NLP / Masakhane

1 more on the record. See the full tag page: open-source-maximalism

Coordinates

Acts on · institutional
Coercion · market
Actor in control · humans
Time horizon · horizon-neutral
Legitimacy source · market

Conflicts, grouped by mechanism · 3

Frame opposition

incompatible premises

The strategies accept different premises about what AI is or what the binding problem is. They conflict not on lever choice but on the frame that makes lever choice sensible.

Centralised AI project · Compute governance

Lever opposition

same lever, opposite pull

The pair's primary lever is the same; they pull it in opposite directions. A portfolio containing both is internally incoherent on that lever.

Closed weights mandate

Complements, grouped by mechanism · 4

Same-side diversification

same side, different lever

Both act on the same side (AI or world) but pull distinct levers. They cover several failure modes on that side while leaving the other side uncovered.

Distributed builders · Antitrust primacy

Adjacent bet

different levers, loosely coupled

Different levers, different directions of action. They reinforce only via the general principle that covering more bets dominates covering fewer.

Interpretability first

Same-lever reinforce

same lever, same pull, different mechanism

Both strategies pull the same lever in the same direction by different means. They stack: doing both amplifies the pull, at the cost of double-counting in portfolio audits.

Whistleblower primacy

Same-lever twins · 1

Both use the same lever in the same direction. Usually redundant inside a portfolio: each dollar or effort unit only buys one lever pull, even if two strategies are named.

Information integrity first · twin
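The conflict, complement, and twin groupings above reduce to rules over a strategy's primary lever and the direction it is pulled. A minimal sketch of that audit logic, assuming a toy (lever, pull) representation that is not from the source:

```python
# Illustrative only: strategy names, levers, and pull signs below are
# hypothetical placeholders, not the site's actual data model.
from itertools import combinations

def classify(a, b):
    """Classify a strategy pair by comparing (lever, pull) tuples."""
    lever_a, pull_a = a
    lever_b, pull_b = b
    if lever_a == lever_b:
        # Same lever, opposite pull: incoherent in one portfolio.
        # Same lever, same pull: twin or same-lever reinforce.
        return "opposition" if pull_a != pull_b else "twin-or-reinforce"
    # Distinct levers: diversification / adjacent bet.
    return "diversification"

portfolio = {
    "open-source-maximalism": ("weights-access", "+"),
    "closed-weights-mandate": ("weights-access", "-"),
    "interpretability-first": ("model-transparency", "+"),
}

for (name_a, sig_a), (name_b, sig_b) in combinations(portfolio.items(), 2):
    print(name_a, "vs", name_b, "→", classify(sig_a, sig_b))
```

Under this toy encoding, a portfolio audit flags any "opposition" pair as internally incoherent and any "twin-or-reinforce" pair as a double-count risk, matching the groupings described above.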

Axis position

What the strategy acts on · Institutional
Coercion level · Market
Actor in control · Humans as principals
Time horizon · Horizon-neutral
Legitimacy source · Market

Source note: Open source maximalism strategy.md