AGI Strategies

strategy tag

Long reflection.

Use post-AGI stability for extended moral deliberation before locking in irreversible decisions

stated endorsers

3

no opposers yet

profiled endorsers

3

248 on the board total

endorser p(doom)


no estimates on record

quotes by endorsers

3

just for this tag

principal voices

Highest-recognition profiled endorsers, with ties broken by quote count. Inclusion is not endorsement of the position; it is recognition of who the discourse turns to when the bet is debated.

  • Nick Bostrom

    Household name

  • Stewart Brand

    Household name

  • Anders Sandberg

    Established

where the endorsers sit on the board

3 of 248 profiled · 1% of the board

| expertise ↓ · recognition → | Household name | Field-leading | Established | Emerging |
| --- | --- | --- | --- | --- |
| Frontier builder | · | · | · | · |
| Deep technical | · | · | · | · |
| Applied technical | · | · | · | · |
| Policy / meta | Nick Bostrom | · | · | · |
| External-domain expert | Stewart Brand | · | Anders Sandberg | · |
| Commentator | · | · | · | · |

Each face is one profiled person. Cell shade intensifies with endorser density. Faces marked × are profiled opposers: same tier, opposite position. Empty cells mark tier combinations the field has not produced for this bet.

Tier mix counts only endorsers (endorses, mixed, conditional, evolved-toward).

expertise mix of endorsers · 3 profiled of 3

Builds frontier systems
0
Deep ML / safety technical
0
Applied or adjacent technical
0
Governance, policy, strategy
1
Expert in another field
2
Public-square commentator
0

recognition mix of endorsers

Mass-public recognition
2
Known across the AI/safety field
0
Recognised inside subfield
1
Newer or less central voice
0

People on the record.

3
Anders Sandberg

Anders Sandberg

Former FHI researcher; transhumanist philosopher

endorses

Argues humanity should preserve optionality and invest in long-horizon deliberation capacity; AI governance should protect the ability to make big decisions well.

The quality of deliberation we are able to do before we make irreversible decisions is a civilisational resource.
article · Anders Sandberg, Grand Futures · andersandberg.net · 2024 · loose paraphrase
Nick Bostrom

Nick Bostrom

Author of Superintelligence; founded Oxford's Future of Humanity Institute

endorses

His 2024 Deep Utopia explores what happens after superintelligence solves all practical problems: the 'post-instrumental' condition.

If we extrapolate this internal directionality to its logical terminus, we arrive at a condition in which we can accomplish everything with no effort. Superintelligence could whisk us the rest of the way.
book · Deep Utopia: Life and Meaning in a Solved World · Ideapress Publishing · 2024-03-27 · faithful paraphrase
Stewart Brand

Stewart Brand

Long Now Foundation; Whole Earth Catalog founder

endorses

Argues civilisation needs much longer time horizons, and that AI deployment risks collapsing those horizons.

Civilisation is acquiring tools whose effects unfold on civilisational timescales. We are not yet thinking on those timescales.
article · Long Now Foundation · Long Now Foundation · 2024 · loose paraphrase