Highest-recognition profiled endorsers, with ties broken by quote count. Inclusion is not endorsement of the position; it is recognition of who the discourse turns to when the bet is debated.
Nick Bostrom
Author of Superintelligence; founded Oxford's Future of Humanity Institute
Policy / meta · Household name · Symbolic era
Long reflection
where the endorsers sit on the board
3 of 248 profiled · 1% of the board
expertise ↓ · recognition →    Household name    Field-leading    Established        Emerging
Frontier builder               ·                 ·                ·                  ·
Deep technical                 ·                 ·                ·                  ·
Applied technical              ·                 ·                ·                  ·
Policy / meta                  Nick Bostrom      ·                ·                  ·
External-domain expert         Stewart Brand     ·                Anders Sandberg    ·
Commentator                    ·                 ·                ·                  ·

Nick Bostrom
Author of Superintelligence; founded Oxford's Future of Humanity Institute
Policy / meta · Household name · Symbolic era
Existential primacy · Alignment first · Long reflection

Stewart Brand
Long Now Foundation; Whole Earth Catalog founder
External-domain expert · Household name · Symbolic era
Long reflection

Anders Sandberg
Former FHI researcher; transhumanist philosopher
External-domain expert · Established · Pre-deep-learning
Long reflection
Each named cell is one profiled person; cells marked × are profiled opposers, same tier, opposite position. Empty cells (·) mark tier combinations the field has not produced for this bet.
Tier mix counts only endorsers (endorses, mixed, conditional, evolved-toward).
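A minimal sketch of how these counts and the ranking above could be derived, assuming hypothetical profile records with stance, expertise, recognition, and quotes fields; the field names, tier ordering, and quote counts are illustrative, not the board's actual schema:

```python
from collections import Counter

# Stances counted as endorsement, per the rule above.
ENDORSER_STANCES = {"endorses", "mixed", "conditional", "evolved-toward"}

# Recognition tiers from most to least widely known (order assumed).
RECOGNITION_ORDER = ["Household name", "Field-leading", "Established", "Emerging"]

profiles = [
    {"name": "Nick Bostrom", "stance": "endorses", "expertise": "Policy / meta",
     "recognition": "Household name", "quotes": 1},
    {"name": "Stewart Brand", "stance": "endorses", "expertise": "External-domain expert",
     "recognition": "Household name", "quotes": 0},
    {"name": "Anders Sandberg", "stance": "endorses", "expertise": "External-domain expert",
     "recognition": "Established", "quotes": 1},
]

# Tier mix counts only endorsers.
endorsers = [p for p in profiles if p["stance"] in ENDORSER_STANCES]
expertise_mix = Counter(p["expertise"] for p in endorsers)
recognition_mix = Counter(p["recognition"] for p in endorsers)

# Highest recognition first; ties broken by quote count (more quotes first).
ranked = sorted(
    endorsers,
    key=lambda p: (RECOGNITION_ORDER.index(p["recognition"]), -p["quotes"]),
)

print(expertise_mix)    # Counter({'External-domain expert': 2, 'Policy / meta': 1})
print([p["name"] for p in ranked])  # ['Nick Bostrom', 'Stewart Brand', 'Anders Sandberg']
```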
expertise mix of endorsers · 3 of 3 profiled
Builds frontier systems: 0
Deep ML / safety technical: 0
Applied or adjacent technical: 0
Governance, policy, strategy: 1
Expert in another field: 2
Public-square commentator: 0
recognition mix of endorsers
Mass-public recognition: 2
Known across the AI/safety field: 0
Recognised inside subfield: 1
Newer or less central voice: 0
People on the record: 3
Anders Sandberg
Former FHI researcher; transhumanist philosopher
External-domain expert · Established · Pre-deep-learning
Long reflection
endorses
Argues humanity should preserve optionality and invest in long-horizon deliberation capacity; AI governance should protect the ability to make big decisions well.
"The quality of deliberation we are able to do before we make irreversible decisions is a civilisational resource."
Nick Bostrom
Author of Superintelligence; founded Oxford's Future of Humanity Institute
Policy / meta · Household name · Symbolic era
Existential primacy · Alignment first · Long reflection
endorses
His 2024 book Deep Utopia explores what happens after superintelligence solves all practical problems: the 'post-instrumental' condition.
"If we extrapolate this internal directionality to its logical terminus, we arrive at a condition in which we can accomplish everything with no effort. Superintelligence could whisk us the rest of the way."