AGI Strategies

Nick Bostrom

Author of Superintelligence; founded Oxford's Future of Humanity Institute

Philosopher whose 2014 book Superintelligence made 'existential risk from AI' legible to mainstream audiences and policymakers. Frames the problem as a control problem requiring pre-committed solutions before we create superhuman systems.

Current: Director, Macrostrategy Research Initiative
Past: Founding Director, Future of Humanity Institute, Oxford

Profile

Expertise

Policy / meta

Specialises in AI policy, regulation, governance, philanthropy, or movement strategy. Reads the technical literature but does not produce it.

Founded Oxford's Future of Humanity Institute. 'Superintelligence' (2014) is the foundational existential-risk text. Philosopher; not a technical researcher.

Recognition

Household name

Name recognition outside the AI/CS community: mainstream press coverage, a Wikipedia page in many languages, a bestselling book, or a position the lay public knows.

Superintelligence was a NYT bestseller endorsed by Musk and Gates. Wikipedia entries in 30+ languages.

Vintage

Symbolic era

Career started in the GOFAI / expert-systems / early-rationalist period: the era of Vinge's 1993 Singularity essay, MIRI's founding in 2000, and the early writings of Bostrom and Yudkowsky.

Founded Oxford's FHI in 2005. Superintelligence (2014) was written before the deep-learning boom; its concept of AI is system-agnostic.

Hand-classified. See the board for the criteria and the full grid.

Strategy positions

Existential primacy · endorses

Extinction/disempowerment risk overrides ordinary cost-benefit

Argues existential risk reduction should dominate ordinary cost-benefit analysis given the scale of what is at stake.

“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct.”

Context: Closing passages of Superintelligence.

Book: Superintelligence: Paths, Dangers, Strategies · Oxford University Press · 2014-07-03 · direct quote

Alignment first · endorses

Solve technical alignment before capability thresholds close

Articulated the 'control problem': a superintelligence must have human-beneficial goals built in, because attempting to control a more capable agent after the fact is expected to fail.

“The fate of the gorillas now depends more on us humans than on the gorillas themselves; so too the fate of our species then would come to depend on the actions of the machine superintelligence.”
Book: Superintelligence · Oxford University Press · 2014-07-03 · direct quote

Long reflection · endorses

Use post-AGI stability for extended moral deliberation before any irreversible lock-in

His 2024 book Deep Utopia explores what happens after superintelligence solves all practical problems: the 'post-instrumental' condition.

If we extrapolate this internal directionality to its logical terminus, we arrive at a condition in which we can accomplish everything with no effort. Superintelligence could whisk us the rest of the way.
Book: Deep Utopia: Life and Meaning in a Solved World · Ideapress Publishing · 2024-03-27 · faithful paraphrase

Closest strategy neighbours

By Jaccard overlap

Other people whose strategy tags overlap with Nick Bostrom's. Overlap is computed on tag identity, not stance, so opposed thinkers can appear if they reference the same tags. A sketch of the metric follows the list.

  • Anca Dragan

    shared 2 · J=0.67

    UC Berkeley professor; Google DeepMind AI safety lead

  • Andrew G. Barto

    shared 2 · J=0.67

    RL co-founder; 2024 Turing Award recipient

  • Joseph Carlsmith

    shared 2 · J=0.67

    Open Philanthropy researcher; 'Is Power-Seeking AI an Existential Risk?'

  • Stuart Russell

    shared 2 · J=0.67

    Co-author of the standard AI textbook; leading critic of the 'standard model' of AI
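
For concreteness, here is a minimal sketch of how the J values above could be computed, assuming each person's positions are stored as a set of tag names; the neighbour set below is hypothetical, not taken from any record:

    # Minimal sketch of the neighbour metric over tag sets.
    def jaccard(a: set[str], b: set[str]) -> float:
        """Jaccard similarity on tag identity: |A & B| / |A | B|."""
        union = a | b
        return len(a & b) / len(union) if union else 0.0

    # Bostrom's three endorsed tags, per this record.
    bostrom = {"Existential primacy", "Alignment first", "Long reflection"}
    # A hypothetical neighbour sharing two of those tags and holding no others.
    neighbour = {"Existential primacy", "Alignment first"}

    print(f"shared {len(bostrom & neighbour)} · J={jaccard(bostrom, neighbour):.2f}")
    # -> shared 2 · J=0.67

With Bostrom holding three tags, any neighbour sharing two of them and no others gives 2/3 ≈ 0.67, which is why every entry above reads "shared 2 · J=0.67".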

Record last updated 2026-04-25.