AGI Strategies

Irving John Good

British mathematician; articulated 'intelligence explosion' in 1965 (1916–2009)

Bletchley Park cryptographer whose 1965 paper 'Speculations Concerning the First Ultraintelligent Machine' originated the intelligence explosion argument later refined by Bostrom, Yudkowsky, and others.

Formerly Professor of Statistics, Virginia Tech; statistician who worked with Alan Turing at Bletchley Park (Hut 8)

Profile

expertise

Deep technical

Sustained peer-reviewed contribution to ML, alignment, interpretability, or safety techniques. Could review a frontier paper.

Bletchley Park cryptanalyst (1916–2009). Coined the 'intelligence explosion' framing in the 1965 essay 'Speculations Concerning the First Ultraintelligent Machine'.

recognition

Field-leading

Widely known inside the AI and AI-safety community. Appears repeatedly in top venues, podcasts, or governance forums. Not a household name to outsiders.

Foundational reference for AI x-risk thought; less mainstream press.

vintage

Pioneer

Defining figure from before 1980. Cybernetics, formal computation, early AI laboratories. Their concept of intelligence is not bound to neural networks.

1916–2009. 'Speculations Concerning the First Ultraintelligent Machine' 1965. Coined intelligence-explosion framing.

Hand-classified. See the board for the criteria and the full grid.

Strategy positions

Existential primacy · endorses

Extinction/disempowerment risk overrides ordinary cost-benefit

Articulated the intelligence-explosion argument six decades before contemporary AI discourse.

“The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”
§ paper · Speculations Concerning the First Ultraintelligent Machine · Advances in Computers · 1965 · direct quote

Closest strategy neighbours

by Jaccard overlap

Other people whose strategy tags overlap with Irving John Good's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.

  • Alan Robock

    shared 1 · J=1.00

    Rutgers climate scientist; nuclear winter researcher

  • Andy Jones

    shared 1 · J=1.00

    Anthropic researcher; scaling inference laws

  • Avital Balwit

    shared 1 · J=1.00

    Anthropic communications lead; public-facing AI safety voice

  • Bill McKibben

    shared 1 · J=1.00

    Environmental writer; Middlebury scholar

  • Cade Metz

    shared 1 · J=1.00

    NYT AI reporter; Genius Makers author

  • Clay Graubard

    shared 1 · J=1.00

    Forecaster; RAND and Good Judgment contributor
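The J values above are plain Jaccard similarity on tag sets: the size of the intersection divided by the size of the union, so a single shared tag between two one-tag profiles gives J = 1.00. A minimal sketch (the tag names are illustrative, not the board's actual tags):

```python
def jaccard(a, b):
    """Jaccard similarity of two tag collections: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0  # convention: two empty tag sets have no overlap to measure
    return len(a & b) / len(a | b)

# Hypothetical tag sets: one shared tag out of one tag each → J = 1.00,
# matching the "shared 1 · J=1.00" entries in the neighbour list.
good_tags = {"existential-primacy"}
neighbour_tags = {"existential-primacy"}
print(jaccard(good_tags, neighbour_tags))  # 1.0

# With partial overlap the score drops: 1 shared of 3 total → J ≈ 0.33.
print(round(jaccard({"a", "b"}, {"a", "c"}), 2))  # 0.33
```

Because the metric counts tag identity only, two people who reference the same tag from opposite stances still score as neighbours, as the note above warns.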

Record last updated 2026-04-24.