AGI Strategies

Geoffrey Hinton

Godfather of deep learning; left Google in 2023 to speak about AI risk

Turing Award–winning neural network pioneer whose 2023 departure from Google became a pivot for mainstream AI extinction discourse. Publicly estimates a non-trivial chance AI wipes out humanity and calls for international coordination, while remaining non-committal on specific policy levers.

current · Emeritus Professor of Computer Science, University of Toronto
past · VP and Engineering Fellow, Google

Profile

expertise

Deep technical

Sustained peer-reviewed contribution to ML, alignment, interpretability, or safety techniques. Could review a frontier paper.

Co-invented backpropagation (1986), AlexNet (2012), capsule networks. Turing Award 2018, Nobel Prize in Physics 2024 for foundational neural-network work. No longer hands-on at a frontier lab but the technical foundation of much of modern ML traces to him.

recognition

Household name

Name recognition outside the AI/CS community. Featured by mainstream press, a Wikipedia page in many languages, a published bestseller, or holds a position the lay public knows.

Routinely covered by mainstream press as 'godfather of AI'. Nobel announcement made global news. Wikipedia entries in 60+ languages.

vintage

Pioneer

Defining figure from before 1980. Cybernetics, formal computation, early AI laboratories. Their concept of intelligence is not bound to neural networks.

PhD 1978 (Edinburgh); backpropagation paper 1986. His intellectual formation predates deep learning; the deep-learning era is the one he himself created.

Hand-classified. See the board for the criteria and the full grid.

Strategy positions

Existential primacy · endorses

Extinction/disempowerment risk overrides ordinary cost-benefit

Treats AI extinction risk as on par with pandemic and nuclear risk. Was a headline signatory of the CAIS Statement on AI Risk.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Context: Single-sentence Statement on AI Risk published by CAIS; Hinton was listed first among AI scientists.

article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Pause · mixed

Halt frontier training until alignment catches up

Has expressed sympathy for slowing development but stops short of endorsing a full moratorium; frames the risk as primarily about losing control and about bad-actor misuse.

If it gets to be much smarter than us, it will be very good at manipulation because it will have learned that from us.

Context: CBS 60 Minutes interview with Scott Pelley, the most-watched mainstream coverage of Hinton's position.

video · Godfather of AI Geoffrey Hinton: The 60 Minutes Interview · CBS 60 Minutes · 2023-10-08 · faithful paraphrase
“It is hard to see how you can prevent the bad actors from using it for bad things.”

Context: Interview with the New York Times announcing his departure from Google so he could speak freely about AI dangers.

article · Geoffrey Hinton: AI pioneer quits Google to warn about the technology's 'dangers' · CNN Business · 2023-05-01 · direct quote
“I left so that I could talk about the dangers of AI without considering how this impacts Google.”

article · Deep learning pioneer Geoffrey Hinton quits Google · MIT Technology Review · 2023-05-01 · direct quote

Closest strategy neighbours

by Jaccard overlap

Other people whose strategy tags overlap with Geoffrey Hinton's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
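The overlap score is the Jaccard index on tag sets: J(A, B) = |A ∩ B| / |A ∪ B|. A minimal sketch in Python; the tag identifiers below are illustrative assumptions, not the board's actual tags, though Hinton's two tagged positions on this page are Existential primacy and Pause:

```python
# Jaccard overlap between two people's strategy-tag sets:
# J(A, B) = |A ∩ B| / |A ∪ B|, on tag identity only (stance is ignored).
def jaccard(a: set[str], b: set[str]) -> float:
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Hypothetical tag sets for illustration.
hinton  = {"existential-primacy", "pause"}
tallinn = {"existential-primacy", "pause"}   # shared 2 -> J = 2/2 = 1.00
robock  = {"existential-primacy"}            # shared 1 -> J = 1/2 = 0.50

print(f"{jaccard(hinton, tallinn):.2f}")  # 1.00
print(f"{jaccard(hinton, robock):.2f}")   # 0.50
```

Because the metric compares tag identity rather than stance, two people who reference the same tags with opposite positions still score a high overlap.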

  • Jaan Tallinn

    shared 2 · J=1.00

    Skype co-founder; AI safety funder and advocate

  • Liron Shapira

    shared 2 · J=1.00

    Founder; Doom Debates podcast host

  • Alan Robock

    shared 1 · J=0.50

    Rutgers climate scientist; nuclear winter researcher

  • Andrea Miotti

    shared 1 · J=0.50

    Founder of ControlAI; pause campaigner

  • Andy Jones

    shared 1 · J=0.50

    Anthropic researcher; inference scaling laws

  • Anthony Aguirre

    shared 1 · J=0.50

    UC Santa Cruz physicist; FLI co-founder

Record last updated 2026-04-24.