
Geoffrey Hinton
Godfather of deep learning; left Google in 2023 to speak about AI risk
Turing Award–winning neural-network pioneer whose 2023 departure from Google became a pivot point for mainstream discussion of AI extinction risk. Publicly estimates a non-trivial chance that AI wipes out humanity and calls for international coordination, while remaining non-committal on specific policy levers.
Profile
expertise
Deep technical
Sustained peer-reviewed contribution to ML, alignment, interpretability, or safety techniques. Could review a frontier paper.
Co-developed backpropagation for training neural networks (1986), co-authored AlexNet (2012), and proposed capsule networks. Turing Award 2018; Nobel Prize in Physics 2024 for foundational neural-network work. No longer hands-on at a frontier lab, but the technical foundation of much of modern ML traces to him.
recognition
Household name
Name recognition outside the AI/CS community. Featured by mainstream press, a Wikipedia page in many languages, a published bestseller, or holds a position the lay public knows.
Routinely covered by mainstream press as the 'godfather of AI'. The Nobel announcement made global news. Wikipedia entries in 60+ languages.
vintage
Pioneer
Defining figure from before 1980. Cybernetics, formal computation, early AI laboratories. Their concept of intelligence is not bound to neural networks.
PhD 1978 (Edinburgh). Backpropagation paper 1986. His worldview is rooted in pre-deep-learning AI; the deep-learning era is the one he created.
Hand-classified. See the board for the criteria and the full grid.
p(doom)
- 10–50% (2024-06)
Definition used: Probability AI leads to human extinction in the next 30 years
Source: PauseAI aggregated p(doom) list
Strategy positions
Existential primacy: endorses
Extinction/disempowerment risk overrides ordinary cost-benefit. Treats AI extinction risk as on par with pandemic and nuclear risk; was a headline signatory of the CAIS Statement on AI Risk.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Context: Single-sentence Statement on AI Risk published by CAIS; Hinton was listed first among AI scientists.
Pause: mixed
Halt frontier training until alignment catches up. Has expressed sympathy for slowing development but stops short of endorsing a full moratorium; frames the risk as primarily about losing control and about bad-actor misuse.
“If it gets to be much smarter than us, it will be very good at manipulation because it will have learned that from us.”
Context: CBS 60 Minutes interview with Scott Pelley, the most-watched mainstream coverage of Hinton's position.
“It is hard to see how you can prevent the bad actors from using it for bad things.”
Context: Interview with the New York Times announcing his departure from Google so he could speak freely about AI dangers.
“I left so that I could talk about the dangers of AI without considering how this impacts Google.”
Closest strategy neighbours
By Jaccard overlap. Other people whose strategy tags overlap with Geoffrey Hinton's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
Record last updated 2026-04-24.