AGI Strategies

Jeff Clune

OpenAI / UBC researcher; open-ended evolution advocate

Computer scientist known for work on open-ended learning and AI-generating algorithms. Has publicly shifted from dismissing AI risk to being deeply worried about it.

Current: Associate Professor, University of British Columbia; Senior Research Advisor, OpenAI

Profile

expertise

Deep technical

Sustained peer-reviewed contribution to ML, alignment, interpretability, or safety techniques. Could review a frontier paper.

University of British Columbia; formerly OpenAI and Uber AI Labs. Foundational work on surprising neural-network behaviour, open-endedness, and AI-generating algorithms.

recognition

Field-leading

Widely known inside the AI and AI-safety community. Appears repeatedly in top venues, podcasts, or governance forums. Not a household name to outsiders.

Recognised within the ML research community.

vintage

Deep-learning rise

Came up post-AlexNet: ImageNet, AlphaGo, the transformer paper. DeepMind, Google Brain, and FAIR establish the modern lab template.

Cornell PhD 2010. Open-endedness and AI-generating-algorithms work spans the deep-learning era.

Hand-classified. See the board for the criteria and the full grid.

Strategy positions

Existential primacy · evolved-toward

Extinction/disempowerment risk overrides ordinary cost-benefit

Moved from skepticism in the 2010s to explicitly signing the Statement on AI Risk in 2023.

I used to dismiss AI-risk arguments. The past few years of capability progress have substantially shifted my view.
Article: Statement on AI Risk, signatories · Center for AI Safety · 2023 · loose paraphrase

Closest strategy neighbours

by Jaccard overlap

Other people whose strategy tags overlap with Jeff Clune's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
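The `J=…` scores in the list come from the Jaccard index over strategy-tag sets: the size of the intersection divided by the size of the union. A minimal sketch (the tag names below are hypothetical, purely for illustration):

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |A ∩ B| / |A ∪ B|. Defined as 0 for two empty sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Two people whose only strategy tag is the same one score J = 1.00,
# which is why single-tag profiles dominate the top of the list.
clune = {"existential-primacy"}
robock = {"existential-primacy"}
print(f"shared {len(clune & robock)} · J={jaccard(clune, robock):.2f}")  # shared 1 · J=1.00
```

Note that overlap is computed on tag identity only, so two people with opposite stances on the same tag still score as neighbours.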

  • Alan Robock

    shared 1 · J=1.00

    Rutgers climate scientist; nuclear winter researcher

  • Andy Jones

    shared 1 · J=1.00

    Anthropic researcher; scaling inference laws

  • Avital Balwit

    shared 1 · J=1.00

    Anthropic communications lead; public-facing AI safety voice

  • Bill McKibben

    shared 1 · J=1.00

    Environmental writer; Middlebury scholar

  • Cade Metz

    shared 1 · J=1.00

    NYT AI reporter; Genius Makers author

  • Clay Graubard

    shared 1 · J=1.00

    Forecaster; RAND and Good Judgment contributor

Record last updated 2026-04-24.