person

Jeff Clune
OpenAI / UBC researcher; open-ended evolution advocate
Computer scientist known for work on open-ended learning and AI-generating algorithms. Has publicly shifted from dismissing AI risk to being deeply worried about it.
Profile
expertise
Deep technical
Sustained peer-reviewed contribution to ML, alignment, interpretability, or safety techniques. Could review a frontier paper.
University of British Columbia; formerly OpenAI and Uber AI Labs. Foundational work on surprising behaviour in neural networks, open-endedness, and AI-generating algorithms.
recognition
Field-leading
Widely known inside the AI and AI-safety community. Appears repeatedly in top venues, podcasts, or governance forums. Not a household name to outsiders.
Recognised inside the ML research community.
vintage
Deep-learning rise
Came up post-AlexNet: ImageNet, AlphaGo, the transformer paper. DeepMind, Google Brain, and FAIR establish the modern lab template.
Cornell PhD 2010. Open-endedness and AI-generating-algorithms work spans the deep-learning era.
Hand-classified. See the board for the criteria and the full grid.
Strategy positions
Existential primacy (evolved-toward)
Extinction/disempowerment risk overrides ordinary cost-benefit. Moved from skepticism in the 2010s to explicitly signing the Statement on AI Risk in 2023.
I used to dismiss AI-risk arguments. The past few years of capability progress have substantially shifted my view.
Closest strategy neighbours
By Jaccard overlap. Other people whose strategy tags overlap with Jeff Clune's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
Record last updated 2026-04-24.