Daniel Kokotajlo
Former OpenAI governance team member; author of AI 2027 scenario
Left OpenAI in 2024, saying he had lost faith in the company's ability to handle AGI responsibly, and refused a severance deal tied to a non-disparagement agreement. Co-authored the influential AI 2027 scenario, which forecasts detailed takeover dynamics.
Profile
expertise
Policy / meta
Specialises in AI policy, regulation, governance, philanthropy, or movement strategy. Reads the technical literature but does not produce it.
Former OpenAI policy/governance researcher. Forecasting work at AI Futures Project. Co-author of 'AI 2027' scenario (2025). No frontier ML research output.
recognition
Field-leading
Widely known inside the AI and AI-safety community. Appears repeatedly in top venues, podcasts, or governance forums. Not a household name to outsiders.
His OpenAI departure (2024) and the AI 2027 scenario drew tech-press coverage. Recognised in safety/policy circles.
vintage
Scaling era
Worldview formed during GPT-2/3, scaling laws, Anthropic's founding. Pre-ChatGPT but post-deep-learning. The 'scale is all you need' debate is live.
OpenAI policy/governance 2018–2024. AI 2027 (2025) builds on scaling-era forecasting practice.
Hand-classified. See the board for the criteria and the full grid.
p(doom)
~70% (2023)
Definition used: Self-reported ~70% chance of existential catastrophe.
Source: LessWrong comment by Daniel Kokotajlo.
Strategy positions
Pause: endorses
Halt frontier training until alignment catches up. Publicly urged OpenAI to change course and has endorsed stronger regulatory constraints on frontier training.
I lost hope that they would act responsibly, particularly as they pursue artificial general intelligence.
Context: Statement to the New York Times on why he resigned from OpenAI.
Closest strategy neighbours
By Jaccard overlap. Other people whose strategy tags overlap with Daniel Kokotajlo's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
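Jaccard overlap between two tag sets A and B is |A ∩ B| / |A ∪ B|. A minimal sketch of how such a neighbour score could be computed, assuming strategy tags are plain string sets; the tag names below are hypothetical placeholders, not the board's actual vocabulary:

```python
def jaccard(tags_a: set[str], tags_b: set[str]) -> float:
    # Jaccard overlap: |A ∩ B| / |A ∪ B|; defined as 0.0 when both sets are empty.
    if not tags_a and not tags_b:
        return 0.0
    return len(tags_a & tags_b) / len(tags_a | tags_b)

# Hypothetical tag sets for illustration only.
kokotajlo_tags = {"pause", "frontier-regulation"}
candidate_tags = {"pause", "compute-governance"}
print(jaccard(kokotajlo_tags, candidate_tags))  # 1 shared tag / 3 total = 0.333...
```

Because the score counts only shared tag identities, two people tagged with the same strategy for opposite reasons (endorses vs. opposes) can still rank as neighbours, which is the caveat noted above.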
Record last updated 2026-04-24.