
Brian Christian
Author of The Alignment Problem
Non-fiction writer whose 2020 book The Alignment Problem translated technical alignment research into accessible mainstream prose. Fellow at the Berkeley Center for Human-Compatible AI.
Profile
expertise
External-domain expert
Recognised expert outside AI (philosophy, economics, biology, journalism) who weighs in on AI consequences from that vantage.
UC Berkeley CHAI affiliate; writer. 'The Alignment Problem' (2020) is one of the most-cited mainstream AI-safety books. CS background, but role is journalist-of-the-field.
recognition
Field-leading
Widely known inside the AI and AI-safety community. Appears repeatedly in top venues, podcasts, or governance forums. Not a household name to outsiders.
NYT bestseller author; recognised in safety community.
vintage
Scaling era
Worldview formed during GPT-2/3, scaling laws, Anthropic's founding. Pre-ChatGPT but post-deep-learning. The 'scale is all you need' debate is live.
Algorithms to Live By (2016, with Tom Griffiths); The Alignment Problem (2020). Career frame is the scaling era.
Hand-classified. See the board for the criteria and the full grid.
Strategy positions
Alignment first (endorses)
Solve technical alignment before capability thresholds close. Book-length treatment of alignment: inverse reinforcement learning, reward hacking, specification gaming.
The alignment problem is already here, and will keep scaling with capability.
Closest strategy neighbours
By Jaccard overlap. Other people whose strategy tags overlap with Brian Christian's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
Record last updated 2026-04-24.