person
Ryan Greenblatt
Redwood Research; AI control researcher
Researcher at Redwood working on 'AI control' protocols and scheming model behaviour. Public voice for practical near-term alignment engineering.
Profile
expertise
Deep technical
Sustained peer-reviewed contribution to ML, alignment, interpretability, or safety techniques. Could review a frontier paper.
Redwood Research. Active publisher on AI control, scheming, and alignment stress-testing.
recognition
Established
Reliable, recognised voice within their specific subfield. Cited and invited but not central to general AI discourse.
Recognised in the alignment community.
vintage
Scaling era
Worldview formed during GPT-2/3, scaling laws, Anthropic's founding. Pre-ChatGPT but post-deep-learning. The 'scale is all you need' debate is live.
At Redwood Research since ~2021. AI control work emerged during the scaling era.
Hand-classified. See the board for the criteria and the full grid.
Strategy positions
Alignment first (endorses)
Solve technical alignment before capability thresholds close. Argues frontier labs should adopt AI control protocols that remain safe against scheming behaviour.
You don't need to assume your AI is aligned. Design the deployment so that you're safe even if it isn't.
Closest strategy neighbours
By Jaccard overlap. Other people whose strategy tags overlap with Ryan Greenblatt's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
Record last updated 2026-04-24.