person
Sam Bowman
Anthropic alignment researcher; NYU associate professor
Anthropic researcher working on alignment, fine-tuning, and scalable oversight. Public voice for measured inside-Anthropic positions on safety-capability tradeoffs.
Current: Alignment researcher, Anthropic; Associate Professor, NYU
Strategy positions
Alignment first (endorses)
Solve technical alignment before capability thresholds close. Publicly argues that running a frontier lab with strong safety commitments is preferable to either pure pause or pure acceleration.
If frontier AI is being built, it's better to have safety-focused labs at the frontier than to cede it to racing actors.
Closest strategy neighbours
By Jaccard overlap. Other people whose strategy tags overlap with Sam Bowman's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
Record last updated 2026-04-24.