AGI Strategies

person

Sam Bowman

Anthropic alignment researcher; NYU associate professor

Anthropic researcher working on alignment, fine-tuning, and scalable oversight. Public voice for measured inside-Anthropic positions on safety-capability tradeoffs.

current Alignment researcher, Anthropic; Associate Professor, NYU

Strategy positions

Alignment first · endorses

Solve technical alignment before capability thresholds close

Publicly argues that running a frontier lab with strong safety commitments is preferable to either pure pause or pure acceleration.

If frontier AI is being built, it's better to have safety-focused labs at the frontier than to cede it to racing actors.
blog · Anthropic research blog · Anthropic · 2024 · loose paraphrase

Closest strategy neighbours

by Jaccard overlap

Other people whose strategy tags overlap with Sam Bowman's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
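The neighbour scores below can be read as the standard Jaccard index over tag sets, J = |A ∩ B| / |A ∪ B|. A minimal sketch (the tag names are hypothetical; only the formula is assumed):

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard index: |A intersect B| / |A union B|."""
    if not a and not b:
        return 0.0  # convention for two empty tag sets
    return len(a & b) / len(a | b)

# With one tag each and that tag shared, J = 1/1 = 1.00,
# which is why single-tag neighbours all show "shared 1 · J=1.00".
bowman = {"alignment-first"}       # hypothetical tag set
neighbour = {"alignment-first"}    # hypothetical tag set
print(jaccard(bowman, neighbour))  # → 1.0
```

Note that a perfect score of 1.00 on a single shared tag carries little information: it only says both records carry exactly that one tag.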

  • Aaron Courville

    shared 1 · J=1.00

    Université de Montréal; Deep Learning textbook co-author

  • Adam Jermyn

    shared 1 · J=1.00

    Anthropic; previously astrophysics

  • Adam Kalai

    shared 1 · J=1.00

    Microsoft Research; AI fairness and safety

  • Agnes Callard

    shared 1 · J=1.00

    University of Chicago philosopher; aspiration theorist

  • Ajeya Cotra

    shared 1 · J=1.00

    Open Philanthropy researcher; 'biological anchors' forecaster

  • Alan Turing

    shared 1 · J=1.00

    Founder of theoretical computer science (1912–1954)

Record last updated 2026-04-24.