person

William Saunders
OpenAI Superalignment alumnus; whistleblower
Former OpenAI Superalignment team member who resigned in 2024 and publicly testified to the US Senate about safety culture concerns at frontier labs.
Profile
expertise
Frontier builder
Currently or recently led training, architecture, or safety work on a frontier model. Hands on the loss curve.
Former OpenAI Superalignment researcher. Resigned 2024 with public concerns. Senate testimony on AI risk.
recognition
Established
Reliable, recognised voice within their specific subfield. Cited and invited but not central to general AI discourse.
Recognised in AI-safety circles via departure and testimony.
vintage
Scaling era
Worldview formed during GPT-2/3, scaling laws, Anthropic's founding. Pre-ChatGPT but post-deep-learning. The 'scale is all you need' debate is live.
OpenAI Superalignment researcher in scaling era. Resigned 2024 with public concerns.
Hand-classified. See the board for the criteria and the full grid.
Strategy positions
Governance first: endorses
Lead with regulation, treaties, liability regimes. Publicly testified to Congress that frontier AI development cannot be trusted to labs alone.
I believe that OpenAI's current trajectory of development is inadequately focused on safety.
Context: US Senate testimony.
Closest strategy neighbours
By Jaccard overlap. Other people whose strategy tags overlap with William Saunders's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
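A minimal sketch of the neighbour metric described above: Jaccard overlap between two tag sets, |A ∩ B| / |A ∪ B|, computed on tag identity only (stance is ignored). The tag names below are hypothetical placeholders, not the board's actual vocabulary.

```python
def jaccard(tags_a, tags_b):
    """Jaccard overlap between two tag sets: |A & B| / |A | B|."""
    a, b = set(tags_a), set(tags_b)
    if not a and not b:
        return 0.0  # convention: two empty tag sets have zero overlap
    return len(a & b) / len(a | b)

# Hypothetical tag sets for illustration.
saunders = {"governance-first"}
neighbour = {"governance-first", "compute-caps"}

print(jaccard(saunders, neighbour))  # 0.5
```

Note that because the measure ignores stance, a person who *opposes* a tagged strategy still overlaps with one who endorses it, which is exactly the caveat stated above.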
Record last updated 2026-04-24.