AGI Strategies

Dan Hendrycks

Director of the Center for AI Safety; drafter of the Statement on AI Risk

Led the signing of the 2023 Statement on AI Risk, turning CAIS into the convening body for extinction-level AI concern among mainstream researchers. Works on evals, robustness, and policy; advises xAI on safety.

Current: Executive Director, Center for AI Safety (CAIS); Safety Advisor, xAI

Profile

expertise

Deep technical

Sustained peer-reviewed contribution to ML, alignment, interpretability, or safety techniques. Could review a frontier paper.

Director of the Center for AI Safety. Co-author of the MMLU benchmark, robustness benchmarks, and the GELU activation. Safety advisor at xAI. Active publisher in technical safety.

recognition

Field-leading

Widely known inside the AI and AI-safety community. Appears repeatedly in top venues, podcasts, or governance forums. Not a household name to outsiders.

Drove the May 2023 'mitigate extinction risk' open letter signed by Hinton, Bengio, Altman, Hassabis. Recognised within the field.

vintage

Scaling era

Worldview formed during GPT-2/3, scaling laws, Anthropic's founding. Pre-ChatGPT but post-deep-learning. The 'scale is all you need' debate is live.

GELU 2016, MMLU 2020. Founded CAIS 2022. Worldview formed during scaling-era safety benchmarks.

Hand-classified. See the board for the criteria and the full grid.

p(doom)

  • 80% · 2023-04-02

    Basis: Hendrycks has publicly indicated a p(doom) above 80%.

    Tweet from Dan Hendrycks · X/Twitter

Strategy positions

Existential primacy · endorses

Extinction/disempowerment risk overrides ordinary cost-benefit

Organised the single-sentence Statement on AI Risk to move extinction concern into the Overton window.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Context: the statement Hendrycks drafted and organised.

article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Evals-driven · endorses

Capability/risk evals gate deployment; evals are the load-bearing artefact

Publishes widely-used benchmarks and argues that capability/risk evals are load-bearing for governance.

If AI research continues without adequate caution, it is reasonably likely that AI could precipitate human extinction or similarly catastrophic outcomes.
tweet · Tweet from Dan Hendrycks · X/Twitter · 2023-04-02 · faithful paraphrase

Closest strategy neighbours

by jaccard overlap

Other people whose strategy tags overlap with Dan Hendrycks's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
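The overlap score above is a standard Jaccard similarity computed over each pair's strategy-tag sets. A minimal sketch, assuming tags are stored as sets of tag names (the names below are illustrative, not the board's actual identifiers):

```python
def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard overlap: |A ∩ B| / |A ∪ B|, on tag identity only."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical tag sets: two tags for Hendrycks, one shared tag for a neighbour.
hendrycks = {"existential-primacy", "evals-driven"}
neighbour = {"evals-driven"}

shared = len(hendrycks & neighbour)
print(f"shared {shared} · J={jaccard(hendrycks, neighbour):.2f}")
# → shared 1 · J=0.50
```

Because the score ignores stance, a person who *opposes* a tagged position still shares that tag and can rank as a close neighbour.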

  • Alan Robock

    shared 1 · J=0.50

    Rutgers climate scientist; nuclear winter researcher

  • Aleksander Mądry

    shared 1 · J=0.50

    MIT; ex-OpenAI head of preparedness

  • Alex Meinke

    shared 1 · J=0.50

    Apollo Research; deceptive alignment evaluations

  • Ali Rahimi

    shared 1 · J=0.50

    Google Brain ML researcher; 'Alchemy' speech

  • Andy Jones

    shared 1 · J=0.50

    Anthropic researcher; scaling inference laws

  • Anna Rogers

    shared 1 · J=0.50

    IT University of Copenhagen; LLM benchmarking critique

Record last updated 2026-04-24.