AGI Strategies

person

Dario Amodei

CEO of Anthropic; 'Machines of Loving Grace' author

Former OpenAI VP of Research who left to start Anthropic. Oscillates between bullishness on AI's transformative upside (Machines of Loving Grace, 2024) and unambiguous warnings about catastrophic risk. Originator of the Responsible Scaling Policy framing.

current CEO and Co-founder, Anthropic
past VP of Research, OpenAI

Profile

expertise

Frontier builder

Currently or recently led training, architecture, or safety work on a frontier model. Hands on the loss curve.

Co-founder and CEO of Anthropic; co-author of GPT-3. Personally involved in scaling-laws research and Anthropic's RSP and interpretability programs.

recognition

Household name

Name recognition outside the AI/CS community: featured by mainstream press, a Wikipedia page in many languages, a published bestseller, or a position the lay public knows.

TIME100 2024. Covered by mainstream press. 'Machines of Loving Grace' essay (2024) reached far past the AI community.

vintage

Scaling era

Worldview formed during GPT-2/3, scaling laws, Anthropic's founding. Pre-ChatGPT but post-deep-learning. The 'scale is all you need' debate is live.

Joined OpenAI 2016 as a researcher; led GPT-3 work. Co-founded Anthropic 2021. His worldview is built on scaling laws and frontier-lab incentives.

Hand-classified. See the board for the criteria and the full grid.

p(doom)

Timelines

  • Powerful AI (Amodei's preferred term for transformative AI) as early as 2026

    stated 2024-10-11

    Machines of Loving Grace · darioamodei.com

Strategy positions

Existential primacy · endorses

Extinction/disempowerment risk overrides ordinary cost-benefit

Signatory to the Statement on AI Risk; treats catastrophic misuse and loss of control as primary downside risks.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

RSP-style commitments · endorses

Responsible scaling policies; labs commit to capability-tied safety

Championed Responsible Scaling Policies: capability thresholds trigger progressively stronger safety commitments.

If we put enough effort into solving these problems, a truly amazing, hopeful future could be available.

Context: Opening framing of Machines of Loving Grace, which argues powerful AI could compress 50–100 years of biological progress into 5–10.

blog · Machines of Loving Grace · darioamodei.com · 2024-10-11 · faithful paraphrase

Race to aligned SI · mixed

Build aligned superintelligence first, before adversaries

Runs a frontier lab on the stated theory that safety-focused actors must be at the frontier; publicly acknowledges the 'we are pushing what we fear' tension.

Powerful AI could appear as early as 2026.
blog · Machines of Loving Grace · darioamodei.com · 2024-10-11 · faithful paraphrase

Closest strategy neighbours

by Jaccard overlap

Other people whose strategy tags overlap with Dario Amodei's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.

  • Ilya Sutskever

    shared 2 · J=0.67

    OpenAI co-founder; now CEO of Safe Superintelligence Inc (SSI)

Record last updated 2026-04-24.