AGI Strategies

person

Aleksander Mądry

MIT; ex-OpenAI head of preparedness

MIT professor of computer science specializing in robust machine learning. Led the OpenAI Preparedness Team in 2023–24 to evaluate frontier model risks across CBRN, cyber, and persuasion domains.

current Cadence Design Systems Professor of Computing, MIT
past Head of Preparedness, OpenAI (2023-10–2024-08)

Strategy positions

Evals-driven · endorses

Capability/risk evals gate deployment; evals are the load-bearing artefact

Argues frontier-AI risk needs to be measured systematically before deployment and that capability evaluations are the precondition for any meaningful safety commitment.

We need to make our understanding of frontier model risks empirical, not narrative. The Preparedness Framework is about measuring danger before it manifests.
article · OpenAI Preparedness Framework (Beta) · OpenAI · 2023-12 · faithful paraphrase

Closest strategy neighbours

by Jaccard overlap

Other people whose strategy tags overlap with Aleksander Mądry's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.

  • Alex Meinke

    shared 1 · J=1.00

    Apollo Research; deceptive alignment evaluations

  • Ali Rahimi

    shared 1 · J=1.00

    Google Brain ML researcher; 'Alchemy' speech

  • Anna Rogers

    shared 1 · J=1.00

    IT University of Copenhagen; LLM benchmarking critique

  • Arati Prabhakar

    shared 1 · J=1.00

    White House OSTP director (2022–2025)

  • Beth Barnes

    shared 1 · J=1.00

    Founder of METR; dangerous capability evaluations

  • Bo Li

    shared 1 · J=1.00

    UChicago / UIUC; AI safety evaluations
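The overlap score above is plain Jaccard similarity computed over each person's set of strategy tags. A minimal sketch, assuming tag sets are plain string sets (the tag name below is a hypothetical illustration, not the record's actual tag identifier):

```python
def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity: |A ∩ B| / |A ∪ B| (0.0 for two empty sets)."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical tag sets: each neighbour listed above shares a single
# strategy tag with this record, so shared = 1 and J = 1/1 = 1.00.
madry = {"evals-driven"}
neighbour = {"evals-driven"}
print(f"shared {len(madry & neighbour)} · J={jaccard(madry, neighbour):.2f}")
# → shared 1 · J=1.00
```

Because the score compares tag identity only, two people who hold opposite stances on the same tag still get J=1.00, which is why the caption warns that opposites can appear.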

Record last updated 2026-04-25.