AGI Strategies

person

Vitaly Shmatikov

Cornell Tech; ML privacy and security

Professor at Cornell Tech; longtime researcher on privacy attacks against ML systems. Co-author of foundational membership-inference and model-inversion papers.

current Professor of Computer Science, Cornell Tech

Strategy positions

Security mindset · endorses

Treat safety as adversarial security; assume systems break under attack

Argues ML systems leak training data in predictable ways; the field treats privacy as an afterthought when it should be foundational.

We can extract verbatim training examples from large language models with no special access. Privacy in ML is not a future problem; it is a present, pervasive failure.
§ paper · Extracting Training Data from Large Language Models · arXiv / USENIX Security · 2021 · faithful paraphrase
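The extraction claim above rests on a simpler primitive: models assign noticeably lower loss to sequences they memorized during training, and an attacker can threshold on that loss to infer membership. The sketch below illustrates the loss-based membership signal with a toy add-one-smoothed bigram language model standing in for an LLM; the corpus strings, the `is_member` helper, and the threshold are illustrative assumptions, not details from the cited paper (which ranks GPT-2 generations by perplexity and related metrics).

```python
import math
from collections import Counter, defaultdict

def train_bigram(corpus):
    # Count next-character frequencies per preceding character.
    counts = defaultdict(Counter)
    for text in corpus:
        for a, b in zip(text, text[1:]):
            counts[a][b] += 1
    return counts

def neg_log_likelihood(model, text, vocab_size=128):
    # Average per-bigram negative log-likelihood with add-one smoothing.
    nll = 0.0
    for a, b in zip(text, text[1:]):
        c = model[a]
        total = sum(c.values()) + vocab_size
        nll += -math.log((c[b] + 1) / total)
    return nll / max(len(text) - 1, 1)

def is_member(model, text, threshold=4.5):
    # Hypothetical cutoff; real attacks calibrate the threshold
    # against reference (shadow) data rather than fixing it.
    return neg_log_likelihood(model, text) < threshold

# Toy data: the "training set" and an unseen string.
members = ["the secret key is 12345", "alice sent bob a message"]
nonmembers = ["completely unrelated zebra text"]
model = train_bigram(members)
```

Members contain only bigrams the model has seen, so their smoothed probabilities are strictly higher than those of mostly-unseen bigrams, which is exactly the gap a membership-inference attack exploits.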

Closest strategy neighbours

by Jaccard overlap

Other people whose strategy tags overlap with Vitaly Shmatikov's. Overlap is computed on tag identity, not stance, so people with opposing positions can appear if they reference the same tags.

  • Daniel Kang

    shared 1 · J=1.00

    UIUC; LLM agents and AI security

  • Nicholas Carlini

    shared 1 · J=1.00

    Anthropic adversarial-ML researcher; ex-Google Brain

  • Nicolas Papernot

    shared 1 · J=1.00

    U Toronto / Vector Institute; ML privacy and security

  • Riley Goodside

    shared 1 · J=1.00

    Scale AI; prompt engineering pioneer

  • Simon Willison

    shared 1 · J=1.00

    Independent developer; co-creator of Django; LLM tools

Record last updated 2026-04-25.