AGI Strategies


Daniel Kang

UIUC; LLM agents and AI security

UIUC assistant professor; researches whether LLM agents can autonomously exploit cybersecurity vulnerabilities. Lead author of papers showing that agents can successfully exploit a meaningful fraction of one-day vulnerabilities.

current Assistant Professor of CS, University of Illinois Urbana-Champaign

Strategy positions

Security mindset · endorses

Treat safety as adversarial security; assume systems break under attack

Argues LLM agents are already capable enough to weaponize publicly disclosed vulnerabilities; calls for evaluations and red-team frameworks that match the speed of capability progress.

We show that GPT-4 agents can autonomously exploit one-day vulnerabilities in real-world systems with high success rates given just a CVE description. The capability gap is closing faster than security research is.
§ paper · LLM Agents can Autonomously Exploit One-day Vulnerabilities · arXiv · 2024-04 · faithful paraphrase

Closest strategy neighbours

by Jaccard overlap

Other people whose strategy tags overlap with Daniel Kang's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.

  • Nicholas Carlini

    shared 1 · J=1.00

    Anthropic adversarial-ML researcher; ex-Google Brain

  • Nicolas Papernot

    shared 1 · J=1.00

    U Toronto / Vector Institute; ML privacy and security

  • Riley Goodside

    shared 1 · J=1.00

    Scale AI; prompt engineering pioneer

  • Simon Willison

    shared 1 · J=1.00

    Independent developer; co-creator of Django; LLM tools

  • Vitaly Shmatikov

    shared 1 · J=1.00

    Cornell Tech; ML privacy and security
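The J values above are Jaccard similarity: the size of the intersection of two people's strategy-tag sets divided by the size of their union. A minimal sketch of that computation, with hypothetical tag names (the actual tag vocabulary is not shown in this record):

```python
def jaccard(a, b):
    """Jaccard similarity J = |A ∩ B| / |A ∪ B| between two tag sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical tag sets: one shared tag and no others gives J = 1/1 = 1.00,
# matching the "shared 1 · J=1.00" lines above.
kang = {"security-mindset"}
carlini = {"security-mindset"}
print(f"shared {len(kang & carlini)} · J={jaccard(kang, carlini):.2f}")
# → shared 1 · J=1.00
```

Note that J=1.00 only means every tag either person has is shared; since overlap is on tag identity rather than stance, two people can score 1.00 while disagreeing on every shared tag.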

Record last updated 2026-04-25.