AGI Strategies

Person

Charlie Snell

UC Berkeley; LLM efficiency and inference compute

UC Berkeley PhD researcher whose 2024 paper showed that scaling test-time compute can outperform scaling model size for certain reasoning tasks, a major shift in how 'capability' is conceived.

Current: PhD Researcher, UC Berkeley

Strategy positions

Acceleration · endorses · tentative

Build faster; delay costs more than capability

Argues inference-time compute is a separable axis of capability scaling that has been underweighted; smaller models with more 'thinking' can match larger ones on hard problems.

Test-time compute can be more effective than scaling model size for certain reasoning tasks. The trade-off between training-time and test-time scaling is far richer than headline metrics suggest.
§ paper · Scaling LLM Test-Time Compute Optimally Can be More Effective than Scaling Model Parameters · arXiv / DeepMind · 2024-08 · faithful paraphrase
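
The paper studies how to allocate a fixed test-time compute budget (for example, over verifier-guided search and sequential revisions). As a minimal sketch of the general idea only, not the paper's method, the best-of-N loop below shows how extra inference compute can be traded for answer quality. The generate and score functions are hypothetical dummy stand-ins for an LLM sampler and a learned verifier, not APIs from the paper or any library.

    import random

    # Hypothetical stand-ins: a real setup would call an LLM sampler and a
    # learned verifier/reward model. These names are assumptions, not APIs
    # from Snell et al. or any library.
    def generate(prompt: str) -> str:
        """Dummy sampler: returns one random candidate answer."""
        return f"answer-{random.randint(0, 9)}"

    def score(prompt: str, answer: str) -> float:
        """Dummy verifier: rates a candidate; higher is better."""
        return random.random()

    def best_of_n(prompt: str, n: int) -> str:
        """More test-time compute = more samples: draw n candidates and
        keep the one the verifier rates highest."""
        candidates = [generate(prompt) for _ in range(n)]
        return max(candidates, key=lambda a: score(prompt, a))

    # Increasing n spends more inference compute per query; the paper's
    # point is that this axis can beat spending the same compute on a
    # bigger model for certain reasoning tasks.
    print(best_of_n("What is 17 * 23?", n=8))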

Closest strategy neighbours

by Jaccard overlap

Other people whose strategy tags overlap with Charlie Snell's. Overlap is computed on tag identity, not stance, so opposites can show up if they reference the same tags; a sketch of the computation follows the list.

  • Aditya Ramesh

    shared 1 · J=1.00

    OpenAI DALL·E creator

  • Albert Gu

    shared 1 · J=1.00

    CMU; Mamba and structured state-space models

  • Alec Radford

    shared 1 · J=1.00

    OpenAI; lead author of GPT, Whisper, CLIP

  • Ashish Vaswani

    shared 1 · J=1.00

    Co-founder Essential AI; lead author of 'Attention Is All You Need'

  • Brian Chau

    shared 1 · J=1.00

    Executive Director of Alliance for the Future

  • David Luan

    shared 1 · J=1.00

    Amazon; ex-Adept co-founder
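
As noted above, a minimal sketch of the overlap computation, assuming each person is reduced to a set of strategy tags (the tag name below is a made-up placeholder):

    def jaccard(tags_a: set[str], tags_b: set[str]) -> float:
        """Jaccard similarity |A ∩ B| / |A ∪ B|; defined as 0.0 for two empty sets."""
        union = tags_a | tags_b
        return len(tags_a & tags_b) / len(union) if union else 0.0

    # With exactly one tag each, and that tag shared, J = 1/1 = 1.00,
    # which is why every "shared 1" neighbour above scores J=1.00.
    print(jaccard({"acceleration"}, {"acceleration"}))  # 1.0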

Record last updated 2026-04-25.