AGI Strategies

strategy tag

Acceleration.

Build faster; delay costs more than capability

also known as: e/acc, effective accelerationism

stated endorsers

14

+15 tentative · 0 oppose

profiled endorsers

7

248 on the board total

endorser mean p(doom)

5%

n=1 · median 5%

quotes by endorsers

15

just for this tag

principal voices

Highest-recognition profiled endorsers, with ties broken by quote count. Inclusion is not endorsement of the position; it marks who the discourse turns to when the bet is debated.

  • Marc Andreessen

    Household name

  • JD Vance

    Household name

  • Donald Trump

    Household name

  • David Sacks

    Household name

  • Vivek Ramaswamy

    Household name

where the endorsers sit on the board

7 of 248 profiled · 3% of the board

expertise ↓ · recognition → (columns: Household name · Field-leading · Established · Emerging)

Frontier builder · no profiled endorsers
Deep technical · Field-leading: Richard S. Sutton
Applied technical · no profiled endorsers
Policy / meta · Household name: JD Vance, Donald Trump, David Sacks
External-domain expert · no profiled endorsers
Commentator · Household name: Marc Andreessen, Vivek Ramaswamy

Each named entry is one profiled person. Profiled opposers at the same tier would be listed with the opposite position; none are recorded for this bet. Rows with no profiled endorsers are tier combinations the field has not produced for this bet.

Tier mix counts only endorsers (endorses, mixed, conditional, evolved-toward).

expertise mix of endorsers · 7 profiled of 14

Builds frontier systems
0
Deep ML / safety technical
2
Applied or adjacent technical
0
Governance, policy, strategy
3
Expert in another field
0
Public-square commentator
2

recognition mix of endorsers

Mass-public recognition
5
Known across the AI/safety field
2
Recognised inside subfield
0
Newer or less central voice
0

vintage mix · n=7 of 7 profiled with era assigned

Pioneer
0
Symbolic era
1
Pre-deep-learning
0
Deep-learning rise
0
Scaling era
0
Post-ChatGPT
6

Vintage is the era when this person's AI worldview formed, pioneer through post-ChatGPT. A bet held mostly by post-ChatGPT entrants is in a different epistemic state from one held by pre-deep-learning veterans.

People on the record.

29 · 15 tentative

Brian Chau

Executive Director of Alliance for the Future

endorses

Organised lobbying counterweight to AI safety policy in Washington; frames pause/safety advocates as doomers.

Regulation of AI is regulation of inference. Regulation of inference is regulation of thought.
article · Alliance for the Future · Alliance for the Future · 2024 · loose paraphrase

David Luan

Amazon; ex-Adept co-founder

endorses

Argues agentic AI (systems that take actions on the user's behalf) is the next major capability surface; capability progress here will reshape every productivity tool.

We believe the next decade of AI is action, not just text generation. Adept's bet was that the agents that take real-world action will be the most consequential AI systems.
article · Adept AI · Adept AI · 2023 · faithful paraphrase

David Sacks

White House AI & Crypto Czar (2025); VC

endorses

Advocates for aggressive US deregulation of frontier AI; framed the Biden executive order as burdensome and anti-competitive.

“We've got to let the private sector cook.”
article · White House AI czar on race with China: 'We've got to let the private sector cook' · FedScoop · 2025-01 · direct quote

Donald Trump

US President (2017–2021, 2025–)

endorses

Explicit acceleration framing: rescinded prior AI safety-oriented orders and launched large-scale compute investment.

Stargate is a new American company that will invest $500 billion, at least, in AI infrastructure.
article · Stargate announcement · The White House · 2025-01-21 · faithful paraphrase

Eric Jang

1X Technologies VP of AI; ex-Google Brain

endorses

Argues end-to-end neural network policies for humanoids, not classical pipelines, are the path to general-purpose physical AI; capability progress will follow the same scaling pattern as language models.

Humanoid robots running large neural network policies are the embodied analogue of GPT-style language models. The same scaling laws that produced reasoning in language are starting to produce skill acquisition in physical action.
blog · Eric Jang, evjang.com · evjang.com · 2024 · faithful paraphrase

Guillaume Verdon

Founder of Extropic and of the e/acc movement; aka 'Beff Jezos'

endorses

Frames accelerating compute and AI capability as aligned with the thermodynamic direction of life.

Effective accelerationism wants to propel humanity up the Kardashev gradient.

Context: Lex Fridman podcast episode 407.

video · Guillaume Verdon: Beff Jezos, E/acc Movement, Physics, Computation & AGI · Lex Fridman Podcast · 2023-12-29 · faithful paraphrase

JD Vance

US Vice President; AI 'opportunity, not safety' advocate

endorses

Publicly rejected safety-first framings at the Paris AI Action Summit; aligned US policy with acceleration.

“I'm not here this morning to talk about AI safety... I'm here to talk about AI opportunity.”
talk · Remarks by the Vice President at the Paris AI Action Summit · The White House · 2025-02-11 · direct quote

John Carmack

Keen Technologies founder; former consulting CTO at Meta (Oculus)

endorses

Argues AGI is a tractable engineering problem with current architectures; founded Keen on the thesis that a small team with focused effort can make meaningful progress on general intelligence.

I think a single individual could probably do the entire AGI training pipeline. The bottleneck is not budget or compute, it is engineering insight.
interview-transcript · John Carmack on AGI and Keen Technologies · Lex Fridman Podcast · 2023 · faithful paraphrase

Marc Andreessen

Co-founder of Andreessen Horowitz; techno-optimist manifesto author

endorses

Argues any deceleration of AI costs lives via foregone medical and scientific progress.

“Any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder.”

Context: The Techno-Optimist Manifesto.

blog · The Techno-Optimist Manifesto · a16z · 2023-10-16 · direct quote
“We believe that we are, have been, and will always be the masters of technology, not mastered by technology.”
blog · The Techno-Optimist Manifesto · a16z · 2023-10-16 · direct quote

Mike Solana

Pirate Wires founder; tech contrarian

endorses

Frames AI safety advocates as captured by political and economic incumbency. Pro-acceleration cultural voice.

The 'AI is going to kill us' people are remarkably aligned with the 'AI should be regulated by us' people. Convenient.
blog · Pirate Wires · Pirate Wires · 2024 · loose paraphrase

Richard S. Sutton

RL pioneer; 2024 Turing Award recipient

endorses

Argues general methods that scale with computation will continue to outperform clever human-engineered approaches; views the bitter lesson as the dominant pattern of AI history.

“The bitter lesson is based on the historical observations that 1) AI researchers have often tried to build knowledge into their agents, 2) this always helps in the short term and is personally satisfying to the researcher, but 3) in the long run it plateaus and even inhibits further progress, and 4) breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning.”
blog · The Bitter Lesson · incompleteideas.net · 2019-03-13 · direct quote

Sébastien Bubeck

OpenAI; lead author of 'Sparks of AGI' paper

mixed

Argues GPT-4 already exhibits early AGI behaviors and that capability progress will continue rapidly; less explicit on safety strategy.

“We contend that GPT-4 could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.”
§ paper · Sparks of Artificial General Intelligence: Early Experiments with GPT-4 · arXiv · 2023-03 · direct quote

Sergey Levine

UC Berkeley; robot learning, deep RL

mixed

Argues physical-world robotics is the bottleneck to general AI usefulness; less explicit on x-risk strategy but views capability progress as the priority.

If we want machines that can do useful things in the physical world, we need to scale up real-world data and self-supervision. Internet text gets you far, but not into a kitchen.
§ paper · Open X-Embodiment: Robotic Learning Datasets and RT-X Models · arXiv · 2024 · faithful paraphrase

Vivek Ramaswamy

Former US presidential candidate; AI deregulation advocate

endorses

Argues against AI regulation; frames AI safety advocacy as a form of regulatory capture.

AI regulation is a Trojan horse for incumbent protection.
tweet · Vivek Ramaswamy on X · X/Twitter · 2024 · loose paraphrase

tentative · 15

Below are entries flagged tentative: assignments inferred from a passing remark, hype quote, or paper abstract rather than a clear strategy statement. They are flagged so a stronger primary source can replace them later.

Aditya Ramesh

OpenAI DALL·E creator

mixed · tentative

Pioneered the unification of text and image generation in single foundation models; positioned as a capability-driven researcher more than a public safety voice.

DALL·E 2 generates more realistic and accurate images with 4x greater resolution. The visual reasoning that emerges from large multimodal training continues to surprise us.
article · DALL·E 2 · OpenAI · 2022-04 · faithful paraphrase

Albert Gu

CMU; Mamba and structured state-space models

endorses · tentative

Argues structured state-space models can scale to language with linear time and memory, breaking the quadratic attention bottleneck and reshaping where capability research is going.

“We propose Mamba, a selective state space model that achieves Transformer-level performance with linear scaling in sequence length.”
§ paper · Mamba: Linear-Time Sequence Modeling with Selective State Spaces · arXiv · 2023-12 · direct quote

Alec Radford

OpenAI; lead author of GPT, Whisper, CLIP

mixed · tentative

Public statements are rare; the position is inferred from research output, which emphasizes scaling generative pretraining and unifying modalities into a single representation.

“We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text.”
§ paper · Improving Language Understanding by Generative Pre-Training · OpenAI · 2018-06 · direct quote

Ashish Vaswani

Co-founder Essential AI; lead author of 'Attention Is All You Need'

endorses · tentative

Position inferred from career trajectory (Transformer architect, Essential AI co-founder building frontier tooling); no public position statement on AI strategy is on record. Quote below is from the Transformer paper, which is technical, not strategic.

“We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely.”
§ paper · Attention Is All You Need · arXiv / NeurIPS · 2017-06 · direct quote

Charlie Snell

UC Berkeley; LLM efficiency and inference compute

endorses · tentative

Argues inference-time compute is a separable axis of capability scaling that has been underweighted; smaller models with more 'thinking' can match larger ones on hard problems.

Test-time compute can be more effective than scaling model size for certain reasoning tasks. The trade-off between training-time and test-time scaling is far richer than headline metrics suggest.
§ paper · Scaling LLM Test-Time Compute Optimally Can be More Effective than Scaling Model Parameters · arXiv / DeepMind · 2024-08 · faithful paraphrase

Denny Zhou

Google DeepMind; reasoning team lead

endorses · tentative

Argues reasoning (via chain-of-thought, self-consistency, and tree-of-thought) is the next major capability surface beyond raw scale; leads DeepMind work on this.

Reasoning is one of the most important capabilities of LLMs. Chain-of-thought is the simplest demonstration that scale plus reasoning prompts unlocks much more than either alone.
article · Denny Zhou, Google DeepMind · Google Research · 2023 · faithful paraphrase

Jakob Uszkoreit

Inceptive co-founder; Transformer co-author

mixed · tentative

Position inferred from work on Transformer-derived RNA design; no explicit AI-strategy statement on record. Quote describes the technical bet, not the strategic one.

We're using the same architecture that powers language models to design RNA medicines. The substrate matters, but the underlying learning machinery generalizes.
article · Inceptive: AI-designed RNA · Inceptive · 2023 · faithful paraphrase

Jason Wei

OpenAI; chain-of-thought prompting

endorses · tentative

Argues scaling and emergent capabilities are the primary axis of AI progress; views capability research as the engine of useful applications.

“Chain-of-thought prompting elicits reasoning in large language models. We find that scaling matters: chain-of-thought is an emergent ability of scale.”
§ paper · Chain-of-Thought Prompting Elicits Reasoning in Large Language Models · arXiv / Google Brain · 2022-01 · direct quote

John von Neumann

Mathematician; originator of the singularity concept (1903–1957)

mixed · tentative

Anticipated that the accelerating pace of technological progress would reach an essential singularity beyond which human affairs as we know them could not continue. Treated this as descriptive rather than prescriptive.

“The accelerating progress of technology and changes in the mode of human life give the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”

Context: Reported by Stanislaw Ulam in his 1958 obituary of von Neumann; widely cited as the first articulation of a technological singularity.

§ paper · Tribute to John von Neumann · Bulletin of the American Mathematical Society · 1958 · direct quote

Niki Parmar

Co-founder Essential AI; Transformer co-author

endorses · tentative

Position inferred from being a Transformer co-author and Essential AI co-founder; no public position statement on AI strategy is on record. The quote below is from the Transformer paper and is technical, not strategic.

“Multi-headed self-attention enables the model to jointly attend to information from different representation subspaces at different positions.”
§ paper · Attention Is All You Need · arXiv / NeurIPS · 2017-06 · direct quote

Oriol Vinyals

Google DeepMind; Gemini technical lead

endorses · tentative

Position inferred from research portfolio (AlphaStar, Gemini lead); capability scaling is the implicit research bet. No primary-source AI-strategy statement on record.

Our agent, AlphaStar, learned by playing against itself, scaling self-play to a domain at the edge of professional human ability.
§ paper · Grandmaster level in StarCraft II using multi-agent reinforcement learning · Nature · 2019-10 · faithful paraphrase

Prafulla Dhariwal

OpenAI; GPT-4o lead

endorses · tentative

Architect of OpenAI's unified multimodal flagship; argues unified end-to-end models will replace pipelined modality-specific systems.

GPT-4o is our new flagship model that can reason across audio, vision, and text in real time. The end-to-end training over modalities is what unlocks low-latency interaction.
article · Hello GPT-4o · OpenAI · 2024-05 · faithful paraphrase

Stefano Ermon

Stanford; generative models pioneer

endorses · tentative

Pioneered score-based generative models that became the technical backbone of diffusion-driven image, audio, and video synthesis; views capability research as essential to safe deployment.

Score-based generative models learn the gradient of the log data distribution. The framework unifies many seemingly disparate generative model families and underpins modern diffusion models.
§ paper · Generative Modeling by Estimating Gradients of the Data Distribution · arXiv / NeurIPS · 2019 · faithful paraphrase

Tim Brooks

Google DeepMind Veo; ex-OpenAI Sora research lead

endorses · tentative

Argues video generation is on a trajectory similar to language modeling, with qualitative improvements every few months, and that the next bottleneck will be control rather than fidelity.

Sora generates video by predicting future frames from a sequence of input frames. This formulation lets us scale data and compute in the same way that text models do.
article · Sora: Creating video from text · OpenAI · 2024-02 · faithful paraphrase

Tri Dao

Princeton; Together AI; FlashAttention and Mamba

endorses · tentative

Argues throughput and efficiency improvements, not new architectures alone, are doing most of the heavy lifting in capability progress; positions Together AI's open infrastructure on this thesis.

FlashAttention computes attention with no approximation in linear memory, by being aware of GPU memory hierarchy. The same engineering carefulness can keep delivering capability for years.
§ paper · FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness · arXiv / NeurIPS · 2022-05 · faithful paraphrase