AGI Strategies

strategy tag

Governance first.

Lead with regulation, treaties, liability regimes

stated endorsers

252

no opposers yet

profiled endorsers

53

248 on the board total

endorser mean p(doom)

35%

n=2 · median 35%

quotes by endorsers

262

just for this tag

principal voices

Highest-recognition profiled endorsers, ties broken by quote count. Inclusion is not endorsement of the position; it is recognition of who the discourse turns to when the bet is debated.

  • Yoshua Bengio

    Household name

  • Demis Hassabis

    Household name

  • Sam Altman

    Household name

  • Gary Marcus

    Household name

  • Mustafa Suleyman

    Household name

where the endorsers sit on the board

53 of 248 profiled · 21% of the board

expertise ↓ · recognition →: Household name · Field-leading · Established · Emerging
Frontier builder
  • Demis Hassabis
  • Mustafa Suleyman
  • Suchir Balaji
  • Jeff Dean
  • William Saunders
Deep technical
  • Yoshua Bengio
  • Gary Marcus
  • Timnit Gebru
  • Joy Buolamwini
  • Vint Cerf
  • Margaret Mitchell
  • Abeba Birhane
  • Edward Felten
  • David Krueger
Applied technical
  • Cathy O'Neil
Policy / meta
  • Sam Altman
  • Chuck Schumer
  • Rishi Sunak
  • Sundar Pichai
  • Ursula von der Leyen
  • Kamala Harris
  • Joe Biden
  • MacKenzie Scott
  • Andrew Yang
  • Evan Williams
  • Olaf Scholz
  • Kara Swisher
  • Bret Taylor
  • Tony Blair
  • Holden Karnofsky
  • Jack Clark
  • Helen Toner
  • Jen Easterly
  • Jason Matheny
  • Gillian Hadfield
  • Alondra Nelson
  • Frank Pasquale
  • Luciano Floridi
  • Amy Zegart
  • Ted Lieu
  • Stuart Buck
  • Mireille Hildebrandt
External-domain expert
  • Pope Francis
  • Daron Acemoglu
  • Shoshana Zuboff
  • Maria Ressa
  • Amartya Sen
  • Joseph Stiglitz
  • Carl Benedikt Frey
  • Kate Darling
Commentator

Each face is one profiled person. Cell shade intensifies with endorser density. Faces marked × are profiled opposers: same tier, opposite position. Empty cells mark tier combinations the field has not produced for this bet.
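The cell counts behind this matrix are a plain cross-tabulation of two tiers per person. A minimal sketch, with a hypothetical four-entry board standing in for the live data (the names and tier assignments here are illustrative only):

```python
from collections import Counter

# Hypothetical miniature of the board: (person, expertise tier, recognition tier).
board = [
    ("Demis Hassabis", "Frontier builder", "Household name"),
    ("Yoshua Bengio", "Deep technical", "Household name"),
    ("Gary Marcus", "Deep technical", "Household name"),
    ("Helen Toner", "Policy / meta", "Field-leading"),
]

def cross_tab(entries):
    """Count profiled endorsers in each (expertise, recognition) cell."""
    return Counter((expertise, recognition) for _, expertise, recognition in entries)

cells = cross_tab(board)
# Cell shade tracks the count; a key Counter never saw is an empty cell (0).
print(cells[("Deep technical", "Household name")])  # 2
print(cells[("Commentator", "Emerging")])           # 0
```

`Counter` returns 0 for absent keys, which maps directly onto the "empty cell" reading above: no lookup table of empty combinations is needed.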

also held by these endorsers

What other strategies the same people endorse. Behavioural signal of compatibility, not a declared rule. A high share means the two positions are routinely held together.

Compare this list to the declared relations matrix. Where they differ, the data reveals a pairing the framework doesn't name yet; the global co-endorsement view ranks all pairs.

Tier mix counts only endorsers (endorses, mixed, conditional, evolved-toward).
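The co-endorsement "share" described above reduces to set overlap divided by one strategy's endorser base. A minimal sketch under that assumption (the function name and sample endorser IDs are made up for illustration):

```python
def co_endorsement_share(endorsers_a, endorsers_b):
    """Fraction of strategy A's endorsers who also endorse strategy B.

    Directional by construction: share(A, B) != share(B, A) when the
    two endorser bases differ in size.
    """
    a, b = set(endorsers_a), set(endorsers_b)
    if not a:
        return 0.0
    return len(a & b) / len(a)

# Illustrative: half of A's endorsers also endorse B.
print(co_endorsement_share({"p1", "p2"}, {"p2", "p3"}))  # 0.5
```

Ranking all strategy pairs by this share, as the global co-endorsement view does, is then just sorting the pairwise values.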

expertise mix of endorsers · 53 profiled of 252

Builds frontier systems
5
Deep ML / safety technical
10
Applied or adjacent technical
2
Governance, policy, strategy
28
Expert in another field
8
Public-square commentator
0

recognition mix of endorsers

Mass-public recognition
28
Known across the AI/safety field
18
Recognised inside subfield
7
Newer or less central voice
0

vintage mix · n=53 of 53 profiled with era assigned

Pioneer
3
Symbolic era
1
Pre-deep-learning
6
Deep-learning rise
18
Scaling era
11
Post-ChatGPT
14

Vintage is the era when this person's AI worldview formed, from pioneer through post-ChatGPT. A bet held mostly by post-ChatGPT entrants is in a different epistemic state from one held by pre-deep-learning veterans.

People on the record.

252
Abeba Birhane

Mozilla Foundation senior advisor; AI ethics researcher

endorses

Argues dataset-level audits are the tractable governance lever and that 'AGI' rhetoric is harmful to minoritised users.

The dataset is the system. Audit the dataset.
article · Abeba Birhane, research · abeba-birhane.com · 2023 · faithful paraphrase
Adam Tooze

Columbia historian; Chartbook newsletter

endorses

Argues AI governance is fundamentally a question of macroeconomic and geopolitical strategy; treats the China-U.S. tech competition as the structural frame within which AI policy will be set.

AI is unfolding within a configuration of state power, capital, and infrastructure that is already in motion. Treating it as a free-floating technology to be governed in the abstract misses where the action is.
blog · Chartbook by Adam Tooze · Substack · 2024 · faithful paraphrase

Adrian Weller

Cambridge professor; Alan Turing Institute fellow

endorses

Bridges technical ML research and UK government AI policy work; argues evidence-based regulation is the durable framework.

Evidence-based AI policy beats principles-based AI policy when the evidence is there. We just have to invest in producing the evidence.
article · Adrian Weller, Alan Turing Institute · Alan Turing Institute · 2024 · loose paraphrase
Adrienne LaFrance

The Atlantic executive editor; technology critic

endorses

Frames AI governance around democratic epistemics and civic resilience rather than around extinction or optimism.

The question isn't whether AI will change democracy. It is whether we will have functioning democracy afterwards.
article · The Atlantic editorial direction · The Atlantic · 2024 · loose paraphrase

Adrienne Williams

Former Amazon warehouse worker; AI labour activist

endorses

First-hand voice for workers surveilled by AI; argues those affected should lead policy.

I was the AI's training data. The people building AI for warehouses have never worked in one.
article · AI Now Institute · AI Now Institute · 2024 · loose paraphrase

Akash Wasil

Encode Justice; AI policy advocate

endorses

Argues U.S. policy must catch up to capability progress; supports legally enforceable safety standards rather than purely voluntary frameworks.

We are losing the race between capability and policy. Legally enforceable safety standards, with real consequences for violations, are the only way to close that gap.
article · Center for AI Policy · Center for AI Policy · 2024 · faithful paraphrase

Albert Fox Cahn

Surveillance Technology Oversight Project (S.T.O.P.) founder

endorses

Litigates against AI-enabled surveillance; argues current US law allows surveillance practices that would have been unthinkable a decade ago.

AI surveillance is rolling out faster than the laws to govern it. The gap is the danger.
article · S.T.O.P. · S.T.O.P. · 2024 · loose paraphrase
Alex 'Sandy' Pentland

MIT Connection Science director; computational social science

endorses

Argues data is collective property and should be governed via 'data cooperatives' rather than corporate ownership.

Data should be treated as a community asset. Data cooperatives are the institutional form that follows from that.
book · Social Physics · Penguin · 2014 · faithful paraphrase

Alex Kantrowitz

Big Technology podcast host; tech journalist

mixed

Reports AI from a measured tech-business angle; pushes CEOs on accountability without being captured.

The AI industry has not earned the public trust it is asking for. The story is far from settled.
blog · Big Technology · Big Technology · 2024 · loose paraphrase

Alex Tamkin

Anthropic societal impact researcher

endorses

Publishes on how AI is actually deployed and what the societal impact patterns are: concrete data rather than speculative framings.

Measuring how models are actually used is the prerequisite for credible societal impact claims.
blog · Anthropic societal impact research · Anthropic · 2024 · loose paraphrase

Allan Dafoe

DeepMind Frontier Safety and Governance lead

endorses

Argues AI governance must reckon with strategic incentives: lab races, great-power competition, and institutional path dependence.

AI governance needs to be treated as a political-economy problem, not only a technical compliance problem.
§ paper · AI Governance: A Research Agenda · Future of Humanity Institute · 2018 · faithful paraphrase

Allen Gunn

Executive Director of Aspiration Tech

endorses

Organises civil-society-side AI governance work; champions participatory governance over expert-led regulation.

AI governance has to include the people most affected by AI. Otherwise it's just self-regulation.
article · Aspiration Tech · Aspiration · 2023 · loose paraphrase
Alondra Nelson

Former Biden OSTP deputy director; architect of the AI Bill of Rights

endorses

Advocated civil-rights-framed AI governance: the AI Bill of Rights proposes five principles (safe systems, algorithmic discrimination protections, data privacy, notice, and human alternatives).

“The Blueprint for an AI Bill of Rights is for everyone who interacts daily with these powerful technologies, and every person whose life has been altered by unaccountable algorithms.”
article · Blueprint for an AI Bill of Rights · The White House · 2022-10-04 · direct quote
Amartya Sen

Harvard economist; capability approach pioneer

endorses

Argues AI evaluation must be grounded in human capabilities, what people can do and become, not just narrow technical or economic metrics.

Development is about expanding capabilities. AI should be evaluated by how it expands human capabilities.
article · Amartya Sen, Harvard · Harvard Scholar · 2024 · loose paraphrase
Amba Kak

Co-director of the AI Now Institute

endorses

Argues AI governance is primarily a political-economy problem and that reform must go beyond procedural 'safety' framings.

AI policy has been captured by the industry being regulated. The question is who governs the governors.
article · AI Now Institute · AI Now Institute · 2023 · loose paraphrase
Amy Zegart

Stanford Hoover senior fellow; national security and AI

endorses

Argues AI is transforming intelligence; national security institutions must adapt to AI as infrastructure.

Intelligence agencies are now picking through huge haystacks for one or two needles of insight, and that's precisely the kind of project at which AI excels.
book · Spies, Lies, and Algorithms · Princeton University Press · 2022 · faithful paraphrase

Andrew Trask

Founder of OpenMined; privacy-preserving AI

endorses

Argues privacy-preserving AI is the technical substrate for AI that can be both open and safe.

Structured transparency (letting outsiders verify that an AI system has the properties it claims without exposing the data) is the missing layer of AI governance.
article · OpenMined · OpenMined · 2024 · loose paraphrase
Andrew Yang

Former US presidential candidate; Forward Party founder

endorses

Advocates for UBI and new labour-market institutions in response to AI automation; signed the Pause letter.

Automation is not on its way. It's here. We need a Freedom Dividend to respond.
book · Andrew Yang, The War on Normal People · Hachette · 2019 · faithful paraphrase
Anil Dash

Glitch former CEO; technology culture writer

endorses

Argues tech regulation should be grounded in civil-society frameworks; has criticized 'AI' as a marketing category that obscures specific harms.

'AI' is marketing. The actual question is whose data, whose labour, and whose rules.
blog · Anil Dash blog · anildash.com · 2024 · loose paraphrase
Anita Allen

UPenn law professor; privacy and AI

endorses

Argues legal-philosophical privacy frameworks are foundational to AI governance, not just technical privacy mechanisms.

Privacy theory is not a luxury for AI. It is the precondition of AI policy that protects human dignity.
article · Anita Allen, UPenn Carey Law · UPenn Carey Law School · 2024 · loose paraphrase
Anna Bacciarelli

Human Rights Watch senior researcher; formerly Amnesty International

endorses

Argues AI governance must be grounded in existing international human-rights law, with particular focus on non-discrimination and surveillance.

The Toronto Declaration sets out tangible and actionable standards for states and the private sector to uphold the principles of equality and non-discrimination under binding human rights laws.
article · Access Now and Amnesty International launch Toronto Declaration · Access Now · 2018-05-16 · faithful paraphrase
Anna Eshoo

Former US Representative (CA); AI Foundation Model Transparency Act sponsor

endorses

Architect of the AI Foundation Model Transparency Act; advocates for structured transparency over permission-based regulation.

“Transparency into how AI models are trained and what data is used to train them is critical for consumers and policy makers.”
article · Eshoo, Beyer Introduce Landmark AI Regulation Bill · Office of Congresswoman Anna Eshoo · 2023-12 · direct quote

Anna Makanju

OpenAI VP of Global Impact; policy veteran

endorses

Argues OpenAI engages proactively with governments and advocates measured, risk-tiered regulation.

AI policy needs to be built by people who understand both the technology and the geopolitics.
article · OpenAI policy leadership · OpenAI · 2024 · loose paraphrase
Anthony Albanese

Prime Minister of Australia (2022–)

mixed

Cautious supporter of AI regulation; aligns Australia with a mid-Atlantic position on frontier-model governance: stronger than the U.S., softer than the EU.

Australia must shape the rules around AI rather than be a passive recipient of them. That means working with both our allies and our region.
article · Australian Government's response to safe and responsible AI in Australia consultation · Australian Department of Industry, Science and Resources · 2024 · faithful paraphrase
Anu Bradford

Columbia Law professor; 'The Brussels Effect' author

endorses

Argues the EU AI Act will propagate globally via the Brussels Effect, regardless of US action.

The Brussels Effect operates on AI as on every other regulated technology: when the EU regulates a global market, that regulation becomes global standard.
book · The Brussels Effect · Oxford University Press · 2023 · faithful paraphrase
Arvind Krishna

CEO of IBM

mixed

Supports accountability-focused AI regulation; opposes rules that create unpredictability for business.

Companies that put out AI models should be held accountable for their models.
video · IBM CEO on AI and regulation · Bloomberg · 2023 · faithful paraphrase
Azeem Azhar

Exponential View founder; tech-economy analyst

endorses

Argues institutional capacity to absorb AI is the binding constraint on whether AI is net positive.

We have exponential technology in linear institutions. The gap is the governance problem.
book · The Exponential Age · Diversion Books · 2021-09-07 · faithful paraphrase

Barath Raghavan

USC professor; digital infrastructure and AI energy

endorses

Argues AI energy consumption must be treated as first-class infrastructure cost, with accountability for usage.

If AI consumes 10% of world electricity, we should decide that consciously, not as an emergent property.
article · Barath Raghavan, USC · USC · 2024 · loose paraphrase

Ben Buchanan

Former White House AI Special Advisor (2021–2025)

endorses

Architect of chip export controls and the 2023 AI executive order; argues national security and AI safety are inseparable.

Export controls are the most important tool the United States has on frontier AI.
article · Ben Buchanan on AI and national security · ChinaTalk · 2024 · faithful paraphrase
Brad Smith

Microsoft Vice Chair and President

endorses

Supports licensing of frontier models, export controls on advanced chips, and an internationally coordinated oversight regime.

We need to slow down, not stop, so that we can put in place the guardrails that a technology this powerful demands.
blog · Brad Smith on Microsoft's AI governance posture · Microsoft · 2023 · loose paraphrase
Brad Templeton

Long-time tech journalist; self-driving cars critic

mixed

Bridges technical engineering and AI policy on transport. Argues self-driving safety claims need real-world validation, not just simulation.

Real-world miles matter. AVs that are safer in simulation than on roads need scrutiny.
blog · Brad Templeton on autonomous vehicles · templetons.com · 2024 · loose paraphrase
Brando Benifei

MEP; EU AI Act co-rapporteur

endorses

Argued for stricter rules on foundation models and biometric surveillance during the AI Act trilogues; framed AI regulation as a fundamental-rights protection mechanism.

AI must be human-centric and rights-based. Without rules, the technology will reshape our societies according to whatever values its developers happen to hold.
article · Brando Benifei on the AI Act · European Parliament · 2023-12 · faithful paraphrase
Bret Taylor

Chairman of OpenAI; co-CEO of Sierra

endorses

As OpenAI Chair, has emphasised structured governance and board independence; co-led the post-Altman-saga reform.

OpenAI's mission requires governance that can survive disagreement among its board.
article · OpenAI board statement · OpenAI · 2023-11-29 · loose paraphrase
Bruce Schneier

Security guru; AI security and democracy critic

endorses

Argues AI security and AI democracy questions overlap; advocates structural changes to platform power.

We're being conditioned by AI in ways we don't yet understand. The political-economic question is who builds the AI we are conditioned by.
blog · Schneier on Security · Schneier on Security · 2024 · loose paraphrase

Carina Prunkl

Utrecht AI ethics researcher; former FHI

endorses

Argues AI ethics must engage more robustly with broader structural and political factors, not just algorithmic properties.

AI ethics has largely focused on algorithmic properties; we need to zoom out to structural and political context.
§ paper · Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society · arXiv · 2020 · faithful paraphrase
Carl Benedikt Frey

Oxford economist; 'The Future of Employment' author

endorses

Argues labour-market impact demands policy response; complements the AI x-risk agenda with economic welfare concerns.

“About 47 percent of total US employment is in the high-risk category, meaning associated jobs could be automated in the next decade or two.”

Context: From the landmark 2013 Frey–Osborne paper.

§ paper · The Future of Employment · Oxford Martin School · 2013 · direct quote

Carlos Ignacio Gutierrez

Future of Life Institute AI policy researcher

endorses

Maps the comparative AI legislative landscape across jurisdictions.

Without comparative AI legislative analysis, jurisdictions repeat each other's mistakes.
article · FLI AI policy · Future of Life Institute · 2024 · loose paraphrase
Carme Artigas

Spanish AI and Digital Agenda Secretary; AI Advisory Body co-chair

endorses

Chief negotiator of the EU AI Act and co-chair of the UN AI Advisory Body's global-governance work.

We negotiated the EU AI Act with one principle: human rights are non-negotiable.
article · Carme Artigas on the EU AI Act · European Commission · 2023-12-08 · faithful paraphrase

Carolyn Rouse

Princeton anthropology chair; AI sociology

endorses

Argues AI ethics requires deep sociological grounding, particularly on race and historical inequality.

AI ethics without sociology produces frameworks that are blind to the structural conditions in which AI is deployed.
article · Carolyn Rouse, Princeton Anthropology · Princeton University · 2024 · loose paraphrase
Casey Newton

Platformer founder; Hard Fork co-host

mixed

Reports on AI policy and AI-lab politics; generally pragmatic and pro-regulation.

The AI companies are going to police themselves exactly as well as every past industry has, which is to say, not at all.
blog · Platformer · Platformer · 2024 · loose paraphrase

Catelijne Muller

ALLAI president; EU AI Act civil-society voice

endorses

Argues the EU's risk-based regulatory approach should be the global template; pushed for stronger civil-society participation in the AI Act trilogues.

Trustworthy AI must be lawful, ethical and robust. The EU AI Act is the world's first comprehensive attempt to make these requirements binding.
article · ALLAI position on the EU AI Act · ALLAI · 2023 · faithful paraphrase
Cathy O'Neil

Mathematician; Weapons of Math Destruction author

endorses

Argues algorithmic systems must be audited and their harms to vulnerable populations must be measured and mitigated.

“Models are opinions embedded in mathematics.”
book · Weapons of Math Destruction · Crown · 2016-09-06 · direct quote
“The human victims of WMDs are held to a far higher standard of evidence than the algorithms themselves.”
book · Weapons of Math Destruction · Crown · 2016-09-06 · direct quote

Chinasa T. Okolo

Brookings fellow; African Union AI strategy contributor

endorses

Contributed to the AU-AI Continental Strategy. Argues AI governance in Africa cannot be imported wholesale from OECD frameworks.

AI is not Africa's savior. Avoiding technosolutionism in digital development requires AI governance rooted in African contexts.
article · AI is not Africa's savior: Avoiding technosolutionism in digital development · Brookings · 2024 · faithful paraphrase
Chinmayi Arun

Yale ISP fellow; Indian tech policy scholar

endorses

Argues AI governance must take non-US, non-EU legal systems seriously; frames AI policy through a comparative constitutional law lens.

AI governance frameworks built on US and EU constitutional premises often don't translate. Indian, Brazilian, and South African jurisprudence has its own grip on these problems.
article · Chinmayi Arun at Yale ISP · Yale ISP · 2024 · loose paraphrase
Chuck Schumer

US Senate Majority Leader (2021–2024); architect of the SAFE AI framework

endorses

Pushed the SAFE framework (Security, Accountability, Foundations, Explain) as the basis for federal AI legislation; organised bipartisan AI Insight Forums.

We need an all-hands-on-deck effort to contend with AI.
talk · Schumer launches SAFE Innovation framework at CSIS · CSIS · 2023-06 · faithful paraphrase

Claire Leibowicz

Partnership on AI; AI and media

endorses

Argues synthetic-media governance (provenance, disclosure, liability) is the tractable, live AI governance problem.

Provenance and disclosure are the foundational trust layer for AI-in-media.
article · Partnership on AI · Partnership on AI · 2024 · loose paraphrase
Dame Wendy Hall

Southampton professor; UK AI policy author

endorses

Co-authored the foundational UK AI strategy report (2017) and continues to advise on UK AI policy.

The UK can lead in AI if we treat it as a sovereign capacity, not a technology to be imported.
§ paper · Growing the artificial intelligence industry in the UK (Hall-Pesenti review) · UK Government · 2017-10 · faithful paraphrase
Darío Gil

SVP and Director of IBM Research

endorses

Advocates shared open benchmarks and standards as the backbone of AI governance.

Open science and open standards are the backbone of trustworthy AI.
article · IBM Research AI · IBM Research · 2024 · loose paraphrase
Daron Acemoglu

MIT economist; 2024 Nobel laureate

endorses

Argues AI must be redirected toward human augmentation via policy, antitrust, and labour-market mechanisms.

Progress depends on the choices societies make about technology. We have to choose human-complementary AI, or else it will be chosen for us.
book · Power and Progress · PublicAffairs · 2023-05 · faithful paraphrase
David Krueger

Cambridge professor; AI extinction risk advocate

endorses

Calls for binding international governance and argues that voluntary commitments from frontier labs are structurally insufficient.

Voluntary commitments from frontier labs are structurally unreliable. We need binding external constraints.
blog · David Krueger on AI governance · davidscottkrueger.com · 2024 · faithful paraphrase

Deep Ganguli

Anthropic societal impact lead

endorses

Argues meaningful safety work must include societal-impact measurement alongside technical evaluations.

We cannot align AI with the right human values until we measure what it does to society when deployed.
article · Anthropic Societal Impact research · Anthropic · 2024 · loose paraphrase

Deepak Padmanabhan

Queens University Belfast; AI responsibility

endorses

Argues AI responsibility must address structural patterns, not just model-level metrics.

Responsible AI needs to look at systems in context, not just at models on a bench.
article · Queen's University Belfast, Deepak Padmanabhan · Queen's University Belfast · 2023 · loose paraphrase
Demis Hassabis

CEO of Google DeepMind; 2024 Nobel laureate

endorses

Calls for international coordination on frontier AI, framed around immediate bio/cyber misuse risk plus longer-term autonomous-system risk.

We should think of aligning AI like raising a child: guardrails and values have to come together.

Context: CBS 60 Minutes interview with Scott Pelley.

video · Demis Hassabis | Sunday on 60 Minutes · CBS 60 Minutes · 2025-04-20 · faithful paraphrase
Artificial intelligence could end disease and lead to radical abundance.
article · Artificial intelligence could end disease, lead to 'radical abundance' · CBS News · 2025-04-20 · faithful paraphrase
Diane Coyle

Cambridge economist; Bennett Professor of Public Policy

endorses

Argues GDP-style measurement frameworks need overhaul to capture AI's economic effects; without measurement, governance is blind.

We are governing AI based on outdated economic indicators that don't measure most of what AI is doing.
article · Diane Coyle, Bennett Institute Cambridge · Bennett Institute Cambridge · 2024 · loose paraphrase

Divya Shrivastava

RAND Corporation AI safety policy researcher

endorses

Contributes technical-risk analysis to RAND's AI-biosecurity and cyber research.

The near-term catastrophic AI risks we can actually measure (biosecurity uplift, cyber offence) should ground policy, not speculative framings.
article · RAND AI research · RAND · 2024 · loose paraphrase

Dominic Cummings

Former UK No. 10 chief adviser; AI policy commentator

endorses

Writes on the UK and US governance weakness in responding to frontier AI; argues for professionalised expert teams in government.

Western states are dangerously underpowered to handle frontier AI. The machinery of state needs technical teams, not more committees.
blog · Dominic Cummings on AI and state capacity · Substack · 2024 · loose paraphrase
Don Beyer

US Representative (VA); AI Foundation Model Transparency Act sponsor

endorses

One of the members of Congress with technical AI training; co-sponsored transparency-first AI legislation.

We cannot regulate what we cannot audit. Transparency about training data and model characteristics is the minimum.
article · Don Beyer on AI transparency · Office of Congressman Don Beyer · 2023-12 · loose paraphrase
Dorothy Denning

Georgetown emeritus; cybersecurity pioneer

endorses

Brings cybersecurity grounding to AI governance; argues AI creates new attack surfaces that existing defense doctrine does not cover.

AI is both a tool for defenders and a tool for attackers. The balance depends on the deployment context.
article · Dorothy Denning, Georgetown · Georgetown University · 2023 · loose paraphrase
Dragoș Tudorache

MEP; EU AI Act co-rapporteur

endorses

Argued AI regulation must be horizontal and risk-based; co-shaped the EU AI Act's tiered framework that distinguishes prohibited, high-risk, and limited-risk uses.

The EU AI Act is the world's first comprehensive AI regulation. We chose a risk-based approach because we wanted to regulate uses of AI, not the technology itself.
article · EU AI Act adopted · European Parliament · 2024-03 · faithful paraphrase

Ed Newton-Rex

Fairly Trained founder; ex-Stability AI

endorses

Runs the Fairly Trained certifier for consent-based AI training; argues fair-use defence is structurally wrong for generative AI.

“I resigned from Stability AI because I disagree with the company's position that training generative AI models on copyrighted works is 'fair use'.”
tweet · Ed Newton-Rex, resignation statement · X/Twitter · 2023-11-15 · direct quote
Edward Felten

Princeton emeritus; ex-FTC Chief Technologist

endorses

Argues AI policy should be built on technical literacy in government; technologists need to be inside agencies to make policy implementable rather than performative. Frames AI governance as a continuation of decades of computer-and-society policy work.

Good tech policy requires technologists in government, not just outside advisors. The detail of what AI systems actually do is where policy succeeds or fails.
article · CITP Princeton · Princeton CITP · 2023 · faithful paraphrase
AI governance is not a new field. It is a continuation of decades of computer-and-society policy work.
article · Edward Felten, Princeton · Princeton University · 2024 · loose paraphrase

Edward Harris

Gladstone AI co-founder

endorses

Authored policy recommendations including export controls on frontier compute and mandatory model evaluations.

The US should create a frontier AI regulatory agency with compute licensing authority.
§ paper · An Action Plan to Increase the Safety and Security of Advanced AI · Gladstone AI / US State Department · 2024-03-11 · faithful paraphrase
Edward Snowden

NSA whistleblower; AI surveillance critic

endorses

Argues AI massively expands the surveillance possibilities he warned about a decade ago. Calls for civil-liberty-grounded constraints.

AI is the most powerful surveillance technology ever invented. The threat model has changed; the law has not.
tweet · Edward Snowden on X · X/Twitter · 2024 · loose paraphrase
Eliza Strickland

IEEE Spectrum senior editor; AI Spectrum

mixed

Reports AI from an engineering-society lens; pushes for measurable, auditable AI deployment.

Engineering AI requires engineering accountability. Right now, marketing is outpacing both.
article · IEEE Spectrum AI · IEEE Spectrum · 2024 · loose paraphrase
Elizabeth Kelly

Founding director of the US AI Safety Institute

endorses

Framed the US AI Safety Institute's mission around 'advancing the science of AI safety' via evaluations, red-teaming, and international coordination.

“Safety enables trust, which enables adoption, which enables innovation.”
talk · The U.S. Vision for AI Safety: A Conversation with Elizabeth Kelly · CSIS · 2024 · direct quote

Emily Grumbling

Former AI policy advisor; National Academies staff

endorses

Frames US AI governance as a cross-cutting interagency challenge requiring better expertise and coordination mechanisms.

Federal AI expertise is unevenly distributed; the interagency coordination task is bigger than people appreciate.
article · NASEM Computer Science and Telecommunications Board · National Academies · 2024 · loose paraphrase

Emma Strubell

CMU professor; energy cost of AI pioneer

endorses

Argues energy and carbon should be first-class constraints on AI training.

Training a single large NLP model can emit as much carbon as five cars over their lifetimes.
§ paper · Energy and Policy Considerations for Deep Learning in NLP · arXiv · 2019-06-05 · faithful paraphrase
Erie Meyer

Former CFPB Chief Technologist

endorses

Argues enforcement of existing consumer-protection law is underused against AI harms.

There is no AI exemption in the Equal Credit Opportunity Act.
article · CFPB Circular on chatbot use · CFPB · 2023 · faithful paraphrase
Evan Greer

Fight for the Future director; digital rights activist

endorses

Frames AI policy as a digital civil-rights battle; mobilises grassroots opposition to surveillance AI.

Big Tech wants you to debate whether AI will kill us all in 50 years so you don't notice it is harming you today.
article · Fight for the Future · Fight for the Future · 2024 · loose paraphrase
Evan Williams

Twitter co-founder; Medium founder

mixed

Publicly concerned about AI's effect on information ecosystems; cautious about both hype and doom.

We are in the middle of an experiment on the information ecosystem. We should not pretend we have consent to run it.
blog · Ev Williams interviews · Medium · 2024 · loose paraphrase
Frank Pasquale

Brooklyn Law; Black Box Society

endorses

Proposed four 'New Laws of Robotics': robots should complement humans, not counterfeit them, and humans must retain accountability.

Robots should not counterfeit humanity; they should complement it.
bookNew Laws of Robotics· Harvard University Press· 2020· faithful paraphrase

Gabriel Weinberg

Founder and CEO of DuckDuckGo

endorses

Argues AI surveillance is a civil-rights emergency and should be banned before it is entrenched.

“AI surveillance should be banned while there is still time. All the same privacy harms with online tracking are also present with AI, but worse.”
articleDuckDuckGo founder: AI surveillance should be banned· Rude Vulture· 2024· direct quote

Garrison Lovely

Journalist covering AI safety and EA

endorses

Reports on the AI-safety movement from a left-wing labour perspective; combines x-risk seriousness with labour-politics framing.

The AI debate is not just about whether we survive, but about who controls what survives.
blogGarrison Lovely, freelance archive· garrisonlovely.com· 2024· loose paraphrase

Gary Marcus

Cognitive scientist; LLM skeptic; regulation advocate

endorses

Argues for an FDA-style pre-deployment safety review, a nimble monitoring agency with pullback authority, and mandatory transparency.

“The big tech companies' preferred plan boils down to 'trust us'. Why should we?”

Context: Senate testimony on AI oversight.

testimonySenate Testimony Gary Marcus May 16, 2023· US Senate Judiciary Committee· 2023-05-16· direct quote
“We are facing a perfect storm of corporate irresponsibility, widespread deployment, lack of adequate regulation, and inherent unreliability.”
testimonySenate Testimony Gary Marcus May 16, 2023· US Senate Judiciary Committee· 2023-05-16· direct quote

Geoffrey Cain

Author of 'The Perfect Police State'

endorses

Documents how AI surveillance is already deployed in authoritarian contexts; argues governance frameworks must address this present reality.

Xinjiang is a glimpse of what AI in the hands of an authoritarian state actually looks like.
bookThe Perfect Police State· PublicAffairs· 2021-06-29· loose paraphrase

Gillian Hadfield

University of Toronto; 'regulatory markets' theorist

endorses

Argues the standard harms-regulation paradigm is necessary but insufficient; proposes private regulatory markets as a scalable complement.

Regulatory markets require the targets of regulation to purchase regulatory services from private regulators, which compete with one another on the quality of regulation they provide.
§ paperRegulatory Markets: The Future of AI Governance· arXiv· 2020· faithful paraphrase

Hadrien Pouget

Carnegie Endowment; EU AI Act translator-in-chief

endorses

Argues U.S. policymakers underestimate how much the EU AI Act will set de facto global standards; calls for U.S. policy that engages substantively rather than dismissing Brussels.

The EU AI Act is going to shape the global market for advanced AI whether U.S. firms like it or not. The substantive question is which provisions are exportable and which are uniquely European.
articleHadrien Pouget, Carnegie Endowment· Carnegie Endowment· 2024· faithful paraphrase

Hany Farid

UC Berkeley professor; digital forensics pioneer

endorses

Advocates for content provenance standards (C2PA) and universally applied media-detection infrastructure.

The problem with deepfakes is not the fakes. It's that every real thing now has plausible deniability.
articleHany Farid on deepfakes· UC Berkeley· 2023· loose paraphrase

Haydn Belfield

Cambridge CSER academic project manager

endorses

Bridges Cambridge x-risk research and UK policy; helps design third-party AI evaluation frameworks.

Third-party AI evaluation is an under-developed governance primitive that the next decade of AI policy will be built on.
articleCSER policy work· CSER· 2024· loose paraphrase

He Jianfeng

China Academy of Information and Communications Technology researcher

endorses

Contributes to Chinese AI standards work and participates in international AI governance dialogues.

China's AI governance framework is evolving in conversation with international standards, not in isolation.
articleCAICT AI research· CAICT· 2024· loose paraphrase

172 more on the record. The page renders the first 80 alphabetically; the rest live in the full directory, filterable by this tag.