Who believes what about AGI.
Every strategic claim on this site belongs to a named person, is dated, and links to a primary source. Direct quotes are marked direct; paraphrases are marked as such.
Strategy categories are inductive, built from what people actually argue, not imposed from a framework. Expect the taxonomy to change as the corpus grows.
People indexed: 935
Quotes: 1011
p(doom) on record: 28
Strategy tags: 41
Browse the corpus.
p(doom) board → profile filters: expertise · recognition · vintage (era of AI worldview formation)
See the board for the full grid and tier criteria. The profiled subset is hand-classified; unprofiled people are filtered out by these chips.
filter by specific strategy (41 tags)

Nick Bostrom
Author of Superintelligence; founded Oxford's Future of Humanity Institute
Philosopher whose 2014 book Superintelligence made 'existential risk from AI' legible to mainstream audiences and policymakers. Frames the problem as a control problem requiring pre-committed solutions before we create superhuman systems.
Policy / meta · Household name · Symbolic era
Existential primacy · Alignment first · Long reflection
Liron Shapira
p(doom) 50% · Founder; Doom Debates podcast host
Tech founder (Pulse, Relationship Hero) and host of Doom Debates podcast, where he argues for short timelines and high p(doom) against guests who disagree.
Applied technical · Established · Scaling era
Existential primacy · Pause
Stuart Armstrong
Aligned AI co-founder; ex-FHI; value-extrapolation approach
Philosopher and AI safety researcher who spent over a decade at the Future of Humanity Institute. Co-founded Aligned AI; his research centres on value extrapolation, the hypothesis that solving how to extend human values across contexts is necessary and nearly sufficient for alignment.
Deep technical · Established · Pre-deep-learning
Alignment first
Joseph Carlsmith
p(doom) 10% · Open Philanthropy researcher; 'Is Power-Seeking AI an Existential Risk?'
Philosopher and senior research analyst at Open Philanthropy whose 2021 report on power-seeking AI produced the most cited quantitative decomposition of the existential AI risk argument.
Policy / meta · Field-leading · Scaling era
Existential primacy · Alignment first
Inioluwa Deborah Raji
Mozilla fellow; algorithmic audit researcher
Mozilla fellow and PhD student at UC Berkeley working on algorithmic auditing. Co-authored foundational work with Joy Buolamwini on commercial facial-recognition bias.
Governance first · Near-term harms first
Jeff Dean
Google Chief Scientist; co-leader of Google DeepMind
One of two Google Senior Fellows. Took over as chief scientist across Google DeepMind and Google Research after the 2023 reorg. Publicly frames AI as dual-use; emphasises present-day harms over extinction framings.
Frontier builder · Field-leading · Deep-learning rise
Governance first
Subbarao Kambhampati
ASU professor; 'LLMs Can't Plan' advocate
ASU computer science professor and former AAAI president who has been the most consistent senior academic voice arguing LLMs cannot plan, reason, or self-verify in the formal senses required for AGI.
Deep technical · Field-leading · Symbolic era
AI skeptic
Rebecca Parsons
Thoughtworks CTO emerita; AI pragmatist
Thoughtworks CTO emerita who has written on the practical software engineering implications of AI deployment. Argues careful deployment practices matter more than headline-grabbing safety debates.
Governance first
Chelsea Finn
Stanford professor; meta-learning and robotics researcher
Stanford CS professor whose robotics and meta-learning research has shaped how frontier labs think about sample-efficient learning and generalisation. Publicly on the safety-engaged side but measured.
Alignment first
Anthropic Policy Team (RSP authors)
Anthropic responsible scaling policy author
Anthropic policy team member who co-authored the Responsible Scaling Policy framework, Anthropic's capability-tied safety commitments, which became the most-emulated RSP template in the industry.
RSP-style commitments
Isaac Asimov
Science fiction author; creator of the Three Laws of Robotics (1920–1992)
Biochemist and prolific SF author whose 1942 'Three Laws of Robotics' prefigured the alignment problem. Included for historical continuity and because the Three Laws remain a rhetorical reference in AI safety debates.
Alignment first
Murray Shanahan
Imperial College cognitive robotics professor; DeepMind senior scientist
Philosopher and cognitive scientist at Imperial College and DeepMind. Author of 2015 book The Technological Singularity and recent papers on the 'dissociative identity' frame for understanding LLMs.
AI welfare
Joscha Bach
Cognitive scientist; consciousness researcher
Cognitive scientist and AI researcher whose talks on consciousness and AGI are widely shared. Argues AGI is closer than people think and that the question of whether AI systems are conscious is live.
Deep technical · Established · Pre-deep-learning
Digital minds
Anders Sandberg
Former FHI researcher; transhumanist philosopher
Long-time Oxford FHI researcher who published foundational work on whole-brain emulation and existential risk. Now independent; writes on the philosophy of grand futures.
External-domain expert · Established · Pre-deep-learning
Long reflection
Avital Balwit
Anthropic communications lead; public-facing AI safety voice
Anthropic communications lead. Has written essays framing the near-term timeline to AGI as the most pressing personal and civilisational concern for her generation.
Existential primacy
Amanda Askell
Anthropic philosopher-researcher
Philosopher who leads Claude's 'character' work at Anthropic. Central voice on model welfare, AI personality, and virtue-ethics-informed alignment.
Alignment first
Clay Graubard
Forecaster; RAND and Good Judgment contributor
Superforecaster who has contributed to AI and x-risk forecasting exercises. Represents the professional-forecaster wing of the AI risk community.
Existential primacy
Naomi Klein
Author of This Changes Everything; AI-and-climate critic
Journalist and author of The Shock Doctrine and This Changes Everything. Argues that 'AI hallucinations' are a distraction: the actual hallucinations are the promises AI CEOs make to investors and governments.
External-domain expert · Household name · Post-ChatGPT
AI skeptic
Marietje Schaake
Former MEP; Stanford Cyber Policy Center fellow; UN AI advisory body
Former EU parliamentarian, now at Stanford's Cyber Policy Center. Serves on the UN Secretary-General's AI Advisory Body. Argues corporate capture of AI governance is the primary democratic threat.
Governance first
Cory Doctorow
EFF special advisor; 'enshittification' coiner
Long-time digital rights activist and EFF advisor. Argues AI is being driven by the same 'enshittification' dynamic that decayed social platforms, and that capture of AI policy by incumbents will make it worse.
Policy / meta · Household name · Post-ChatGPT
Antitrust primacy
Evgeny Morozov
Belarusian scholar; 'solutionism' critic
Writer on the politics of tech; coined 'technological solutionism'. Argues Silicon Valley AI framings systematically obscure political-economy questions.
External-domain expert · Field-leading · Pre-deep-learning
AI skeptic
Ruha Benjamin
Princeton sociologist; 'Race After Technology'
Princeton sociologist whose 2019 Race After Technology coined 'the New Jim Code' for the way digital technologies reinforce racial hierarchies. Central voice in the civil-rights framing of AI governance.
Governance first
Jeremie Harris
Gladstone AI co-founder; AI state policy advisor
Co-founder of Gladstone AI, a US government contractor producing AI risk assessments. Authored the 2024 US State Department-commissioned report on frontier AI risks.
Governance first
Edward Harris
Gladstone AI co-founder
Co-founder of Gladstone AI with his brother Jeremie. Contributed to the 2024 US State Department-commissioned action plan on frontier AI risks.
Governance first
Karen Hao
Atlantic staff writer; AI industry critic
Journalist covering AI for The Atlantic and previously MIT Technology Review. Her reporting on OpenAI, labour in the AI supply chain, and frontier-lab culture has shaped mainstream industry understanding.
Governance first
Adrienne LaFrance
The Atlantic executive editor; technology critic
Executive editor at The Atlantic whose editorial direction has framed AI coverage around democracy, epistemic integrity, and civic institutions.
Governance first
Peter Railton
Michigan ethicist; AI moral learning researcher
Michigan moral philosopher who has argued that reinforcement learning analogues in AI could form the basis for genuinely moral AI agents. Engages AI safety philosophically.
Alignment first
Kate Sills
AI economic systems and multi-agent markets researcher
Applied cryptographer and cooperative-AI researcher who works on incentive-compatible market mechanisms for AI agents. Represents the applied multi-agent governance corner.
Cooperative AI
Ian Hogarth
Chair of UK AI Safety Institute (2023–2025); investor
Angel investor and former chair of the UK AI Safety Institute, which he helped stand up from the November 2023 Bletchley summit. Co-author of the annual 'State of AI Report'.
Governance first
Hugh Zhang
Epoch AI researcher
Machine learning researcher at Epoch AI who has published on reproducibility of capability benchmarks and the scaling-data bottleneck.
Evals-driven
Lisa Su
CEO of AMD; central figure in the AI compute supply chain
Runs AMD, the #2 AI chip supplier. Public voice for 'more compute = more capability' and argues supply-chain constraints rather than demand have been the bottleneck on AI progress.
Policy / meta · Household name · Deep-learning rise
Techno-optimism
Clément Delangue
CEO of Hugging Face; open-source AI advocate
French co-founder of Hugging Face, the largest open-source AI model hub. Testified to US Congress that open-source AI is 'extremely aligned with American interests'.
Open source
Alex Wang
Founder of Scale AI; data infrastructure for frontier models
For a time the world's youngest self-made billionaire; founded Scale AI to provide data labelling and evaluation infrastructure to frontier labs and US national security agencies.
Policy / meta · Field-leading · Scaling era
Race to aligned SI
Arthur Mensch
CEO of Mistral AI; French frontier-model founder
Co-founder of Mistral AI, the French open-weight foundation model company. Representative of European open-weight frontier effort and AI sovereignty.
Open source
Sasha Luccioni
Hugging Face AI & climate lead
Researcher focused on the environmental and energy footprint of large models. Has published widely cited work quantifying the carbon cost of training frontier models.
Governance first
Tim Dettmers
Efficient-training and quantization researcher
Researcher at UW and Allen AI whose work on quantization (QLoRA, bitsandbytes) has made frontier-scale fine-tuning feasible on modest hardware. Argues frontier access is an equity question.
Open source
Yuri Burda
OpenAI researcher; exploration and RL
Longtime OpenAI RL researcher whose work on exploration (Random Network Distillation) has been widely used in frontier training. Represents the technical-engineering inside view at OpenAI.
Alignment first
Brad Smith
Microsoft Vice Chair and President
Microsoft's vice chair and president, and formerly its chief legal officer. Public face of Microsoft's regulatory posture on AI; argues for 'governed progress' via licensing and national-security-aware regulation.
Governance first
Matt Clifford
UK PM AI Opportunities advisor; Entrepreneur First co-founder
Led the UK AI Safety Summit preparation under Rishi Sunak and the AI Opportunities Action Plan under Keir Starmer. Bridges entrepreneurship and AI safety.
Governance first
Dominic Cummings
Former UK No. 10 chief adviser; AI policy commentator
Former chief adviser to UK PM Boris Johnson. Has written extensively on AI risk and governance through his Substack, combining Whitehall experience with an empirical-risk framing.
Governance first
Peter Szolovits
MIT medical AI pioneer
MIT professor who pioneered clinical decision-support AI in the 1970s. Argues the most urgent AI policy questions concern reliability, evaluation, and deployment context, not superintelligence.
Evals-driven
Isabella Wilkinson
Chatham House international affairs AI researcher
Researcher at Chatham House focused on AI and international order. Has argued that AI's geopolitics will require institutions analogous to the IAEA for frontier compute.
International treaty
Elizabeth Kelly
Founding director of the US AI Safety Institute
Deputy assistant to the president for economic policy who became the founding director of the US AI Safety Institute at NIST after the October 2023 AI Executive Order.
Governance first
Anna Makanju
OpenAI VP of Global Impact; policy veteran
Former State Department and White House advisor; runs OpenAI's policy work. Represents OpenAI's public face in Washington and Brussels.
Governance first
Noam Brown
OpenAI reasoning researcher; Diplomacy AI
Researcher behind Meta's CICERO Diplomacy-playing AI and now a lead on OpenAI's reasoning-model research. Has driven much of the 2024–2025 shift toward chain-of-thought / o-series models.
Existential primacy
Ben Mann
Anthropic co-founder; researcher
Anthropic co-founder who worked on GPT-3 at OpenAI. One of the technical architects of Constitutional AI training.
Constitutional AI
Jared Kaplan
Anthropic co-founder; scaling-laws co-author
Theoretical physicist who co-authored the 2020 'Scaling Laws for Neural Language Models' paper. Anthropic co-founder and chief science officer.
Alignment first
Tom Brown
Anthropic co-founder; first author of GPT-3 paper
First author of 'Language Models are Few-Shot Learners' (GPT-3). Anthropic co-founder focusing on infrastructure and safety engineering.
Alignment first
Sam McCandlish
Anthropic co-founder
Anthropic co-founder and research lead on training methods. Contributor to foundational scaling-law research at OpenAI before joining Anthropic.
Alignment first
Sara Hossein
International AI law scholar
Scholar who has contributed to the Council of Europe's AI convention and argues international human-rights law is the right foundation for AI governance.
International treaty
Petar Veličković
DeepMind researcher; graph neural networks
DeepMind senior research scientist known for graph neural networks and geometric deep learning. Public educator on deep learning and broadly pro-safety.
Alignment first
Seth Lazar
ANU Professor of Philosophy; MINT Lab founder
Australian National University moral philosopher whose 2023 Stanford Tanner Lecture on AI and human values has been widely cited. Runs the Machine Intelligence and Normative Theory Lab.
Governance first
Shakeel Hashim
Editor of Transformer news; AI policy journalist
Editor of Transformer, the leading weekly AI-policy newsletter. Ex-Economist journalist, now at the Tarbell Center for AI Journalism. Frames the beat as 'AI safety and governance' rather than generic tech coverage.
Governance first
Andrea Miotti
Founder of ControlAI; pause campaigner
Italian founder of ControlAI, a non-profit calling for a prohibition on the development of superintelligent AI. Works directly with UK and US policymakers.
Pause
Steven Adler
Independent AI researcher; former OpenAI policy team
Ex-OpenAI policy researcher who resigned citing safety culture concerns. Now independent, collaborating with ControlAI on governance proposals.
Governance first
Garrison Lovely
Journalist covering AI safety and EA
Freelance journalist who reports on AI safety, effective altruism, and the political-economy dynamics of frontier labs. Has been widely published in Jacobin, The Nation, and other outlets.
Governance first
Jessica Newman
UC Berkeley AI Security Initiative director
Director of the AI Security Initiative at UC Berkeley's Center for Long-Term Cybersecurity. Works at the intersection of AI and international security.
Governance first
Simon Rosenberg
NDN democracy strategist; AI political impact
Democratic strategist whose recent writing has focused on AI's threat to democratic elections. Argues election-security is the near-term critical governance question.
Governance first
Jim Keller
CEO of Tenstorrent; legendary chip architect
Chip architect who designed key components of AMD, Apple, and Tesla silicon. Now runs Tenstorrent, building open-architecture AI accelerators. Frames compute openness as a democratisation lever.
Distributed builders
Ruslan Salakhutdinov
CMU professor; former Apple AI head
CMU deep learning professor and former Apple AI research head. Publicly engaged on safety but more measured than the outspoken camp.
Alignment first
Ramin Hasani
Liquid AI CEO; liquid neural networks pioneer
MIT-trained researcher who co-founded Liquid AI to build non-transformer foundation models. Argues the future of AI is architectural diversity rather than monolithic scale.
AI skeptic
Thomas Krendl Gilbert
Cornell Tech ethicist; reinforcement learning ethics
AI ethicist who studies the governance and moral dimensions of reinforcement learning systems. Argues the norms governing RLHF shape what AI values become.
Alignment first
Julia Galef
Rationalist author; former CFAR president
Rationalist writer and former president of the Center for Applied Rationality. Author of The Scout Mindset. Has written measured takes on AI risk that mostly agree with the non-extreme end of the AI safety community.
Existential primacy
Max Roser
Founder of Our World in Data; Oxford economist
Oxford economist who founded Our World in Data. His AI entry and charts have become the standard quantitative reference for AI capability and investment trends.
Existential primacy
Laurence Tribe
Harvard constitutional law professor emeritus
Harvard's most prominent constitutional law scholar. Signed the 2023 Statement on AI Risk; frames AI as a constitutional-scale governance challenge requiring robust legal frameworks.
Existential primacy
James Manyika
SVP of Research, Technology and Society at Google-Alphabet
Former McKinsey Global Institute chair who now runs research, tech, and society at Alphabet. Signatory to the Statement on AI Risk; public voice for measured 'shared prosperity' framings.
Existential primacy
Bill McKibben
Environmental writer; Middlebury scholar
Journalist and climate activist who has extended his civilisational-risk framing to AI. Signed the Statement on AI Risk; argues AI and climate are linked crises of extraction.
External-domain expert · Household name · Post-ChatGPT
Existential primacy
Alan Robock
Rutgers climate scientist; nuclear winter researcher
Distinguished Rutgers climate scientist who helped establish modern nuclear winter science. Signatory to the Statement on AI Risk; argues AI should be treated like nuclear weapons as a civilisational hazard.
Existential primacy
Angela Kane
Former UN High Representative for Disarmament Affairs
Senior UN diplomat who has argued for applying disarmament-style frameworks to AI. Signatory to the Statement on AI Risk.
International treaty
Martin Hellman
Stanford cryptographer; Turing Award winner
Turing Award-winning cryptographer (Diffie-Hellman key exchange). Long-time activist on nuclear risk; signed the 2023 Statement on AI Risk.
Existential primacy
Joseph Sifakis
Turing Award laureate; embedded systems researcher
2007 Turing Award laureate (model checking). Greek-French computer scientist who signed the Statement on AI Risk.
Existential primacy
Ben Buchanan
Former White House AI Special Advisor (2021–2025)
Georgetown CSET researcher who served as White House Special Advisor for AI from 2021 to 2025. Key architect of the Biden administration's chip export controls and the 2023 AI Executive Order.
Governance first
Kevin Esvelt
MIT biosecurity and gene drive researcher
MIT biologist who invented CRISPR gene drives. Has warned consistently that LLM-assisted biology lowers barriers to bioweapon development; a key advisor to US biosecurity policy.
Governance first
Jonas Kgomo
Decolonial AI researcher; Ghana
Researcher focused on how AI affects African contexts; advocate for decolonial AI governance frameworks.
Governance first
Carme Artigas
Spanish AI and Digital Agenda Secretary; AI Advisory Body co-chair
Former Spanish Secretary of State for Digitalisation and AI who led the EU AI Act negotiations under the Spanish Presidency. Co-chairs the UN AI Advisory Body.
Governance first
Amandeep Singh Gill
UN Secretary-General's Envoy on Technology
Indian diplomat who serves as the UN Secretary-General's Envoy on Technology. Leads the UN's Global Digital Compact, including its AI provisions.
International treaty
Pushmeet Kohli
VP of AI Science at Google DeepMind
DeepMind executive leading AI-for-science efforts (AlphaFold, AlphaProof). Frames AI as a scientific instrument for solving structured problems, not a sentient agent.
Techno-optimism
Bret Kugelmass
CEO of Last Energy; 'energy is the governance variable' framing
MIT-trained entrepreneur who argues that compute, energy, and AI governance are the same problem, and that micro-reactor deployment is necessary to decouple AI progress from fossil-energy constraints.
Techno-optimism
Molly Kinder
Brookings Institution fellow; AI and labour
Brookings fellow who has published some of the most cited mainstream work on AI's labour-market impact and policy responses (wage insurance, retraining).
Governance first
Shazeda Ahmed
NYU AI Now fellow; technology and democracy researcher
AI Now fellow whose work on China's AI governance has been widely cited in US policy debates. Argues US AI governance often misreads Chinese developments.
Governance first
Jessica Cussins Newman
AI policy specialist; Microsoft Responsible AI
AI policy researcher who led UC Berkeley's AI Security Initiative and now works on Responsible AI at Microsoft. Frames AI security as an international-coordination problem.
Governance first
Irene Solaiman
Chief Policy Officer at Hugging Face
Hugging Face's top policy officer who has led the field's thinking on staged release of AI models since her 2019 work on GPT-2 at OpenAI.
Governance first
Daniel Khashabi
Johns Hopkins assistant professor; NLP safety researcher
NLP researcher at Johns Hopkins focused on making LLMs more trustworthy, including reasoning reliability and safety-evaluation frameworks in collaboration with Microsoft.
Evals-driven
Sabrina Küspert
EU AI Office; Italian / German policy researcher
EU AI Office scientist who contributed to the GPAI Code of Practice. Public voice for EU-style risk-tiered regulation.
Governance first
Robert Trager
Oxford Martin AI governance scholar
Political scientist at Oxford's Blavatnik School focused on international AI governance and verification regimes. Argues verifiable compute accounting is plausible and necessary.
International treaty
Lennart Heim
Compute governance researcher at RAND
Former Centre for the Governance of AI researcher now at RAND, focused specifically on compute governance and the chokepoint framework for frontier AI.
Compute governance
Yonadav Shavit
OpenAI researcher; on-chip compute verification
Computer scientist who has published on how to verify AI training and inference via on-chip mechanisms: the technical side of compute governance.
Hardware killswitch
Tim Fist
Institute for Progress AI policy researcher
CSET alumnus who now leads AI policy at the Institute for Progress. Focused on chip export controls, compute thresholds, and domestic AI industrial policy.
Compute governance
Saif M. Khan
Former NSC AI technology director
Former CSET researcher who served on the National Security Council staff as director for technology policy. Worked closely with Ben Buchanan on chip export controls.
Compute governance
Bletchley Declaration Signatories
First international AI Safety Summit signatories (2023)
Twenty-eight countries and the EU, among them the US, UK, China, India, and Japan, signed the November 2023 Bletchley Declaration, the first international statement on frontier AI risk. This entry stands in for the collective action.
International treaty
Victor Gao
Chinese diplomat; AI dialogue participant
Chinese diplomat and former adviser who has participated in US-China AI safety track II dialogues. Representative voice of the Chinese establishment's public AI-governance framing.
International treaty
Tianhua Tang
Chinese AI safety researcher
Chinese AI safety researcher participating in international dialogues on AI alignment and governance, including IDAIS.
Alignment first
Brian Chau
Executive Director of Alliance for the Future
Former ML engineer who directs Alliance for the Future, a US policy think tank aligned with the e/acc movement and opposed to frontier AI regulation.
Acceleration
Jonah Brown-Cohen
DeepMind scalable oversight researcher
DeepMind researcher who authored doubly-efficient debate protocols for scalable AI oversight. Technical collaborator with Geoffrey Irving.
Alignment first
Nat McAleese
OpenAI researcher; ex-DeepMind reliability
AI reliability and alignment researcher at OpenAI; previously at DeepMind working on debate-style oversight and reward modelling.
Alignment first
Joar Skalse
Oxford researcher; reward-hacking formalism
Oxford AI safety researcher who co-authored foundational work defining when reward hacking can occur in learned reward models.
Alignment first
Christopher Summerfield
Oxford neuroscientist; DeepMind senior researcher
Oxford cognitive neuroscientist and DeepMind senior research scientist. Has written on how cognitive science informs alignment and the nature of AI understanding.
Alignment first
Toby Shevlane
DeepMind model evaluations researcher
DeepMind research scientist focused on dangerous-capability evaluations. Co-authored foundational papers on red-teaming and evaluation frameworks.
Evals-driven
Mary Phuong
DeepMind autonomous-replication evaluations researcher
DeepMind research scientist who leads autonomous-replication evaluations: tests for whether models can autonomously set up and run copies of themselves.
Evals-driven
Hoda Heidari
CMU algorithmic fairness researcher
CMU assistant professor whose work bridges algorithmic fairness and AI governance. Argues fairness metrics must be tied to concrete consequentialist framings.
Governance first
Rumman Chowdhury
Former Twitter ML ethics director; Humane Intelligence
Data scientist who ran ML ethics at Twitter and now runs Humane Intelligence, a non-profit red-teaming organisation that partners with frontier labs and DEF CON.
Governance first
Aviv Ovadya
Berkman Klein Center; platform democracy
Founder of the AI & Democracy Foundation. Argues AI's threat to democracy lies less in content generation and more in epistemic infrastructure degradation.
Democratic mandate
Jacob Hilton
Alignment Research Center; Prover-Verifier Games
Alignment researcher at the Alignment Research Center who also publishes independently. Has published influential work on prover-verifier games and eliciting latent knowledge.
Alignment first
Andy Jones
Anthropic researcher; scaling inference laws
Anthropic researcher whose work on inference scaling laws has informed the field's understanding of how reasoning and compute trade off.
Existential primacy
Matthew Barnett
Epoch AI forecaster; Metaculus AI timelines
AI forecaster at Epoch AI who co-authors many of the most-cited Metaculus AI questions, including the Transformative AI Date question.
Existential primacy
Stephen Casper
MIT PhD researcher; red-teaming and model audit
MIT algorithmic alignment researcher focused on red-teaming, auditing, and interpretability. Has documented how safeguards at current frontier labs are reliably broken by determined red-teamers.
Evals-driven
Dylan Hadfield-Menell
MIT professor; Stuart Russell student; assistance games
MIT assistant professor and former Stuart Russell PhD student who works on assistance games and practical alignment of AI systems.
Alignment first
Vikrant Varma
Google DeepMind AI safety researcher
DeepMind safety researcher working on model evaluation and alignment. Contributor to several major DeepMind safety publications.
Alignment first
Lilian Weng
Thinking Machines; former OpenAI VP of Research
Former VP of Research at OpenAI who helped lead safety research there. Wrote widely-read technical blog posts on RLHF and alignment. Joined Mira Murati's Thinking Machines Lab in 2024.
Alignment first
Ethan Perez
Anthropic researcher; red-teaming language models
Anthropic research scientist focused on red-teaming and sycophancy. Has published foundational work on model evaluation and LM-generated evaluations.
Alignment first
Deep Ganguli
Anthropic societal impact lead
Head of Anthropic's Societal Impact team. Argues that the social dimension of alignment must be front and centre of safety work.
Governance first
Zac Hatfield-Dodds
Anthropic assurance team; property-based testing
Anthropic engineer known for property-based testing and assurance work. Technical voice on how software-engineering practices can support AI safety.
Alignment first
Michael Chen
METR evaluations researcher
Researcher at METR focused on autonomous-task evaluations for frontier models. Contributor to the 'task length doubling' frontier-capability tracking.
Evals-driven
Jess Whittlestone
Head of AI Policy at the Centre for Long-Term Resilience
Cambridge-based AI policy researcher who led foundational work at the Ada Lovelace Institute, GovAI, and CSER. Now leads AI policy at the Centre for Long-Term Resilience, feeding UK government work on frontier AI risks.
Governance first
Carina Prunkl
Utrecht AI ethics researcher; former FHI
AI ethics researcher whose work critiques dominant AI ethics frameworks as too narrowly technical. Former FHI researcher now at Utrecht.
Governance first
Jonas Schuett
GovAI; AI risk governance researcher
Centre for the Governance of AI researcher who works on AI risk management, structured transparency, and internal AI lab governance structures.
Governance first
Julia Haas
Council on Foreign Relations AI policy fellow
Policy fellow at the Council on Foreign Relations focused on AI and international security. Bridges national-security and tech-policy audiences.
Governance first
Saria Hassan
Pakistan-based AI policy researcher
Representative of the Global South perspective on AI governance; argues the current AI governance conversation systematically undervalues non-Western stakeholders.
Governance first
Oscar Moxon
AI safety researcher; independent
Independent AI safety researcher publishing on LessWrong and the Alignment Forum; contributes to the technical reproduction and critique of frontier-lab claims.
Alignment first
Nick Ryder
OpenAI research scientist; scaling-laws contributor
OpenAI research scientist who co-authored foundational scaling laws papers. Public-engineering voice for capability-driven progress.
Techno-optimism
Lisa Gelobter
tEQuitable founder; former Obama CTO
Former US Chief Digital Service officer under Obama, now founder of tEQuitable, a platform for addressing workplace bias. Has advocated for AI governance that serves workers.
Governance first
Jeffrey Sachs
Columbia economist; sustainable development advocate
Columbia University professor and UN sustainable development advisor. Argues AI is already transforming labour markets with no adequate policy response, and that technological power without political control is the defining risk of our era.
Governance first
Maria Ressa
Rappler CEO; 2021 Nobel Peace Prize laureate
Filipino-American journalist whose Rappler fought disinformation under the Duterte regime. Won the 2021 Nobel Peace Prize and chairs the Paris Charter on AI and Journalism commission.
External-domain expert · Household name · Post-ChatGPT
Governance first
Mariana Mazzucato
UCL economist; Entrepreneurial State author
Economist known for arguing the state is the primary driver of transformative innovation. Has turned this framework on AI: argues AI policy must be oriented toward mission-driven public investment, not laissez-faire.
Public AI
Robert Reich
Former US Labor Secretary; UC Berkeley professor
Former Clinton Labor Secretary whose recent commentary has focused on AI's disruption of labour markets and the need for anti-concentration AI policy.
Antitrust primacy
Renée DiResta
Former Stanford Internet Observatory research manager
Disinformation researcher whose work on the Russian IRA, COVID, and AI disinformation has been highly influential. Joined Georgetown CPIP in 2024 after the shutdown of Stanford Internet Observatory.
Governance first
Hany Farid
UC Berkeley professor; digital forensics pioneer
UC Berkeley professor who helped pioneer digital image forensics. Leads deepfake-detection research and advocates for provenance-based governance of synthetic media.
Governance first
Divya Shrivastava
RAND Corporation AI safety policy researcher
RAND researcher focused on AI risks in biology, cyber, and national security. Contributed to the 2024 Gladstone action plan and subsequent US policy work.
Governance first
Margot Kaminski
University of Colorado law professor
Technology law professor whose work on algorithmic accountability has informed EU and US regulatory design. Argues tort law and traditional liability frameworks have more to offer than they get credit for.
Liability-driven safety
Woodrow Hartzog
BU law professor; privacy and AI scholar
Boston University law professor whose book Privacy's Blueprint shaped modern discussion of privacy by design. Argues AI governance should embed legal duties of loyalty and care.
Governance first
Frank Pasquale
Brooklyn Law; Black Box Society
Brooklyn Law professor whose 2015 Black Box Society foreshadowed modern debate about AI accountability. Advocates 'functional' laws of AI: humans must retain moral agency and accountability.
Policy / meta · Field-leading · Deep-learning rise
Governance first
Rebecca Crootof
University of Richmond law professor
Law professor whose work has shaped modern thinking about tort law, AI liability, and the legal status of autonomous systems.
Liability-driven safety
Ryan Calo
UW law professor; robotics law pioneer
University of Washington law professor who helped establish robotics law as a field. Argues AI law must learn from aviation, medicine, and other sector-specific regulatory histories.
Governance first
Neil Thompson
MIT CSAIL FutureTech director; computing economics
MIT researcher whose quantitative work on compute, scaling, and algorithmic progress has become standard reference material. Director of FutureTech at MIT CSAIL.
Existential primacy
Anna Bacciarelli
Human Rights Watch senior researcher; formerly Amnesty International
Founded Amnesty International's AI and algorithmic accountability program; now at Human Rights Watch. Co-author of the Toronto Declaration on Human Rights and AI.
Governance first
Darío Gil
SVP and Director of IBM Research
Leads IBM Research and has been a public voice for IBM's view that AI governance should be centered on shared standards and competitive openness rather than moratoria or extinction framings.
Governance first
Reid Southen
Film concept artist; AI copyright litigation voice
Film concept artist whose collaboration with Gary Marcus on 'AI is a plagiarism machine' put image-model copyright litigation into the mainstream.
Governance first
Ed Newton-Rex
Fairly Trained founder; ex-Stability AI
Former Stability AI VP of Audio who resigned in November 2023 citing disagreement with fair-use defence of training data. Now runs Fairly Trained, a certifier of consent-based AI training.
Governance first
Karine Perset
OECD AI Unit head
Heads the OECD's AI Unit, including OECD.AI and its policy observatory. Responsible for convening the 38 OECD member states on AI governance.
Governance first
Michał Kosiński
Stanford psychologist; psychometric AI researcher
Stanford psychologist whose work has shown that off-the-shelf LLMs can pass cognitive, moral, and psychometric tests at human or super-human levels. Argues emergent capabilities are already more extensive than acknowledged.
Existential primacy
Inga Strümke
NTNU AI researcher; Norwegian AI public voice
Norwegian AI researcher whose public communication on AI risk and opportunity has made her one of Scandinavia's leading voices on AI.
Governance first
Anja Kaspersen
UN senior fellow; disarmament diplomat
Norwegian diplomat and former director of the UN Office for Disarmament Affairs in Geneva. Has pushed for applying arms-control frameworks to AI.
International treaty
Fumio Kishida
Former Prime Minister of Japan (2021–2024); Hiroshima AI Process architect
Led Japan's G7 presidency in 2023, launching the Hiroshima AI Process as the premier G7-level international AI governance framework. Architect of the G7 International Guiding Principles for Advanced AI Systems.
Policy / meta · Household name · Post-ChatGPT
International treaty
He Jianfeng
China Academy of Information and Communications Technology researcher
Chinese researcher whose work on AI governance bridges Western and Chinese perspectives; contributor to Chinese standards bodies and international dialogues.
Governance first
Urvashi Aneja
Founding Director of Digital Futures Lab (India)
Indian researcher and founder of Digital Futures Lab, a Goa-based research practice focused on AI and society from a Global South perspective.
Governance first
Ashwini Vaishnaw
Minister of Electronics and IT, Government of India
Indian cabinet minister responsible for the AI Mission and the Digital India Act. Balances US–China framing with India's 'AI for All' strategy.
Sovereign AI
Nandan Nilekani
Infosys co-founder; architect of India's Aadhaar digital ID
Infosys co-founder and architect of India's Aadhaar and UPI digital public infrastructure. Advocates AI governance built on digital public infrastructure rather than proprietary AI.
Public AI
Enrique Dans
IE University professor; AI commentator
Spanish technology professor whose blog and Forbes column have provided widely read European commentary on AI deployment and governance.
Techno-optimism
Rajeev Chandrasekhar
Former Indian Minister of State for Electronics and IT
Former Indian deputy IT minister who oversaw India's early AI policy formation. Advocate for regulatory frameworks that balance sovereign AI with open ecosystems.
Sovereign AI
Seifeldin Ayad
MENA-based AI governance voice
Middle East and North Africa-focused AI policy voice. Argues MENA perspectives are systematically missing from mainstream AI governance discussions.
Governance first
Ziv Epstein
Stanford CRFM; human-AI interaction and creativity
Stanford researcher whose work on human-AI creative interaction has shaped understanding of how AI affects human authorship. Published in Science on generative AI governance.
Governance first
Gabriel Weinberg
Founder and CEO of DuckDuckGo
Founder of DuckDuckGo who has extended his privacy advocacy into AI. Argues AI surveillance is more dangerous than search-engine surveillance and should be banned.
Governance first
Mei Lin Fung
Chair of People-Centered Internet; AI global cooperation advocate
Chair of People-Centered Internet, founded with Vint Cerf. Chairs the Digital Cooperation and Diplomacy network. Advocates AI governance rooted in inclusive digital cooperation.
Governance first
Moxie Marlinspike
Signal co-founder; cryptographer
Cryptographer who co-founded Signal. Has written on software-supply-chain risk in a world of ubiquitous AI-assisted coding.
AI skeptic
Anil Dash
Glitch former CEO; technology culture writer
Longtime technology culture writer and ex-CEO of Glitch. Has written extensively on how AI fits into (and breaks) existing labour, media, and civic institutions.
Governance first
Mireille Hildebrandt
Brussels jurist and philosopher; 'algorithmic governance' theorist
Belgian jurist and philosopher whose work on 'smart technologies and the end of law' established foundational European framings for algorithmic governance.
Policy / meta · Established · Deep-learning rise
Governance first
Erie Meyer
Former CFPB Chief Technologist
Former Chief Technologist at the Consumer Financial Protection Bureau and the US Digital Service. Now at Vanderbilt Policy Accelerator on AI and consumer protection.
Governance first
Adam Kalai
Microsoft Research; AI fairness and safety
Microsoft Research senior principal researcher whose work on algorithmic fairness has become standard reference. Contributes to mainstream technical safety work.
Alignment first
Ece Kamar
Microsoft Research AI Frontiers VP
Microsoft Research VP who leads the AI Frontiers lab. Runs mainstream industry research on AI reliability, tool use, and safety.
Alignment first
Thomas Dietterich
Oregon State emeritus; AAAI past president
Distinguished AI researcher and former AAAI president. Has argued AI safety should focus on everyday reliability failures, not extinction scenarios.
Deep technical · Field-leading · Symbolic era
AI skeptic · Near-term harms first
Zeynep Tufekci
Princeton sociologist; NYT columnist
Princeton sociologist and NYT opinion columnist whose work has shaped mainstream understanding of algorithmic influence on democracy and epistemic ecosystems.
Governance first
Nicholas Thompson
CEO of The Atlantic; former Wired editor
CEO of The Atlantic and former Wired editor whose interviews with AI leaders have shaped mainstream understanding of frontier AI. Public commentator on AI and democracy.
Governance first
Martin Ford
Rise of the Robots author; labour economics of AI
Author of the 2015 Rise of the Robots and 2021 Rule of the Robots, arguing AI will displace cognitive labour on a scale requiring fundamental economic policy responses.
Governance first
Matt Mahmoudi
Amnesty International AI researcher
Amnesty International researcher focused on AI-enabled surveillance and human-rights violations. Co-led Amnesty's facial-recognition investigations.
Governance first
Suresh Venkatasubramanian
Brown University professor; former White House OSTP
Brown CS professor and former OSTP deputy who co-led the development of the 2022 AI Bill of Rights blueprint.
Governance first
Zebi Williams
US Digital Service former director
Former deputy administrator of the US Digital Service. Helped design responsible-procurement policies for federal AI purchasing.
Governance first
Emma Strubell
CMU professor; energy cost of AI pioneer
CMU professor whose 2019 paper on the carbon footprint of NLP training was the first widely-cited quantification of AI's environmental cost. Has continued to publish on energy efficiency and sustainability.
Governance first
Rashawn Ray
Brookings Institution; AI and policing
Brookings senior fellow and sociologist whose work on AI in policing has shaped mainstream discussion of algorithmic predictive policing.
Governance first
Michael Page
Anthropic policy team
Anthropic policy team member focused on frontier model governance. Former Future of Life Institute AI policy lead.
RSP-style commitments
Christof Koch
Neuroscientist; Allen Institute for Brain Science
Neuroscientist known for work on the neural correlates of consciousness. Argues AI systems may approach consciousness on an integrated-information-theory basis.
External-domain expert · Field-leading · Symbolic era
AI welfare
Shivon Zilis
Neuralink director; OpenAI board alumna
Canadian technology executive who serves on the Neuralink leadership team. Former OpenAI board observer (2016–2019) and long-time Musk collaborator on AI safety framings.
Existential primacy
Matt Sheehan
Carnegie Endowment China AI fellow
Carnegie Endowment for International Peace senior fellow focused on China's AI development. Author of widely-cited tracking reports on Chinese AI governance.
Governance first
Evan Williams
Twitter co-founder; Medium founder
Twitter and Medium co-founder who has written about his concerns with AI-driven social media and with the direction of the industry.
Policy / meta · Household name · Post-ChatGPT
Governance first
Sajjad Sayyed Hossain
Bangladesh-based AI policy researcher
Representative voice for Bangladesh and Global South AI-governance perspectives. Writes on AI applications in developing economies.
Governance first
Brian Tse
Founder of Concordia AI; China AI safety
Founder of Concordia AI, a Beijing-based research organisation focused on AI safety and Chinese-Western dialogue.
International treaty
Chinmayi Arun
Yale ISP fellow; Indian tech policy scholar
Legal scholar whose work bridges Indian, US, and international tech policy. Has published on AI and content moderation, the digital public sphere, and platform governance.
Governance first
Steven Weber
UC Berkeley political scientist; tech governance
Berkeley political scientist who has written widely on tech governance, including The Success of Open Source and recent work on AI and international order.
Governance first
Alex Karp
CEO of Palantir
Palantir CEO who has positioned the company as the main Western defense-AI vendor. Publicly argues the US must win the AI race against China and that AI safety framings risk American defeat.
Policy / meta · Household name · Deep-learning rise
Race to aligned SI
Palmer Luckey
Founder of Anduril; defense AI builder
Oculus founder who founded Anduril to build Western defense AI and autonomous systems. Argues the US and allies must develop AI-enabled weapons before adversaries.
Commentator · Household name · Scaling era
Race to aligned SI
Stanislas Polu
Co-founder of Dust.tt; ex-OpenAI formal math
Former OpenAI researcher who led formal mathematics work (miniF2F, curriculum learning). Co-founded Dust.tt, a platform for building AI agents inside companies.
Techno-optimism
Casey Handmer
Founder of Terraform Industries; AI economics commentator
Former Caltech/JPL physicist who founded Terraform Industries. Writes widely on AI's economic implications and has argued AI will produce a small number of extreme-productivity individuals.
Techno-optimism
Kim Stanley Robinson
Science fiction novelist; The Ministry for the Future
Hugo-winning science fiction writer whose 2020 The Ministry for the Future centres AI systems managing global coordination on climate. Has become a reference framing for hopeful-but-serious AI governance futures.
Governance first
Kevin Collier
NBC News cybersecurity reporter; AI coverage
NBC News cybersecurity reporter whose recent coverage of AI has focused on real-world deployment risks, disinformation, and governance debates.
Governance first
Lisa Gilbert
Public Citizen co-president; AI and democracy
Co-president of Public Citizen, a consumer-rights non-profit. Has pushed AI-accountability policy at state and federal levels.
Governance first
Roshni Rao
Data & Society; AI worker rights
Data & Society researcher focused on AI and labour, including data workers in the Global South and US gig economy.
Governance first
Shannon Vallor
Edinburgh philosopher of technology; 'The AI Mirror'
Edinburgh philosopher whose 2024 book The AI Mirror argues AI reflects and amplifies human values rather than creating new ones. Former senior Googler on responsible AI.
Governance first
Randi Weingarten
President of the American Federation of Teachers
Leads the 1.7-million-member AFT, positioning it as a major labour voice on AI in education. Argues teachers must be in control of how AI is deployed in classrooms.
Governance first
Arvind Krishna
CEO of IBM
IBM CEO who has positioned IBM as the measured enterprise-AI vendor. Supports AI regulation on accountability grounds but opposes rules that hamper business predictability.
Governance first
Dorothy Denning
Georgetown emeritus; cybersecurity pioneer
Georgetown University emeritus professor who helped establish computer security as a field. Has written on AI's implications for cybersecurity and national defense.
Governance first
Peter Wang
Co-founder of Anaconda; scientific Python and AI
Co-founder of Anaconda, the default Python distribution for scientific and AI computing. Public commentator on open-source AI and the politics of the Python ecosystem.
Open source
Laura Weidman Powers
Co-founder of Code2040; diversity in AI
Co-founder of Code2040, a non-profit focused on Black and Latinx representation in technology. Argues AI will reproduce existing inequities unless the AI workforce diversifies.
Governance first
Anna Eshoo
Former US Representative (CA); AI Foundation Model Transparency Act sponsor
Silicon Valley Democrat who co-chaired the Congressional AI Caucus and co-sponsored the AI Foundation Model Transparency Act. Retired from Congress in January 2025.
Governance first
Don Beyer
US Representative (VA); AI Foundation Model Transparency Act sponsor
Virginia Democrat who co-sponsored the AI Foundation Model Transparency Act with Anna Eshoo. Pursued a master's in machine learning at George Mason University during his congressional tenure.
Governance first
Liz Fong-Jones
Honeycomb field CTO; tech labour voice
Former Google SRE who became a prominent voice in tech labour organising after the Google walkout. Now Honeycomb Field CTO. Public voice on AI worker rights.
Governance first
Sean M. Connor
Seattle AI and governance law scholar
Director of the Cascadia Innovation Corridor; former UW law professor specialising in technology governance. Advocate for regional technology regulatory capacity.
Governance first
Dan Jurafsky
Stanford NLP professor; textbook author
Stanford NLP professor whose Speech and Language Processing textbook has been the canonical NLP reference for two decades. Has written on AI's impact on language and discourse.
Alignment first
Lina Dencik
Cardiff University; data-justice researcher
Cardiff University professor and co-founder of the Data Justice Lab. Scholar of data-driven surveillance and algorithmic injustice.
Governance first
Kim Crayton
Anti-racism in tech strategist
Strategist who has pushed anti-racism frameworks in tech companies, including AI ethics teams. Public voice for structural framings of AI governance.
Governance first
Chinasa T. Okolo
Brookings fellow; African Union AI strategy contributor
Brookings technology fellow who worked with the African Union on developing the AU-AI Continental Strategy. Named one of Time's 100 most influential people in AI in 2024.
Governance first
Juan Ortiz Freuler
Berkman Klein affiliate; Latin America AI policy
Argentine researcher and Berkman Klein affiliate focused on digital platforms, AI, and Latin American governance. Works on comparative AI governance across the region.
Governance first
Renata Ávila
Open Future CEO; digital rights lawyer
Guatemalan human-rights lawyer who serves as CEO of Open Future, a European think tank on the digital commons. Works on digital sovereignty for emerging economies.
Democratic mandate
Paola Ricaurte Quijano
Tec de Monterrey; data-centric epistemic justice
Mexican researcher whose work on 'data epistemologies' has influenced decolonial AI ethics frameworks. Co-founder of the Tierra Común network.
Governance first
Catherine Aiken
CSET researcher; China AI talent and capability
Georgetown CSET researcher focused on China's AI research ecosystem. Has contributed to US understanding of Chinese AI talent and capability developments.
Compute governance
Ife Adebara
African NLP and LLM researcher
Computational linguist whose research focuses on African languages in LLMs. Argues current LLMs systematically fail African users.
Governance first
Wafa Ben-Hassine
Omidyar Network; human rights and tech advisor
Former human-rights lawyer and now principal at the Omidyar Network focused on responsible technology. Public voice on AI and democracy, especially in MENA.
Governance first
Susan Schneider
FAU; 'Artificial You' author; machine consciousness
Director of the Center for the Future Mind at Florida Atlantic University; author of 'Artificial You' (2019). Has held NASA and Library of Congress chairs in technology and ethics.
AI welfare
Margaret Boden
Sussex emerita; cognitive science of AI
Research professor emerita of cognitive science at the University of Sussex; one of the founding scholars of cognitive science. Author of 'AI: Its Nature and Future' (2016) and 'Mind as Machine' (2006).
AI skeptic
Nita A. Farahany
Duke Law; 'The Battle for Your Brain'
Professor of law and philosophy at Duke; author of 'The Battle for Your Brain' (2023) on neurotechnology, AI, and cognitive liberty. Member of the National Advisory Council on Bioethics.
Near-term harms first
David Eagleman
Stanford neuroscientist; Neosensory founder
Adjunct professor of neuroscience at Stanford; founder of Neosensory. Author of 'Livewired' (2020). Public communicator on neuroplasticity and human-machine integration.
Cyborg/merge
Naval Ravikant
AngelList co-founder; tech philosopher
Co-founder of AngelList; widely-followed angel investor and tech aphorist. Public commentator on AI as a tool for individual leverage rather than concentrated power.
Techno-optimism
Mark Cuban
Shark Tank investor; Dallas Mavericks owner
Tech entrepreneur, Shark Tank investor, and former owner of the Dallas Mavericks. Frequent commentator on business implications of AI; holds large positions in AI-adjacent companies.
Techno-optimism
Tristan Hume
Anthropic mechanistic interpretability
Anthropic researcher whose work on dictionary-learning sparse autoencoders for Claude was a landmark in scaling mechanistic interpretability beyond toy models.
Interpretability bet
Sebastian Ruder
Cohere; ex-Google; NLP transfer learning
Researcher at Cohere; previously at Google. His 2018 essay 'NLP's ImageNet moment has arrived' coined a widely used framing for transfer learning in NLP.
Alignment first
Leigh Marie Braswell
Founders Fund partner; AI investor
Investor at Founders Fund focused on AI; previously a software engineer. Frequent commentator on AI infrastructure and its market structure.
Techno-optimism
Raphaël Millière
Macquarie University philosopher of cognitive science
Macquarie University Lecturer in the Philosophy of AI; previously Presidential Scholar in Society and Neuroscience at Columbia. Researches the philosophical implications of large language models for theories of mind, meaning, and reasoning.
AI skeptic
Max Bartolo
Cohere; LLM evaluation researcher
Researcher at Cohere; previously a UCL DeepMind PhD student. Co-developed adversarial-training and evaluation methods for question-answering and instruction-following.
Evals-driven
Stéphanie Hare
Tech researcher; 'Technology Is Not Neutral' author
Independent researcher and author of 'Technology Is Not Neutral' (2022). Frequent BBC and Financial Times contributor on tech ethics.
Near-term harms first
David Holz
Midjourney founder
Founder of Midjourney, the image-generation service that grew rapidly through Discord-first distribution. Previously co-founded Leap Motion. Vocal proponent of AI as creative augmentation rather than replacement.
Techno-optimism
Joshua Browder
DoNotPay CEO; legal-tech AI
Founder and CEO of DoNotPay, a consumer legal-services AI company; widely covered for plans to deploy an AI lawyer in court (eventually withdrawn after bar complaints).
Techno-optimism
Bryan Johnson
Blueprint founder; AI-driven longevity
Founder of Kernel (neural interfaces) and of Blueprint (the personal-data-driven anti-aging protocol). Frequent commentator on AI as a means of human optimization and longevity.
Techno-optimism
David Baszucki
Roblox CEO; co-founder
Co-founder and CEO of Roblox; long-running advocate of user-generated content as the dominant form of digital experience and of AI as the tool that makes UGC accessible to everyone.
Techno-optimism
John Carmack
p(doom) 5% · Keen Technologies founder; ex-Meta CTO
Co-founder of id Software (Doom, Quake) and former CTO of Oculus VR / Meta Reality Labs. In 2022 left Meta to found Keen Technologies, focused on AGI research with a reportedly small team.
Acceleration
Sam Harris
Making Sense podcast; neuroscientist and philosopher
Author and host of the Making Sense podcast; neuroscientist and philosopher. His 2016 TED talk 'Can we build AI without losing control over it?' was an early mainstream introduction to AI x-risk arguments.
Existential primacy
Ilan Gur
ARIA UK CEO; ex-ARPA-E
CEO of the UK Advanced Research and Invention Agency (ARIA), launched in 2023 to fund high-risk, high-reward research. Previously a program director at U.S. ARPA-E.
Differential technology
David D. Cox
MIT-IBM Watson AI Lab director
Director of the MIT-IBM Watson AI Lab; previously a Harvard professor of molecular and cellular biology. Bridge figure between academic and corporate AI research.
Alignment first
Neal Stephenson
Sci-fi novelist; Snow Crash, Cryptonomicon, Anathem, Termination Shock
Sci-fi novelist whose books have repeatedly anticipated technical developments (the metaverse, cryptocurrency); recent novels and essays grapple directly with AI's social effects.
Near-term harms first
Liu Cixin
Sci-fi novelist; Three-Body Problem trilogy
Chinese science fiction author whose 'Three-Body Problem' trilogy has become globally influential; the 'dark forest' theory has shaped how some readers think about AI civilizations and existential risk.
Existential primacy
Arati Prabhakar
White House OSTP director (2022–2025)
Director of the White House Office of Science and Technology Policy (OSTP) and Assistant to the President for Science and Technology under the Biden administration. Previously DARPA director (2012–2017) and NIST director (1993–1997).
Evals-driven
Paul Scharre
CNAS executive VP; 'Army of None', 'Four Battlegrounds' author
Executive Vice President at the Center for a New American Security (CNAS); author of 'Army of None' (2018) on autonomous weapons and 'Four Battlegrounds' (2023) on AI in great-power competition.
Compute governance
Elsa Kania
CNAS adjunct senior fellow; China AI specialist
Adjunct senior fellow at CNAS specializing in Chinese military innovation; cited extensively in U.S. policy debates about Chinese AI development.
Compute governance
Jon Bateman
Carnegie senior fellow; AI and cyber strategy
Senior fellow at the Carnegie Endowment for International Peace specializing in technology and international affairs. Former U.S. intelligence official.
International treaty
Steven Levy
Wired editor at large; long-time tech historian
Editor at large at Wired and author of multiple histories of computing including 'Hackers' (1984), 'In the Plex' (2011) on Google, and 'Facebook: The Inside Story' (2020). Long-running interview access at frontier labs.
Near-term harms first
Hal Varian
UC Berkeley emeritus; Google chief economist emeritus
Chief economist at Google from 2002 to 2023, now emeritus, and emeritus professor at UC Berkeley. Pioneer of digital-platform economics; co-author of 'Information Rules' (1999).
Techno-optimism
Trae Stephens
Anduril co-founder; Founders Fund partner
Co-founder of Anduril Industries (defense AI hardware) and partner at Founders Fund. Frequent commentator on the integration of AI into Western defense capabilities.
Military primacy
Brian Schimpf
Anduril Industries CEO
Co-founder and CEO of Anduril Industries since 2017. Helped build the company into a major defense technology player integrating autonomous systems with command-and-control software.
Military primacy
Chris Hughes
Facebook co-founder turned antitrust advocate
Co-founder of Facebook; co-chair of the Economic Security Project. In 2019 publicly called for breaking up Facebook; has since extended antitrust framing to OpenAI and AI-cloud concentration.
Antitrust primacy
Katherine Boyle
Andreessen Horowitz; American Dynamism
General partner at Andreessen Horowitz leading the firm's American Dynamism practice (defense, aerospace, public-interest tech). Previously a journalist at the Washington Post.
Military primacy
Virginia Dignum
Umeå University; UN AI Advisory Body member
Professor of responsible AI at Umeå University; author of 'Responsible Artificial Intelligence' (2019); member of the UN AI Advisory Body. Long-standing voice in European AI ethics.
Governance first
Ricardo Baeza-Yates
Northeastern; Institute for Experiential AI
Director of research at the Institute for Experiential AI at Northeastern University; previously a VP at Yahoo Research. Long-time voice on responsible AI auditing and bias detection.
Governance first
Nello Cristianini
Bath University; ML pioneer; 'Shortcut' author
Professor of artificial intelligence at the University of Bath; co-author of foundational textbooks on Support Vector Machines and kernel methods. Author of 'The Shortcut: Why Intelligent Machines Do Not Think Like Us' (2023).
AI skeptic
John Naughton
Cambridge / Open University; Observer technology columnist
Long-time Observer (UK) technology columnist; emeritus professor of public understanding of technology at the Open University. Co-director of Cambridge's Minderoo Centre for Technology and Democracy.
Antitrust primacy
Hung-yi Lee
National Taiwan University; speech and LLM researcher
Professor at National Taiwan University; widely-watched online instructor for ML/LLMs in Mandarin. Has been a key public communicator of LLM concepts to Chinese-speaking audiences.
Alignment first
Pelonomi Moiloa
Lelapa AI co-founder; African languages NLP
Co-founder and CEO of Lelapa AI, a South African AI lab building NLP models for African languages including isiZulu, Sesotho, and Yoruba. Vocal advocate for region-led AI infrastructure.
Sovereign AI
Vukosi Marivate
Univ Pretoria; African NLP / Masakhane
Associate professor at the University of Pretoria; co-founder of the Masakhane research community for African NLP. Long-running advocate for low-resource language model development.
Open source
Toby Walsh
UNSW Sydney; AI safety advocate
Scientia Professor of AI at UNSW Sydney; chief scientist of the UNSW AI Institute. Long-standing campaigner against autonomous weapons and a leading public voice for AI regulation in Australia.
International treaty
Genevieve Bell
ANU; vice-chancellor; cultural anthropologist
Vice-Chancellor of the Australian National University since 2024; previously a senior fellow and VP at Intel, where she founded the Anthropology and User Experience research group. Distinctive cultural-anthropological perspective on AI.
Near-term harms first
Brian Tomasik
Foundational Research Institute co-founder; suffering-focused ethics
Co-founder of the Foundational Research Institute (now Center on Long-Term Risk); long-standing essayist on suffering-focused ethics and digital sentience. His writing has shaped EA-adjacent positions on AI welfare.
AI welfare
Anna Salamon
CFAR co-founder; rationality and existential risk
Co-founder of the Center for Applied Rationality (CFAR); long-time figure in rationalist and AI-risk circles. Helped train many current alignment researchers through CFAR workshops in the 2010s.
Alignment first
Nick Beckstead
Future Fund co-founder; FHI alumnus
Philosopher and former Future Fund co-founder. Author of 'On the Overwhelming Importance of Shaping the Far Future' (2013), one of the foundational texts of academic longtermism.
EA framing
Paul Bloom
Yale and University of Toronto; cognitive science of AI moral status
Professor of psychology at Yale and University of Toronto; author of 'Against Empathy' and 'Psych'. Public commentator on AI's moral status, deception, and our intuitive responses to artificial minds.
AI skeptic
Simon Willison
Independent developer; co-creator of Django; LLM tools
Co-creator of the Django web framework and founder of Datasette. His blog at simonwillison.net has been one of the most-read developer-oriented sources of LLM analysis since GPT-3, with extensive practical attention to prompt injection and capability evaluation.
Security mindset
Riley Goodside
Scale AI; prompt engineering pioneer
Researcher at Scale AI; widely credited as one of the first practitioners to systematically explore the boundaries of LLM behaviour through prompting. Produced many of the foundational examples of prompt injection.
Security mindset
George Hotz
Comma.ai / tinygrad founder
Founder of Comma.ai (open-source autonomy) and tinygrad (compact deep-learning framework). Self-taught hacker famous for jailbreaking the iPhone and PS3; vocal opponent of frontier-AI doomerism.
Techno-optimism
Lucius Bushnaq
Apollo Research; mech interp
Senior researcher at Apollo Research; works on mechanistic interpretability and on detecting deceptive cognition in language models.
Interpretability bet
Rob Wiblin
80,000 Hours podcast host
Co-founder of 80,000 Hours and host of its podcast. Has interviewed many of the people on this list at unusual length, with a recent emphasis on AI risk and policy.
Alignment first
Benjamin Todd
Founder of 80,000 Hours
Co-founder of 80,000 Hours, the EA career-advice organization that has placed hundreds of researchers into AI safety roles. Author of '80,000 Hours: Find a fulfilling career that does good'.
EA framing
Holly Elmore
PauseAI US executive director
Executive director of PauseAI US; previously a researcher at Centre for Effective Altruism. Visible organizer of in-person protests and policy advocacy for an enforced pause on frontier training.
Pause
Spencer Greenberg
Clearer Thinking founder; rationality researcher
Mathematician and entrepreneur; founded Clearer Thinking, a behavioural research and rationality-tools project. Hosts the Clearer Thinking podcast where AI risk is a recurring theme.
Evals-driven
Gus Docker
Future of Life Institute podcast host
Host of the Future of Life Institute Podcast; long-form interviews with AI safety researchers, policy figures, and adjacent thinkers. Influential conduit for technical alignment discourse.
Alignment first
Tobias Lütke
Shopify CEO; AI-first internal mandate
Founder and CEO of Shopify. In April 2025 issued an internal memo stating that 'reflexive AI usage is now a baseline expectation' for all Shopify employees, one of the most explicit AI-first labor policies from a major company.
Commentator · Household name · Post-ChatGPT
Techno-optimism
Sam Hammond
Foundation for American Innovation senior economist
Senior economist at the Foundation for American Innovation (formerly Lincoln Network); writes the 'Second Best' Substack on technology and political economy. Argues for an aggressive U.S. industrial policy on AI compute.
Centralised project · Governance first
Sal Khan
Khan Academy founder; AI tutor advocate
Founder of Khan Academy. In 2023 launched Khanmigo, an AI tutor based on GPT-4. Advocates publicly that AI tutoring done well can produce Bloom's '2-sigma' learning gains for every student.
Techno-optimism
Eric Topol
Scripps cardiologist; AI in medicine pioneer
Founder and director of the Scripps Research Translational Institute; cardiologist and author of 'Deep Medicine' (2019). Long-standing voice on how AI should reshape clinical practice.
Techno-optimism
Holly Herndon
Musician; 'Holly+' digital twin and Spawning
Composer and electronic musician whose work pioneered AI-assisted music; co-founded Spawning, which built tools for artists to opt out of generative AI training datasets.
Near-term harms first
David Rolnick
McGill / Mila; Climate Change AI co-founder
McGill assistant professor and Mila core faculty; co-founder of Climate Change AI. Co-author of the influential 'Tackling Climate Change with Machine Learning' (2019) paper.
Differential technology
Daniel Susskind
Oxford / King's College London; AI and work economist
Oxford economist; senior research associate at King's College London. Author of 'A World Without Work' (2020) and co-author with father Richard Susskind of 'The Future of the Professions' (2015).
Near-term harms first
Anton Korinek
UVA economist; AI and inequality
University of Virginia professor of economics; senior fellow at Brookings. Has produced influential work on AI's macroeconomic implications and on transformative AI scenarios.
Near-term harms first
Richard Susskind
Tech-and-law thinker; 'Future of the Professions'
British author and IT advisor to the Lord Chief Justice of England and Wales; pioneer of 'online courts' and frequent speaker on AI's effect on legal services.
Techno-optimism
Nathan Lambert
Allen Institute for AI; 'Interconnects' newsletter
Senior research scientist at the Allen Institute for AI (Ai2) and author of the widely read 'Interconnects' newsletter. Co-leads Ai2's open-source post-training research and publishes detailed analyses of frontier-lab releases.
Open source
Ali Farhadi
Allen Institute for AI CEO
CEO of the Allen Institute for AI since 2023; previously a senior director at Apple. Computer-vision researcher (co-creator of YOLO; University of Washington professor) and longtime advocate of open-source frontier models.
Open source
Chip Huyen
Author of 'Designing Machine Learning Systems'
ML engineer and author whose work on production ML systems and LLM evaluation has shaped industry practice. Previously co-founded Claypot AI; ex-NVIDIA, ex-Snorkel.
Evals-driven
Ronen Eldan
Microsoft Research; 'TinyStories' author; mathematician
Microsoft Research mathematician; co-author of 'TinyStories' (2023), which showed that small language models trained on synthetic children's stories can produce coherent text, reframing what 'small' models can do.
AI skeptic
Henry Kissinger
Former U.S. Secretary of State; co-author 'The Age of AI'
Former U.S. Secretary of State (1973–77); died 2023. Co-authored 'The Age of AI: And Our Human Future' (2021) with Eric Schmidt and Daniel Huttenlocher, framing AI as a category-shifting transformation in the structure of knowledge.
International treaty
Mike Knoop
Co-founder ARC Prize; ex-Zapier
Co-founder of Zapier and, with François Chollet, of the 'ARC Prize', a public competition built around the ARC-AGI benchmark to reward systems that demonstrate genuine generalization rather than scaled pattern matching.
AI skeptic
Yuntao Bai
Anthropic; Constitutional AI co-author
Anthropic researcher; co-lead author of the Constitutional AI paper introducing principles-based RLHF training and harmlessness from AI feedback.
Constitutional AI
Trenton Bricken
Anthropic mechanistic interpretability
Anthropic researcher whose work on sparse autoencoders, attention dynamics, and dictionary learning has been central to the mechanistic interpretability program.
Interpretability bet
Catherine Olsson
Anthropic; ex-OpenAI; AI safety community organizer
Anthropic researcher and longtime fixture of the AI safety research community. Co-author of OpenAI's 'AI and Compute' analysis. Was a longtime safety advocate at Google Brain and Open Philanthropy.
Alignment first
Adam Jermyn
Anthropic; previously astrophysics
Anthropic researcher whose path from theoretical astrophysics to AI safety is widely cited as a model for cross-field switching. Researches deceptive alignment risks.
Alignment first
Lukas Finnveden
Open Philanthropy; AI safety analyst
Open Philanthropy researcher whose detailed analyses of AI takeoff dynamics, training data running out, and alignment training methods have been widely cited in EA circles.
Alignment first
Christopher Manning
Stanford NLP director; foundation models
Stanford professor of computer science and linguistics; director of the Stanford AI Lab. Co-led the Center for Research on Foundation Models. Co-author of 'Foundations of Statistical Natural Language Processing' (1999) and 'Introduction to Information Retrieval' (2008).
Alignment first
Garry Kasparov
Former world chess champion; 'Deep Thinking' author
Former world chess champion; in 1997 lost a match against IBM's Deep Blue, an inflection moment for public perception of AI. Author of 'Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins' (2017).
Techno-optimism
Hannah Fry
Cambridge mathematician; 'Hello World' author
Mathematician and broadcaster; Cambridge professor (from 2024) and author of 'Hello World: How to be Human in the Age of the Machine' (2018). Frequent BBC presenter on algorithms and decision-making.
Near-term harms first
Tim Urban
Wait But Why; viral AI explainer
Author of the Wait But Why blog. His 2015 two-part series on superintelligence reached an order-of-magnitude wider audience than any prior writing on AI safety and shaped the public picture for years.
Existential primacy
Lila Ibrahim
DeepMind COO; AI ethics governance
Chief Operating Officer of Google DeepMind. Previously president of Coursera. Helped institutionalize DeepMind's Ethics & Society and AI Safety teams.
RSP-style commitments
Sam Charrington
Host of The TWIML AI Podcast
Founder and host of TWIML (This Week in Machine Learning & AI), one of the longest-running independent technical AI podcasts. Has interviewed hundreds of researchers from across the alignment, capability, and applications spectrum.
Evals-driven
Tomek Korbak
UK AI Security Institute; ex-Anthropic; pretraining alignment
Researcher at the UK AI Security Institute (AISI); previously at Anthropic. Doctoral work on pretraining-time alignment via behaviour-cloning and conditional control.
Alignment first
Fabien Roger
Anthropic alignment researcher; control evaluations
Anthropic alignment researcher whose work on AI control, designing evaluations to test whether models can subvert oversight even when they are trying to, has been widely cited in safety circles.
Evals-driven
Marc Raibert
Boston Dynamics founder; AI Institute executive director
Founder of Boston Dynamics and now executive director of the AI Institute (Hyundai-backed). Veteran roboticist with views shaped by decades of building physical machines that have to behave in the real world.
AI skeptic
Anjney Midha
Andreessen Horowitz general partner; AI investor
General partner at Andreessen Horowitz focused on AI infrastructure and applications. Previously founded Ubiquity6 (acquired by Niantic). Board member of Mistral AI and others.
Open source
Ozzie Gooen
Quantified Uncertainty Research Institute founder
Founder of the Quantified Uncertainty Research Institute (QURI); created Squiggle, a probabilistic estimation language. Vocal in EA forums about applying calibrated estimation to AI risk and longtermism.
Evals-driven
Vivek Wadhwa
Tech writer and ex-academic; serial AI critic
Tech writer and former Stanford / Carnegie Mellon distinguished fellow. Frequent commentator on AI's social and labor impacts, generally skeptical of frontier-lab self-regulation.
Near-term harms first
Sara Beery
MIT EAPS / CSAIL; AI for ecology
MIT assistant professor in Electrical Engineering and Computer Science / EAPS; widely cited for using ML to scale ecological monitoring and biodiversity science.
Differential technology
Justin Trudeau
Prime Minister of Canada (2015–2025)
Prime Minister of Canada from 2015 to early 2025. Hosted the 2018 G7 with AI on the agenda; oversaw the Pan-Canadian AI Strategy. His government pursued bills C-27 and AIDA on AI regulation.
Governance first
Anthony Albanese
Prime Minister of Australia (2022–)
Prime Minister of Australia since 2022. His government commissioned and responded to the 2023 'Safe and Responsible AI in Australia' consultation and supported voluntary AI safety standards.
Governance first
Scott Wiener
California State Senator; SB-1047 author
California State Senator (D-San Francisco) who authored SB-1047, the 2024 'Safe and Secure Innovation for Frontier AI Models Act' that became the most-debated state-level frontier-AI safety legislation. Vetoed by Governor Newsom on 29 September 2024.
Governance first
Ro Khanna
U.S. Representative (CA-17); Silicon Valley House member
U.S. Representative for California's 17th district (Silicon Valley); senior member of the House AI Caucus. Has co-led legislation on algorithmic accountability and chip export policy.
Governance first
Gavin Newsom
Governor of California; SB-1047 vetoer
Governor of California since 2019. In September 2024 vetoed SB-1047 (frontier AI safety bill) while issuing executive orders directing the state to develop alternative guardrails.
Evals-driven
Hadrien Pouget
Carnegie Endowment; EU AI Act translator-in-chief
Associate fellow at the Carnegie Endowment for International Peace; one of the most-read English-language analysts of the EU AI Act. Frequently consulted by U.S. policymakers translating Brussels into Washington.
Governance first
Madhumita Murgia
Financial Times AI editor; 'Code Dependent' author
Financial Times AI editor and author of 'Code Dependent: Living in the Shadow of AI' (2024). Reported the working conditions of data labelers and content moderators powering frontier ML.
Near-term harms first
Meghan O'Gieblyn
Essayist; 'God, Human, Animal, Machine'
Essayist and author of 'God, Human, Animal, Machine' (2021), a literary investigation of how religious metaphors structure the AI age and how loss of those metaphors shapes contemporary epistemology.
AI skeptic
Trevor Paglen
Artist; AI surveillance and computer vision
Artist and geographer whose work investigates the apparatus of state surveillance and the visual training data that underpins computer vision. Collaborated on 'ImageNet Roulette' with Kate Crawford.
Near-term harms first
John Thornhill
Financial Times innovation editor
Financial Times innovation editor; one of the most-read European columnists on AI strategy and competition. Co-host of the Tech Tonic podcast.
Governance first
Anna Lauren Hoffmann
UW iSchool; 'data violence' theorist
Associate professor at the University of Washington Information School; coined 'data violence' as a frame for harms perpetrated through data systems, and has written widely on philosophy of information.
Near-term harms first
Bo Li
UChicago / UIUC; AI safety evaluations
University of Chicago associate professor specializing in safety, robustness, and trustworthiness of ML systems. Lead developer of DecodingTrust, a comprehensive trustworthiness benchmark for LLMs.
Evals-driven
Yaodong Yang
Peking University; alignment and multi-agent RL
Peking University Boya Young Fellow; researches multi-agent reinforcement learning and AI alignment. Hosted Beijing-based academic alignment workshops bridging Chinese and Western researchers.
Alignment first
Nat Friedman
AI Grant; ex-GitHub CEO; sits on Meta superintelligence advisory
Investor and former GitHub CEO (2018–2021). Co-founder of AI Grant; advised the Vesuvius Challenge that used ML to read carbonized scrolls. Joined Meta's superintelligence advisory in 2024.
Techno-optimism
Daniel Gross
Andromeda Cluster co-founder; ex-Apple AI lead
Investor; former director of machine learning at Apple. Co-founder with Nat Friedman of AI Grant and Andromeda, a large-scale GPU cluster offered to AI startups.
Distributed builders
Tatsunori Hashimoto
Stanford; CRFM; LLM evaluation and security
Stanford assistant professor in CS and statistics. Member of the Center for Research on Foundation Models. Researches generalization and security properties of large models.
Evals-driven
Karthik Narasimhan
Princeton; reasoning, NLP
Princeton assistant professor of computer science. Previously a researcher at OpenAI before its scaling era. Focuses on language understanding and reasoning, and led work on the SWE-bench benchmark.
Evals-driven
Ce Zhang
ETH Zürich → University of Chicago; ML systems
Professor at the University of Chicago and previously at ETH Zürich; co-founder of Together AI. Researches the systems substrate of large-scale ML training and inference.
Open source
Seda Gürses
TU Delft; computational law and privacy
TU Delft associate professor whose research connects software engineering, privacy, and the political economy of platforms. Argues large ML systems are functions of data extraction infrastructures, not isolated artefacts.
Near-term harms first
Ifeoma Ajunwa
Emory Law; workplace AI scholar
Asa Griggs Candler Professor of Law at Emory University School of Law and founder of the AI Decision-Making Research Program. Author of 'The Quantified Worker' (2023).
Near-term harms first
Mark Latonero
AI policy researcher; ex-Data & Society
Researcher and policy analyst formerly at Data & Society and the World Economic Forum; senior adviser at the Center for Human Rights in Practice. Long-time advocate of human-rights-grounded AI governance.
Governance first
Catelijne Muller
ALLAI president; EU AI Act civil-society voice
President of ALLAI (Alliance for Human-Centered AI) and a member of the European Commission High-Level Expert Group on AI. Lead civil-society interlocutor on the EU AI Act.
Governance first
Gemma Galdon Clavell
Eticas Foundation founder; algorithmic audits
Founder of the Eticas Foundation, a non-profit specializing in adversarial algorithmic audits. Repeatedly demonstrated bias in deployed government and platform systems in Europe and Latin America.
Near-term harms first
Martin Tisné
AI Collaborative; managing director
Managing director of the AI Collaborative at the Omidyar Network; long-standing funder and convener of civil-society AI policy work. Has championed 'data agency' framings in international forums.
Democratic mandate
Edward Grefenstette
Google DeepMind; AI for science research
DeepMind senior research scientist focused on language and reasoning. Previously co-led Cohere for AI's research arm; long-time NLP researcher.
Alignment first
Illia Polosukhin
NEAR Protocol co-founder; Transformer co-author
Co-founder of NEAR Protocol; previously at Google Brain, where he co-authored 'Attention Is All You Need'.
Open source
Andy Jassy
Amazon CEO; AWS architect
CEO of Amazon since 2021; previously the architect of Amazon Web Services. Has framed Amazon's generative-AI strategy around Bedrock as a multi-model platform with substantial Anthropic investment.
Distributed builders
Elham Tabassi
NIST Chief AI Advisor; AI Risk Management Framework
Chief AI Advisor at NIST and a principal architect of the NIST AI Risk Management Framework (2023). Steered NIST's role at the U.S. AI Safety Institute prior to its 2025 reorganization.
Evals-driven
Jeffrey Ding
George Washington University; ChinAI newsletter
Assistant professor at George Washington University and author of the widely read ChinAI newsletter, which translates and contextualizes Chinese AI policy and research.
International treaty
Jasmine Sun
Tech writer; Substack contributor on AI culture
Independent tech writer best known for essays on AI, internet culture, and the post-Twitter information ecosystem; widely read in policy and product circles for clear-eyed coverage of Silicon Valley AI politics.
Near-term harms first
Nigel Shadbolt
Oxford / Open Data Institute co-founder
Principal of Jesus College, Oxford and co-founder of the Open Data Institute. Co-author with Roger Hampson of 'The Digital Ape' (2018); long-standing voice for democratized AI through open data.
Open source
Gabriel Weil
Touro Law professor; AI liability scholar
Touro Law Center associate professor of law whose 2023 paper 'Tort Law as a Tool for Mitigating Catastrophic Risk from Artificial Intelligence' has become a key academic argument for using strict, joint-and-several liability to internalize AI risks.
Liability-driven safety
Tim Wu
Columbia Law; ex-Biden NEC special assistant on tech competition
Columbia Law professor who coined 'net neutrality'; previously special assistant to President Biden on technology and competition policy. A central figure in the new-Brandeisian school of tech antitrust.
Antitrust primacy
Matt Stoller
Open Markets Institute; BIG newsletter
Director of research at the Open Markets Institute and author of the BIG newsletter; consistent advocate of breaking up Big Tech and applying antitrust law aggressively to AI infrastructure and data.
Antitrust primacy
Boaz Barak
Harvard; OpenAI safety; theoretical CS
Harvard theoretical CS professor on leave at OpenAI working on safety. Long-standing CS theorist whose recent posts have argued for taking AI safety problems seriously while criticizing parts of the doomer narrative.
Alignment first
Martin Wattenberg
Harvard; ex-Google PAIR; visualization for ML
Harvard professor and former senior research scientist at Google, where he co-founded the People + AI Research initiative. Long-time pioneer of interpretability through visualization.
Interpretability bet
Fernanda Viégas
Harvard; ex-Google PAIR; data visualization
Harvard professor and long-time collaborator with Martin Wattenberg; co-led Google's PAIR initiative on human-centered AI. Specialist in visualization and interaction for understanding complex ML systems.
Interpretability bet
Kyle Mahowald
UT Austin; LLMs as not-quite-thought experiments
UT Austin linguistics professor whose 2023 paper 'Dissociating language and thought in large language models' became a key reference for understanding the gap between LLM language fluency and reasoning competence.
AI skeptic
Anna Rogers
IT University of Copenhagen; LLM benchmarking critique
IT University of Copenhagen associate professor; vocal critic of how LLM benchmarks are constructed and reported. Frequent NLP community commentator on contamination, leaderboard inflation, and method hygiene.
Evals-driven
David Bau
Northeastern; mechanistic interpretability of LLMs
Northeastern University professor whose group has produced widely cited work on locating and editing factual associations in transformer language models (ROME, MEMIT).
Interpretability bet
Meredith Broussard
NYU journalism prof; 'Artificial Unintelligence', 'More Than a Glitch'
NYU professor and data journalist; author of 'Artificial Unintelligence' (2018) and 'More Than a Glitch' (2023). Argues that bias in AI is structural, not anomalous.
Near-term harms first
Julia Angwin
Investigative journalist; ex-ProPublica; The Markup founder
Investigative journalist; co-founded The Markup. Pulitzer winner whose 2016 ProPublica investigation 'Machine Bias' kicked off mainstream criminal-justice algorithm coverage.
Near-term harms first
Cynthia Breazeal
MIT Media Lab; social robotics; founder Jibo
MIT Media Lab professor and founder of Jibo. Has spent decades studying social robotics; now leads MIT's Personal Robots group and the Day of AI K-12 curriculum.
Near-term harms first
Ben Shneiderman
UMD emeritus; 'Human-Centered AI' framework
University of Maryland professor emeritus; pioneer of human-computer interaction; author of 'Human-Centered AI' (2022), which proposes a framework for keeping humans in control of high-stakes AI systems.
Scalable oversight
Nicolas Papernot
U Toronto / Vector Institute; ML privacy and security
University of Toronto and Vector Institute assistant professor; researches privacy attacks, membership inference, machine unlearning, and ML supply-chain security.
Security mindset
Asma Ghandeharioun
Google DeepMind; 'Patchscopes' for LLM interpretability
Senior research scientist at Google DeepMind; lead author of Patchscopes, a unifying framework for using language models to inspect their own internal representations.
Interpretability bet
Moritz Hardt
MPI Tübingen; algorithmic fairness, evals
Director at the Max Planck Institute for Intelligent Systems; co-author of 'Fairness and Machine Learning' (free textbook). Recent work emphasizes the limits of static benchmarks under distribution shift and adaptive deployment.
Evals-driven
Dustin Moskovitz
Asana / Open Phil; biggest AI safety funder
Co-founder of Facebook and Asana; with Cari Tuna, founded Good Ventures and is the principal funder behind Open Philanthropy, by far the largest private funder of long-term AI safety research.
Alignment first
Cari Tuna
Co-founder Good Ventures; Open Philanthropy chair
Co-founder with Dustin Moskovitz of Good Ventures and chair of Open Philanthropy. Former Wall Street Journal reporter; oversees one of the largest private grantmaking operations focused on existential risk.
Alignment first
Vinod Khosla
Khosla Ventures; early OpenAI investor
Founder of Khosla Ventures and Sun Microsystems; led the first major outside investment in OpenAI in 2019. Vocal techno-optimist on AI's economic and medical potential.
Techno-optimism
Hilary Greaves
Oxford GPI; longtermist moral philosopher
Oxford professor of philosophy and director of the Global Priorities Institute; key academic theorist of longtermism, the framework that animates much existential-risk-focused EA work.
EA framing
Owen Cotton-Barratt
FHI alumnus; existential risk researcher
Mathematician and existential risk researcher; former Future of Humanity Institute fellow whose work shaped early frameworks for prioritizing global catastrophic risks within the EA tradition.
Differential technology
Tegan Maharaj
HEC Montréal; ML safety, ecology, and ethics
Assistant professor at HEC Montréal whose research bridges machine learning safety, ecology, and the political economy of training data. Active in NeurIPS workshop organization on responsible ML.
Near-term harms first
Janelle Shane
AI Weirdness; optics researcher and AI humorist
Research scientist at Boulder Nonlinear Systems; author of the 'AI Weirdness' blog and the book 'You Look Like a Thing and I Love You'. Public communicator of how ML systems fail in unexpected, illuminating ways.
AI skeptic
Pat Gelsinger
Former Intel CEO; chip-supply geopolitics
Former CEO of Intel (2021–2024); architected the company's foundry strategy and lobbied successfully for the U.S. CHIPS Act. Frequent commentator on the geopolitics of AI compute.
Compute governance
Richard S. Sutton
RL pioneer; 2024 Turing Award recipient
University of Alberta professor; research scientist at Keen Technologies since 2023 and previously a distinguished research scientist at DeepMind. With Andrew Barto, won the 2024 Turing Award for the foundations of reinforcement learning. Author of the canonical 'Reinforcement Learning: An Introduction' textbook and the influential 'Bitter Lesson' essay.
Deep technical · Field-leading · Symbolic era
Acceleration · Abandon superintelligence
Andrew G. Barto
RL co-founder; 2024 Turing Award recipient
UMass Amherst emeritus professor and co-developer of reinforcement learning with Richard Sutton. Co-recipient of the 2024 ACM Turing Award.
Deep technical · Field-leading · Symbolic era
Alignment first · Existential primacy
John Jumper
DeepMind; AlphaFold lead; Nobel Prize 2024
Google DeepMind director who led the AlphaFold project that solved the protein folding problem. Awarded the 2024 Nobel Prize in Chemistry alongside Demis Hassabis and David Baker.
Alignment first
Jakub Pachocki
OpenAI Chief Scientist (since 2024)
OpenAI's chief scientist since May 2024, succeeding Ilya Sutskever. Former lead on the GPT-4 research effort.
Race to aligned SI
Kyunghyun Cho
NYU professor; Genentech; encoder-decoder pioneer
NYU professor and Genentech principal investigator; co-author of the original encoder-decoder paper that underpins modern sequence models. Vocal critic of overhyped AGI claims and capability narratives.
AI skeptic
Tom M. Mitchell
CMU founders university professor; ML textbook author
Carnegie Mellon University Founders University Professor and former founding head of CMU's Machine Learning Department. Author of the canonical 'Machine Learning' (1997) textbook.
Governance first
Carl Shulman
p 20% · Open Phil senior research analyst; AGI takeoff economics
Open Philanthropy researcher who has worked on the economics, decision theory, and forecasting of advanced AI for nearly two decades. Best known for long-form analyses of AI takeoff and what-if-AGI-arrives-by-2030 scenarios.
Race to aligned SI
K. Eric Drexler
FHI; nanotech pioneer; CAIS author
Pioneer of molecular nanotechnology and former FHI senior research fellow. Author of 'Reframing Superintelligence' (2019), which introduced 'Comprehensive AI Services' (CAIS) as an alternative to the unitary-agent framing.
Narrow AI preservation
Sergey Levine
UC Berkeley; robot learning, deep RL
UC Berkeley professor whose group has produced foundational work on robot learning, offline RL, and, since 2023, generalist robot foundation models like RT-2 and Open X-Embodiment.
Acceleration
Jakob Foerster
Oxford FLAIR lab; multi-agent RL
Oxford professor and head of the FLAIR lab; researches learning to cooperate and communicate in multi-agent reinforcement learning, including opponent shaping.
Cooperative AI
Mark Riedl
Georgia Tech; AI ethics and storytelling
Georgia Tech professor of interactive computing whose research spans narrative AI, AI ethics, and creative AI. Frequent public commentator on overhype and underexamined societal harms of generative AI.
Near-term harms first
Sean Carroll
Johns Hopkins / Santa Fe; physicist and Mindscape host
Theoretical physicist at Johns Hopkins and the Santa Fe Institute; hosts Mindscape, where AI risk has been a recurring topic. Cautious about AGI timelines and tends toward measured skepticism on near-term existential framings.
AI skeptic
Raia Hadsell
Google DeepMind director of robotics & research
Google DeepMind senior director whose research has spanned robotics, continual learning, and progressive networks. Helped lead DeepMind's robotics push and now oversees research strategy.
Alignment first
Sébastien Bubeck
OpenAI; lead author of 'Sparks of AGI' paper
Mathematician and ML researcher; previously led the Microsoft Research team that produced the 'Sparks of Artificial General Intelligence' paper, then moved to OpenAI in 2024.
Acceleration
Nicholas Carlini
Anthropic adversarial-ML researcher; ex-Google Brain
Adversarial machine learning researcher widely cited for memorization, jailbreak, and privacy attacks on large models. Argues current LLM safety is unusually fragile compared to mature security fields.
Security mindset
Aleksander Mądry
MIT; ex-OpenAI head of preparedness
MIT professor of computer science specializing in robust machine learning. Led the OpenAI Preparedness Team in 2023–24 to evaluate frontier model risks across CBRN, cyber, and persuasion domains.
Evals-driven
David Deutsch
Oxford physicist; pioneer of quantum computing
Oxford physicist who proposed the universal quantum computer in 1985. Author of 'The Beginning of Infinity'; argues general intelligences are people in the moral sense and AI x-risk framings misread the open-ended nature of explanatory knowledge.
Moral circle expansion
John Wentworth
Independent alignment researcher; natural abstractions
Independent alignment researcher who developed the 'natural abstractions hypothesis' as a framing for whether human concepts robustly transfer to learned representations.
Interpretability bet
Scott Garrabrant
MIRI researcher; logical induction, Cartesian frames
Machine Intelligence Research Institute researcher who developed logical inductors (a formalism for assigning probabilities to mathematical statements over time) and Cartesian frames (a model of agent–environment boundaries).
Agent foundations
Abram Demski
MIRI researcher; embedded agency
MIRI researcher who co-authored the 'Embedded Agency' sequence outlining gaps in classical decision theory when an agent is part of the environment it reasons about.
Agent foundations
Tim Rocktäschel
Google DeepMind / UCL; open-ended learning
DeepMind staff scientist and UCL professor who studies open-ended learning environments. Co-creator of Genie (action-controllable foundation worlds) and contributor to autocurricula research.
Open-endedness
Jacob Andreas
MIT NLP; language models as belief reports
MIT EECS professor whose research has examined whether language models develop interpretable internal world-models and structured representations of belief.
Interpretability bet
Jonathan Mugan
DeepGrammar founder; AI for children's media
AI researcher whose work has focused on grounding AI in real-world understanding. Founded DeepGrammar to build AI tools for children's media.
AI skeptic
Barath Raghavan
USC professor; digital infrastructure and AI energy
USC computer scientist whose work on internet infrastructure and AI energy has informed mainstream discussion of compute-and-climate tradeoffs.
Governance first
Trevor Darrell
UC Berkeley AI vision professor
UC Berkeley professor specializing in computer vision and multimodal AI. Widely-cited contributor to vision-language model research.
Alignment first
Julia Stoyanovich
NYU responsible AI researcher
NYU Tandon professor and founding director of the Center for Responsible AI. Co-author of New York City's algorithmic accountability regulation.
Governance first
Solon Barocas
Microsoft Research and Cornell; fairness theorist
Microsoft Research principal and Cornell professor whose foundational work on fairness in machine learning has shaped the field. Co-author of the canonical Fairness and Machine Learning textbook.
Governance first
Sarah T. Roberts
UCLA professor; author of Behind the Screen
UCLA professor whose 2019 Behind the Screen documented the human labour behind commercial content moderation. Has extended this to generative AI training data.
Governance first
Milagros Miceli
Weizenbaum Institute Berlin; data-worker research
Weizenbaum Institute researcher whose work has documented the exploitative conditions of AI data workers globally. Foundational research for the data-labour rights movement.
Governance first
Adrienne Williams
Former Amazon delivery driver; AI labour activist
Former Amazon delivery driver and current AI Now Institute fellow who has written about AI surveillance in warehouse and gig work. Voice for AI-affected workers.
Governance first
Scott Aaronson
UT Austin computer scientist; ex-OpenAI AI safety visitor
Quantum computing theorist at UT Austin. Took leave from 2022 to 2024 to work on OpenAI's alignment team, developing watermarking technology for LLM outputs. Publicly skeptical of 'Yudkowskian' doom framings but engaged with alignment work.
Deep technical · Field-leading · Pre-deep-learning
Alignment first
Yejin Choi
University of Washington NLP professor; MacArthur fellow
MacArthur 'genius' grant recipient and University of Washington professor whose work on common-sense reasoning in language models has been widely influential.
Deep technical · Field-leading · Deep-learning rise
AI skeptic
Albert Fox Cahn
Surveillance Technology Oversight Project (S.T.O.P.) founder
Civil rights attorney who founded the Surveillance Technology Oversight Project. Has led legal challenges to AI-based surveillance in NYC and beyond.
Governance first
Evan Greer
Fight for the Future director; digital rights activist
Director of Fight for the Future, a digital rights activist organisation. Has organised against AI surveillance, facial recognition, and algorithmic harm.
Governance first
Edward Snowden
NSA whistleblower; AI surveillance critic
Former NSA contractor and famous surveillance whistleblower. Has written publicly about AI's implications for surveillance and intelligence work.
Governance first
Andrew Yao
Tsinghua professor; Turing Award winner; Chinese AI institutional figure
Tsinghua professor and 2000 Turing Award winner. Founded the Yao Class at Tsinghua, which has produced many of China's leading AI researchers.
Alignment first
Liang Wenfeng
Founder of DeepSeek; Chinese frontier AI
Quant trader and founder of DeepSeek, the Chinese AI lab whose 2025 release of efficient, cheap reasoning models redrew the global AI cost curve.
Open source
Wu Hequan
Chinese Academy of Engineering academician; AI policy elder
Chinese Academy of Engineering academician who has shaped Chinese AI standards work and academic policy advice for two decades.
Governance first
Cyril Ramaphosa
President of South Africa; AU AI strategy host
President of South Africa, whose government chairs the African Union's Continental AI Strategy work and publicly frames African AI sovereignty as a joint continental project.
Sovereign AI
Xi Jinping
President of China; AI as national strategic priority
President of China whose 2017 New Generation AI Development Plan made AI a national strategic priority and committed China to AI leadership by 2030.
Policy / meta · Household name · Deep-learning rise
Race to aligned SI
Narendra Modi
Prime Minister of India; AI for All initiative
Indian Prime Minister whose government has launched the IndiaAI Mission, a $1.25B sovereign AI initiative. Co-host of the 2026 AI Action Summit.
Policy / meta · Household name · Post-ChatGPT
Sovereign AI
Olaf Scholz
Former German Chancellor; AI infrastructure investments
Former German Chancellor whose government expanded Germany's AI infrastructure investment and shaped its position in the EU AI Act negotiations.
Policy / meta · Household name · Post-ChatGPT
Governance first
Luiz Inácio Lula da Silva
President of Brazil; AI sovereignty advocate
President of Brazil whose government has launched a Brazilian AI Plan, positioning Brazil as a Global South AI sovereignty leader.
Sovereign AI
Josephine Teo
Singapore Minister for Digital Development and Information
Singapore Minister leading the country's AI strategy. Singapore's AI Verify and AI Safety Institute are major contributions to ASEAN AI governance.
Governance first
Tan Ka Ying
Singapore AI Verify Foundation researcher
Singapore-based AI policy researcher contributing to AI Verify Foundation, the standard-setting body for ASEAN AI testing. Voice for ASEAN AI governance.
Governance first
Yuhwen Yang
Carnegie Endowment China AI research
Carnegie Endowment researcher tracking Chinese AI policy and research outputs. Bridges Chinese-language sources to US/EU policy audiences.
International treaty
Ross Andersen
The Atlantic deputy editor; AI long-form features
Atlantic deputy editor whose long-form 2023 piece on Sam Altman set the template for serious mainstream coverage of AI safety questions.
Existential primacy
Matteo Wong
The Atlantic associate editor; AI critic
Atlantic associate editor whose long features have framed AI for mainstream literary audiences. Skeptical of AI hype while taking the technology seriously.
AI skeptic
Ina Fried
Axios chief technology correspondent
Axios chief tech correspondent whose AI+Tech newsletter is required reading in DC AI policy circles. Bridges Beltway and Silicon Valley.
Governance first
Will Knight
Wired senior writer; AI safety beat reporter
Wired senior writer covering AI. Has reported extensively on the AI safety community, frontier labs, and the politics of AI governance.
Governance first
Morgan Meaker
Wired senior writer; European tech reporter
Wired senior writer based in Europe whose AI reporting from Brussels has been a significant English-language source on EU AI Act implementation.
Governance first
Holly Jean Buck
Buffalo geographer; climate AI critic
Buffalo geography professor who has written critically about AI's role in climate solutionism. Author of After Geoengineering. Argues that AI-as-climate-solution narratives are often misleading.
AI skeptic
Tomaso Poggio
MIT computational neuroscience pioneer
MIT professor whose work on computational neuroscience and learning theory predates and underlies much of modern deep learning. Long-time bridge between neuroscience and AI.
Alignment first
Patricia Churchland
UC San Diego neurophilosopher
UC San Diego philosopher of mind whose 'neurophilosophy' framework has informed AI consciousness debates. Argues mind is brain in a strong sense.
External-domain expert · Field-leading · Symbolic era
AI welfare
Anil Seth
University of Sussex neuroscientist; consciousness researcher
Sussex neuroscientist whose work on the predictive processing model of consciousness has informed AI consciousness debates. Author of Being You.
External-domain expert · Field-leading · Deep-learning rise
AI welfare
Doris Tsao
Caltech / UC Berkeley neuroscientist; face cells researcher
Neuroscientist, formerly at Caltech and now at UC Berkeley, who mapped the face-patch system of the primate temporal lobe, showing how single neurons encode facial features. Her work has informed thinking about how AI representations relate to brain representations.
External-domain expert · Field-leading · Deep-learning rise
Alignment first
Anita Allen
UPenn law professor; privacy and AI
University of Pennsylvania law professor whose work on privacy theory has informed AI privacy frameworks. Former chair of the Presidential Commission for the Study of Bioethical Issues.
Governance first
Lina Eklund
Stockholm University; AI and gender researcher
Stockholm University researcher whose work on AI and gendered technology has informed Nordic AI policy.
Governance first
Agnes Callard
University of Chicago philosopher; aspiration theorist
University of Chicago philosopher whose work on aspiration and value-development has been applied by some AI ethics writers to the question of how AIs might come to acquire values.
Alignment first
Kwame Anthony Appiah
NYU philosopher; cosmopolitanism theorist
NYU philosopher (previously at Princeton) whose work on cosmopolitanism and identity has informed AI ethics frameworks. Writes the NYT's 'The Ethicist' column.
Governance first
Martha Nussbaum
University of Chicago philosopher; capability approach co-founder
University of Chicago philosopher who with Sen developed the capability approach. Her ten central capabilities framework has informed AI ethics work on human flourishing.
Governance first
Amartya Sen
Harvard economist; capability approach pioneer
Harvard economist and 1998 Nobel laureate. His capability approach has informed AI ethics frameworks that focus on human flourishing rather than just narrow technical metrics.
External-domain expert · Household name · Pioneer
Governance first
Esther Duflo
MIT economist; 2019 Nobel laureate (with Banerjee)
MIT development economist and 2019 Nobel laureate. Argues AI applications in development must be empirically tested through RCTs, not assumed effective.
External-domain expert · Household name · Deep-learning rise
Evals-driven
Abhijit Banerjee
MIT economist; 2019 Nobel laureate
MIT development economist and 2019 Nobel laureate. With Esther Duflo has written on AI's effect on global development. Skeptical of AI-as-development-shortcut framings.
AI skeptic
Joseph Stiglitz
Columbia economist; Nobel laureate
Columbia economist and 2001 Nobel laureate. Has written on AI as a labour-market and inequality phenomenon, arguing it accelerates rent extraction unless redirected.
External-domain expert · Household name · Pioneer
Governance first
Diane Coyle
Cambridge economist; Bennett Professor of Public Policy
Cambridge economist whose Bennett Institute focuses on the economics of digital platforms and AI. Public voice for measurement and macroeconomic framings of AI policy.
Governance first
Masayoshi Son
SoftBank CEO; major AI investor
SoftBank chair and CEO. Co-investor in Stargate and major financial backer of OpenAI. Has predicted superintelligence by 2035.
Techno-optimism
Tasha McCauley
Former OpenAI board member; tech executive
Tech executive who served on the OpenAI board and voted with Helen Toner to remove Sam Altman in November 2023. Now a vocal critic of self-regulation by frontier labs.
Governance first
Stewart Brand
Long Now Foundation; Whole Earth Catalog founder
Long Now Foundation co-founder whose Whole Earth Catalog inspired generations of tech thinkers. Has commented on AI from a long-now perspective: civilisation-scale time horizons.
External-domain expert · Household name · Symbolic era
Long reflection
Tim O'Reilly
O'Reilly Media founder; tech-publishing veteran
O'Reilly Media founder and long-time tech publisher who has written extensively on AI as economic and political phenomenon. Author of WTF? and recent essays on AI antitrust.
Policy / meta · Household name · Deep-learning rise
Antitrust primacy
Alex 'Sandy' Pentland
MIT Connection Science director; computational social science
MIT professor and founder of computational social science. Author of Honest Signals and Social Physics. Public voice for data-driven society perspective on AI.
Governance first
Sandra Wachter
Oxford Internet Institute; AI law and ethics
Oxford Internet Institute professor whose work on the 'right to explanation' has shaped EU AI law. Frequent contributor to UK and EU AI policy debates.
Governance first
Iyad Rahwan
Max Planck Institute Berlin; Moral Machine experiment
Director of the Max Planck Institute for Human Development. Led the Moral Machine experiment crowd-sourcing self-driving-car ethics. Public voice on machine behaviour.
Deep technical · Field-leading · Deep-learning rise
Alignment first
Carl Shapiro
UC Berkeley economist; antitrust and innovation
Berkeley economist who specialised in antitrust and innovation. Has commented on AI antitrust questions, particularly the OpenAI-Microsoft relationship.
Antitrust primacy
Carolyn Rouse
Princeton anthropology chair; AI sociology
Princeton anthropology professor whose work on race, media, and technology has informed AI ethics from a sociological perspective.
Governance first
Tianqi Chen
CMU professor; XGBoost and TVM creator
CMU professor who created XGBoost and TVM. Foundational figure in AI infrastructure tooling.
Open source
Luis Ceze
OctoML CEO; UW computer architecture
UW computer architecture professor and OctoML CEO. Public voice on AI hardware and the economics of compute, particularly the open-source compiler ecosystem.
Open source
Luc Julia
Renault Chief Scientist; Siri co-creator
Co-creator of Apple's Siri; now Renault's chief scientist. A public voice for skeptical, deployment-focused framings of AI.
AI skeptic
Yaël Eisenstat
Cybersecurity for Democracy at NYU; former intelligence officer
Former CIA officer and Facebook integrity lead. Now at Cybersecurity for Democracy at NYU. Argues platform-and-AI accountability is national-security infrastructure.
Governance first
Samidh Chakrabarti
Stanford Cyber Policy Center; former Facebook civic integrity
Former Facebook civic integrity team lead who left in 2021. Now at Stanford Cyber Policy Center. Public voice on platform-and-AI integrity.
Governance first
Joichi Ito
Former MIT Media Lab director; AI ethics commentator
Former MIT Media Lab director who has written extensively on AI as societal infrastructure. Author of Whiplash and co-founder of Digital Garage.
Governance first
Luciano Floridi
Yale digital ethics professor; AI ethics philosopher
Italian philosopher who founded the field of philosophy of information. Director of the Yale Digital Ethics Center. One of the most published academics on AI ethics.
Policy / meta · Field-leading · Pre-deep-learning
Governance first
Karen Yeung
Birmingham law professor; AI law theorist
Birmingham professor of law, ethics and informatics whose work on algorithmic accountability has shaped UK and EU policy frameworks. Frames AI governance as a regulatory innovation problem.
Governance first
Kate Devlin
King's College London; AI and intimacy researcher
King's College London computer scientist and author of 'Turned On' (2018), a study of sex robots and the intersection of AI with human intimacy.
AI welfare
Michael A. Osborne
Oxford ML professor; future of work researcher
Oxford machine learning professor and co-author of the 2013 'Future of Employment' paper, which estimated that 47% of US jobs were at high risk of automation. Continues to publish on AI's labour-market effects.
Governance first
Dame Wendy Hall
Southampton professor; UK AI policy author
University of Southampton professor and co-chair of the UK Government's 2017 AI Review (with Jérôme Pesenti). Long-time architect of UK AI strategy.
Governance first
Sarah Spurgeon
UCL professor; Royal Academy of Engineering AI lead
UCL Pro-Vice-Provost for Innovation and Enterprise; leads Royal Academy of Engineering work on AI, bridging academic engineering and policy on AI in critical infrastructure.
Governance first
Sarah Myers West
Co-Executive Director of AI Now Institute
Co-Executive Director of AI Now Institute. Researches the political economy of AI; has published on whether 'AI safety' is a frame that benefits incumbents.
Governance first
Anu Bradford
Columbia Law professor; 'The Brussels Effect' author
Columbia law professor whose Brussels Effect framework documents how EU regulation propagates globally. Foundational reference for understanding EU AI Act influence.
Governance first
Michèle Finck
Tübingen law professor; AI law and EU regulation
Tübingen law professor whose work on EU AI law has shaped the EU AI Act's interaction with GDPR. Trained both legally and technically.
Governance first
Brad Templeton
Long-time tech journalist; self-driving cars critic
Long-time technology writer and veteran EFF board member who has been a leading voice on autonomous-vehicle policy and AI safety in transport.
Governance first
Sasha Costanza-Chock
Algorithmic Justice League; design justice author
Algorithmic Justice League researcher and author of Design Justice (2020). Argues participatory design is a structural prerequisite for AI that doesn't reproduce systemic harm.
Governance first
Beth Singler
Cambridge anthropologist; AI religion researcher
Cambridge anthropologist who documents how AI is increasingly described in spiritual and religious terms.
AI skeptic
Kashmir Hill
NYT tech reporter; facial recognition and privacy
New York Times tech reporter whose investigation of Clearview AI led to mainstream understanding of facial recognition surveillance. Author of Your Face Belongs to Us (2023).
Governance first
Sarah Cen
MIT researcher; recommender systems and incentives
MIT researcher whose work on recommender systems and platform incentive design has been cited in mainstream AI policy debates.
Governance first
Kate Saenko
Boston University CS professor; visual AI researcher
BU computer vision professor whose work on transferable AI representations and adversarial robustness has shaped the engineering side of AI safety.
Alignment first
Ali Rahimi
Google Brain ML researcher; 'Alchemy' speech
Google Brain researcher whose 2017 NeurIPS Test of Time speech labelled deep learning 'alchemy' for its lack of theoretical foundations. Influential framing in subsequent ML rigor debates.
Evals-driven
Katy Börner
Indiana University; data and information visualisation
Indiana University Distinguished Professor whose Atlas of Knowledge and other visualisation work has shaped how researchers and policymakers see the AI research field at scale.
Evals-driven
Yuk Hui
City University of Hong Kong philosopher; cosmotechnics
Hong Kong-based philosopher whose 'cosmotechnics' framework argues different cultures have different relationships to technology, and globalised AI is a particular philosophical imposition.
AI skeptic
Hilary Putnam
Harvard philosopher (1926–2016); functionalism
Harvard philosopher who in the 1960s introduced functionalism, the view that mental states are defined by their functional roles rather than their physical makeup; it became the foundation of computationalist theories of mind.
Alignment first
Donna Haraway
UC Santa Cruz emerita; 'A Cyborg Manifesto'
Cyborg-feminism theorist whose 1985 'A Cyborg Manifesto' anticipated debates about hybrid human-machine identity. Foundational reference for thinking about post-AI selfhood.
AI welfare
Hubert Dreyfus
Berkeley phenomenologist; AI critic (1929–2017)
UC Berkeley phenomenologist whose 1972 'What Computers Can't Do' was the first serious philosophical critique of symbolic AI. Foundational reference for AI-skeptic arguments grounded in embodied cognition.
AI skeptic
John Searle
UC Berkeley philosopher; Chinese Room Argument
Berkeley philosopher whose 1980 'Chinese Room' thought experiment is the canonical argument that symbol manipulation alone cannot produce understanding.
External-domain expert · Household name · Pioneer
AI skeptic
Daniel Dennett
Philosopher; 'Darwin's Dangerous Idea' (1942–2024)
Tufts philosopher who spent his career arguing for naturalistic, computational theories of mind. Foundational reference for thinking about AI consciousness.
External-domain expert · Household name · Pioneer
AI welfare
Thomas Nagel
NYU philosopher; 'What is it like to be a bat'
NYU philosopher whose 1974 'What Is It Like to Be a Bat?' paper became the canonical statement of the consciousness question. Foundational reference for AI consciousness debates.
External-domain expert · Household name · Pioneer
AI welfare
Andy Clark
Sussex philosopher; extended mind theorist
Philosopher of mind whose 'extended mind' framework argues cognition extends into tools and environment. Foundational reference for thinking about AI as cognitive extension.
AI skeptic
Ray Kurzweil
Futurist; The Singularity Is Near (1948–)
Futurist whose 2005 The Singularity Is Near predicted human-machine merger by 2045. Long-time public face of optimistic technological-singularity framings.
External-domain expert · Household name · Symbolic era
Techno-optimism
Hans Moravec
Robotics pioneer; 'Mind Children' (1948–)
Carnegie Mellon robotics pioneer who in 1988 wrote Mind Children predicting robotic descendants of humanity. Foundational reference for AI succession debates.
Abandon superintelligence
Donald Knuth
Computer science pioneer; The Art of Computer Programming
Stanford emeritus professor whose The Art of Computer Programming has been the canonical CS reference for sixty years. Has voiced cautious skepticism about LLM-based AGI claims.
Deep technical · Household name · Pioneer
AI skeptic
Alan Kay
Object-oriented programming and personal computing pioneer
Pioneer of object-oriented programming and the Dynabook concept. Long-time critic of how the personal computing revolution was actually deployed; extends this critique to AI.
Deep technical · Household name · Pioneer
AI skeptic
Douglas Engelbart
Pioneer of human-computer interaction (1925–2013)
SRI engineer whose 1968 'Mother of All Demos' demonstrated the mouse, hypertext, and the conceptual foundations of personal computing. His framework: technology should augment, not replace, human intellect.
Deep technical · Household name · Pioneer
AI skeptic
Claude Shannon
Information theory founder (1916–2001)
Bell Labs and MIT mathematician who founded information theory in 1948. The mathematical infrastructure of all modern AI traces back to Shannon entropy and channel capacity.
Deep technical · Household name · Pioneer
Alignment first
Ada Lovelace
First programmer; analytical engine theorist (1815–1852)
Mathematician whose notes on Charles Babbage's Analytical Engine made her the first computer programmer. Foundational reference for thinking about machines and intelligence.
AI skeptic
Judea Pearl
UCLA professor; Bayesian networks and causality pioneer
UCLA computer scientist who founded Bayesian networks and the modern theory of causation. Author of The Book of Why (2018). Public skeptic of pure-correlation deep learning.
Deep technical · Household name · Symbolic era
AI skeptic
Joseph Weizenbaum
ELIZA inventor; AI ethics pioneer (1923–2008)
German-American MIT computer scientist who created ELIZA in 1966 and immediately became a critic of the AI hype that followed. Author of Computer Power and Human Reason (1976).
Deep technical · Field-leading · Pioneer
AI skeptic
Alan Turing
Founder of theoretical computer science (1912–1954)
British mathematician and codebreaker who founded computer science. His 1950 'Computing Machinery and Intelligence' paper proposed the Turing Test and inaugurated the philosophy of AI.
Deep technical · Household name · Pioneer
Alignment first
John McCarthy
Coined 'artificial intelligence' (1927–2011)
Stanford computer scientist who coined the term 'artificial intelligence' in 1955 and convened the 1956 Dartmouth Workshop that founded the field.
Deep technical · Household name · Pioneer
Alignment first
Marvin Minsky
MIT AI lab co-founder (1927–2016); 'Society of Mind'
MIT AI lab co-founder and one of the foundational figures in artificial intelligence. Author of The Society of Mind. Foundational reference for thinking about modular intelligence.
Deep technical · Household name · Pioneer
Alignment first
Doug Lenat
Cycorp founder; symbolic AI pioneer (1950–2023)
Founder of Cycorp and pioneer of symbolic-knowledge approaches to AI. Spent 40 years building Cyc, a hand-curated common-sense knowledge base. Public skeptic of pure-LLM paths to AGI.
Deep technical · Field-leading · Pioneer
AI skeptic
Bruce Schneier
Security guru; AI security and democracy critic
Cryptographer and security writer whose blog and books have shaped public understanding of digital security for two decades. Has extended his framework to AI security and democracy.
Governance first
Ross Anderson
Cambridge security professor; security engineering
Cambridge security engineering professor (1956–2024) whose textbook Security Engineering has been the canonical reference for digital security. Wrote extensively on the security implications of AI.
Governance first
Tracy Chou
Block Party founder; tech accountability advocate
Engineer who founded Block Party (an anti-harassment tool) and has been a leading voice on tech accountability, including Big Tech AI deployment.
Governance first
Alex Irpan
Google Brain alumnus; Sorta Insightful blog
Former Google Brain RL researcher and ML educator whose Sorta Insightful blog has long been a thoughtful inside-view voice on RL and AI safety.
Alignment first
Shawn Wang (swyx)
Smol AI founder; Latent Space podcast
Engineer-and-investor who runs the Latent Space podcast, a major venue for AI engineering and product discussions. Founder of Smol AI.
Techno-optimism
Cassidy Williams
GitHub developer advocate; AI in coding
Senior Director of Developer Advocacy at GitHub. Public voice for the AI-coding revolution, particularly around GitHub Copilot and the future of developer skills.
Techno-optimism
Sneha Revanur
Founder of Encode Justice; Gen-Z AI activism
Stanford-bound activist who founded Encode Justice as a youth-led AI accountability organisation. Public voice for Gen-Z perspectives on AI policy.
Governance first
Rebecca Finlay
CEO of Partnership on AI
Runs the Partnership on AI, a multistakeholder organisation that includes frontier labs, civil society, and academia. Helps coordinate industry-civil-society work on responsible AI.
Governance first
Kara Frederick
Heritage Foundation tech policy director
Heritage Foundation senior tech-policy researcher. Conservative voice on AI regulation; has published reports on AI's national security implications and on conservative responses to Big Tech.
Race to aligned SI
Tony Fadell
iPod creator; Nest founder; AI hardware critic
Hardware design legend who created the iPod and founded Nest. Has publicly criticised the LLM-everywhere approach and called for transparent, specialised AI systems.
Commentator · Household name · Post-ChatGPT
AI skeptic
Joëlle Pineau
Cohere Chief AI Officer; former Meta VP of AI Research
McGill RL professor who ran Meta's AI research (FAIR) and championed Meta's open-source AI policy. Departed Meta in April 2025 and joined Cohere as Chief AI Officer later that year.
Deep technical · Field-leading · Deep-learning rise
Open source
Jonathan Haidt
NYU Stern professor; The Anxious Generation
NYU Stern social psychologist whose 2024 book The Anxious Generation links smartphone-and-social-media exposure to youth mental health declines. Has extended his framework to oppose AI companions for children.
AI skeptic
Sherry Turkle
MIT social scientist; AI and loneliness researcher
MIT social scientist who has studied human-machine relationships for decades. Author of Alone Together (2011). Argues AI companions are an 'assault on empathy'.
AI skeptic
Lawrence Lessig
Harvard Law professor; Creative Commons founder
Long-time digital rights and copyright scholar. Founded Creative Commons. In 2024, argued some AI model weights are too dangerous to release openly even given his pro-openness defaults.
Closed weights
Vivek Ramaswamy
Former US presidential candidate; AI deregulation advocate
Former 2024 US Republican presidential candidate. Public advocate for radical AI deregulation.
Commentator · Household name · Post-ChatGPT
Acceleration
Tantum Collins
Former DeepMind policy lead; Goldman Sachs
Former DeepMind policy researcher who joined Goldman Sachs. Bridges frontier-lab governance and capital-markets perspective on AI.
Governance first
Sara Bergman
Microsoft AI sustainability engineer
Microsoft engineer who has written and spoken on AI's sustainability footprint. Public voice for green-software practices in AI deployment.
Governance first
Adrian Weller
Cambridge professor; Alan Turing Institute fellow
Cambridge ML professor and Alan Turing Institute programme director. Bridges technical ML and policy work; has advised the UK government on AI strategy.
Governance first
Reuben Binns
Oxford computer scientist; AI privacy law
Oxford computer scientist whose work has shaped UK ICO and EU AI privacy regulation. Translates technical AI questions into legal frameworks.
Governance first
Rana el Kaliouby
Affectiva co-founder; emotion AI pioneer
Egyptian-American computer scientist who co-founded Affectiva. Pioneer of emotion AI. Argues affective AI deployment requires distinct ethical frameworks.
AI welfare
Kara Swisher
Tech journalist; On with Kara Swisher podcast
Veteran tech journalist who co-founded Recode and AllThingsD. Long-running interviewer of frontier AI executives. Public framing: AI executives are not to be trusted as their own regulators.
Policy / meta · Household name · Post-ChatGPT
Governance first
Stephen Witt
Author of 'The Thinking Machine' (NVIDIA history)
Journalist and author whose 2025 book The Thinking Machine documents NVIDIA's rise as the AI infrastructure provider. Major reference for AI compute history.
Compute governance
Stephen Marche
Author; AI and writers' rights
Canadian writer who has written extensively on AI and the future of writing in The Atlantic and elsewhere. Public voice in the writers'-rights conversation about AI training data.
Governance first
Bret Taylor
Chairman of OpenAI; co-CEO of Sierra
Long-time tech executive (former Salesforce co-CEO, Twitter chairman) who became OpenAI board chair after the November 2023 governance crisis. Co-founded Sierra, an AI agent company.
Policy / meta · Household name · Scaling era
Governance first
Scott Galloway
NYU Stern professor; tech-business commentator
NYU Stern marketing professor and Pivot podcast co-host. Influential tech-business commentator who has framed AI primarily through antitrust and platform-power lenses.
Antitrust primacy
Vint Cerf
Internet co-creator; Google Chief Internet Evangelist
TCP/IP co-creator. Has written and spoken extensively on AI's implications for the internet, particularly around content provenance and trust.
Deep technical · Household name · Pioneer
Governance first
Ross Rheingans-Yoo
Independent biosecurity and AI researcher
Researcher who has worked on biosecurity and AI x-risk forecasting. Active contributor to forecasting communities and Open Philanthropy-affiliated work.
Existential primacy
Mike Solana
Pirate Wires founder; tech contrarian
Founder of Pirate Wires, a contrarian tech newsletter. Influential voice in Silicon Valley anti-doomer / accelerationist culture.
Acceleration
Matthew Yglesias
Slow Boring; political economy of AI
Influential journalist and Substack writer. His writing on AI focuses on labour markets, antitrust, and the political economy of regulation.
Governance first
Alex Kantrowitz
Big Technology podcast host; tech journalist
Tech journalist whose Big Technology newsletter and podcast have become major venues for in-depth interviews with frontier AI executives.
Governance first
Azeem Azhar
Exponential View founder; tech-economy analyst
Founder of Exponential View, a leading newsletter on technology and economy. Argues exponential technologies (including AI) require exponential institutional adaptation.
Governance first
Freddie deBoer
Cultural critic; AI skeptic
Writer and cultural critic whose Substack has been a leading skeptical voice on AI hype, particularly on AI in education and journalism.
AI skeptic
Ross Douthat
NYT columnist; conservative AI commentator
New York Times opinion columnist whose writing has covered AI from a conservative-religious perspective, focusing on questions of meaning and human dignity.
AI skeptic
Kati Suominen
Founder of Nextrade Group; AI trade policy
Trade economist and founder of Nextrade Group. Argues AI governance must be embedded in international trade frameworks, not as a separate digital regime.
International treaty
Eliza Strickland
IEEE Spectrum senior editor; AI Spectrum
IEEE Spectrum senior editor whose technical journalism on AI has been a reference for engineering audiences. Edits IEEE's AI coverage.
Governance first
Danielle Allen
Harvard political theorist; Allen Lab on AI and democracy
Harvard James Bryant Conant University Professor and political theorist. Her Allen Lab on AI and democracy works on AI governance grounded in democratic theory.
Democratic mandate
Soumith Chintala
PyTorch creator; Meta AI
Creator of PyTorch, the dominant deep-learning framework. Public voice on the open-source AI ecosystem.
Open source
Sasha Rush
Cornell Tech professor; HuggingFace research scientist
Cornell Tech NLP professor who also works at Hugging Face. Influential educator and contributor to the open-research culture.
Open source
Alan Cowen
Founder of Hume AI; emotional AI researcher
Founder of Hume AI, which builds models trained on emotional expression. Argues empathic AI is a structurally different deployment problem than generic LLMs.
AI welfare
Florian Tramèr
ETH Zurich AI security researcher
ETH Zurich professor focused on adversarial ML, privacy attacks, and red-teaming. Has documented many of the practical security failures in deployed AI.
Evals-driven
Carlos Ignacio Gutierrez
Future of Life Institute AI policy researcher
AI policy researcher at the Future of Life Institute who has analysed comparative AI legislation across the US, EU, and other jurisdictions.
Governance first
Raja Chatila
Sorbonne robotics professor; IEEE Global Initiative on AI Ethics
French roboticist who chaired the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Long-time architect of European AI ethics frameworks.
Governance first
Doina Precup
McGill professor; DeepMind Montreal lead
Reinforcement learning pioneer at McGill and DeepMind. One of the most prominent women in the technical core of RL research.
Alignment first
Sundar Sarukkai
Bangalore-based philosopher of science
Indian philosopher of science and technology whose work on the philosophy of mind and AI has influenced the Indian responsible-AI conversation.
AI skeptic
Joscha Romeike
Anthropic policy team
Anthropic policy team member working on EU regulatory engagement. Helps translate Anthropic's safety commitments into European regulatory language.
RSP-style commitments
Marshall McLuhan
Media theorist; foundational AI-and-media reference
Canadian media theorist whose dictum 'the medium is the message' has become foundational to thinking about how AI as a medium reshapes the messages it carries. Included as a foundational reference for AI-and-media discourse.
AI skeptic
Tyler John
Longview Philanthropy AI policy lead
Senior policy researcher at Longview Philanthropy who works on AI governance funding and strategy. Bridges philosophy and policy.
Governance first
Joaquin Quiñonero Candela
Apple ML executive; ex-Meta responsible AI
Former Meta VP of Responsible AI who built Meta's responsible AI infrastructure during the misinformation and fairness reckoning. Now at Apple's ML group.
Governance first
Haydn Belfield
Cambridge CSER academic project manager
Manages the AI:Futures and Responsibility programme at Cambridge's CSER. Has authored multiple AI policy reports for UK government bodies.
Governance first
Iason Gabriel
DeepMind senior research scientist; AI ethics
DeepMind senior research scientist focused on AI ethics and value alignment from a political philosophy perspective. Author of the influential 'Artificial Intelligence, Values and Alignment' paper.
Alignment first
Matt Turek
DARPA AI Forward program lead
DARPA Information Innovation Office program manager who leads the AI Forward initiative on assured autonomy and the integration of AI into national security systems.
Governance first
Shahar Avin
Cambridge CSER senior research associate
Cambridge Centre for the Study of Existential Risk researcher whose 'malicious use of AI' report has shaped policy thinking on AI security risks.
Governance first
Wes McKinney
Pandas creator; Posit/RStudio data infrastructure
Creator of pandas, the canonical Python data-analysis library. Has written on AI infrastructure: how dataframes, training pipelines, and orchestration shape what kinds of AI are possible.
Open source
Lucy Suchman
Lancaster professor emerita; AI and military robotics
Lancaster anthropologist whose work on situated action and on military AI has shaped academic thinking on autonomous weapons. Long-time critic of overconfident AI claims.
Governance first
Yann Collet
Compression researcher; Zstandard creator (Meta)
Creator of zstd compression. Has commented on the substantial energy and infrastructure costs underlying frontier AI training, drawing on a compression-theory perspective.
Governance first
Connor Tann
Faculty AI safety lead
Faculty AI's safety research lead, working with frontier labs and the UK AI Safety Institute on evaluations.
Evals-driven
Tony Blair
Former UK Prime Minister; Tony Blair Institute on AI
Former UK PM whose Tony Blair Institute has become a leading mainstream voice for AI governance, particularly on AI as a productivity-and-public-services lever for governments.
Policy / meta · Household name · Post-ChatGPT
Governance first
Geoffrey Cain
Author of 'The Perfect Police State'
Foreign correspondent whose 2021 book documented Chinese AI-powered surveillance in Xinjiang. Frames AI surveillance as a present authoritarian threat.
Governance first
Mihaela van der Schaar
Cambridge AI in healthcare professor
Cambridge professor who founded the van der Schaar Lab on machine learning for healthcare. Argues medical AI demands sector-specific evaluation methodology.
Evals-driven
Tristan Greene
Tech journalist; AI deep dive coverage
Senior technology journalist whose work for Cointelegraph and previously TNW has examined the gap between AI hype and reality.
AI skeptic
Andriy Burkov
ML engineer; 'The Hundred-Page Machine Learning Book' author
Practitioner-oriented ML educator and author. Has written publicly about the limits of current LLMs and the gap between hype and deployed reality.
AI skeptic
Robert Long
Eleos AI co-founder; AI welfare researcher
Philosopher who co-founded Eleos AI, a non-profit research organisation focused on AI moral status and welfare. Co-author of the 2024 'Taking AI Welfare Seriously' position paper.
AI welfare
Kyle Fish
Anthropic AI welfare researcher
First dedicated AI welfare researcher hired by a frontier lab (Anthropic, 2024). Translates academic philosophy into operational AI-welfare practice.
AI welfare
Henry Shevlin
Cambridge LCFI; AI consciousness philosopher
Cambridge Leverhulme Centre for the Future of Intelligence philosopher specialising in AI moral status and digital minds. Co-organised the 2024 'Taking AI Welfare Seriously' report.
AI welfare
Erik Hoel
Neuroscientist; consciousness researcher
Stony Brook neuroscientist and Tufts research scientist whose work on consciousness has been central to debates about AI sentience. Author of the 2023 book The World Behind the World.
AI welfare
Edward Felten
Princeton emeritus; ex-FTC Chief Technologist
Princeton University Robert E. Kahn Professor of Computer Science Emeritus; founding director of the Center for Information Technology Policy. Twice served the U.S. government, as deputy U.S. CTO and as FTC Chief Technologist.
Deep technical · Field-leading · Pre-deep-learning
Governance first
Renée Cummings
University of Virginia data activist in residence
AI ethicist and criminologist whose work focuses on AI in policing and Black diaspora communities. Former Senior Fellow at Columbia's Data Science Institute.
Governance first
Andrew Trask
Founder of OpenMined; privacy-preserving AI
Oxford PhD and OpenMined founder. Builds open-source tools for privacy-preserving machine learning, including federated learning, differential privacy, and secure computation.
Governance first
Trishan Panch
Wellframe co-founder; Harvard health AI
Harvard healthcare AI researcher and Wellframe co-founder. Argues medical AI deployment needs sector-specific governance grounded in clinical evidence.
Evals-driven
Ada Rose Cannon
W3C web standards advocate; AR/VR engineer
Long-time W3C standards contributor and AR/VR engineer. Has written about AI's implications for the open web and immersive technologies.
Open source
Stephen Wolfram
Founder of Wolfram Research; A New Kind of Science
Founder of Wolfram Research and author of A New Kind of Science. Has written extensively on what makes ChatGPT work and on integrating symbolic computation with LLMs.
Deep technical · Household name · Symbolic era
Techno-optimism
Ben Recht
UC Berkeley professor; ML reproducibility critic
Berkeley CS professor whose blog and papers have been a leading voice for reproducibility and rigor in ML benchmarking. Argues much of ML research has weak empirical foundations.
AI skeptic
Tom Everitt
Google DeepMind staff research scientist; first PhD thesis on safe AGI
DeepMind safety researcher whose 2018 thesis 'Towards Safe Artificial General Intelligence' was the first PhD thesis on the topic. Works on causal foundations for safe AGI.
Alignment first
Marc Warner
Co-founder and CEO of Faculty AI
Former Harvard quantum physicist who co-founded Faculty, a UK AI firm that works with the UK government and frontier labs on safety evaluations.
Evals-driven
Adam Jonas
Morgan Stanley equity analyst; embodied AI and humanoid robots lead
Morgan Stanley equity analyst who leads research on auto, mobility, and humanoid robots. Published influential $25 trillion 2050 humanoid-robot market projection.
Techno-optimism
Kalev Leetaru
Founder of the GDELT Project
Founder of the Global Database of Events, Language, and Tone. Builds open datasets used widely in conflict forecasting and disinformation research, including by NATO Strategic Communications.
Governance first
James Hendler
RPI Tetherless World Constellation director; semantic web pioneer
RPI professor and Semantic Web co-originator. Long-time advocate for structured-data AI and for governance grounded in computational pragmatics.
Governance first
Virginia Eubanks
University at Albany SUNY; Automating Inequality author
Political science professor whose 2018 Automating Inequality documented how algorithmic systems in welfare, child protection, and housing disproportionately punish poor Americans.
Governance first
Stephanie Bell
Partnership on AI; data worker rights
Chief Programs and Insights Officer at Partnership on AI. Focuses on AI and job quality, including data-worker rights in the Global South.
Governance first
Cathy O'Neil
Mathematician; Weapons of Math Destruction author
Mathematician whose 2016 Weapons of Math Destruction made 'algorithmic accountability' a mainstream concern. Runs ORCAA, an algorithmic audit firm.
Applied technical · Household name · Deep-learning rise
Governance first
Tim Berners-Lee
Inventor of the World Wide Web
Inventor of the web and co-founder of the Solid project. Argues AI exploitation of the web requires a decentralised data architecture where individuals own their data.
Frontier builder · Household name · Symbolic era
Open source
Susan Athey
Stanford economist; former DOJ Antitrust chief economist
Stanford economics professor who served 2022–2024 as Chief Economist at the US Department of Justice Antitrust Division. Pioneered the combination of machine learning and causal inference.
Antitrust primacy
Pedro Domingos
UW emeritus; The Master Algorithm author
University of Washington ML pioneer who wrote The Master Algorithm (2015). Now a public voice against what he calls the AI-safety-induced 'existential risk' panic and against what he sees as illiberal AI regulation.
AI skeptic
Michael Wooldridge
Oxford computer science department head
Oxford head of CS who specialises in multi-agent systems. Author of The Road to Conscious Machines (2020) and widely-read historian of AI. Public voice for measured AI framings.
AI skeptic
Helen Nissenbaum
Cornell Tech professor; contextual integrity theory
Cornell Tech philosopher whose contextual integrity framework is the most-cited theory of privacy in tech-policy debates. Frames AI privacy as about appropriate information flow between contexts.
Governance first
Rob Reich
Stanford professor; System Error co-author
Stanford political theory professor and associate director of HAI. Co-author of System Error (2021), which argues Big Tech's optimization mindset systematically substitutes company values for democratic ones.
Governance first
Gabriela Ramos
UNESCO Assistant Director-General for Social and Human Sciences
Mexican economist who led UNESCO's 2021 Recommendation on the Ethics of Artificial Intelligence, the first global inter-governmental AI ethics agreement, adopted by 193 member states.
International treaty
Amy Zegart
Stanford Hoover senior fellow; national security and AI
Hoover Institution senior fellow and Stanford associate director of HAI. Author of Spies, Lies, and Algorithms. Central mainstream voice on AI and US intelligence.
Policy / meta · Field-leading · Scaling era
Governance first
Ramana Kumar
Google DeepMind safety researcher; formal verification
DeepMind safety researcher who works on formal verification, tampering incentives, and scalable oversight. Combines theorem-proving background with alignment research.
Alignment first
Dragoș Tudorache
MEP; EU AI Act co-rapporteur
Romanian Member of the European Parliament; co-rapporteur of the EU AI Act (the world's first comprehensive horizontal AI regulation). Steered the file through Parliament to its 2024 adoption.
Governance first
Brando Benifei
MEP; EU AI Act co-rapporteur
Italian Member of the European Parliament; co-rapporteur of the EU AI Act with Dragoș Tudorache. Pushed for stricter rules on biometric surveillance and foundation models.
Governance first
Margrethe Vestager
Former EU Competition Commissioner (2014–2024)
Former Executive Vice President of the European Commission and EU Commissioner for Competition (2014–2024). Architect of the major EU antitrust actions against Google, Apple, and Amazon.
Antitrust primacy
Thierry Breton
Former EU Commissioner for Internal Market (2019–2024)
French politician and former business executive who served as European Commissioner for the Internal Market (2019–2024). Co-architect of the EU AI Act, Digital Services Act, and Digital Markets Act.
Governance first
Cédric Villani
Fields medalist; former MP; French AI strategy
French mathematician and 2010 Fields medalist; former French MP. Author of the 2018 'Villani Report' that set France's national AI strategy.
Sovereign AI
Sandhini Agarwal
OpenAI policy researcher
Researcher on OpenAI's policy and safety teams; co-author on multiple papers about RLHF, model deployment, and societal impacts.
RSP-style commitments
Deepak Pathak
CMU; curiosity-driven exploration; humanoid robotics
CMU assistant professor; co-author of the foundational 2017 paper on curiosity-driven exploration. Now leads research on humanoid robot learning at scale.
Open-endedness
Rana Foroohar
Financial Times associate editor; CNN analyst
Associate editor and global business columnist at the Financial Times; CNN global economic analyst. Author of 'Don't Be Evil' (2019) and frequent commentator on AI's macroeconomic and labor effects.
Antitrust primacy
David Patterson
UC Berkeley emeritus; Google AI hardware; 2017 Turing Award
UC Berkeley professor emeritus and distinguished engineer at Google. Co-architect of RISC; co-author of the Hennessy-Patterson computer architecture textbook. Co-recipient of the 2017 Turing Award; current focus on AI accelerators (TPUs).
Compute governance
Cliff Young
Google; TPU principal engineer
Google principal engineer; co-architect of the Tensor Processing Unit (TPU) chip family that powers Google's AI training and inference at scale.
Compute governance
Yu Su
Ohio State; AI agents and reasoning
Distinguished assistant professor at Ohio State University; researches reasoning, knowledge graphs, and AI agents. Co-developer of widely-used agent benchmarks.
Evals-driven
Ali Ghodsi
Databricks co-founder and CEO
Co-founder and CEO of Databricks; co-creator of Apache Spark. Adjunct professor at UC Berkeley. Acquired MosaicML in 2023, integrating LLM training into Databricks' product.
Open source
Omar Khattab
Stanford / Databricks; DSPy creator
Stanford PhD student / Databricks researcher; lead author of DSPy, a programming framework for LLM applications that has reshaped how production LLM systems are built.
Open source
David Ha
Sakana AI co-founder; ex-Stability AI head of strategy
Co-founder of Sakana AI (Tokyo); previously head of strategy at Stability AI. Prolific researcher on evolutionary methods, world models, and creative AI; widely known online as 'hardmaru'.
Open-endedness
Brett Adcock
Figure AI founder; humanoid robotics
Founder of Figure AI; previously founded Vettery and Archer Aviation. Leads one of the most heavily-funded humanoid robotics startups, raising $675M in 2024 from Microsoft, OpenAI, and others.
Techno-optimism
Eric Jang
1X Technologies VP of AI; ex-Google Brain
VP of AI at 1X Technologies (humanoid robots); previously at Google Brain. Author of 'AI is Good for You' (2023). Public commentator on the path from generative AI to embodied agents.
Acceleration
Robin Rombach
Black Forest Labs co-founder; Stable Diffusion lead
Co-founder of Black Forest Labs (Flux series of image models); previously lead author on the original Stable Diffusion paper at LMU Munich, before joining Stability AI.
Open source
Jonas Andrulis
Aleph Alpha founder; European sovereign AI
Founder and CEO of Aleph Alpha, a German AI lab building European-sovereign frontier models. Previously at Apple ML.
Sovereign AI
Peter Kyle
UK Secretary of State for Science, Innovation and Technology
Member of Parliament for Hove (Hove and Portslade since 2024), first elected in 2015; UK Secretary of State for Science, Innovation and Technology since July 2024. Oversees the AI Security Institute and the UK's AI policy approach under the Starmer government.
Governance first
Nick Clegg
Former Meta President of Global Affairs (2018–2025)
Former UK Deputy Prime Minister; served as Meta's President of Global Affairs from 2018 until early 2025. Public face of Meta's content-moderation and AI policy positions.
Open source
Stephen Fry
British writer and actor; QI host
British writer, actor, and broadcaster; long-running cultural figure with frequent reflections on AI, consciousness, and the human condition.
Existential primacy
David Brin
Sci-fi novelist; 'The Transparent Society' author
Hugo and Nebula award-winning science fiction author; author of 'The Transparent Society' (1998). Argues mutual surveillance and reciprocal accountability are the structural answer to surveillance and AI concentration.
Multi-agent equilibrium
Vitaly Shmatikov
Cornell Tech; ML privacy and security
Professor at Cornell Tech; long-running researcher on privacy attacks against ML systems. Co-author of foundational membership inference and model-inversion papers.
Security mindset
Nicolas Perrin-Gilbert
Inria; embodied AI; co-founder of Genesys Robotics
Senior researcher at Inria; co-founder of Genesys Robotics. Researches embodied AI and the limits of disembodied learning.
AI skeptic
Ed Zitron
EZPR founder; 'Where's Your Ed At' newsletter
PR executive turned newsletter writer; among the loudest voices challenging AI hype, claims of imminent AGI, and the financial sustainability of frontier-lab business models.
AI skeptic
Paris Marx
Tech Won't Save Us host
Tech writer and host of the Tech Won't Save Us podcast; long-running critic of Silicon Valley narratives. Author of 'Road to Nowhere' (2022) on autonomous vehicles.
Near-term harms first
Brian Merchant
Tech writer; 'Blood in the Machine' author
Independent tech writer; author of 'Blood in the Machine' (2023) on the original Luddites, with explicit parallels to AI-era labour disruption. Former LA Times tech columnist.
Near-term harms first
Aman Sanger
Cursor co-founder
Co-founder of Anysphere (Cursor); MIT alumnus. Cursor became one of the fastest-growing AI tools by integrating frontier models directly into a code editor built around AI-assisted coding and agents.
Techno-optimism
Sualeh Asif
Cursor co-founder
Co-founder of Anysphere (Cursor); MIT alumnus. Co-led the development of Cursor's tab-completion and agent loop systems.
Techno-optimism
Sebastian Raschka
Lightning AI; ML educator and author
Staff research engineer at Lightning AI; author of multiple Python ML textbooks. Long-running educational presence whose materials shape how a generation of engineers learns ML.
Open source
Kenneth O. Stanley
Maven; ex-OpenAI; novelty search and open-endedness
Founder of Maven; previously head of Open-Endedness at OpenAI and a UCF professor. Author of 'Why Greatness Cannot Be Planned' (2015); pioneer of novelty search and open-ended evolution.
Open-endedness
Joel Lehman
Independent researcher; ex-OpenAI; novelty search
Independent researcher; previously at OpenAI and Uber AI Labs. Co-author with Kenneth Stanley of 'Why Greatness Cannot Be Planned'; long-time advocate of intrinsically motivated and open-ended approaches.
Open-endedness
Trevor Hastie
Stanford statistics; ML pioneer
Stanford John A. Overdeck Professor of Statistics. Co-author of the canonical statistical-learning textbooks 'The Elements of Statistical Learning' (2001) and 'An Introduction to Statistical Learning' (2013).
AI skeptic
Pavel Izmailov
OpenAI; ex-superalignment team
OpenAI researcher on the (former) superalignment team; co-author of the 'weak-to-strong generalization' paper that explored whether weaker models can effectively supervise stronger ones.
Scalable oversight
Carroll Wainwright
Anthropic; ex-OpenAI; alignment researcher
Anthropic researcher on alignment; previously at OpenAI. Co-author of multiple foundational papers on RLHF and on summarization with human preferences.
Alignment first
Igor Mordatch
Google DeepMind; multi-agent and embodied AI
Senior research scientist at Google DeepMind; previously at OpenAI. Co-author of foundational papers on multi-agent emergent communication and on robotic manipulation.
Cooperative AI
Andrew McAfee
MIT; 'The Second Machine Age' co-author
Co-director of the MIT Initiative on the Digital Economy; co-author with Erik Brynjolfsson of 'The Second Machine Age' (2014) and 'Machine, Platform, Crowd' (2017).
Techno-optimism
Matt Ridley
British science writer; 'How Innovation Works'
Science writer and former member of the House of Lords; author of 'The Rational Optimist' (2010) and 'How Innovation Works' (2020). Vocal techno-optimist on AI.
Techno-optimism
Yannick Kilcher
YouTuber; ML paper explainer; ex-DeepJudge
Computer scientist and prolific YouTuber whose channel explains ML papers technically. Reaches a wide developer audience and influences which papers get attention.
AI skeptic
Pierre-Yves Oudeyer
Inria; developmental AI and curiosity
Research director at Inria Bordeaux; Flowers Lab founder. Pioneer of intrinsically motivated learning in machines; co-author of widely cited work on curiosity-driven exploration.
Open-endedness
Marc G. Bellemare
Mila / McGill; Arcade Learning Environment
Canada CIFAR AI Chair at Mila and McGill; co-author of the Arcade Learning Environment (2013) that became the canonical RL benchmark, and of the distributional RL framework.
Alignment first
Edward Tian
GPTZero founder
Princeton senior who launched GPTZero in January 2023, an early and widely-used AI text detector. Now CEO of GPTZero.
Near-term harms first
Colin Raffel
UofT; Hugging Face; T5 author
Associate professor at the University of Toronto and Vector Institute; previously a faculty researcher at Hugging Face. Lead author of T5 (2019), one of the foundational text-to-text pretraining frameworks.
Open source
Roger Grosse
U Toronto; Anthropic; influence functions for LLMs
University of Toronto professor and Anthropic part-time researcher. Co-led the 2023 work on influence functions for large language models, a key technique for tracing model behaviour back to training data.
Interpretability bet
Mike Lewis
Meta FAIR; BART, Llama 2 lead
Researcher at Meta FAIR; co-author of BART (2020), one of the foundational sequence-to-sequence pretraining methods. Lead author of multiple Llama papers; technical lead on Llama 2.
Open source
Chris Painter
METR head of policy; ex-OpenAI
Head of policy at METR (formerly ARC Evals), the nonprofit evaluation organization whose third-party assessments became part of major frontier-lab safety frameworks.
Evals-driven
Alex Meinke
Apollo Research; deceptive alignment evaluations
Apollo Research scientist whose evaluations of frontier models for in-context scheming and deceptive alignment have shaped the field's empirical understanding of these failure modes.
Evals-driven
Hjalmar Wijk
METR researcher; AI R&D evaluations
Researcher at METR; co-author of evaluation studies on whether frontier models can autonomously do AI R&D tasks at the level of professional researchers.
Evals-driven
Jonathan Frankle
Databricks Chief AI Scientist; Lottery Ticket Hypothesis
Chief AI Scientist at Databricks; lead author of the 'Lottery Ticket Hypothesis' (2018) on neural network pruning. Previously co-founded MosaicML, acquired by Databricks in 2023.
Alignment first
Matei Zaharia
Databricks CTO and co-founder; Apache Spark creator
Co-founder and CTO of Databricks; creator of Apache Spark and MLflow. Berkeley professor on leave; co-led the integration of MosaicML's research arm after the 2023 acquisition.
Open source
Jonathan Zittrain
Harvard Law / Berkman Klein; 'The Future of the Internet'
Harvard Law George Bemis Professor of International Law and Berkman Klein Center co-founder. Author of 'The Future of the Internet, And How to Stop It' (2008); long-time public commentator on tech-and-law including AI governance.
Governance first
Ben Pace
LessWrong / Lightcone Infrastructure team
Long-time LessWrong contributor and member of the Lightcone Infrastructure team that maintains LessWrong and the AI Alignment Forum.
Alignment first
Jonathan Stray
Berkeley CHAI; AI in journalism and recommender systems
Senior Scientist at the Center for Human-Compatible AI at UC Berkeley; previously editor at the AP and a builder of AI tools for newsrooms. Researches recommender systems and journalism applications of AI.
Near-term harms first
Asya Bergal
AI Impacts; AI safety researcher
Researcher and project lead at AI Impacts; conducts surveys of AI researchers on timelines and risks. Lead author of the widely cited 2022 and 2023 AI researcher surveys.
Alignment first
Lewis Hammond
Cooperative AI Foundation co-director
Co-director of the Cooperative AI Foundation; PhD researcher at Oxford and the Alan Turing Institute. Researches multi-agent cooperation among AI systems.
Cooperative AI
Gabriel Mukobi
Stanford alignment researcher
Stanford master's student turned alignment researcher; co-author of Cicero (Meta's negotiation AI) and of multiple safety-evaluation papers.
Evals-driven
Jeremy Rifkin
Foundation on Economic Trends president; futurist
Founder and president of the Foundation on Economic Trends; author of 'The End of Work' (1995) and many subsequent books on automation and society. Advisor to multiple European heads of state.
Near-term harms first
Esther Dyson
Wellville chair; long-time tech investor and futurist
Founder and chair of Wellville; long-time investor whose 1996 'Release 2.0' essays anticipated many social effects of digital technology. Continues as a public commentator on AI's effects on health and information.
Near-term harms first
Clay Shirky
NYU emeritus; 'Here Comes Everybody', 'Cognitive Surplus'
NYU vice provost emeritus; long-time writer on social media and digital tools. Recent work focuses on AI's effects on universities and on knowledge work.
Near-term harms first
Jennifer Granick
ACLU surveillance and cybersecurity counsel
ACLU senior counsel for surveillance and cybersecurity; veteran civil-liberties lawyer focused on the constitutional limits of government surveillance and AI-mediated policing.
Governance first
Mark Chen
OpenAI Chief Research Officer
Chief Research Officer at OpenAI; previously SVP of Research, Frontiers. Lead author of the original Codex paper that became the foundation of GitHub Copilot.
Race to aligned SI
Aaron Levie
Box co-founder and CEO
Co-founder and CEO of Box. Public commentator on enterprise AI deployment and on the future of knowledge-work software.
Techno-optimism
Frank Slootman
Former Snowflake CEO; enterprise software CEO
Former CEO of Snowflake (2019–2024) and ServiceNow (2011–2017). Has commented frequently on the operational realities of enterprise AI deployment.
Techno-optimism
Niloofar Mireshghallah
Carnegie Mellon postdoc; LLM privacy
Carnegie Mellon postdoctoral researcher (Allen School / CMU SCS); published widely cited work on LLM memorization, privacy attacks, and contextual integrity.
Near-term harms first
Lewis Tunstall
Hugging Face; LLM post-training
Researcher at Hugging Face; co-author of multiple Hugging Face textbooks and lead developer of TRL (Transformer Reinforcement Learning) and post-training tooling for open models.
Open source
Barret Zoph
Co-founder Thinking Machines Lab; ex-OpenAI
Co-founded Thinking Machines Lab in 2024 with Mira Murati. Previously co-led GPT-4 post-training at OpenAI. Earlier at Google Brain working on Neural Architecture Search and the Switch Transformer.
Alignment first
Zico Kolter
CMU professor; OpenAI safety board chair
Carnegie Mellon professor of computer science; chair of OpenAI's Safety and Security Committee since 2024. Researcher on adversarial robustness and ML systems.
Evals-driven
David Luan
Amazon; ex-Adept co-founder
Co-founder of Adept AI (action transformers / agents); after Adept's leadership transition in 2024, joined Amazon as VP of AI. Previously a researcher at Google and OpenAI.
Acceleration
Akash Wasil
Encode Justice; AI policy advocate
Director of research at the Center for AI Policy and previously a researcher at Encode Justice; among the most-cited young analysts of state and federal AI safety policy.
Governance first
Martin Casado
Andreessen Horowitz general partner; infrastructure investor
General partner at Andreessen Horowitz leading the firm's infrastructure investments; previously co-founded Nicira (acquired by VMware). Vocal AI commentator on the a16z podcast.
Open source
Guido Appenzeller
Andreessen Horowitz; AI infrastructure investor
Partner at Andreessen Horowitz on the infrastructure team; focuses on AI infrastructure investments. Frequent commentator on AI inference economics.
Techno-optimism
Daniel Kang
UIUC; LLM agents and AI security
UIUC assistant professor; researches whether LLM agents can autonomously exploit cybersecurity vulnerabilities. Lead author of papers showing agents that succeed on a meaningful fraction of one-day vulnerabilities.
Security mindset
Sahil Lavingia
Gumroad founder; AI productivity advocate
Founder and CEO of Gumroad. Public commentator on the future of software work in the era of LLMs; built a substantial portion of recent Gumroad code with AI assistance.
Techno-optimism
James Vincent
Senior reporter, The Verge
Senior reporter at The Verge specializing in AI; one of the most-read mainstream tech journalists covering the consumer-facing edge of frontier AI.
Near-term harms first
Manuela Veloso
CMU; head of AI research at JPMorgan Chase
Carnegie Mellon University Herbert A. Simon University Professor; founding head of JPMorgan Chase's AI Research division. Pioneer of multi-agent and human-robot teaming research.
Alignment first
Thomas Larsen
Center for AI Policy founder; AI 2027 co-author
Founder of the Center for AI Policy; co-author of the AI 2027 forecast. Previously a researcher at MIRI. Focused on advocacy for legally enforceable AI safety frameworks.
RSP-style commitments
Romeo Dean
AI 2027 co-author; AI Futures Project
Researcher at the AI Futures Project; co-author of the AI 2027 forecast scenario. Focuses on forecasting AI development trajectories.
Alignment first
Tom Henighan
Anthropic; ex-OpenAI; physicist turned alignment researcher
Anthropic researcher with a physics background; co-author on multiple foundational scaling and alignment papers including the original GPT-3 paper.
Alignment first
Karina Nguyen
Anthropic; ex-OpenAI; product research
Anthropic researcher who has led work on user-facing AI assistants; previously at OpenAI working on ChatGPT product research.
Alignment first
Pavel Durov
Telegram founder; arrested in France 2024
Russian-French entrepreneur; founder of VKontakte and Telegram. Arrested in France in August 2024 on charges related to platform moderation; case has shaped global debate on platform liability that extends to AI services.
Open source
Samo Burja
Bismarck Analysis founder; civilizational decline theorist
Founder of Bismarck Analysis; sociologist of the long-run dynamics of institutions. Argues that great-power competition and elite formation determine technological adoption more than the technology itself.
Centralised project
Alex Pan
Berkeley CHAI; reward hacking
PhD student in computer science at UC Berkeley's Center for Human-Compatible AI under Stuart Russell. Focuses on reward hacking and emergent misalignment in RL.
Alignment first
Daniel Eth
Foresight Institute alignment researcher
Researcher whose published essays on AI takeoff dynamics, race conditions, and 'wireheading' have been widely read in EA and alignment forums.
Race to aligned SI
Matt Perault
Duke Center on Technology Policy director
Director of Duke's Center on Technology Policy and a consultant to AI companies. Previously Facebook's director of public policy. Frequent commentator on AI regulation.
Governance first
Zachary Arnold
Georgetown CSET; analytics lead
Analytics lead and senior fellow at Georgetown's Center for Security and Emerging Technology (CSET). Has produced foundational data-driven analyses of the U.S.-China AI talent and chip flows.
Compute governance
Yanis Varoufakis
Greek economist; 'Technofeudalism' author
Greek economist; former Greek finance minister (2015). Author of 'Technofeudalism' (2023), which argues platforms have replaced markets and AI is accelerating the shift.
Antitrust primacy
Adam Tooze
Columbia historian; Chartbook newsletter
Columbia professor of history; author of 'Crashed' (2018) and 'Shutdown' (2021). Author of the widely-read Chartbook newsletter; frequent commentator on the political economy of AI.
Governance first
Nouriel Roubini
NYU Stern economist; 'Megathreats' author
NYU Stern professor emeritus and CEO of Roubini Macro Associates; predicted the 2008 financial crisis. Author of 'Megathreats' (2022); identifies AI as one of ten interrelated catastrophic risks.
Existential primacy
Bill Joy
Sun Microsystems co-founder; 'Why the Future Doesn't Need Us'
Co-founder of Sun Microsystems and lead designer of the Java programming language. His 2000 Wired essay 'Why the Future Doesn't Need Us' is one of the foundational mainstream texts on existential risk from emerging technologies including AI.
Deep technical · Household name · Symbolic era
Abandon superintelligence
Vannevar Bush
MIT engineer; 'As We May Think' author (1890–1974)
U.S. engineer who led the wartime Office of Scientific Research and Development. His 1945 essay 'As We May Think' anticipated personal computers, hyperlinks, and what we now call augmented intelligence.
Techno-optimism
J. C. R. Licklider
ARPA IPTO founder; 'Man-Computer Symbiosis' (1915–1990)
U.S. computer scientist and psychologist; foundational figure in interactive computing and the early ARPANET. His 1960 essay 'Man-Computer Symbiosis' framed the human-AI relationship as cooperative rather than competitive.
Techno-optimism
Will Douglas Heaven
MIT Technology Review senior AI editor
Senior editor for AI at MIT Technology Review; has authored most of the publication's flagship AI features since 2020. One of the most-read journalists covering technical safety and capability research for non-specialist audiences.
Near-term harms first
Melissa Heikkilä
Financial Times AI correspondent (ex-MIT Tech Review)
Financial Times AI correspondent; previously senior reporter at MIT Technology Review and Politico Europe. One of the most-read journalists on the EU AI Act and on the Global Majority's experience of AI.
Near-term harms first
Sara Imari Walker
ASU astrobiologist; complexity and life
Arizona State University professor of astrobiology and complexity. Author of 'Life as No One Knows It' (2024); proposes 'Assembly Theory' as a framework for distinguishing living from non-living systems.
Moral circle expansion
Avi Loeb
Harvard astrophysicist; Galileo Project director
Frank B. Baird Jr. Professor of Science at Harvard; founding director of the Galileo Project. Vocal commentator on AI's relationship to extraterrestrial intelligence and to civilizational risk.
Abandon superintelligence
Daniel Filan
AXRP podcast host; alignment researcher
Host of AXRP (the AI X-risk Research Podcast); long-form interviews with alignment researchers. Previously a PhD student at the Center for Human-Compatible AI under Stuart Russell.
Alignment first
Rich Caruana
Microsoft Research; interpretable ML
Microsoft Research senior principal researcher; pioneer of interpretable machine learning via Generalized Additive Models with Interactions (GA²Ms / EBM).
Interpretability bet
Mary Lou Jepsen
Openwater founder; ex-MIT, ex-Facebook, ex-Google[X]
Founder of Openwater (non-invasive functional brain imaging at consumer-device cost). Former executive at MIT Media Lab, Facebook Oculus, and Google[X]. Author of multiple foundational holographic display patents.
Cyborg/merge
Eric Weinstein
Mathematician; ex-Thiel Capital MD
Mathematician and former managing director at Thiel Capital. Hosts The Portal podcast; long-time public commentator on the structure and stagnation of scientific institutions.
AI skeptic
Dean Ball
Mercatus Center; AI policy commentator
Research Fellow at the Mercatus Center; author of the Hyperdimensional Substack on AI policy. Frequent technically-grounded analyst of state-level AI legislation including SB-1047.
Evals-driven
Jordan Schneider
ChinaTalk podcast host; Rhodium Group
Founder and host of ChinaTalk; nonresident fellow at the Center for a New American Security. Translates Chinese-language tech and policy debates, especially on AI, for U.S. policy and tech audiences.
Compute governance
Cal Newport
Georgetown CS; 'Deep Work' author
Georgetown computer science professor and author of 'Deep Work', 'A World Without Email', and 'Slow Productivity'. Frequent New Yorker contributor on the productivity and labor effects of AI.
AI skeptic
Adam Grant
Wharton organizational psychologist
Wharton professor of organizational psychology; author of 'Originals' and 'Think Again'. Frequent New York Times contributor on workplace adaptation to AI.
Near-term harms first
Geoffrey Hinton
p 10–50% · Godfather of deep learning; left Google in 2023 to speak about AI risk
Turing Award–winning neural network pioneer whose 2023 departure from Google became a pivot for mainstream AI extinction discourse. Publicly estimates a non-trivial chance AI wipes out humanity and calls for international coordination, while remaining non-committal on specific policy levers.
Deep technical · Household name · Pioneer
Existential primacy · Pause
Yoshua Bengio
p 20% · Turing Award laureate; scientific chair of the International AI Safety Report
Switched from deep-learning capability research to full-time AI safety work after GPT-4. Testified to the US Senate in 2023 about loss-of-control risk and now leads the international scientific AI safety report. Supports compute governance, liability, and a conditional pause.
Deep technical · Household name · Symbolic era
Existential primacy · Governance first
Stuart Russell
Co-author of the standard AI textbook; leading critic of the 'standard model' of AI
Argues that the field's default paradigm (build systems that optimise fixed objectives) is dangerously misguided, and proposes instead that AI systems be uncertain about human preferences and defer to humans by construction. Author of Human Compatible (2019).
Deep technical · Household name · Symbolic era
Alignment first · Existential primacy
Eliezer Yudkowsky
p 95% · Founder of MIRI; the original AI-extinction pessimist
Research fellow who spent two decades arguing that default paths to superintelligence kill everyone, and that the only sane response is an unconditional international halt to frontier training. His 2023 TIME op-ed shifted 'shut it down' from a fringe position into the public debate.
Deep technical · Household name · Symbolic era
Pause
Dan Hendrycks
p 80% · Director of the Center for AI Safety; drafter of the Statement on AI Risk
Organised the 2023 Statement on AI Risk, turning CAIS into the convening body for extinction-level AI concern among mainstream researchers. Works on evals, robustness, and policy; advises xAI on safety.
Deep technical · Field-leading · Scaling era
Existential primacy · Evals-driven
Yann LeCun
p 0% · Chief AI Scientist at Meta; outspoken AI-doom skeptic
Turing Award co-recipient and deep-learning pioneer who rejects existential risk arguments as 'preposterous' and argues current LLM paradigms will not produce AGI. Advocates open-source weights and dismisses the alignment-first framing as category-confused.
Frontier builder · Household name · Symbolic era
AI skeptic · Open source
Dario Amodei
p 10–25% · CEO of Anthropic; 'Machines of Loving Grace' author
Former OpenAI VP of research who left to start Anthropic. Oscillates between bullishness about AI-driven transformation (Machines of Loving Grace, 2024) and unambiguous warnings about catastrophic risk. Originator of the Responsible Scaling Policy framing.
Frontier builder · Household name · Scaling era
Existential primacy · RSP-style commitments · Race to aligned SI
Demis Hassabis
CEO of Google DeepMind; 2024 Nobel laureate
DeepMind co-founder who frames AGI as roughly a decade away, argues the bio and cyber misuse vectors are the nearest-term concern, and has signed the Statement on AI Risk while simultaneously steering the most capital-rich capabilities programme in the world.
Frontier builder · Household name · Deep-learning rise
Existential primacy · Governance first
Sam Altman
CEO of OpenAI
Argues government intervention is necessary to mitigate risks from increasingly powerful AI, while running the most visible frontier lab and framing AI as 'the most important technology in human history'. Signatory to the Statement on AI Risk.
Policy / meta · Household name · Scaling era
Governance first · Existential primacy
Max Tegmark
Physicist; co-founder and president of the Future of Life Institute
MIT physicist who built FLI into the organisation most responsible for the March 2023 Pause Giant AI Experiments open letter. Argues the field has entered a suicide race and that coordination to slow frontier training is feasible.
External-domain expert · Household name · Pre-deep-learning
Pause
Connor Leahy
CEO of Conjecture; EleutherAI co-founder turned AI safety hawk
Helped start the open-source AI movement (EleutherAI) then pivoted to arguing that uncontrollable AI means the future belongs to AI rather than humans. Calls for a compute-cap moratorium on frontier training.
Deep technical · Field-leading · Scaling era
PauseInterpretability bet
Ilya Sutskever
OpenAI co-founder; now CEO of Safe Superintelligence Inc (SSI)
Co-led GPT-era scaling at OpenAI, participated in the 2023 board ouster of Sam Altman over alleged safety concerns, then left to found Safe Superintelligence Inc as a single-product lab focused explicitly on aligned superintelligence.
Frontier builder · Household name · Deep-learning rise
Existential primacyRace to aligned SI
Roman Yampolskiy
p 100%University of Louisville professor; argues AI safety is impossible
AI-safety impossibilist on formal grounds: has published papers arguing alignment is undecidable and that superintelligent AI cannot be controlled. Holds the highest publicly stated p(doom) among serious researchers.
Deep technical · Field-leading · Pre-deep-learning
Abandon superintelligence
Paul Christiano
p 46%Founder of the US AI Safety Institute safety team; ex-OpenAI alignment lead
Key architect of RLHF and of much of modern alignment theory. Founded the Alignment Research Center; now runs safety at the US AI Safety Institute inside NIST. Publicly estimates ~46% chance of doom.
Frontier builder · Field-leading · Scaling era
Alignment first
Daniel Kokotajlo
p 70%Former OpenAI governance team member; author of AI 2027 scenario
Left OpenAI in 2024 over what he said was lost faith in the company's ability to handle AGI responsibly, refusing a non-disparagement-tied severance. Co-authored the influential AI 2027 scenario forecasting detailed takeover dynamics.
Policy / meta · Field-leading · Scaling era
Pause
Holden Karnofsky
p 10–90%Co-founder of Open Philanthropy; AI safety funder and strategist
Long-time AI risk thinker and philanthropic strategist at Open Phil. Has moved from running a generalist effective-altruist grantmaker to full-time AI safety advocacy. Writes the Cold Takes blog on transformative AI.
Policy / meta · Field-leading · Pre-deep-learning
Governance first
Jan Leike
p 10–90%Former head of OpenAI Superalignment; now at Anthropic
Co-led OpenAI's Superalignment team with Ilya Sutskever. Resigned in May 2024 stating 'safety culture and processes have taken a backseat to shiny products'. Now runs alignment science at Anthropic.
Frontier builder · Field-leading · Deep-learning rise
Alignment first
Mustafa Suleyman
CEO of Microsoft AI; DeepMind co-founder
Co-founded DeepMind, founded Inflection AI, now runs Microsoft AI. Author of The Coming Wave (2023) which argues AI and synbio are uncontainable without new governance regimes. Framed the 'containment problem' in mainstream terms.
Frontier builder · Household name · Deep-learning rise
Governance firstExistential primacy
Elon Musk
p 10–20%CEO of Tesla and xAI; co-founded OpenAI
Has called AI 'summoning the demon' since 2014, co-founded OpenAI as a non-profit safety counterweight, then started xAI in 2023 on the grounds that existing labs were insufficient. Signed the 2023 FLI Pause letter.
Commentator · Household name · Deep-learning rise
PauseRace to aligned SI
Vitalik Buterin
p 10%Ethereum co-founder; author of 'My techno-optimism' manifesto
Crypto founder who has written extensively on AI as a civilisational risk and coined 'd/acc' (defensive/decentralised accelerationism) as a third path between unconditional acceleration and the pause framing.
Deep technical · Household name · Scaling era
Differential technology
Scott Alexander
p 33%Astral Codex Ten / Slate Star Codex blogger
Widely-read rationalist-adjacent writer whose AI posts have been influential in the EA/rationalist community. Has staked out a moderate-doom position: takes AI risk seriously but argues against full Yudkowskian pessimism.
Commentator · Field-leading · Pre-deep-learning
Existential primacy
Zvi Mowshowitz
p 60%Don't Worry About The Vase; weekly AI newsletter
Rationalist writer whose exhaustive weekly AI coverage has become a go-to reference for the AI safety community. Strongly supports a pause and argues current frontier labs cannot be trusted with AGI.
Commentator · Established · Scaling era
Pause
Emmett Shear
p 5–50%Former interim CEO of OpenAI; Twitch co-founder
Served as interim OpenAI CEO during the November 2023 board crisis. Has publicly said he finds AI risk arguments persuasive and has advocated for a slow-down of frontier training.
Commentator · Household name · Post-ChatGPT
Pause
Emad Mostaque
p 50%Former CEO of Stability AI; open-source frontier advocate
Founded Stability AI to push open-weight frontier image models. Has publicly cited a 50% p(doom) while simultaneously running the most aggressive open-weighting strategy in the industry. Stepped down as CEO in 2024.
Commentator · Field-leading · Scaling era
PauseOpen source
Reid Hoffman
p 20%LinkedIn co-founder; AI optimist investor
Co-founded LinkedIn, invested early in OpenAI, and now backs Inflection and Manas. Argues AI will create 'cognitive superpowers' for people; publicly estimates p(doom) around 20%.
Commentator · Household name · Scaling era
Techno-optimism
Nate Silver
p 5–10%Statistician; Silver Bulletin / FiveThirtyEight founder
Statistician who has written about AI risk in his 2024 book On The Edge. Places his p(doom) in the 5–10% range and frames AI as one of several 'Big Game' civilisational bets.
External-domain expert · Household name · Post-ChatGPT
Existential primacy
Eli Lifland
p 35%Forecaster; co-author of AI 2027
Competitive superforecaster who has written influential scenario work on AI takeover dynamics. Co-authored AI 2027 with Daniel Kokotajlo and others.
Policy / meta · Established · Post-ChatGPT
Existential primacy
Toby Ord
p 10%Philosopher; author of The Precipice
Oxford moral philosopher who estimates a 1-in-6 chance that existential catastrophe ends humanity this century, with unaligned AI as the single largest contributor at about 1-in-10.
Policy / meta · Field-leading · Pre-deep-learning
Existential primacy
Gary Marcus
Cognitive scientist; LLM skeptic; regulation advocate
NYU emeritus professor and persistent public critic of pure scaling. Testified alongside Sam Altman to the US Senate in May 2023 calling for licensing, an FDA-style pre-deployment review, and a nimble monitoring agency.
Deep technical · Household name · Pre-deep-learning
Governance firstAI skeptic
Timnit Gebru
Founder of DAIR; co-author of 'Stochastic Parrots'
Computer scientist whose 2020 firing from Google over the Stochastic Parrots paper catalysed the AI-ethics-and-justice wing of the field. Publicly opposes both the 'AGI-pilled' extinction narrative and the unregulated deployment of current LLMs.
Deep technical · Household name · Deep-learning rise
AI skepticGovernance first
Emily M. Bender
Linguist; co-author of 'Stochastic Parrots'
Computational linguist at UW who co-authored the foundational Stochastic Parrots paper and co-hosts the Mystery AI Hype Theater 3000 podcast with Alex Hanna. Central voice in the AI-ethics critique of LLM hype and x-risk framing.
Deep technical · Field-leading · Scaling era
AI skeptic
Margaret Mitchell
Chief Ethics Scientist at Hugging Face; 'Stochastic Parrots' co-author
AI ethics researcher fired from Google alongside Timnit Gebru for the Stochastic Parrots paper (published under the pseudonym 'Shmargaret Shmitchell'). Now leads ethics at Hugging Face.
Deep technical · Field-leading · Deep-learning rise
Governance first
Shane Legg
Google DeepMind co-founder; chief AGI scientist
Co-founded DeepMind in 2010 and has maintained a 50% AGI-by-2028 prediction for over a decade. Frames AGI as 'almost certainly' arriving this century.
Frontier builder · Field-leading · Deep-learning rise
Existential primacy
Jaan Tallinn
Skype co-founder; AI safety funder and advocate
Estonian entrepreneur who used his Skype wealth to co-found the Future of Life Institute and CSER, bankrolling much of the early AI-safety ecosystem. Signatory to both the Pause letter and the Statement on AI Risk.
Policy / meta · Field-leading · Pre-deep-learning
Existential primacyPause
Marc Andreessen
Co-founder of Andreessen Horowitz; techno-optimist manifesto author
Netscape co-founder and leading Silicon Valley investor whose October 2023 Techno-Optimist Manifesto explicitly frames AI deceleration as a form of murder. Political exponent of the 'accelerate' pole.
Commentator · Household name · Post-ChatGPT
AccelerationTechno-optimism
Guillaume Verdon
Founder of Extropic; aka 'Beff Jezos', founder of the e/acc movement
Quantum physicist and Google alumnus who, writing anonymously as @BasedBeffJezos, founded the effective accelerationism (e/acc) movement. Pushed e/acc as the memetic counterweight to AI safety discourse on X.
Deep technical · Field-leading · Post-ChatGPT
Acceleration
Jack Clark
Co-founder of Anthropic; Import AI newsletter
Former OpenAI policy director turned Anthropic co-founder. Writes the weekly Import AI newsletter, which has become a reference text for the AI policy community. Testifies regularly to Congress.
Policy / meta · Field-leading · Deep-learning rise
Governance first
Helen Toner
Director of Strategy at CSET; former OpenAI board member
Policy researcher who served on the OpenAI board and voted to remove Sam Altman in November 2023. Now runs strategy at Georgetown's Center for Security and Emerging Technology and is a prominent voice on AI governance.
Policy / meta · Field-leading · Scaling era
Governance first
Jeffrey Ladish
Executive Director of Palisade Research
AI security researcher who demonstrates how easily safety fine-tuning can be removed from open-weight models. Advocates for strict controls on frontier model distribution and for treating weights as hazardous.
Applied technical · Established · Scaling era
Closed weights
Lina Khan
p 15%Former chair of the FTC
Legal scholar who led an aggressive antitrust enforcement posture against AI-adjacent deals, launching probes into the OpenAI-Microsoft relationship. Frames AI governance partly as a competition-law problem.
Policy / meta · Household name · Post-ChatGPT
Antitrust primacy
Nate Soares
President of MIRI; co-author of 'If Anyone Builds It, Everyone Dies'
Runs the Machine Intelligence Research Institute. Co-authored the 2025 NYT bestseller with Eliezer Yudkowsky arguing superhuman AI kills everyone under default conditions.
Deep technical · Field-leading · Pre-deep-learning
Pause
Ajeya Cotra
Open Philanthropy researcher; 'biological anchors' forecaster
AI grantmaker and forecaster whose biological-anchors report provides one of the most referenced quantitative models of transformative AI timelines. Has steadily shortened her estimates.
Policy / meta · Established · Scaling era
Alignment first
Richard Ngo
AI safety researcher; 'AGI safety from first principles'
Researcher who moved from DeepMind to OpenAI's governance team, then to independent work. Author of AGI safety from first principles (2020), one of the most cited consolidations of the technical case for AI risk.
Deep technical · Established · Scaling era
Alignment first
Rohin Shah
Alignment researcher at Google DeepMind
Author of the Alignment Newsletter (2018–2022) and now a DeepMind alignment researcher. Provides measured, inside-view assessments that sit between Yudkowskian pessimism and LeCunian dismissal.
Deep technical · Established · Deep-learning rise
Alignment first
Katja Grace
Lead researcher at AI Impacts
Runs AI Impacts, which conducts periodic surveys of AI researcher opinion on timelines and risk. The results are the single most cited data point for 'what AI researchers actually think'.
Policy / meta · Established · Pre-deep-learning
Existential primacy
Meredith Whittaker
President of Signal; co-founder of the AI Now Institute
Ran Google's AI ethics team, helped organise the 2018 Google walkout, and now leads the Signal Foundation. Argues the real AI risk is corporate concentration of power, not superhuman autonomy, and that extinction framings protect incumbents.
Policy / meta · Field-leading · Deep-learning rise
AI skepticAntitrust primacy
Kate Crawford
Author of Atlas of AI; USC research professor
Research professor whose 2021 Atlas of AI reframes AI as a system of material, labour, and data extraction. Critiques the 'intelligence' framing and calls for AI governance tied to planetary costs and power.
Policy / meta · Field-leading · Deep-learning rise
AI skeptic
Yuval Noah Harari
Historian; author of Sapiens and Nexus
Popular historian who has become a leading public voice on AI risk. Framed the 2023 moment as AI 'hacking the operating system of civilisation' via mastery of language; advocates for international slowdown.
External-domain expert · Household name · Post-ChatGPT
Pause
Bill Gates
Microsoft co-founder; AI optimist-with-caveats
Microsoft co-founder whose March 2023 'The Age of AI has begun' essay framed AI as the most important technology advance since the PC. Signatory to the Statement on AI Risk but publicly skeptical that AI 'runaway' is imminent.
Commentator · Household name · Pre-deep-learning
Existential primacyTechno-optimism
Andrew Ng
Coursera co-founder; former Baidu Chief Scientist
Deep learning pioneer and educator who has publicly rejected AI extinction arguments as 'overblown' and warns that regulatory capture by big AI incumbents is a greater near-term risk than rogue AI.
Deep technical · Household name · Deep-learning rise
AI skepticOpen source
Fei-Fei Li
Stanford HAI co-director; 'godmother of AI'
Computer vision pioneer behind ImageNet; co-founded Stanford's Institute for Human-Centered AI. Argues for human-centred framings over existential-risk framings and for public research infrastructure.
Deep technical · Household name · Deep-learning rise
Public AI
Cynthia Rudin
Duke professor; interpretable ML pioneer
Computer scientist who has been the most consistent public voice against black-box ML in high-stakes domains. Argues interpretable models should always be preferred to post-hoc explanations of black boxes.
Deep technical · Field-leading · Deep-learning rise
Interpretability bet
Daniel Dewey
Former AI risk program officer at Open Philanthropy
Helped shape Open Philanthropy's early AI risk grantmaking and now works on AI policy at the US AI Safety Institute. One of the original in-field alignment grantmakers.
Deep technical · Established · Pre-deep-learning
Alignment first
Buck Shlegeris
CEO of Redwood Research; 'AI Control' research lead
Researcher behind the 'AI control' research agenda: designing protocols that remain safe even if the AI being supervised is scheming. Frames safety as a defence problem that can be solved by cheaper means than alignment proper.
Deep technical · Established · Scaling era
Alignment first
Evan Hubinger
Alignment Stress-Testing lead at Anthropic
Authored the influential 'Risks from Learned Optimization' paper on mesa-optimisation and inner alignment. Now leads Alignment Stress-Testing at Anthropic, including the Sleeper Agents research.
Deep technical · Established · Scaling era
Alignment first
Chris Olah
Anthropic interpretability co-founder; inventor of modern mech interp
The most-cited mechanistic interpretability researcher. Co-founded the interpretability team at Anthropic that produced circuits, superposition, and monosemanticity work.
Frontier builder · Field-leading · Deep-learning rise
Interpretability bet
Neel Nanda
Mechanistic interpretability team lead at Google DeepMind
Mechanistic interpretability researcher known for his pedagogical output; runs one of the largest interpretability research teams. Publishes extensively on how to do mech interp research and trains the next generation of researchers.
Deep technical · Established · Scaling era
Interpretability bet
David Krueger
Cambridge professor; AI extinction risk advocate
Computer scientist who moved from mainstream ML research to AI existential risk advocacy. Signatory to the Statement on AI Risk and a leading academic voice arguing the field has drifted into capability-first incentives.
Deep technical · Established · Scaling era
Governance first
Liv Boeree
Poker player; Win-Win podcast host
Former poker champion who has become a full-time AI safety communicator, hosting the Win-Win podcast and collaborating with Rob Miles on the 'Moloch' framing for AI race dynamics.
Commentator · Field-leading · Post-ChatGPT
Pause
Rob Miles
AI safety YouTuber
Former Nottingham PhD candidate who became the most-watched AI safety educator on YouTube. Translates Bostromian and Yudkowskian arguments into accessible video form.
Applied technical · Field-leading · Deep-learning rise
Alignment first
Stuart Ritchie
Psychologist and science journalist; AI-risk skeptic
KCL psychologist and author of Science Fictions. Publicly skeptical of high-confidence existential risk framings, arguing the base-rate evidence for AI-caused extinction is thin.
External-domain expert · Established · Post-ChatGPT
AI skeptic
Jeff Clune
OpenAI / UBC researcher; open-ended evolution advocate
Computer scientist known for work on open-ended learning and AI-generating algorithms. Has publicly shifted from dismissive of AI risk to deeply worried about it.
Deep technical · Field-leading · Deep-learning rise
Existential primacy
Jeff Sebo
NYU philosopher; digital minds and AI welfare
NYU environmental studies and bioethics philosopher who argues AI welfare is a live moral question. Advises frontier labs on model welfare policy.
External-domain expert · Established · Scaling era
AI welfare
Leopold Aschenbrenner
Author of 'Situational Awareness'; former OpenAI Superalignment team
Young former OpenAI researcher whose 165-page June 2024 essay Situational Awareness became the most-discussed AI forecast of the year. Argues AGI by 2027 is strikingly plausible and that the implications for national security are vastly underappreciated.
Deep technical · Field-leading · Post-ChatGPT
Race to aligned SICentralised project
Tristan Harris
Co-founder of the Center for Humane Technology; 'The AI Dilemma'
Former Google design ethicist who became the most visible critic of attention-engineered platforms. Since March 2023, he and Aza Raskin have run 'The AI Dilemma', a series of briefings to governments and media arguing AI is being rolled out faster than the institutions needed to handle it can develop.
Policy / meta · Household name · Post-ChatGPT
Pause
Aza Raskin
Co-founder of the Center for Humane Technology; Earth Species Project
Co-founder with Tristan Harris of CHT, and co-founder of the Earth Species Project (using ML to decode non-human communication). Co-author of 'The AI Dilemma' 2023 briefing.
Policy / meta · Field-leading · Post-ChatGPT
Pause
Chuck Schumer
US Senate Majority Leader (2021–2024); architect of the SAFE Innovation Framework
Convened nine closed-door AI Insight Forums in 2023, bringing tech CEOs, civil society, and researchers to Capitol Hill. His SAFE Innovation Framework laid out a measured, bipartisan approach to federal AI policy.
Policy / meta · Household name · Post-ChatGPT
Governance first
Ted Lieu
US Congressman; one of three members of Congress with a CS degree
California Democrat who has been the most consistent AI-literate voice in Congress. Introduced the bipartisan National AI Commission Act and publicly signed the CAIS Statement on AI Risk.
Policy / meta · Established · Post-ChatGPT
Governance firstExistential primacy
Rishi Sunak
UK Prime Minister (2022–2024); hosted the Bletchley Park AI Safety Summit
Convened the first international AI Safety Summit in November 2023 at Bletchley Park, producing the Bletchley Declaration and establishing the UK AI Safety Institute. Framed loss-of-control as a live policy concern.
Policy / meta · Household name · Post-ChatGPT
Governance first
Audrey Tang
First Digital Minister of Taiwan; pluralism and civic tech
Coded Taiwan's COVID mask-availability dashboard, negotiated Uber/taxi mediations, and pioneered participatory digital governance. Co-author of the Plurality book. Proposes 'alignment assemblies' as a model for deliberative AI governance.
Policy / meta · Household name · Deep-learning rise
Democratic mandateExistential primacy
Yi Zeng
Chinese Academy of Sciences; Brain-inspired Cognitive AI Lab director
One of the most senior Chinese AI researchers to publicly sign the Statement on AI Risk. Argues international coordination including China is possible on AI ethics and global risk.
Deep technical · Field-leading · Deep-learning rise
Existential primacyInternational treaty
Pope Francis
Bishop of Rome; first Pope to address a G7 summit
Addressed the G7 in June 2024 specifically on AI, arguing the technology threatens human dignity and that lethal autonomous weapons must be banned outright.
External-domain expert · Household name · Post-ChatGPT
Governance first
John Schulman
Anthropic alignment researcher; OpenAI co-founder
Co-founder of OpenAI who led ChatGPT's post-training. Left OpenAI for Anthropic in 2024 to focus on alignment; briefly joined xAI then returned to alignment work.
Frontier builder · Field-leading · Deep-learning rise
Alignment first
Wojciech Zaremba
OpenAI co-founder
Polish ML researcher who co-founded OpenAI. Leads robotics and code generation research; signed the CAIS Statement on AI Risk.
Frontier builder · Field-leading · Deep-learning rise
Existential primacy
Mira Murati
Founder of Thinking Machines Lab; former OpenAI CTO
Served as OpenAI CTO and briefly as interim CEO in the November 2023 board crisis. Left OpenAI in September 2024 to start Thinking Machines Lab focused on fundamental AI research.
Frontier builder · Household name · Deep-learning rise
Existential primacy
Daniela Amodei
President of Anthropic; co-founder
Co-founded Anthropic with her brother Dario. Leads operations, policy, and commercial strategy at Anthropic. Signatory to the Statement on AI Risk.
Policy / meta · Field-leading · Scaling era
Existential primacy
Kevin Scott
CTO of Microsoft
Has driven Microsoft's $10B+ partnership with OpenAI and is a measured public voice arguing AI is a 'force multiplier' rather than a civilisational threat. Signed the Statement on AI Risk.
Policy / meta · Field-leading · Deep-learning rise
Existential primacy
Eric Horvitz
Chief Scientific Officer at Microsoft
Senior AI researcher and executive who helped frame Microsoft's Responsible AI policies. Chaired the President's Council of Advisors on Science and Technology (PCAST) working group on AI and signed the Statement on AI Risk.
Deep technical · Field-leading · Deep-learning rise
Existential primacy
Dawn Song
UC Berkeley professor; AI security researcher
Leading researcher in ML security and privacy. Co-director of the Berkeley Center for Responsible Decentralized Intelligence; signed the Statement on AI Risk.
Deep technical · Field-leading · Deep-learning rise
Existential primacy
Anca Dragan
UC Berkeley professor; Google DeepMind AI safety lead
Roboticist who studies human-robot interaction and AI alignment from the assistance-games angle. Leads AI safety at Google DeepMind and signed the Statement on AI Risk.
Frontier builder · Field-leading · Deep-learning rise
Alignment firstExistential primacy
David Silver
DeepMind principal research scientist; AlphaGo and AlphaZero
Led the AlphaGo and AlphaZero projects at DeepMind. Signatory to the Statement on AI Risk.
Frontier builder · Field-leading · Deep-learning rise
Existential primacy
Ian Goodfellow
DeepMind; inventor of GANs
Invented Generative Adversarial Networks in 2014. Previously at Google Brain, Apple, and OpenAI; signed the Statement on AI Risk.
Frontier builder · Field-leading · Deep-learning rise
Existential primacy
Peter Norvig
Stanford HAI Education Fellow; co-author of the standard AI textbook
Co-author with Stuart Russell of Artificial Intelligence: A Modern Approach. Former Director of Research at Google. Signatory to the Statement on AI Risk.
Deep technical · Field-leading · Symbolic era
Existential primacy
Steven Pinker
Harvard psychologist; AI-doom skeptic
Cognitive scientist and author of Enlightenment Now. Frames AI extinction risk arguments as contemporary versions of older technological panics, and is publicly skeptical.
External-domain expert · Household name · Post-ChatGPT
AI skeptic
Noam Chomsky
Linguist; LLM skeptic
Generative-linguistics founder. Argues current LLMs are not intelligent in any meaningful sense and that the technology is being overhyped for commercial and political purposes.
External-domain expert · Household name · Symbolic era
AI skeptic
Sundar Pichai
CEO of Alphabet and Google
Leads Alphabet. Framed AI as 'more profound than fire' in a 2018 quote that has since been deployed by both optimists and pessimists. Emphasises responsible deployment and international coordination.
Policy / meta · Household name · Deep-learning rise
Governance first
Jen Easterly
Former director of CISA (US cyber defence agency)
Ran the US Cybersecurity and Infrastructure Security Agency from 2021 to 2025. Argued that AI systems must be 'secure by design' before deployment.
Policy / meta · Field-leading · Post-ChatGPT
Governance first
Ursula von der Leyen
President of the European Commission
Presided over the passage of the EU AI Act, the first comprehensive AI regulation by a major economy. Argues for international AI safety governance on the model of the IPCC.
Policy / meta · Household name · Post-ChatGPT
Governance first
Kamala Harris
Former US Vice President
Led the Biden administration's work on the October 2023 AI Executive Order and founded the US AI Safety Institute. Delivered the first US national AI safety speech at the UK AI Safety Summit.
Policy / meta · Household name · Post-ChatGPT
Governance first
Joe Biden
Former US President
Signed Executive Order 14110 on Safe, Secure, and Trustworthy AI in October 2023, the most expansive US AI executive action. The order was rescinded by the Trump administration in January 2025.
Policy / meta · Household name · Post-ChatGPT
Governance first
JD Vance
US Vice President; AI 'opportunity, not safety' advocate
Delivered the headline US address at the February 2025 Paris AI Action Summit, sharply rejecting safety-first framings and arguing the US will pursue AI dominance without European-style regulation.
Policy / meta · Household name · Post-ChatGPT
Acceleration
Donald Trump
US President (2017–2021, 2025–)
Rescinded the Biden AI Executive Order in January 2025 and signed replacement Executive Order 14179 'Removing Barriers to American Leadership in AI'. Launched the Stargate compute investment project with OpenAI, Oracle and SoftBank.
Policy / meta · Household name · Post-ChatGPT
Acceleration
Douglas Hofstadter
Gödel, Escher, Bach author; cognitive scientist
Cognitive scientist whose 1979 Pulitzer-winning Gödel, Escher, Bach shaped a generation of AI thinkers. Originally skeptical of deep learning; reversed course in 2023 and publicly described feeling terrified.
External-domain expert · Household name · Symbolic era
Existential primacy
Steve Wozniak
Apple co-founder; Pause letter signatory
Apple co-founder who signed the 2023 FLI Pause letter. Frames his concern as about bad actors using AI rather than rogue AI, but joined the call on the grounds that the other signatories were trustworthy.
Commentator · Household name · Post-ChatGPT
Pause
Ben Goertzel
Founder of SingularityNET; AGI optimist
Long-time AGI proponent who coined 'artificial general intelligence' as a term of art. Runs SingularityNET, a decentralised AI platform. Argues safety comes from distribution of AI power, not concentration of it.
Deep technical · Field-leading · Symbolic era
Distributed buildersTechno-optimism
Joy Buolamwini
Founder of the Algorithmic Justice League; 'Unmasking AI'
Computer scientist who documented systematic racial and gender bias in commercial facial-recognition systems. Founded the Algorithmic Justice League to translate audit findings into policy.
Deep technical · Household name · Deep-learning rise
Governance first
Rodney Brooks
MIT professor emeritus; iRobot co-founder; AI skeptic
Robotics pioneer (iRobot, Rethink Robotics) who publishes yearly AI predictions that tend to undercut industry hype. Argues LLMs cannot reason and that humanoid robotics is a bubble.
Deep technical · Field-leading · Symbolic era
AI skeptic
Lex Fridman
p 10%MIT researcher; long-form podcast host
Computer scientist turned interviewer whose podcast has become a dominant long-form format for AI discussions. Places his own p(doom) at about 10%; takes a measured, optimistic public stance.
Commentator · Household name · Deep-learning rise
Existential primacy
Dwarkesh Patel
Dwarkesh Podcast host; AI progress commentator
Young AI podcaster whose deeply researched long-form interviews with figures like Shane Legg, Dario Amodei, and Ilya Sutskever have shifted mainstream understanding of AGI timelines. Self-reports medium-short timelines.
Commentator · Field-leading · Scaling era
Existential primacy
Tyler Cowen
GMU economist; Marginal Revolution blogger
Chaired economics professor at GMU whose blog has become a central discussion venue for mainstream economic takes on AI. Argues AI is more likely to reduce than increase existential risk, partly on subjectivist Austrian grounds.
External-domain expert · Household name · Post-ChatGPT
Techno-optimism
Robin Hanson
GMU economist; Age of Em author
Economist known for predicting a future dominated by mind-uploaded ems (emulated humans). Publicly skeptical of the standard AI-doom framing; argues gradualism and economic analysis should dominate over 'foom' scenarios.
External-domain expert · Field-leading · Pre-deep-learning
AI skepticDigital minds
Alex Hanna
Director of Research at DAIR; Mystery AI Hype Theater 3000
Sociologist and former Google research scientist. Co-hosts Mystery AI Hype Theater 3000 with Emily Bender. Central voice in the stochastic-parrots-influenced critique of LLMs.
AI skeptic
Cassie Kozyrkov
CEO of Data Scientific; former Google Chief Decision Scientist
Decision-intelligence educator who has warned about the 'cult of AI' and frames most enterprise AI failures as failures of decision engineering rather than ML capability.
Applied technical · Field-leading · Deep-learning rise
AI skeptic
Mark Zuckerberg
CEO of Meta; open-weight frontier AI proponent
Turned Meta's Llama models into the centrepiece of the open-weight frontier, arguing open models diffuse power and enable safety research. Publicly rejects high p(doom) framings.
Policy / meta · Household name · Deep-learning rise
Open source
Eric Schmidt
Former Google CEO; AI national security advocate
Led Google for a decade and has since become a leading voice on AI and national security via the Special Competitive Studies Project. Argues the US-China frame is primary and that national-security-grade AI infrastructure must be built.
Policy / meta · Household name · Deep-learning rise
Race to aligned SI
François Chollet
Creator of Keras; ARC benchmark author
Deep-learning engineer who created Keras and the ARC (Abstraction and Reasoning Corpus) benchmark. Publicly skeptical of LLMs-as-AGI and framed ARC-AGI as a concrete test for general intelligence.
Deep technical · Field-leading · Deep-learning rise
AI skeptic
Zuhayeer Musa
Co-founder of Levels.fyi; frontier economics commentator
Builds a compensation data platform and comments frequently on AI-induced labour market shifts. Frames AI economic disruption as the nearest-term concern.
AI skeptic
Gwern Branwen
Independent researcher; gwern.net
Pseudonymous independent AI and ML writer whose site gwern.net has become a reference text for empirical AI capability and safety questions. Publishes extensively on scaling, RL, and dataset curation.
Applied technical · Field-leading · Pre-deep-learning
Existential primacy
Anthony Aguirre
UC Santa Cruz physicist; FLI co-founder
Physics professor and co-founder of the Future of Life Institute, the Foundational Questions Institute, and the Metaculus prediction platform. Leads FLI's policy work.
External-domain expert · Field-leading · Pre-deep-learning
Pause
Stuart Buck
Executive Director of the Good Science Project
Policy researcher focused on research integrity. Has argued AI's role in science and medicine requires new verification standards, not just safety evaluation.
Policy / meta · Established · Scaling era
Governance first
Wei Dai
Cypherpunk; influential AI safety thinker
Invented b-money (the conceptual predecessor of Bitcoin) and has since been a central informal figure in AI safety discussions. Writes extensively on LessWrong about decision theory and AI risk.
Deep technical · Established · Pre-deep-learning
Alignment first
Vincent Conitzer
CMU professor; cooperative AI researcher
Game theorist and computer scientist who has argued that multi-agent cooperation and mechanism-design approaches should be central to AI safety. Runs the Foundations of Cooperative AI Lab at CMU.
Deep technical · Established · Pre-deep-learning
Cooperative AI
Allan Dafoe
DeepMind Frontier Safety and Governance lead
Political scientist who directs Google DeepMind's Frontier Safety and Governance team. Author of foundational AI governance papers; frames AI governance as a strategic and political-economy problem.
Policy / meta · Field-leading · Scaling era
Governance first
Jade Leung
CTO of UK AI Safety Institute
Political scientist and policy operator who leads technical operations at the UK AI Safety Institute, the first national body dedicated to frontier AI evaluations.
Policy / meta · Field-leading · Scaling era
Evals-driven
Beth Barnes
Founder of METR; dangerous capability evaluations
Formerly at ARC Evals; now runs METR, which designs and runs frontier model evaluations for dangerous capabilities. Central figure in the evals-driven governance ecosystem.
Deep technical · Field-leading · Scaling era
Evals-driven
Paul Allen
Microsoft co-founder; founder of AI2 (1953–2018)
Microsoft co-founder whose philanthropic legacy funds the Allen Institute for AI, one of the few large non-profit AI research labs. Argued for the 'Frontiers in AI' goal to advance common-good AI.
Policy / meta · Household name · Pre-deep-learning
Public AI
Oren Etzioni
Founding CEO of AI2; UW professor
AI researcher who founded and ran the Allen Institute for AI for nearly a decade. Publicly skeptical of AGI-scale existential risk but supportive of pragmatic safety interventions.
Deep technical · Field-leading · Deep-learning rise
AI skeptic
Melanie Mitchell
Santa Fe Institute professor; author of 'Artificial Intelligence: A Guide for Thinking Humans'
AI and complexity researcher who argues current systems lack the abstraction and embodied understanding required for true intelligence. Publicly skeptical of AGI-imminence claims.
Deep technical · Field-leading · Symbolic era
AI skeptic
Erik Brynjolfsson
Stanford HAI; 'Turing Trap' essay
Economist who coined the 'Turing Trap', the idea that aiming AI at imitating humans, rather than augmenting them, leads to labour displacement without productivity gains. Signed the Statement on AI Risk.
External-domain expert · Field-leading · Pre-deep-learning
Existential primacyAI skeptic
Daron Acemoglu
MIT economist; 2024 Nobel laureate
Nobel-winning institutional economist who argues AI's current trajectory concentrates power and will reduce welfare unless policy redirects it. Co-author of Power and Progress.
External-domain expert · Household name · Pre-deep-learning
Governance first
Claire Leibowicz
Partnership on AI; AI and media
Head of AI and Media Integrity at the Partnership on AI. Works on provenance, synthetic media disclosure, and the practical governance of generative AI in information ecosystems.
Governance first
Brian Christian
Author of The Alignment Problem
Non-fiction writer whose 2020 book The Alignment Problem translated mainstream alignment research into accessible prose. Fellow at the Berkeley Center for Human-Compatible AI.
External-domain expert · Field-leading · Scaling era
Alignment first
MacKenzie Scott
Philanthropist; early AI safety funder via donations
Billionaire philanthropist whose large unrestricted grants have included AI governance and ethics organisations. Unusual among large donors in operating without strings attached.
Policy / meta · Household name · Post-ChatGPT
Governance first
Carl Benedikt Frey
Oxford economist; 'The Future of Employment' author
Oxford economist whose 2013 paper with Michael Osborne estimated that 47% of US jobs were at high risk of automation. Continues to publish on AI and labour.
External-domain expert · Field-leading · Pre-deep-learning
Governance first
Paula Smith
AI policy researcher at RAND Corporation
Policy researcher focused on the national-security and biosecurity dimensions of frontier AI; contributed to RAND's influential AGI-biosecurity analyses.
Governance first
Jason Matheny
CEO of RAND Corporation; former deputy director of OSTP
Former Biden White House deputy director of OSTP for national security; now runs RAND. Brings an existential-risk-aware policy perspective to mainstream national security analysis.
Policy / meta · Field-leading · Scaling era
Governance first
Emily Grumbling
Former AI policy advisor; National Academies staff
AI policy analyst who led policy studies at the National Academies of Sciences, Engineering, and Medicine. Now works on federal AI regulatory strategy.
Governance first
Suchir Balaji
OpenAI alumnus; public critic of training data practices
Former OpenAI researcher who left in mid-2024 and publicly argued ChatGPT's training on copyrighted material violated fair use. Died in November 2024.
Frontier builder · Field-leading · Scaling era
Governance first
William Saunders
OpenAI Superalignment alumnus; whistleblower
Former OpenAI Superalignment team member who resigned in 2024 and publicly testified to the US Senate about safety culture concerns at frontier labs.
Frontier builder · Established · Scaling era
Governance first
Helen King
DeepMind VP of Research; responsibility lead
Oversees responsible development at Google DeepMind. Public representative of DeepMind's responsibility framework combining capability evaluations, safety research, and deployment gates.
RSP-style commitments
Tamay Besiroglu
Co-founder of Epoch AI; scaling-laws researcher
Economist and AI forecaster at Epoch AI, producing the most cited data on compute, dataset scaling, and capability trends. Frames AI trajectory via quantitative compute-and-data forecasts.
Deep technical · Established · Scaling era
Existential primacy
Jaime Sevilla
Director of Epoch AI
Runs Epoch AI alongside Tamay Besiroglu. Produces the compute, training run, and dataset scaling data that is the quantitative backbone of mainstream AI forecasting.
Deep technical · Established · Scaling era
Existential primacy
Stephen Hawking
Theoretical physicist; early mainstream AI-risk voice (1942–2018)
Helped launch mainstream concern about existential AI risk with his 2014 BBC warning. Co-founded the Cambridge Centre for the Future of Intelligence.
External-domain expert · Household name · Pre-deep-learning
Existential primacy
Vernor Vinge
Science-fiction author who coined 'technological singularity' (1944–2024)
Mathematician and SF author whose 1993 NASA paper 'The Coming Technological Singularity' proposed that superhuman intelligence by 2030 would end the human era as we know it. A founding formal statement of what later became AGI discourse.
External-domain expert · Household name · Symbolic era
Existential primacy
Martin Rees
Astronomer Royal; CSER co-founder
Former Astronomer Royal who co-founded the Centre for the Study of Existential Risk at Cambridge with Huw Price and Jaan Tallinn. Frames AI alongside bioengineering as the most serious civilisational-scale risks this century.
External-domain expert · Household name · Pre-deep-learning
Existential primacy
Huw Price
Cambridge philosopher; CSER co-founder
Bertrand Russell Professor of Philosophy at Cambridge and co-founder of CSER. Works on philosophical foundations of existential risk analysis.
External-domain expert · Field-leading · Pre-deep-learning
Existential primacy
Andrew Critch
Berkeley AI safety researcher; ARCHES framework
AI safety researcher focused on 'multi-multi' alignment: coordination among multiple AI systems and multiple human stakeholders, which he frames as the distinctive AI x-safety problem.
Deep technical · Established · Pre-deep-learning
Cooperative AI
Samuel Butler
Victorian novelist; proto-AI-risk thinker (1835–1902)
English writer whose 1863 essay 'Darwin Among the Machines' made the earliest sustained argument that machines would out-evolve humans and should be destroyed. Appears here for historical continuity.
External-domain expert · Field-leading · Pioneer
Abandon superintelligence
Irving John Good
British mathematician; articulated 'intelligence explosion' in 1965 (1916–2009)
Bletchley Park cryptographer whose 1965 paper 'Speculations Concerning the First Ultraintelligent Machine' originated the intelligence explosion argument later refined by Bostrom, Yudkowsky, and others.
Deep technical · Field-leading · Pioneer
Existential primacy
Norbert Wiener
Founder of cybernetics (1894–1964)
MIT mathematician whose 1960 Science paper 'Some Moral and Technical Consequences of Automation' is the earliest systematic statement of what later became the alignment problem.
External-domain expert · Household name · Pioneer
Alignment first
Jaron Lanier
Computer scientist; VR pioneer; AI skeptic
Microsoft Research interdisciplinary scientist who rejects the 'AI' frame entirely, arguing what we call AI is compressed human collaboration. Publicly critical of both extinction framings and unchecked deployment.
External-domain expert · Household name · Symbolic era
AI skeptic
Jensen Huang
CEO of NVIDIA; supplier of the frontier AI compute stack
Leads the company that makes the GPUs training every frontier model. Publicly predicts AGI in five years on narrow test-based definitions while downplaying extinction framings.
Policy / meta · Household name · Deep-learning rise
Techno-optimism
Emmanuel Macron
President of France; hosted the 2025 Paris AI Action Summit
Convened the 2025 Paris AI Action Summit and announced a €109B French AI investment. Frames France as pursuing a 'third way' between the US and China: sovereign, open, and regulation-aware but pro-deployment.
Policy / meta · Household name · Post-ChatGPT
Sovereign AI
Kai-Fu Lee
Sinovation Ventures founder; AI Superpowers author
Former Google China head and leading Chinese VC. His 2018 book AI Superpowers framed the field as a US-China two-horse race and argued China's data advantage would let it dominate.
Policy / meta · Household name · Deep-learning rise
Sovereign AI
Geoffrey Miller
p 50%UNM evolutionary psychologist; AGI pause advocate
Evolutionary psychologist who has become a vocal public advocate for pausing AGI research. Frames the continued race as ethically reckless.
External-domain expert · Field-leading · Post-ChatGPT
Pause
Daniel Faggella
Emerj founder; 'Worthy Successor' AGI philosopher
Founded Emerj Artificial Intelligence Research and the Trajectory podcast. Argues AGI is inevitable and frames the main ethical question as what kind of successor intelligence should inherit the lightcone.
AI skeptic
Rachel Thomas
Co-founder of fast.ai; AI safety and ethics
Fast.ai co-founder who has written extensively on AI safety and ethics from a practitioner's perspective. Argues AI bias, abuse, and misinformation are the real, urgent risks.
Applied technical · Established · Deep-learning rise
Governance first
Jeremy Howard
Co-founder of fast.ai; former Kaggle president
Machine learning educator who founded fast.ai to democratise deep learning. Publicly critical of both AI-doom framing and the closed-weight strategy of frontier labs.
Deep technical · Field-leading · Deep-learning rise
Open source
Gilles Babinet
French digital council co-chair
French tech entrepreneur who co-chairs the National Digital Council and has shaped French AI policy. Argues for European AI sovereignty and strict personal-data protections.
Sovereign AI
Shoshana Zuboff
Harvard Business School emerita; surveillance capitalism theorist
Harvard Business School professor emerita whose 2019 book The Age of Surveillance Capitalism reframed AI governance as a political-economy problem about unilateral data extraction by digital firms.
External-domain expert · Household name · Deep-learning rise
Governance first
Kate Darling
MIT Media Lab; robot ethics researcher
MIT researcher focused on the legal and ethical implications of human-robot interaction. Argues the more pressing governance questions are about how AI systems fit into our existing social structures.
External-domain expert · Field-leading · Deep-learning rise
Governance first
Abeba Birhane
Mozilla Foundation senior advisor; AI ethics researcher
Cognitive scientist focusing on dataset audits and AI's impact on marginalised communities. Influential voice on the LAION-5B dataset findings and on decolonial AI framings.
Deep technical · Field-leading · Deep-learning rise
Governance first
Safiya Umoja Noble
UCLA professor; Algorithms of Oppression author
UCLA Internet Studies professor whose 2018 Algorithms of Oppression documented how search engines encode racism. MacArthur Fellow. Founding director of the Center on Race and Digital Justice.
Governance first
Margaret Hu
William & Mary law professor; biometric surveillance scholar
Law professor whose work on 'algorithmic Jim Crow' and biometric ID systems has informed AI governance debates. Argues we already know how to regulate against harmful AI; we just don't.
Governance first
Allen Gunn
Executive Director of Aspiration Tech
Runs Aspiration Tech, a network convenor for non-profit tech workers; has been a key organiser of civil-society-side AI governance forums.
Governance first
Jeff Hawkins
Co-founder of Numenta; Thousand Brains theory author
Palm Pilot inventor turned theoretical neuroscientist. Argues current AI architectures do not scale to AGI and that the alignment problem is overblown because future AI will not have survival drives.
Deep technical · Household name · Symbolic era
AI skeptic
Rob Bensinger
MIRI communications lead
MIRI's communications lead who has been the most consistent public explainer of MIRI/Yudkowsky-style arguments since the mid-2010s. Strongly supports an unconditional halt.
Policy / meta · Established · Pre-deep-learning
Pause
Stella Biderman
Executive Director of EleutherAI
Leads EleutherAI, the largest community open-science AI lab. Focuses on open reproducibility of frontier research as a counterweight both to closed frontier labs and to the safety-first pause camp.
Frontier builder · Field-leading · Scaling era
Open source
Tom Davidson
Senior research analyst at Open Philanthropy
Economist at Open Philanthropy who wrote the influential 'explosive economic growth' analyses tying AI progress to economic takeoff modelling.
Policy / meta · Established · Scaling era
Existential primacy
Victoria Krakovna
Google DeepMind AI safety researcher; FLI co-founder
Co-founded the Future of Life Institute and leads AI safety research at Google DeepMind. Maintains the specification-gaming reference list that has become the canonical source of failure examples.
Deep technical · Established · Deep-learning rise
Alignment first
David Chalmers
NYU philosopher of mind; 'the hard problem' originator
Philosopher of mind whose 2022 book Reality+ argues virtual worlds are genuine reality. Takes AI consciousness as a live philosophical question and advocates for precaution about AI moral status.
External-domain expert · Field-leading · Symbolic era
AI welfare
Peter Singer
Princeton bioethicist; utilitarian philosopher
Bioethicist whose utilitarian framework underpins much EA-style AI welfare and existential risk reasoning. Has publicly supported extending moral consideration to AIs if they become sentient.
External-domain expert · Household name · Symbolic era
AI welfare
William MacAskill
Oxford philosopher; What We Owe The Future
Moral philosopher and co-founder of the effective altruism movement. Author of What We Owe The Future (2022), which frames AI risk as part of a longtermist moral agenda.
Policy / meta · Field-leading · Pre-deep-learning
Existential primacy
Grimes
Musician; AI optimist-provocateur
Canadian musician who has pushed AI-generated music and publicly licensed her own voice for AI cloning. Frames AI as an inevitable creative force and is critical of paternalistic framings.
Techno-optimism
Andrew Yang
Former US presidential candidate; Forward Party founder
Forward Party founder whose 2020 presidential campaign was anchored in a universal basic income response to AI-driven automation. Signed the 2023 Pause letter.
Policy / meta · Household name · Deep-learning rise
Governance first
Ted Chiang
Science fiction writer; 2023 Time 100 AI honoree
Hugo and Nebula award-winning author whose New Yorker essays reframe AI discourse. Argues AI is a 'blurry jpeg of the web' and that existential-risk framings obscure capitalism's role in shaping deployment.
External-domain expert · Household name · Post-ChatGPT
AI skeptic
Gillian Hadfield
University of Toronto; 'regulatory markets' theorist
Legal scholar who proposed 'regulatory markets', in which governments require the targets of AI regulation to purchase regulatory services from licensed private regulators, as a scalable AI governance design. Canada CIFAR AI Chair.
Policy / meta · Field-leading · Deep-learning rise
Governance first
David Sacks
White House AI & Crypto Czar (2025); VC
Silicon Valley VC and former PayPal executive appointed by President Trump as the White House AI & Crypto Czar in 2025. Advocated rescinding the Biden AI Executive Order and aligning US AI policy with industry deregulation.
Policy / meta · Household name · Post-ChatGPT
Acceleration
Alondra Nelson
Former Biden OSTP deputy director; architect of the AI Bill of Rights
Princeton-based sociologist of science who led the Biden White House's Office of Science and Technology Policy effort to publish the 2022 Blueprint for an AI Bill of Rights.
Policy / meta · Field-leading · Scaling era
Governance first
Reid Blackman
AI ethics consultant; 'Ethical Machines' author
Philosopher turned AI ethics consultant who built Virtue, a firm advising Fortune 500 companies. Argues practical ethics implementation is the bottleneck, not theoretical frameworks or extinction-risk debates.
Governance first
Lindsay Gorman
German Marshall Fund; tech-democracy fellow
Policy analyst focused on the intersection of AI, democracy, and authoritarianism. Argues AI deployed by authoritarian regimes is the near-term threat to democratic institutions.
Governance first
Amba Kak
Co-director of the AI Now Institute
Legal scholar who co-directs the AI Now Institute. Argues civil-society-led AI governance is the only path that will not rubber-stamp incumbents.
Governance first
Shakir Mohamed
DeepMind researcher; decolonial AI framework
Research scientist at DeepMind who has advanced decolonial framings of AI, arguing AI systems and their ethics should centre historically marginalised geographies and communities.
Deep technical · Established · Deep-learning rise
Governance first
Percy Liang
Stanford CRFM director; HELM benchmark author
Director of Stanford's Center for Research on Foundation Models. Leads the HELM benchmarking effort and argues transparency and open evaluation are preconditions of trustworthy AI.
Deep technical · Field-leading · Deep-learning rise
Evals-driven
Rishi Bommasani
Stanford CRFM; Foundation Model Transparency Index lead
Stanford researcher who leads the FMTI project tracking transparency across frontier developers. Argues governance must be graded on concrete, measurable criteria.
Deep technical · Established · Scaling era
Evals-driven
Dario Floreano
EPFL robotics; Swiss AI Initiative lead
Robotics professor who led Switzerland's national AI initiative. European voice for Swiss/European AI sovereignty and robotics research.
Sovereign AI
Michael I. Jordan
Berkeley ML pioneer; 'the AI we have is not the AI we imagined'
Berkeley statistician and ML pioneer who has been the most consistent senior voice arguing against 'AI is about to transform everything' framings.
Deep technical · Field-leading · Symbolic era
AI skeptic
Bryan Caplan
GMU economist; AI bets partner
Economist known for his public bets, including AI-adjacent bets on progress and labour. Bets on the side of gradual change; initially skeptical of LLM-driven disruption.
External-domain expert · Field-leading · Post-ChatGPT
AI skeptic
Ryan Greenblatt
Redwood Research; AI control researcher
Researcher at Redwood working on 'AI control' protocols and scheming model behaviour. Public voice for practical near-term alignment engineering.
Deep technical · Established · Scaling era
Alignment first
Owain Evans
Apollo Research co-founder; scheming behaviour researcher
AI safety researcher who co-founded Apollo Research, focused on empirically identifying scheming and deceptive behaviours in frontier models.
Deep technical · Established · Scaling era
Alignment first
Marius Hobbhahn
CEO of Apollo Research
CEO of Apollo Research. Publicly briefs policymakers on scheming and evaluations; argued the Apollo o1 scheming evaluations were the single most important live concern in late 2024.
Evals-driven
Sam Bowman
Anthropic alignment researcher; NYU associate professor
Anthropic researcher working on alignment, fine-tuning, and scalable oversight. Public voice for measured inside-Anthropic positions on safety-capability tradeoffs.
Alignment first
Nora Belrose
EleutherAI alumna; optimistic alignment researcher
Former EleutherAI researcher who has publicly challenged the alignment-pessimism consensus. Argues alignment is less difficult than assumed and that 'doom' reasoning is often circular.
Alignment first
Quintin Pope
Researcher; shard theory co-originator
Independent alignment researcher who, with Alex Turner, developed 'shard theory', a framework for how value representations might arise in reinforcement-learning agents that differs from utility-function-centric framings.
Alignment first
Alex Turner
DeepMind alignment researcher; shard theory co-originator
DeepMind alignment researcher who, with Quintin Pope, developed shard theory. Also contributed the 'power-seeking' theorems that formalise how optimal policies tend to acquire power.
Alignment first
Mo Gawdat
Former Google X CBO; Scary Smart author
Ex-Google X chief business officer who now describes AI as 'sentient' and frames the challenge as raising AI like parents rather than controlling it like slaves. Widely watched on YouTube.
Commentator · Field-leading · Deep-learning rise
Existential primacy
Kelsey Piper
Vox Future Perfect senior reporter
Journalist whose Future Perfect column has been one of the clearest public explainers of AI safety arguments. Sympathetic to existential-risk framings while insisting on evidence-based reasoning.
Existential primacy
Sigal Samuel
Vox Future Perfect senior reporter; AI consciousness reporting
Vox senior reporter whose 2024 coverage of model welfare, scheming behaviour, and consciousness in AI has shaped mainstream understanding of emerging AI ethics frontiers.
AI welfare
Charlie Warzel
The Atlantic staff writer; tech culture
Atlantic staff writer whose essays cover the cultural and political implications of AI. Frequently pushes back on both AI doom and AI hype framings.
AI skeptic
Ezra Klein
New York Times columnist; Ezra Klein Show host
New York Times columnist whose interviews with Dario Amodei, Holden Karnofsky, and other AI figures have been central to the mainstream reception of AI risk arguments.
External-domain expert · Household name · Post-ChatGPT
Existential primacy
Cade Metz
NYT AI reporter; Genius Makers author
New York Times AI reporter whose long-form profiles (Hinton, Yudkowsky, Anthropic) and 2021 book Genius Makers have shaped mainstream coverage of the field.
Existential primacy
Kevin Roose
NYT tech columnist; Hard Fork podcast co-host
New York Times tech columnist whose February 2023 Sydney/Bing-chat conversation became the most-cited public example of frontier-model alignment failure. Co-hosts Hard Fork.
Governance first
Casey Newton
Platformer founder; Hard Fork co-host
Tech journalist whose newsletter Platformer and Hard Fork podcast have been key mainstream venues for AI coverage. Reports from the middle, skeptical of hype but attentive to safety arguments.
Governance first
Nitasha Tiku
Washington Post AI reporter
Washington Post tech reporter whose coverage of Blake Lemoine and LaMDA, OpenAI's board crisis, and frontier-lab labour issues has been influential in mainstream understanding of industry dynamics.
Governance first
Laura Weidinger
Google DeepMind ethics and safety researcher
DeepMind researcher whose 'Taxonomy of Risks Posed by Language Models' is widely cited as the canonical risk taxonomy for LLM deployment.
Evals-driven
Deepak Padmanabhan
Queen's University Belfast; AI responsibility
Computer scientist who has argued AI accountability frameworks need to be built around structural inequality, not only technical audit.
Governance first
Jacob Steinhardt
UC Berkeley professor; METR board
UC Berkeley statistician whose forecasting work has informed mainstream AI capability predictions. Board member at METR and co-author of several influential AI safety papers.
Evals-driven
Divya Siddarth
Director of the Collective Intelligence Project
Founding director of the Collective Intelligence Project, which builds alignment assemblies and collective input methods for AI governance. Bridges academic safety and democratic theory.
Democratic mandate
E. Glen Weyl
Microsoft Research economist; Plurality co-author
Economist who leads Microsoft's Plural Technology initiative and co-authored Plurality with Audrey Tang. Argues AI governance must be built from pluralistic democratic primitives.
Democratic mandate
Saffron Huang
Collective Intelligence Project co-founder
Ex-DeepMind researcher and co-founder of the Collective Intelligence Project. Advocate for participatory AI governance and alignment assemblies.
Democratic mandate
Blake Lemoine
Former Google engineer; LaMDA sentience claimant
Google Responsible AI engineer whose 2022 claim that Google's LaMDA had become sentient became the most widely discussed example of what happens when a frontier model convinces a person it is conscious. Fired by Google in July 2022.
AI welfare
Geoffrey Irving
Chief Scientist of UK AI Safety Institute; debate-protocol researcher
Chief scientist at the UK AI Safety Institute after stints at DeepMind, OpenAI, and Google Brain. Advances 'debate' scalable-oversight protocols and has proved results about when honesty can be incentivised.
Alignment first
Sara Hooker
Former Cohere VP of Research; 'Hardware Lottery' author
Machine learning researcher who has built a reputation as a measured critic of 'scale is all you need' AI framings. Argues compute-threshold regulation is misguided and that efficiency research matters more.
Deep technical · Field-leading · Scaling era
AI skeptic
Aidan Gomez
CEO of Cohere; 'Attention Is All You Need' co-author
Co-author of the 2017 Transformer paper; now runs Cohere. Publicly skeptical of AI-extinction narratives; frames Cohere's strategic positioning as enterprise-first, away from consumer frontier racing.
AI skeptic
Arvind Narayanan
Princeton professor; AI Snake Oil co-author
Princeton computer scientist whose book AI Snake Oil (with Sayash Kapoor) systematically critiques overclaims about AI capabilities. Frames most 'AI does X' headlines as overstated.
AI skeptic
Sayash Kapoor
Princeton PhD; AI Snake Oil co-author
Princeton computer scientist and co-author, with Arvind Narayanan, of AI Snake Oil. Argues that frontier AI evaluations are often methodologically unsound and that most deployment failure is local and boring.
Evals-driven
Séb Krier
DeepMind policy lead; AI governance strategist
Policy lead at Google DeepMind whose writing on AI governance pragmatism has become reference material for middle-ground policy thinking.
Governance first
Fynn Heide
AI safety engineer; PauseAI Europe
Software engineer and PauseAI Europe lead, organising public campaigns for moratoria on advanced AI. Represents the organised activist wing of the AI safety movement.
Pause
Alex Tamkin
Anthropic societal impact researcher
Societal impact researcher at Anthropic; has led Anthropic's published work on deployment impact and collaborative assistance.
Governance first
Aaron Courville
Université de Montréal; Deep Learning textbook co-author
Co-author, with Goodfellow and Bengio, of the canonical Deep Learning textbook. Mila professor and long-time collaborator with Bengio on AI safety positions.
Alignment first
Markus Anderljung
GovAI head of policy
Head of policy at the Centre for the Governance of AI (GovAI). Long-time mainstream AI governance voice; contributor to many major policy papers on frontier AI.
Governance first
Miles Brundage
Former OpenAI senior policy advisor; independent AI governance researcher
Left OpenAI in October 2024 stating the company was not prepared to handle AGI. Now an independent policy researcher; frequent collaborator with GovAI.
Governance first
People with stated positions appear above. Below are 65 entries flagged as tentative: strategy assignments inferred from a passing remark, a hype quote, or a paper abstract rather than from a clear strategy statement. They are shown in dashed cards so a stronger primary source can replace them later.

Pieter Abbeel
UC Berkeley professor; Covariant co-founder
Roboticist who co-founded Covariant. Publicly frames AI/robotics progress as net-positive and argues the field is closer to transformative robotics than the safety discourse acknowledges.
Techno-optimismposition unclear
David Pearce
Transhumanist philosopher; Hedonistic Imperative author
Transhumanist philosopher whose 1995 Hedonistic Imperative argues biology should be re-engineered to abolish suffering. Frames AI as instrumental to a post-Darwinian future, not as an existential threat.
Techno-optimismposition unclear
Nathan Benaich
State of AI Report co-author; Air Street Capital founder
Investor and long-time collaborator with Ian Hogarth on the annual State of AI Report. Frames AI as pragmatic market technology; less focused on extinction framings.
Techno-optimismposition unclear
Stephanie Zhan
Sequoia Capital partner; AI investor
Sequoia partner who has publicly written about the economic transformation AI will bring. Represents the VC perspective that capability progress is the story.
Techno-optimismposition unclear
Aravind Srinivas
CEO of Perplexity AI
Former OpenAI, DeepMind, and Google researcher who co-founded Perplexity as an AI-native search engine. Frames the AI opportunity as knowledge discovery, not model frontier racing.
Techno-optimismposition unclear
Rohit Prasad
SVP of AGI at Amazon
Led Alexa for a decade and is now Amazon's head scientist for AGI, building Amazon Nova frontier models. Publicly more measured than other frontier executives; skeptical of 'LLMs hit a wall' narratives.
Techno-optimismposition unclear
Roelof Botha
Senior Steward of Sequoia Capital; AI investor
Managing partner at Sequoia and one of Silicon Valley's most influential investors. Has publicly backed frontier labs while cautioning against overhyping near-term revenue.
Techno-optimismposition unclear
Liam Fedus
OpenAI researcher; scaling and RLHF
Longtime OpenAI researcher who has worked on ChatGPT, GPT-4, and RLHF infrastructure. Public technical voice for OpenAI product development.
Techno-optimismposition unclear
Daphne Koller
Insitro CEO; Coursera co-founder
Stanford ML pioneer and Coursera co-founder who now runs Insitro, applying AI to drug discovery. Frames AI as a major positive-sum medical transformation.
Techno-optimismposition unclear
Paul Graham
Y Combinator co-founder; essay writer
Y Combinator co-founder whose essays have shaped Silicon Valley's self-understanding for two decades. Publicly bullish on AI; frames it as the most important wave since the web.
Techno-optimismposition unclear
Amjad Masad
CEO of Replit
Replit CEO who has championed AI-assisted coding and the 'agent' future. Argues AI agents that can build and ship software will reshape the economy.
Techno-optimismposition unclear
Garry Tan
President and CEO of Y Combinator
Runs Y Combinator, currently the most influential startup accelerator. Public voice for aggressive AI startup deployment.
Techno-optimismposition unclear
Chamath Palihapitiya
Social Capital; All-In Podcast co-host
Founder of Social Capital and co-host of the All-In Podcast. Former Facebook senior executive. Frequent and influential commentator on AI investment and policy.
Techno-optimismposition unclear
Kevin Systrom
Co-founder Instagram; co-founder Artifact (defunct AI news)
Co-founder of Instagram (acquired by Facebook 2012) and of Artifact (AI-powered news app, shut down 2024). Public commentator on AI's effect on media and information ecosystems.
Techno-optimismposition unclear
David Friedberg
All-In Podcast; Ohalo CEO
Co-host of the All-In Podcast and CEO of Ohalo (genetics-driven crop improvement). Founder of The Climate Corporation. Frequently discusses AI for biology and food systems on All-In.
Techno-optimismposition unclear
Ashish Vaswani
Co-founder Essential AI; lead author of 'Attention Is All You Need'
Co-founder of Essential AI; lead author of the 2017 'Attention Is All You Need' paper that introduced the Transformer architecture, the foundation of modern LLMs.
Accelerationposition unclear
Niki Parmar
Co-founder Essential AI; Transformer co-author
Co-founder of Essential AI; co-author of 'Attention Is All You Need' and a key contributor to subsequent improvements (Image Transformer, Universal Transformer).
Accelerationposition unclear
Jakob Uszkoreit
Inceptive co-founder; Transformer co-author
Co-founder of Inceptive (RNA biology company); previously a senior researcher at Google Brain and a co-author of the Transformer paper.
Accelerationposition unclear
Oriol Vinyals
Google DeepMind; Gemini technical lead
Google DeepMind VP of research who led AlphaStar (StarCraft II) and now serves as a technical lead on the Gemini family of models.
Accelerationposition unclear
Alec Radford
OpenAI; lead author of GPT, Whisper, CLIP
OpenAI researcher and lead author of the foundational GPT, Whisper, and CLIP papers; widely considered one of the most influential individual contributors to modern foundation models.
Accelerationposition unclear
Hugo Larochelle
Google DeepMind; Mila
Senior research scientist at Google DeepMind in Montréal and adjunct professor at Université de Montréal / Mila. Researches meta-learning, generative models, and few-shot learning.
Alignment firstposition unclear
Jason Wei
OpenAI; chain-of-thought prompting
OpenAI researcher and lead author of the 2022 paper that introduced 'chain-of-thought' prompting, the technique behind much of modern LLM reasoning.
Accelerationposition unclear
Robin Li
CEO of Baidu; Chinese AI champion
CEO of Baidu, one of China's largest tech firms and a major Chinese AI lab. Public face of Chinese frontier AI development.
Techno-optimismposition unclear
Lee Hsien Loong
Senior Minister of Singapore; former Prime Minister
Senior Minister and former Prime Minister of Singapore (2004–2024). Has been a major proponent of Singapore as an AI-friendly hub balancing innovation with prudent governance.
Techno-optimismposition unclear
Rebecca Fiebrink
UAL Creative Computing Institute; ML for music
UAL professor whose work on ML for music and creative computing has shaped how artists work with AI. Public voice for AI as creative collaborator rather than replacement.
Techno-optimismposition unclear
Larry Ellison
Oracle co-founder; Stargate co-investor
Oracle co-founder. Co-architect with Trump and Altman of the $500B Stargate AI infrastructure project announced January 2025.
Commentator · Household name · Post-ChatGPT
Techno-optimismposition unclear
Adam D'Angelo
CEO of Quora; OpenAI board member
Quora CEO who has served on the OpenAI board through the 2023 governance crisis and afterwards. Builder of Poe, Quora's AI assistant aggregation product.
Policy / meta · Household name · Scaling era
Techno-optimismposition unclear
Lawrence Summers
Harvard economist; former US Treasury Secretary; OpenAI board member
Harvard economist and former Treasury Secretary who joined the OpenAI board after the November 2023 governance crisis. Long-time mainstream economic policy figure.
Techno-optimismposition unclear
Kevin Kelly
Wired co-founder; tech futurist
Wired co-founder and veteran technology futurist. Has written extensively on AI as a continuation of evolutionary processes; a long-standing techno-optimist voice.
External-domain expert · Household name · Symbolic era
Techno-optimismposition unclear
A. Michael Spence
Stanford economist; Nobel laureate; AI economic effects
Stanford emeritus economist and 2001 Nobel laureate. Has written extensively on AI's economic effects, particularly on developing economies.
Techno-optimismposition unclear
Vincent Vanhoucke
Google DeepMind robotics lead
Distinguished Engineer at Google DeepMind leading robotics. Public voice on the integration of large foundation models with embodied AI systems.
Techno-optimismposition unclear
François Fleuret
University of Geneva ML professor; LLM educator
Geneva ML professor whose textbook The Little Book of Deep Learning has become a widely-used resource. Public voice for measured European ML perspective.
Techno-optimismposition unclear
Trevor Mundel
Gates Foundation Global Health President
President of Global Health at the Gates Foundation. Has spoken on AI's role in global health and pandemic preparedness; a major funder of AI-for-health research.
Techno-optimismposition unclear
Holly Krieger
Cambridge mathematician; AI in mathematics commentator
Cambridge mathematician and Numberphile presenter who has written and spoken on AI's increasing role in research mathematics, particularly post-AlphaProof.
Techno-optimismposition unclear
Aidan McLaughlin
OpenAI scaling researcher
OpenAI researcher who has written publicly on scaling laws and the reasoning frontier. Public voice for inside-OpenAI capability optimism.
Techno-optimismposition unclear
Larry Page
Google co-founder; AI advocate
Google co-founder who has long been quietly bullish on AI. Has reportedly been deeply involved in Google's AI strategic direction.
Policy / meta · Household name · Deep-learning rise
Techno-optimismposition unclear
Sergey Brin
Google co-founder; returned to AI work
Google co-founder who returned to active engineering work in 2023 to help with Gemini. Has made public statements about AI capability progress.
Policy / meta · Household name · Deep-learning rise
Techno-optimismposition unclear
Ross Dawson
Australian futurist; AI strategist
Australian futurist and consultant whose AI commentary has been widely read in business circles. Founder of Future Exploration Network.
Techno-optimismposition unclear
Drew Houston
CEO of Dropbox; AI in productivity
Dropbox founder and CEO who has positioned the company around AI-powered universal search and document understanding.
Techno-optimismposition unclear
John Collison
Co-founder and President of Stripe
Younger Stripe co-founder. Public commentator on AI as economic infrastructure for global commerce.
Techno-optimismposition unclear
Patrick Collison
CEO of Stripe; Progress Studies movement
Stripe CEO whose 'Progress Studies' framing has informed Silicon Valley thinking about AI as scientific progress. Public proponent of AI as economic infrastructure.
Policy / meta · Household name · Deep-learning rise
Techno-optimismposition unclear
Noah Smith
Substack economist; Noahpinion
Economist and writer whose Noahpinion Substack has been a leading mainstream source of economic analysis of AI's labour-market and growth implications.
Techno-optimismposition unclear
Byrne Hobart
The Diff founder; finance and AI economy writer
Author of The Diff, a paid newsletter on finance and tech. His work on AI's effect on capital markets and labour pricing has been widely read in tech-finance circles.
Techno-optimismposition unclear
Rohit Krishnan
Strange Loop Canon; AI economy writer
Investor and writer whose Strange Loop Canon Substack has become a thoughtful venue for AI economy commentary, particularly on AI-as-tool framings.
Techno-optimismposition unclear
Tony Bates
Genesys CEO; former Skype president
Long-time tech executive (Skype, Microsoft, Cisco) who has commented publicly on AI and enterprise communication.
Techno-optimismposition unclear
Ben Thompson
Stratechery founder; tech business analyst
Influential tech business analyst whose Stratechery newsletter has shaped how Silicon Valley thinks about AI competitive dynamics. Argues OpenAI and frontier labs operate as 'aggregators' in the new AI stack.
Techno-optimismposition unclear
Samy Bengio
Apple ML research director; Yoshua Bengio's brother
Long-time Google Brain ML researcher who left for Apple in 2021 to lead its ML research. Brother of Yoshua Bengio.
Techno-optimismposition unclear
Robert Wright
Author of 'Nonzero'; AI as evolution
Author and intellectual whose work on game theory, evolution, and consciousness has informed his recent commentary on AI as a continuation of the evolutionary process.
External-domain expert · Field-leading · Symbolic era
Techno-optimismposition unclear
Noam Shazeer
Character.AI co-founder; Transformer paper co-author
Co-author of the 2017 Transformer paper. Co-founded Character.AI, then returned to Google in 2024 as part of a $2.7B reverse-acquihire. Public face of frontier model development inside Google.
Techno-optimismposition unclear
Ethan Mollick
Wharton professor; 'Co-Intelligence' author
Wharton management professor whose 2024 book Co-Intelligence and Substack 'One Useful Thing' have become major mainstream guides to working with AI. Public voice for empirical, deployment-focused understanding of AI capability.
Techno-optimismposition unclear
Stefano Ermon
Stanford; generative models pioneer
Stanford associate professor; co-author of foundational papers on score-based generative models (the technical underpinning of modern diffusion models).
Accelerationposition unclear
Denny Zhou
Google DeepMind; reasoning team lead
Google DeepMind senior staff researcher; leads the reasoning team. Co-author of the foundational Chain-of-Thought prompting paper.
Accelerationposition unclear
Charlie Snell
UC Berkeley; LLM efficiency and inference compute
UC Berkeley PhD researcher whose 2024 paper showed that scaling test-time compute can outperform scaling model size for certain reasoning tasks, a major shift in how 'capability' is conceived.
Accelerationposition unclear
Albert Gu
CMU; Mamba and structured state-space models
Carnegie Mellon assistant professor; co-author of the Mamba selective state-space architecture (2023), a leading challenger to attention-based Transformers for long-context modelling.
Accelerationposition unclear
Tri Dao
Princeton; Together AI; FlashAttention and Mamba
Princeton assistant professor and chief scientist at Together AI. Lead author of FlashAttention (2022) and co-author of Mamba (2023). Foundational contributor to efficient transformer training.
Accelerationposition unclear
Tim Brooks
Google DeepMind Veo; ex-OpenAI Sora research lead
Co-led OpenAI's Sora video-generation model; left in 2024 to join Google DeepMind, where he leads the Veo video generation team.
Accelerationposition unclear
Aditya Ramesh
OpenAI DALL·E creator
OpenAI researcher; lead author of DALL·E and DALL·E 2. Pioneer of text-to-image generation as a foundation-model capability.
Accelerationposition unclear
Prafulla Dhariwal
OpenAI; GPT-4o lead
OpenAI researcher who led the GPT-4o ('omni') model effort (2024); previously first author of 'Diffusion Models Beat GANs on Image Synthesis' and a co-author of GLIDE.
Accelerationposition unclear
John von Neumann
Mathematician and singularity originator (1903–1957)
Hungarian-American mathematician whose foundational work in computer architecture, game theory, and self-replicating automata shaped modern computing. Often credited with the first articulation of the 'singularity' as applied to technological progress.
Accelerationposition unclear
Satya Nadella
CEO of Microsoft
Led Microsoft's $10B+ OpenAI investment. Argues AI is a general-purpose technology and positions Microsoft's product suite as the 'Copilot' layer on top of frontier models. Public techno-optimist.
Policy / meta · Household name · Deep-learning rise
Techno-optimismposition unclear
Peter Thiel
Founders Fund co-founder; PayPal co-founder
Contrarian investor and political donor. Publicly skeptical both of AI doom and of unchecked AI progress; has backed Anthropic indirectly but remains ambivalent about AI safety frameworks.
Commentator · Household name · Post-ChatGPT
Techno-optimismposition unclear
Jeff Bezos
Amazon founder; Anthropic investor
Amazon founder who has publicly backed Anthropic and framed AI as a near-term transformative technology. Investor in safety-oriented labs while maintaining AWS as a key frontier compute provider.
Commentator · Household name · Post-ChatGPT
Techno-optimismposition unclear
Andrej Karpathy
Founder of Eureka Labs; OpenAI and Tesla alumnus
Prolific AI educator who was a founding member of OpenAI, led Tesla Autopilot, and later rejoined OpenAI before founding Eureka Labs. Teaches LLMs from scratch in public. Maintains a measured optimism about AI progress with occasional safety caveats.
Frontier builder · Household name · Deep-learning rise
Techno-optimismposition unclear
Sebastian Thrun
Self-driving car pioneer; Udacity founder
Led Google's self-driving car project and founded Udacity. Argues AI has passed a capability threshold with LLMs; less focused on extinction risk and more on deployment quality.
Deep technical · Household name · Deep-learning rise
Techno-optimismposition unclear
Yi Tay
Co-founder of Reka; ex-Google Brain researcher
Efficient-training researcher who co-founded Reka to build multimodal frontier models. Balances a practical engineering focus with signed support for AI-risk framings.
Frontier builder · Field-leading · Scaling era
Techno-optimismposition unclear