strategy tag
Differential technology.
Preferentially develop protective technology over dangerous technology.
Stated endorsers: 5 (no opposers yet)
Profiled endorsers: 1 of 248 on the board total
Endorser mean p(doom): 10% (n=1; median 10%)
Quotes by endorsers (this tag only): 6
People on the record.
David Rolnick
McGill / Mila; Climate Change AI co-founder
Argues climate is the highest-leverage public-interest AI application and the most under-funded; calls for differential investment in protective rather than only optimizing technology.
Machine learning has high-impact applications across the climate change problem space, from forecasting and emissions reduction to behavioural change. We outline these systematically.
Ilan Gur
ARIA UK CEO; ex-ARPA-E
Argues frontier R&D agencies should make calculated bets on long-shot research that markets will not fund; oriented ARIA's first programs around safe scalable AI and biotech.
ARIA exists to fund the research that markets and traditional grantmakers will not. AI safety done well is exactly that kind of bet, high-impact, hard to fund any other way.
Owen Cotton-Barratt
FHI alumnus; existential risk researcher
Helped formulate differential technology development, the strategic prescription to accelerate beneficial protective technology relative to dangerous technology.
Differential technological development is the project of trying to ensure that risk-reducing technologies are developed before risk-increasing ones.

Sara Beery
MIT EAPS / CSAIL; AI for ecology
Argues AI's most under-funded high-leverage application is biodiversity and ecosystem monitoring; pushes for differential investment in tech that protects rather than only optimizes consumption.
AI for biodiversity monitoring could give us the planetary-scale measurement infrastructure we have never had. The question is whether we will choose to fund it.

Vitalik Buterin
Ethereum co-founder; author of 'My techno-optimism' manifesto
Coined 'd/acc' in his 2023 post 'My techno-optimism': prefer defensive, decentralised, democratic, and differential technology development.
“I think we should be much more wary of superintelligent AI than we are. I recently calibrated that my actual p(doom) is something like 10%.”
Build tech that differentially accelerates defense against catastrophic risk over attack.
Context: Core thesis of 'My techno-optimism'.