strategy tag
EA framing.
Explicitly EA-grounded prioritisation of existential risk
stated endorsers
3
no opposers yet
profiled endorsers
0
248 on the board total
endorser p(doom)
no estimates on record
quotes by endorsers
3
just for this tag
People on the record.
Benjamin Todd
Founder of 80,000 Hours
Argues that talented graduates should treat AI safety as one of the highest-impact career paths; has steered 80,000 Hours' advising pipeline toward it since at least 2017.
AI safety is plausibly the most important problem of our time. The best way to help is often to switch career paths into it, even when the personal cost is significant.

Hilary Greaves
Oxford GPI; longtermist moral philosopher
Argues that the long-run effects of present actions dominate the moral calculus; AI x-risk is one of the load-bearing applications of this view.
Strong longtermism is the thesis that the most important feature of our actions today is their effects on the very long-run future.

Nick Beckstead
Future Fund co-founder; FHI alumnus
Argues the long-run effects of present choices on the trajectory of civilization carry overwhelming moral weight, and that this implies existential-risk reduction, including from AI, is a top priority.
If we have any positive credence that civilization could last very long and reach a very high level of value, the expected value of shaping the far future dominates expected value calculations for our near-term actions.