AGI Strategies

strategy tag

EA framing.

Explicitly EA-grounded prioritisation of existential risk

stated endorsers

3

no opposers yet

profiled endorsers

0

out of 248 people on the board in total

endorser p(doom)


no estimates on record

quotes by endorsers

3

just for this tag

People on the record.

3

Benjamin Todd

Co-founder of 80,000 Hours

endorses

Argues that talented graduates should treat AI safety as one of the highest-impact career paths; has steered 80,000 Hours' advice pipeline toward it since at least 2017.

AI safety is plausibly the most important problem of our time. The best way to help is often to switch career paths into it, even when the personal cost is significant.
article · Why AI safety is one of the highest-priority cause areas · 80,000 Hours · 2023 · faithful paraphrase

Hilary Greaves

Global Priorities Institute, Oxford; longtermist moral philosopher

endorses

Argues that the long-run effects of present actions dominate the moral calculus; AI existential risk is one of the load-bearing applications of this view.

Strong longtermism is the thesis that the most important feature of our actions today is their effects on the very long-run future.
paper · The Case for Strong Longtermism · Global Priorities Institute · 2021 · faithful paraphrase

Nick Beckstead

Future Fund co-founder; FHI alumnus

endorses

Argues the long-run effects of present choices on the trajectory of civilization carry overwhelming moral weight, and that this implies existential-risk reduction, including from AI, is a top priority.

If we have any positive credence that civilization could last very long and reach a very high level of value, the expected value of shaping the far future dominates expected value calculations for our near-term actions.
paper · On the Overwhelming Importance of Shaping the Far Future · Rutgers University dissertation · 2013 · faithful paraphrase