person
Carl Shulman
Open Phil senior research analyst; AGI takeoff economics
Open Philanthropy researcher who has worked on the economics, decision theory, and forecasting of advanced AI for nearly two decades. Best known for long-form analyses of AI takeoff and of what-if-AGI-arrives-by-2030 scenarios.
p(doom)
- 20% (2023)
Definition used: Existential catastrophe from AI; rough rather than precise.
Carl Shulman on the moral status of AI · Dwarkesh Podcast
Strategy positions
Race to aligned SI (endorses)
Build aligned superintelligence first, before adversaries. Argues that a fast software-driven takeoff is plausible, that aligned AI labs racing ahead of unaligned ones is one of the load-bearing strategies, and that the economics of compute will dominate political reactions.
If you have AGI which can do most cognitive work, you very rapidly get superintelligence. The compounding from AI doing AI research is enormous and historically unprecedented.
Closest strategy neighbours
By Jaccard overlap. Other people whose strategy tags overlap with Carl Shulman's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
Record last updated 2026-04-25.