person
Lukas Finnveden
Open Philanthropy; AI safety analyst
Open Philanthropy researcher whose detailed analyses of AI takeoff dynamics, the exhaustion of training data, and alignment training methods have been widely cited in EA circles.
Current: Senior Research Analyst, Open Philanthropy
Strategy positions
Alignment first (endorses)
Solve technical alignment before capability thresholds close. Argues alignment research must trace specific failure modes in concrete detail; favours quantitative scenario analysis over generic existential framings.
Plausible scenarios for AI takeoff include software-only feedback loops where AIs do AI research. Whether this leads to alignment failure depends on details that haven't been carefully argued.
Closest strategy neighbours
By Jaccard overlap. Other people whose strategy tags overlap with Lukas Finnveden's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
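The neighbour metric above can be sketched as follows. This is a minimal illustration, assuming each person's strategy tags are a plain set of strings; the function name and example tag sets are hypothetical, not taken from the record.

```python
def jaccard(tags_a, tags_b):
    """Jaccard overlap: |A intersect B| / |A union B| over tag identity.

    Stance is ignored, so two people who reference the same tag with
    opposite positions still count as overlapping.
    """
    a, b = set(tags_a), set(tags_b)
    if not a and not b:
        return 0.0  # define overlap of two empty tag sets as zero
    return len(a & b) / len(a | b)

# hypothetical tag sets for illustration
jaccard({"alignment-first", "takeoff-speeds"},
        {"alignment-first", "pause-advocacy"})  # 1 shared of 3 total -> 1/3
```

Because the metric compares tag identity only, ranking neighbours by this score surfaces people who engage with the same strategic questions, whether or not they agree.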
Record last updated 2026-04-25.