Host of AXRP (the AI X-risk Research Podcast), which features long-form interviews with alignment researchers. Previously a PhD student at UC Berkeley's Center for Human-Compatible AI under Stuart Russell.
Solve technical alignment before capability windows close
Argues that alignment research is technical, tractable, and best advanced through careful engagement with specific research agendas; uses AXRP to surface those agendas in detail.
What I want from AI safety research is the same thing I want from any other research: clear problem statements, clear progress, and a community that holds itself to the standards of the rest of science.
Other people whose strategy tags overlap with Daniel Filan's. Overlap is based on shared tags, not shared stances; people with opposing views can appear if they reference the same tags.