person
Peter Railton
Michigan ethicist; AI moral learning researcher
Michigan moral philosopher who has argued that reinforcement-learning analogues in AI could form the basis for genuinely moral AI agents. Engages with AI safety on philosophical grounds.
Current: Arthur F. Thurnau Professor of Philosophy, University of Michigan
Strategy positions
Alignment first (endorses)
Solve technical alignment before capability thresholds close. Argues that moral-learning analogues in AI are a live research program for alignment.
Moral learning in humans draws on the same reinforcement-learning machinery we are now building into AI systems. That's not an accident; it is the alignment problem.
Closest strategy neighbours
By Jaccard overlap. Other people whose strategy tags overlap with Peter Railton's. Overlap is computed on tag identity, not stance, so people with opposing positions can appear if they reference the same tags.
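A minimal sketch of how such a neighbour ranking could work, assuming each person's strategy tags are stored as a set of strings; the tag names and profile data below are illustrative, not taken from the actual record.

```python
def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard overlap: |A ∩ B| / |A ∪ B|; 0.0 when both sets are empty."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Hypothetical tag sets for illustration only.
profiles = {
    "Peter Railton": {"alignment-first", "moral-learning"},
    "Person A":      {"alignment-first", "capability-pause"},
    "Person B":      {"moral-learning"},
}

target = profiles["Peter Railton"]
neighbours = sorted(
    ((name, jaccard(target, tags)) for name, tags in profiles.items()
     if name != "Peter Railton"),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in neighbours:
    # Overlap is on tag identity only; stance is not considered.
    print(f"{name}: {score:.2f}")
```

Because the metric compares only which tags two people share, someone who opposes a position Railton endorses would still score as a close neighbour if both reference the same tag.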
Record last updated 2026-04-25.