AGI Strategies


Peter Railton

Michigan ethicist; AI moral learning researcher

Michigan moral philosopher who has argued that reinforcement learning analogues in AI could form the basis for genuinely moral AI agents. Engages AI safety philosophically.

Current: Arthur F. Thurnau Professor of Philosophy, University of Michigan

Strategy positions

Alignment first · endorses

Solve technical alignment before capability thresholds close

Argues that moral learning analogues in AI are a live research program for alignment.

Moral learning in humans draws on the same reinforcement-learning machinery we are now building into AI systems. That's not an accident; it is the alignment problem.
Article: Peter Railton on moral learning and AI · University of Michigan · 2023 · loose paraphrase

Closest strategy neighbours

by Jaccard overlap

Other people whose strategy tags overlap with Peter Railton's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
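The overlap scores below use the standard Jaccard index: the number of shared tags divided by the size of the union of both tag sets. A minimal sketch (the tag names are hypothetical placeholders, not the site's actual tags):

```python
def jaccard(a, b):
    """Jaccard index of two tag sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    union = a | b
    # Convention: two empty tag sets have zero overlap.
    return len(a & b) / len(union) if union else 0.0

# One shared tag and one tag each on both sides gives J = 1.00,
# matching the "shared 1 · J=1.00" entries below.
railton_tags = {"alignment-first"}      # hypothetical tag
neighbour_tags = {"alignment-first"}
print(jaccard(railton_tags, neighbour_tags))  # → 1.0
```

Note that J=1.00 with only one shared tag simply means both people carry exactly that one tag, so high scores here can reflect sparse tagging rather than deep strategic agreement.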

  • Aaron Courville

    shared 1 · J=1.00

    Université de Montréal; Deep Learning textbook co-author

  • Adam Jermyn

    shared 1 · J=1.00

    Anthropic; previously astrophysics

  • Adam Kalai

    shared 1 · J=1.00

    Microsoft Research; AI fairness and safety

  • Agnes Callard

    shared 1 · J=1.00

    University of Chicago philosopher; aspiration theorist

  • Ajeya Cotra

    shared 1 · J=1.00

    Open Philanthropy researcher; 'biological anchors' forecaster

  • Alan Turing

    shared 1 · J=1.00

    Founder of theoretical computer science (1912–1954)

Record last updated 2026-04-25.