person
Thomas Krendl Gilbert
Cornell Tech ethicist; reinforcement learning ethics
AI ethicist who studies the governance and moral dimensions of reinforcement learning systems. Argues the norms governing RLHF shape what AI values become.
current Postdoctoral Fellow in AI Ethics, Cornell Tech
Strategy positions
Alignment first (endorses)
Solve technical alignment before capability thresholds close. Writes on the ethical dynamics of reward learning from human feedback; argues RLHF is a social process, not just a technical one.
Reinforcement learning from human feedback is a political process. The human feedback comes from somewhere; whose feedback wins shapes what values the model has.
Closest strategy neighbours
By Jaccard overlap. Other people whose strategy tags overlap with Thomas Krendl Gilbert's. Overlap is computed on tag identity, not stance, so people with opposing positions can appear if they reference the same tags.
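The neighbour ranking above can be sketched in a few lines. This is a hypothetical illustration of Jaccard overlap on tag sets, not the record system's actual implementation; the tag names and people below are invented for the example.

```python
# Hypothetical sketch: rank people by Jaccard overlap of strategy tags.
# Tag names and people here are illustrative, not from the actual record.

def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity: |A ∩ B| / |A ∪ B| (0.0 when both sets are empty)."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

gilbert_tags = {"alignment-first"}
others = {
    "Person A": {"alignment-first", "compute-governance"},
    "Person B": {"open-source-ai"},
}

# Stance is ignored: someone who opposes a tagged strategy still ranks
# high if they reference the same tag, matching the caveat above.
neighbours = sorted(others, key=lambda p: jaccard(gilbert_tags, others[p]),
                    reverse=True)
print(neighbours)  # ['Person A', 'Person B']
```

Because overlap is set-based, a person with many tags is not automatically a closer neighbour; the union in the denominator penalizes large tag sets that share little.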
Record last updated 2026-04-25.