person

Iyad Rahwan
Max Planck Institute Berlin; Moral Machine experiment
Director of the Center for Humans and Machines at the Max Planck Institute for Human Development. Led the Moral Machine experiment crowd-sourcing self-driving-car ethics. A public voice on machine behaviour.
Profile
expertise
Deep technical
Sustained peer-reviewed contribution to ML, alignment, interpretability, or safety techniques. Could review a frontier paper.
Director of the Center for Humans and Machines, Max Planck Institute. Led the Moral Machine experiment on autonomous-vehicle ethics. Actively publishing.
recognition
Field-leading
Widely known inside the AI and AI-safety community. Appears repeatedly in top venues, podcasts, or governance forums. Not a household name to outsiders.
The Moral Machine got mainstream press; Rahwan himself less so.
vintage
Deep-learning rise
Came up post-AlexNet. ImageNet, AlphaGo, transformer paper. DeepMind, Google Brain, FAIR establish the modern lab template.
Moral Machine launched 2016. Max Planck CHM. Computational social science of AI ethics in the deep-learning era.
Hand-classified. See the board for the criteria and the full grid.
Strategy positions
Alignment first (endorses)
Solve technical alignment before capability thresholds close. Argues 'machine behaviour' is a distinct field of study alongside human behaviour, and that social-science methods should be used to study AI.
Machines now exhibit behaviours that need to be studied with the methods of behavioural science, not only with the methods of computer science.
Closest strategy neighbours
by Jaccard overlap. Other people whose strategy tags overlap with Iyad Rahwan's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
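The neighbour metric above can be sketched as plain set overlap. This is a minimal illustration, not the site's actual code; the tag names used are hypothetical stand-ins for whatever strategy tags the board assigns.

```python
def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard overlap of two tag sets: |A & B| / |A | B|."""
    if not a and not b:
        return 0.0  # two empty tag sets share nothing measurable
    return len(a & b) / len(a | b)

# Hypothetical tag sets, for illustration only:
rahwan = {"alignment-first", "machine-behaviour", "social-science-methods"}
other = {"alignment-first", "compute-governance"}
print(jaccard(rahwan, other))  # 1 shared tag / 4 distinct tags = 0.25
```

Because the comparison is on tag identity alone, two people tagged with the same strategy tag count as neighbours even if one endorses the position and the other rejects it.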
Record last updated 2026-04-25.