
Abeba Birhane
Mozilla Foundation senior advisor; AI ethics researcher
Cognitive scientist focusing on dataset audits and AI's impact on marginalised communities. Influential voice on the LAION-5B dataset findings and on decolonial AI framings.
Profile
expertise
Deep technical
Sustained peer-reviewed contribution to ML, alignment, interpretability, or safety techniques. Could review a frontier paper.
Mozilla senior advisor; Trinity College Dublin. Empirical audits of dataset bias and computer-vision harms. Strong publication record.
recognition
Field-leading
Widely known inside the AI and AI-safety community. Appears repeatedly in top venues, podcasts, or governance forums. Not a household name to outsiders.
TIME 100 AI 2023; TED talks; widely recognised within the AI-ethics community.
vintage
Deep-learning rise
Came up post-AlexNet. ImageNet, AlphaGo, transformer paper. DeepMind, Google Brain, FAIR establish the modern lab template.
PhD UCD 2022; critical-AI work on training data and bias matures in deep-learning era.
Hand-classified. See the board for the criteria and the full grid.
Strategy positions
Governance first (endorses)
Lead with regulation, treaties, liability regimes.
Argues dataset-level audits are the tractable governance lever and that 'AGI' rhetoric is harmful to minoritised users.
The dataset is the system. Audit the dataset.
Closest strategy neighbours
By Jaccard overlap. Other people whose strategy tags overlap with Abeba Birhane's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
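The neighbour ranking above is by Jaccard overlap on strategy-tag sets. A minimal sketch of that measure, with purely hypothetical tag sets for illustration (the real tags and neighbours live in the board's grid):

```python
def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity: |A ∩ B| / |A ∪ B|. Defined as 1.0 when both sets are empty."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical strategy tags, for illustration only.
birhane_tags = {"governance-first", "dataset-audits"}
other_tags = {"governance-first", "compute-caps"}

# One shared tag out of three distinct tags overall.
print(jaccard(birhane_tags, other_tags))  # → 0.3333333333333333
```

Because the overlap counts tag identity only, two people who both carry the tag "governance-first" score as neighbours even if one endorses the strategy and the other disputes it.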
Record last updated 2026-04-24.