
Boaz Barak
Harvard; OpenAI safety; theoretical CS
Harvard theoretical CS professor on leave at OpenAI working on safety. Long-standing CS theorist whose recent posts have argued for taking AI safety problems seriously while criticizing parts of the doomer narrative.
Current: Researcher (on leave from Harvard), OpenAI; Gordon McKay Professor of Computer Science, Harvard University
Strategy positions
Alignment first (mixed)
Solve technical alignment before capability thresholds close. Argues that alignment is a real and tractable technical problem, that progress is faster than worst-case predictions assumed, and that the most useful safety work happens inside frontier labs.
"I joined OpenAI because I think the most interesting and important alignment research is happening on actual frontier models. Working from the outside has limits."
Closest strategy neighbours
By Jaccard overlap. Other people whose strategy tags overlap with Boaz Barak's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
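The neighbour ranking described above can be sketched as plain Jaccard overlap on tag sets. A minimal sketch, assuming tags are stored as string sets; the tag names below are hypothetical, not taken from this record:

```python
def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard overlap of two tag sets: |A ∩ B| / |A ∪ B|."""
    if not a and not b:
        return 0.0  # convention: two empty tag sets do not overlap
    return len(a & b) / len(a | b)

# Hypothetical strategy tags for illustration only.
barak_tags = {"alignment-first", "frontier-lab-research", "tractable-alignment"}
other_tags = {"alignment-first", "pause-advocacy", "tractable-alignment"}

print(jaccard(barak_tags, other_tags))  # 2 shared of 4 total -> 0.5
```

Because the measure compares tag identity only, two people tagged with the same strategy topics score as neighbours even if their stances on those topics are opposed, which is exactly the caveat noted above.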
Record last updated 2026-04-25.