person
Kyle Mahowald
UT Austin; LLMs as not-quite-thought experiments
UT Austin linguistics professor whose 2023 paper 'Dissociating language and thought in large language models' became a key reference for understanding the gap between LLM language fluency and reasoning competence.
Current: Assistant Professor of Linguistics, University of Texas at Austin
Strategy positions
AI skeptic: mixed
AGI risk narratives overstated; real harms are mundane and current.
Argues LLMs are excellent at the formal patterns of language but unevenly competent at the functional reasoning behind it; pushes back on conflating fluency with thinking.
We argue that LLMs are good at formal linguistic competence but inconsistent at functional linguistic competence: the latter requires more than next-token prediction.
Closest strategy neighbours
By Jaccard overlap.
Other people whose strategy tags overlap with Kyle Mahowald's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
Record last updated 2026-04-25.