Here is a bold claim: language may be less about thinking and more about a built-in brain API that translates thought into words. That is where the debate sharpens: is language a separate engine from thought, or the road map that thought travels on? For 15 years, MIT neuroscientist Ev Fedorenko has compiled evidence of a dedicated language network in the human brain, and her work reveals intriguing parallels to how large language models (LLMs) operate.
Introduction
Even in an era where large language models and AI chatbots are everywhere, the idea that fluent writing can emerge from an unfeeling machine still feels jarring. Many people equate finding the right words with thinking itself, so they assume language and thought are tightly entwined. Yet what if the brain harbors a system that behaves, in some ways, like an LLM—an essentially mindless language processor tucked inside the mind?
Before ChatGPT became a household name, Fedorenko began probing how language is wired in the adult brain. She describes a specialized network she calls the language network, which maps how words connect to meanings. Her findings suggest that humans may carry a biological version of an LLM—an unconscious engine that handles language input and output while the deeper thinking happens elsewhere.
“You can think of the language network as a set of pointers,” Fedorenko explains. “It’s like a map that points to where different kinds of meaning live in the brain. It’s basically a sophisticated parser that helps assemble ideas, and then the real thinking happens beyond its boundaries.”
Over the past decade and a half, Fedorenko has built a body of biological evidence for this language network in her MIT lab. Unlike an LLM, the human language network doesn’t produce coherent text by simply generating plausible word sequences; instead, it translates external stimuli (speech, writing, sign language) into representations of meaning stored elsewhere in the brain (including episodic memory and social knowledge, faculties that LLMs lack). The network itself isn’t expansive: if its tissue were gathered together, it would be roughly the size of a strawberry. Yet when it is damaged, the consequences are severe: language can break down even while other cognitive abilities stay intact, a condition known as aphasia.
Fedorenko’s fascination with language began early. Growing up in the Soviet Union in the 1980s, she was pushed by her mother to learn five languages in addition to Russian. Despite hardships tied to political upheaval and scarcity, she excelled academically and earned a full scholarship to Harvard. There, she initially pursued linguistics, then added psychology. “Linguistics classes were interesting but felt like puzzle-solving, not truly explaining how things work in reality,” she says.
Three years into her MIT graduate studies, Fedorenko pivoted toward neuroscience. She started collaborating with Nancy Kanwisher, who had identified the fusiform face area—a brain region specialized for recognizing faces. Fedorenko aimed to discover a language counterpart. The task was daunting; the field’s foundations seemed weak, and some peers were skeptical. Yet persistence paid off, and her research began to clarify how language-specific brain regions function.
In 2024, Fedorenko published a comprehensive Nature Reviews Neuroscience article defining the human language network as a “natural kind”: an integrated cluster of regions, exclusively specialized for language, present in every typical adult brain.
Quanta spoke with Fedorenko about how the language network resembles the digestive system, what is known about the language decoder, and whether the idea of an internal LLM holds up. The conversation has been condensed and edited for clarity.
What is the language network?
A core set of brain areas forms an interconnected system for computing linguistic structure. This network stores mappings between words and meanings, as well as the rules governing how words combine. When you learn a language, these mappings and rules are what you acquire, and they enable flexible use of that language: once you have them, you can translate thoughts into word sequences in any language you know.
This may sound abstract, but the language network is considered a physical, “natural kind”—a truly identifiable brain organ. For example, there are three frontal-language areas in most people, primarily on the left side. Additional language-related tissue sits along the middle temporal gyrus in the temporal lobe. Together, these regions constitute the core language network.
Several lines of evidence reveal the network’s unity. When people undergo functional MRI while processing language versus control tasks, these regions co-activate. Researchers have scanned roughly 1,400 individuals to build probabilistic maps showing where language-related tissue tends to lie. While exact anatomy varies between people, the general pattern remains consistent: some tissue in the left frontal and temporal regions reliably engages in linguistic computation.
How does this differ from other language-associated brain areas, like Broca’s area?
Broca’s area is controversial as a language center. It is better described as an articulatory-motor planning region: it helps coordinate the movements required to produce speech. It remains engaged when producing even nonsense words, indicating it does not itself generate linguistic content but translates language representations into motor commands. In other words, Broca’s area lies downstream of the language network, preparing the mouth and vocal apparatus for expression.
You’ve described language as not being the same as thought. If the language network isn’t producing speech and isn’t the seat of thought, what exactly is its role?
The language network acts as an interface between perceptual and motor systems and higher-level, abstract representations of meaning and reasoning. Language has two main functions: production and comprehension. In production, a fuzzy thought is matched with a bank of words, phrases, and grammatical rules to express the intended meaning, after which the motor system executes the utterance, writing, or signing. In comprehension, auditory or visual input is perceptually processed to extract a sequence of words, which the language network then parses, mapping familiar chunks to stored meanings. In both cases, the network stores and updates form-to-meaning mappings, enabling flexible use of language across different modalities and contexts.
Why does such a system exist? To convey thoughts to others. There is no telepathy here—language serves as a communicative bridge.
How deeply specialized is this brain system? Are there individual cells in the language network that respond to specific utterances, akin to concept cells in other brain regions?
The evidence suggests a distributed system, given language’s rich contextuality. Still, there may be cells that respond to particular language aspects. A UCLA preprint from Itzhak Fried’s group reports single-cell responses showing language-selective activity in both written and spoken forms, aligning with findings from fMRI and intracranial recordings. The language network would be the place to look for such cells.
What kinds of patterns does the brain learn in this network?
The language network operates at an abstract level comparable to the brain’s general object-recognition systems. It resembles higher-level visual areas, which store abstract representations such as object shapes or face templates, enabling recognition while remaining separate from real-world knowledge. Noam Chomsky’s famous example, “Colorless green ideas sleep furiously,” illustrates this distinction: the language network responds to such sentences much as it does to meaningful ones, because it processes structure and rules, not world knowledge. While the network handles language, it is not a fully contained world of meaning—it remains a relatively shallow system without full external grounding.
Is there really an internal LLM in every brain? Is the language network itself the same as an early LLM?
The analogy is strong: the language network shares many properties with early LLMs, learning regularities of language and how words relate to one another. Fluent-sounding speech can obscure a lack of coherence, similar to superficially convincing AI outputs. This observation holds even in fully healthy brains, where fluent language may mask gaps in deeper understanding.
Does this mean human language is produced by something mindless, like a chat model?
Even Fedorenko admits this is surprising. She initially expected language to be central to high-level thought, potentially relying on domain-general, hierarchical processors found in other cognitive domains such as mathematics or music. Instead, evidence accumulated over more than a decade shows that the language network is distinctly specialized for language. By 2011, it became clear that the network’s components are language-specific rather than broad, domain-general processors. Scientists have gradually updated their beliefs to align with these findings.
If this view holds, the brain hosts a specialized, somewhat modular language system that operates with remarkable fluency but relies on an increasingly abstract set of rules and representations. The comparison to LLMs is not about literal replication but about parallel structures: both systems build, manipulate, and apply patterns of language to connect form and meaning.