Our Brain Rhythms Reflect the “Little Voice in the Head”
Priscila Borges
Imagine for a moment:
...you’re reading a text
...you’re repeating the word “eggs” in your head while searching for eggs in a supermarket
...you’re rehearsing a future phone conversation
...you’re silently saying to yourself “okay” before starting a difficult task
All of these examples refer to some form of “inner speech”, a term for the inner voice that most people hear in their mind’s ear from time to time. As it turns out, some people experience inner speech more often than others. Moreover, not all inner speech is alike: some of it resembles a telegraphic code, for example, while some takes the form of full grammatical sentences. In a previous study, we found that these individual differences matter for how quickly people can identify and compare words and objects.
In our latest study, we looked at the consequences of inner speech traits for patterns of neural activity during visual word and object recognition. For that, we used electroencephalography (EEG), a neuroimaging method that captures electrical brain activity with high temporal precision. The EEG data was recorded while participants performed a picture verification task: they first saw a word and then a picture, and indicated whether the two matched.
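As a rough illustration of the task logic, here is a minimal sketch of how such word-picture trials might be assembled. The stimulus items, the matching proportion, and the helper names are assumptions made for illustration, not the study’s actual materials or design.

```python
import random
from dataclasses import dataclass

@dataclass
class Trial:
    word: str      # first stimulus: a written word cue
    picture: str   # second stimulus: a picture of an object (named here for simplicity)
    match: bool    # ground truth: do the word and the picture refer to the same object?

# Hypothetical stimulus set; the real items are not listed in this post.
OBJECTS = ["cat", "dog", "egg", "phone"]

def make_trials(n_trials: int, p_match: float = 0.5) -> list[Trial]:
    """Build a randomized list of word-picture verification trials."""
    trials = []
    for _ in range(n_trials):
        word = random.choice(OBJECTS)
        if random.random() < p_match:
            picture, match = word, True   # matching trial: picture shows the cued object
        else:
            picture = random.choice([o for o in OBJECTS if o != word])
            match = False                 # mismatching trial: picture shows a different object
        trials.append(Trial(word, picture, match))
    return trials
```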
We found that rhythmic brain activity in the frequency windows around 8-12 Hz (“alpha”) and 13-20 Hz (“beta”) varied with participants’ self-reported proneness to inner speech. Specifically, people who reported experiencing inner speech more often showed stronger alpha and beta activity after seeing the word cues than people who were less inner-speech-prone. These results mirror patterns observed in previous EEG studies, in which researchers enhanced the influence of language on picture verification performance by presenting spoken words, rather than non-verbal sounds, as the first stimulus.
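For readers curious how band-limited activity of this kind is commonly quantified, here is a minimal sketch using Welch’s method from SciPy. The sampling rate, epoch length, and simulated signal are placeholder assumptions, not our actual recording parameters or analysis pipeline; in practice, band power would be computed per trial, channel, and participant before group comparisons.

```python
import numpy as np
from scipy.signal import welch

# Assumed parameters: 250 Hz sampling rate, a 2-second post-cue epoch from one channel.
fs = 250
rng = np.random.default_rng(0)
eeg = rng.standard_normal(2 * fs)  # simulated stand-in for a real post-cue EEG epoch

# Estimate the power spectral density with Welch's method (1 Hz resolution here).
freqs, psd = welch(eeg, fs=fs, nperseg=fs)

def band_power(freqs, psd, lo, hi):
    """Average spectral power within a frequency band (inclusive bounds, in Hz)."""
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

alpha = band_power(freqs, psd, 8, 12)   # alpha band, as in the study
beta = band_power(freqs, psd, 13, 20)   # beta band, as in the study
print(f"alpha power: {alpha:.4f}, beta power: {beta:.4f}")
```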
In this previous literature, the effects of language were explained by the label-feedback hypothesis (Lupyan, 2012). According to this hypothesis, linguistic information can influence which perceptual features are highlighted or suppressed when people process sensory stimuli. Specifically, language would make perception more “categorical”: it would make us more attentive to the visual and conceptual features that are most useful for identifying the object category being processed, while reducing the weight of features that are redundant for categorization. For example, when processing the word “cat”, features like “whiskers” would be activated more strongly than features like “four legs”, because only “whiskers” helps differentiate cats from similar animals belonging to different categories, such as dogs. In contrast, a non-verbal cue like the sound of a cat meowing would not have this constraining effect. This is because such cues always depend on the characteristics of a specific category exemplar, such as the size or the breed of the cat, whereas a word like “cat” is the same for all members of the “cat” category. Words would thus trigger more abstract, prototypical representations, which are more helpful in tasks where one needs to rapidly identify or compare different object categories.
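As a toy illustration of this “categorical warping”, the sketch below compares feature vectors for two animals under uniform feature weights and under weights that up-weight category-diagnostic features, in the spirit of the label-feedback hypothesis. The features and weight values are invented purely for illustration.

```python
import numpy as np

# Toy binary feature vectors: [whiskers, four_legs, fur, meows]
cat = np.array([1.0, 1.0, 1.0, 1.0])
dog = np.array([0.0, 1.0, 1.0, 0.0])

def similarity(a, b, weights):
    """Weighted cosine similarity between two feature vectors."""
    aw, bw = a * weights, b * weights
    return aw @ bw / (np.linalg.norm(aw) * np.linalg.norm(bw))

uniform = np.ones(4)                              # no label: all features weigh equally
label_feedback = np.array([2.0, 0.5, 0.5, 2.0])   # label "cat": diagnostic features up-weighted

print(similarity(cat, dog, uniform))          # higher: categories are harder to tell apart
print(similarity(cat, dog, label_feedback))   # lower: weighting "warps" them apart
```

Running this shows the similarity dropping (roughly 0.71 to 0.24) once diagnostic features are up-weighted, which is one way to picture how a label could make two similar categories easier to distinguish.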
Combined with the faster response times these participants showed in picture verification, the stronger alpha and beta activity in people with higher inner speech propensity suggests that the brain may use these rhythms to implement the kind of categorical warping of perception that is triggered by (inner) words, selectively activating the features that are most diagnostic of the object category being processed.