A core property of human semantic processing is the rapid, facilitatory influence of prior input on extracting the meaning of what comes next, even under conditions of minimal awareness. Previous work has shown a number of neurophysiological indices of this facilitation, but the mapping between time course and localization, critical for separating automatic semantic facilitation from other mechanisms, has thus far been unclear. In the current study, we used a multimodal imaging approach to isolate early, bottom-up effects of context on semantic memory, acquiring a combination of electroencephalography (EEG), magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI) measurements in the same individuals with a masked semantic priming paradigm. Across techniques, the results provide a strikingly convergent picture of early automatic semantic facilitation. Event-related potentials demonstrated early sensitivity to semantic association between 300 and 500 ms; MEG localized the differential neural response within this time window to the left anterior temporal cortex, and fMRI localized the effect more precisely to the left anterior superior temporal gyrus, a region previously implicated in semantic associative processing. However, fMRI diverged from early EEG/MEG measures in revealing semantic enhancement effects within frontal and parietal regions, perhaps reflecting downstream attempts to consciously access the semantic features of the masked prime. Together, these results provide strong evidence that automatic associative semantic facilitation is realized as reduced activity within the left anterior superior temporal cortex between 300 and 500 ms after a word is presented, and emphasize the importance of multimodal neuroimaging approaches in distinguishing the contributions of multiple regions to semantic processing.
Publications by Type: Journal Article
2013
When a word is preceded by a supportive context such as a semantically associated word or a strongly constraining sentence frame, the N400 component of the ERP is reduced in amplitude. An ongoing debate concerns the degree to which this reduction reflects a passive spread of activation across long-term semantic memory representations, as opposed to specific predictions about upcoming input. We addressed this question by embedding semantically associated prime-target pairs within an experimental context that encouraged prediction to a greater or lesser degree. The proportion of related items was used to manipulate the predictive validity of the prime for the target while holding semantic association constant. A semantic category probe detection task was used to encourage semantic processing and to preclude the need for a motor response on the trials of interest. A larger N400 reduction to associated targets was observed in the high than in the low relatedness proportion condition, consistent with the hypothesis that predictions about upcoming stimuli make a substantial contribution to the N400 effect. We also observed an earlier priming effect (205-240 msec) in the high-proportion condition, which may reflect facilitation due to form-based prediction. In summary, the results suggest that predictability modulates N400 amplitude to a greater degree than does the semantic content of the context.
Words that are semantically congruous with their preceding discourse context are easier to process than words that are semantically incongruous with their context. This facilitation of semantic processing is reflected by an attenuation of the N400 event-related potential (ERP). We asked whether this was true of emotional words in emotional contexts where discourse congruity was conferred through emotional valence. ERPs were measured as 24 participants read two-sentence scenarios with critical words that varied by emotion (pleasant, unpleasant, or neutral) and congruity (congruous or incongruous). Semantic predictability, constraint, and plausibility were comparable across the neutral and emotional scenarios. As expected, the N400 was smaller to neutral words that were semantically congruous (vs. incongruous) with their neutral discourse context. No such N400 congruity effect was observed on emotional words following emotional discourse contexts. Rather, the amplitude of the N400 was small to all emotional words (pleasant and unpleasant), regardless of whether their emotional valence was congruous with the valence of their emotional discourse context. However, consistent with previous studies, the emotional words produced a larger late positivity than did the neutral words. These data suggest that comprehenders bypassed deep semantic processing of valence-incongruous emotional words within the N400 time window, moving rapidly on to evaluate the words’ motivational significance.
2012
Just as syntax differentiates coherent sentences from scrambled word strings, the comprehension of sequential images must also use a cognitive system to distinguish coherent narrative sequences from random strings of images. We conducted experiments analogous to two classic studies of language processing to examine the contributions of narrative structure and semantic relatedness to processing sequential images. We compared four types of comic strips: (1) Normal sequences with both structure and meaning, (2) Semantic Only sequences (in which the panels were related to a common semantic theme, but had no narrative structure), (3) Structural Only sequences (narrative structure but no semantic relatedness), and (4) Scrambled sequences of randomly-ordered panels. In Experiment 1, participants monitored for target panels in sequences presented panel-by-panel. Reaction times were slowest to panels in Scrambled sequences, intermediate in both Structural Only and Semantic Only sequences, and fastest in Normal sequences. This suggests that both semantic relatedness and narrative structure offer advantages to processing. Experiment 2 measured ERPs to all panels across the whole sequence. The N300/N400 was largest to panels in both the Scrambled and Structural Only sequences, intermediate in Semantic Only sequences, and smallest in the Normal sequences. This implies that a combination of narrative structure and semantic relatedness can facilitate semantic processing of upcoming panels (as reflected by the N300/N400). Also, panels in the Scrambled sequences evoked a larger left-lateralized anterior negativity than panels in the Structural Only sequences. This localized effect was distinct from the N300/N400, and appeared despite the fact that these two sequence types were matched on local semantic relatedness between individual panels. These findings suggest that sequential image comprehension uses a narrative structure that may be independent of semantic relatedness. Altogether, we argue that the comprehension of visual narrative is guided by an interaction between structure and meaning.
We aimed to determine whether semantic relatedness between an incoming word and its preceding context can override expectations based on two types of stored knowledge: real-world knowledge about the specific events and states conveyed by a verb, and the verb’s broader selection restrictions on the animacy of its argument. We recorded event-related potentials on post-verbal Agent arguments as participants read and made plausibility judgments about passive English sentences. The N400 evoked by incoming animate Agent arguments that violated expectations based on real-world event/state knowledge was strongly attenuated when they were semantically related to the context. In contrast, semantic relatedness did not modulate the N400 evoked by inanimate Agent arguments that violated the preceding verb’s animacy selection restrictions. These findings suggest that, under these task and experimental conditions, semantic relatedness can facilitate processing of post-verbal animate arguments that violate specific expectations based on real-world event/state knowledge, but only when the semantic features of these arguments match the coarser-grained animacy restrictions of the verb. Animacy selection restriction violations also evoked a P600 effect, which was not modulated by semantic relatedness, suggesting that it was triggered by propositional impossibility. Together, these data indicate that the brain distinguishes between real-world event/state knowledge and animacy-based selection restrictions during online processing.
Active reading requires coordination between frequent eye movements (saccades) and short fixations in text. Yet, the impact of saccades on word processing remains unknown, as neuroimaging studies typically employ constant eye fixation. Here we investigate eye-movement effects on word recognition processes in healthy human subjects using anatomically constrained magnetoencephalography, psychophysical measurements, and saccade detection in real time. Word recognition was slower and brain responses were reduced to words presented early versus late after saccades, suggesting an overall transient impairment of word processing after eye movements. Response reductions occurred early in visual cortices and later in language regions, where they colocalized with repetition priming effects. Qualitatively similar effects occurred when words appeared early versus late after background movement that mimicked saccades, suggesting that retinal motion contributes to postsaccadic inhibition. Further, differences in postsaccadic and background-movement effects suggest that central mechanisms also contribute to postsaccadic modulation. Together, these results suggest a complex interplay between visual and central saccadic mechanisms during reading.
We measured Event-Related Potentials (ERPs) and naming times to picture targets preceded by masked words (stimulus onset asynchrony: 80 ms) that shared one of three different types of relationship with the names of the pictures: (1) Identity related, in which the prime was the name of the picture ("socks" - [picture of socks]), (2) Phonemic Onset related, in which the initial segment of the prime was the same as the name of the picture ("log" - [picture]), and (3) Semantically related, in which the prime was a co-category exemplar and associated with the name of the picture ("cake" - [picture]). Each type of related picture target was contrasted with an Unrelated picture target, resulting in a 3×2 design that crossed Relationship Type between the word and the target picture (Identity, Phonemic Onset, and Semantic) with Relatedness (Related and Unrelated). Modulation of the N400 component to related (versus unrelated) pictures was taken to reflect semantic processing at the interface between the picture’s conceptual features and its lemma, while naming times reflected the end product of all stages of processing. Both attenuation of the N400 and shorter naming times were observed to pictures preceded by Identity related (versus Unrelated) words. No ERP effects within 600 ms, but shorter naming times, were observed to pictures preceded by Phonemic Onset related (versus Unrelated) words. An attenuated N400 (electrophysiological semantic priming) but longer naming times (behavioral semantic interference) were observed to pictures preceded by Semantically related (versus Unrelated) words. These dissociations between ERP modulation and naming times suggest that (a) phonemic onset priming occurred late, during encoding of the articulatory response, and (b) semantic behavioral interference was not driven by competition at the lemma level of representation, but rather occurred at a later stage of production.
Accurately communicating self-relevant and emotional information is a vital function of language, but we have little idea about how these factors impact normal discourse comprehension. In an event-related potential (ERP) study, we fully crossed self-relevance and emotion in a discourse context. Two-sentence social vignettes were presented either in the third or the second person (previous work has shown that this influences the perspective from which mental models are built). ERPs were time-locked to a critical word toward the end of the second sentence, which was pleasant, neutral, or unpleasant (e.g., A man knocks on Sandra’s/your hotel room door. She/You see(s) that he has a gift/tray/gun in his hand.). We saw modulation of early components (P1, N1, and P2) by self-relevance, suggesting that a self-relevant context can lead to top-down attentional effects during early stages of visual processing. Unpleasant words evoked a larger late positivity than pleasant words, which evoked a larger positivity than neutral words, indicating that, regardless of self-relevance, emotional words are assessed as motivationally significant, triggering additional or deeper processing at post-lexical stages. Finally, self-relevance and emotion interacted on the late positivity: a larger late positivity was evoked by neutral words in self-relevant, but not in non-self-relevant, contexts. This may reflect prolonged attempts to disambiguate the emotional valence of ambiguous stimuli that are relevant to the self. More broadly, our findings suggest that the assessment of emotion and self-relevance are not independent, but rather that they interactively influence one another during word-by-word language comprehension.
2011
Animacy is known to play an important role in language processing and production, but debate remains as to how it exerts its effects: (1) through links to syntactic ordering, (2) through inherent differences between animate and inanimate entities in their salience/lexico-semantic accessibility, or (3) through links to specific thematic roles. We contrasted these three accounts in two event-related potential (ERP) experiments examining the processing of direct object arguments in simple English sentences. In Experiment 1, we found a larger N400 to animate than inanimate direct object arguments assigned the Patient role, ruling out the second account. In Experiment 2, we found no difference in the N400 evoked by animate direct object arguments assigned the Patient role (prototypically inanimate) and those assigned the Experiencer role (prototypically animate), ruling out the third account. We therefore suggest that animacy may impact processing through a direct link to syntactic linear ordering, at least on post-verbal arguments in English. We also examined processing on direct object arguments that violated the animacy-based selection restriction constraints of their preceding verbs. These violations evoked a robust P600, which was not modulated by thematic role assignment or reversibility, suggesting that the so-called semantic P600 is driven by overall propositional impossibility, rather than thematic role reanalysis.
This study examined neural activity associated with establishing causal relationships across sentences during on-line comprehension. ERPs were measured while participants read and judged the relatedness of three-sentence scenarios in which the final sentence was highly causally related, intermediately related, or causally unrelated to its context. Lexico-semantic co-occurrence was matched across the three conditions using Latent Semantic Analysis. Critical words in causally unrelated scenarios evoked a larger N400 than words in both highly causally related and intermediately related scenarios, regardless of whether they appeared before or at the sentence-final position. At midline sites, the N400 to intermediately related sentence-final words was attenuated to the same degree as that to highly causally related words, but otherwise the N400 to intermediately related words fell in between that evoked by highly causally related and causally unrelated words. No modulation of the late positivity/P600 component was observed across conditions. These results indicate that both simple and complex causal inferences can influence the earliest stages of semantically processing an incoming word. Further, they suggest that causal coherence, at the situation level, can influence incremental word-by-word discourse comprehension, even when semantic relationships between individual words are matched.