Active reading requires coordination between frequent eye movements (saccades) and short fixations in text. Yet, the impact of saccades on word processing remains unknown, as neuroimaging studies typically employ constant eye fixation. Here we investigate eye-movement effects on word recognition processes in healthy human subjects using anatomically constrained magnetoencephalography, psychophysical measurements, and real-time saccade detection. Word recognition was slower, and brain responses were reduced, for words presented early versus late after saccades, suggesting an overall transient impairment of word processing after eye movements. Response reductions occurred early in visual cortices and later in language regions, where they colocalized with repetition priming effects. Qualitatively similar effects occurred when words appeared early versus late after background movement that mimicked saccades, suggesting that retinal motion contributes to postsaccadic inhibition. Further, differences between postsaccadic and background-movement effects suggest that central mechanisms also contribute to postsaccadic modulation. Together, these results suggest a complex interplay between visual and central saccadic mechanisms during reading.
A core property of human semantic processing is the rapid, facilitatory influence of prior input on extracting the meaning of what comes next, even under conditions of minimal awareness. Previous work has shown a number of neurophysiological indices of this facilitation, but the mapping between time course and localization, critical for separating automatic semantic facilitation from other mechanisms, has thus far been unclear. In the current study, we used a multimodal imaging approach to isolate early, bottom-up effects of context on semantic memory, acquiring a combination of electroencephalography (EEG), magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI) measurements in the same individuals with a masked semantic priming paradigm. Across techniques, the results provide a strikingly convergent picture of early automatic semantic facilitation. Event-related potentials demonstrated early sensitivity to semantic association between 300 and 500 ms; MEG localized the differential neural response within this time window to the left anterior temporal cortex, and fMRI localized the effect more precisely to the left anterior superior temporal gyrus, a region previously implicated in semantic associative processing. However, fMRI diverged from early EEG/MEG measures in revealing semantic enhancement effects within frontal and parietal regions, perhaps reflecting downstream attempts to consciously access the semantic features of the masked prime. Together, these results provide strong evidence that automatic associative semantic facilitation is realized as reduced activity within the left anterior superior temporal cortex between 300 and 500 ms after a word is presented, and emphasize the importance of multimodal neuroimaging approaches in distinguishing the contributions of multiple regions to semantic processing.
Although there is broad agreement that top-down expectations can facilitate lexical-semantic processing, the mechanisms driving these effects are still unclear. In particular, while previous electroencephalography (EEG) research has demonstrated a reduction in the N400 response to words in a supportive context, it is often challenging to dissociate facilitation due to bottom-up spreading activation from facilitation due to top-down expectations. The goal of the current study was to specifically determine the cortical areas associated with facilitation due to top-down prediction, using magnetoencephalography (MEG) recordings supplemented by EEG and functional magnetic resonance imaging (fMRI) in a semantic priming paradigm. In order to modulate expectation processes while holding context constant, we manipulated the proportion of related pairs across 2 blocks (10 and 50% related). Event-related potential results demonstrated a larger N400 reduction when a related word was predicted, and MEG source localization of activity in this time-window (350-450 ms) localized the differential responses to left anterior temporal cortex. fMRI data from the same participants support the MEG localization, showing contextual facilitation in left anterior superior temporal gyrus for the high expectation block only. Together, these results provide strong evidence that facilitatory effects of lexical-semantic prediction on the electrophysiological response 350-450 ms postonset reflect modulation of activity in left anterior temporal cortex.
We used magnetoencephalography (MEG) in combination with representational similarity analysis to probe neural activity associated with distinct, item-specific lexico-semantic predictions during language comprehension. MEG activity was measured as participants read highly constraining sentences in which the final words could be predicted. Before the onset of the predicted words, both the spatial and temporal patterns of brain activity were more similar when the same words were predicted than when different words were predicted. The temporal patterns localized to the left inferior and medial temporal lobe. These findings provide evidence that unique spatial and temporal patterns of neural activity are associated with item-specific lexico-semantic predictions. We suggest that the unique spatial patterns reflected the prediction of spatially distributed semantic features associated with the predicted word, and that the left inferior/medial temporal lobe played a role in temporally “binding” these features, giving rise to unique lexico-semantic predictions.
It has been proposed that people can generate probabilistic predictions at multiple levels of representation during language comprehension. We used magnetoencephalography (MEG) and electroencephalography (EEG), in combination with representational similarity analysis, to seek neural evidence for the prediction of animacy features. In two studies, MEG and EEG activity was measured as human participants (both sexes) read three-sentence scenarios. Verbs in the final sentences constrained for either animate or inanimate semantic features of upcoming nouns, and the broader discourse context constrained for either a specific noun or for multiple nouns belonging to the same animacy category. We quantified the similarity between spatial patterns of brain activity following the verbs until just before the presentation of the nouns. The MEG and EEG datasets revealed converging evidence that the similarity between spatial patterns of neural activity following animate-constraining verbs was greater than following inanimate-constraining verbs. This effect could not be explained by lexical-semantic processing of the verbs themselves. We therefore suggest that it reflected the inherent difference in the semantic similarity structure of the predicted animate and inanimate nouns. Moreover, the effect was present regardless of whether a specific word could be predicted, providing strong evidence for the prediction of coarse-grained semantic features that goes beyond the prediction of individual words.
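The spatial similarity analysis described in the two abstracts above can be illustrated with a minimal sketch: summarize each trial as one vector of sensor amplitudes at a given latency, then average the pairwise correlations of those vectors within a condition. This is an illustrative toy example with simulated data, not the authors' pipeline; the trial counts, sensor count, and noise levels are arbitrary assumptions.

```python
# Toy sketch of a spatial representational similarity analysis:
# trials whose spatial patterns share a common component (e.g., predicted
# semantic features) should show higher pattern-to-pattern correlation.
import numpy as np

rng = np.random.default_rng(0)

def mean_pairwise_similarity(patterns):
    """Average Pearson correlation over all distinct trial pairs.

    patterns: array of shape (n_trials, n_sensors).
    """
    n = patterns.shape[0]
    r = np.corrcoef(patterns)        # n_trials x n_trials correlation matrix
    iu = np.triu_indices(n, k=1)     # upper triangle, excluding the diagonal
    return r[iu].mean()

n_trials, n_sensors = 40, 64         # assumed sizes, for illustration only
shared = rng.normal(size=n_sensors)  # a common topography shared across trials

# Condition A: every trial contains the shared component plus noise.
cond_a = shared + 0.8 * rng.normal(size=(n_trials, n_sensors))
# Condition B: independent noise only, no shared component.
cond_b = rng.normal(size=(n_trials, n_sensors))

sim_a = mean_pairwise_similarity(cond_a)
sim_b = mean_pairwise_similarity(cond_b)
print(f"mean within-condition similarity: A={sim_a:.2f}, B={sim_b:.2f}")
```

In the studies above, the analogous contrast is between trials following animate-constraining versus inanimate-constraining verbs, with the higher within-condition similarity taken as evidence for shared predicted semantic features.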
We used magnetoencephalography (MEG) and event-related potentials (ERPs) to track the time course and localization of evoked activity produced by expected, unexpected plausible, and implausible words during incremental language comprehension. We suggest that the full pattern of results can be explained within a hierarchical predictive coding framework in which increased evoked activity reflects the activation of residual information that was not already represented at a given level of the fronto-temporal hierarchy (“error” activity). Between 300 and 500 ms, the three conditions produced progressively larger responses within left temporal cortex (lexico-semantic prediction error), whereas implausible inputs produced a selectively enhanced response within inferior frontal cortex (prediction error at the level of the event model). Between 600 and 1,000 ms, unexpected plausible words activated left inferior frontal and middle temporal cortices (feedback activity that produced top-down error), whereas highly implausible inputs activated left inferior frontal cortex, posterior fusiform (unsuppressed orthographic prediction error/reprocessing), and medial temporal cortex (possibly supporting new learning). Therefore, predictive coding may provide a unifying theory that links language comprehension to other domains of cognition.
We used MEG and EEG to examine the effects of Plausibility (anomalous vs. plausible) and Animacy (animate vs. inanimate) on activity to incoming words during language comprehension. We conducted univariate event-related and multivariate spatial similarity analyses on both datasets. The univariate and multivariate results converged in their time course and sensitivity to Plausibility. However, only the spatial similarity analyses detected effects of Animacy. The MEG and EEG findings largely converged between 300–500 ms, but diverged in their univariate and multivariate responses to anomalies between 600–1000 ms. We interpret the full set of results within a predictive coding framework. In addition to the theoretical significance, we discuss the methodological implications of the convergence and divergence between the univariate and multivariate results, as well as between the MEG and EEG results. We argue that a deeper understanding of language processing can be achieved by integrating different analysis approaches and techniques.