Prediction
Language and thought dysfunction are central to the schizophrenia syndrome. They are evident in the major symptoms of psychosis itself, particularly as disorganized language output (positive thought disorder) and auditory verbal hallucinations (AVHs), and they also manifest as abnormalities in both high-level semantic and contextual processing and low-level perception. However, the literatures characterizing these abnormalities have largely been separate and have sometimes provided mutually exclusive accounts of aberrant language in schizophrenia. In this review, we propose that recent generative probabilistic frameworks of language processing can provide crucial insights that link these four lines of research. We first outline neural and cognitive evidence that real-time language comprehension and production normally involve internal generative circuits that propagate probabilistic predictions to perceptual cortices - predictions that are incrementally updated based on prediction error signals as new inputs are encountered. We then explain how disruptions to these circuits may compromise communicative abilities in schizophrenia by reducing the efficiency and robustness of both high-level language processing and low-level speech perception. We also argue that such disruptions may contribute to the phenomenology of thought-disordered speech and false perceptual inferences in the language system (i.e., AVHs). This perspective suggests a number of productive avenues for future research that may elucidate not only the mechanisms of language abnormalities in schizophrenia, but also promising directions for cognitive rehabilitation.
Prediction or expectancy is thought to play an important role in both music and language processing. However, prediction is currently studied independently in the two domains, limiting research on relations between predictive mechanisms in music and language. One limitation is a difference in how expectancy is quantified. In language, expectancy is typically measured using the cloze probability task, in which listeners are asked to complete a sentence fragment with the first word that comes to mind. In contrast, previous production-based studies of melodic expectancy have asked participants to sing continuations following only one to two notes. We have developed a melodic cloze probability task in which listeners are presented with the beginning of a novel tonal melody (5-9 notes) and are asked to sing the note they expect to come next. Half of the melodies had an underlying harmonic structure designed to constrain expectations for the next note, based on an implied authentic cadence (AC) within the melody. Each such ‘authentic cadence’ melody was paired with a ‘non-cadential’ (NC) melody matched in length, rhythm, and melodic contour, but differing in implied harmonic structure. On average, participants showed much greater consistency in the notes sung following AC than NC melodies. However, significant variation in degree of consistency was observed within both AC and NC melodies. Analysis of individual melodies suggests that pitch prediction in tonal melodies depends on the interplay of local factors just prior to the target note (e.g., local pitch interval patterns) and larger-scale structural relationships (e.g., melodic patterns and implied harmonic structure). We illustrate how the melodic cloze method can be used to test a computational model of melodic expectation. Future uses for the method include exploring the interplay of different factors shaping melodic expectation, and designing experiments that compare the cognitive mechanisms of prediction in music and language.
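The melodic cloze measure can be made concrete with a small worked example. The sketch below uses hypothetical data and function names (not taken from the study) to show how response consistency might be quantified: the cloze probability of each sung continuation is the proportion of participants producing it, and the probability of the modal response serves as a simple consistency index.

```python
# Hypothetical sketch: computing cloze probabilities from continuation responses.
# The data and function names are invented for illustration only.
from collections import Counter

def cloze_probabilities(responses):
    """Return the proportion of participants producing each continuation."""
    counts = Counter(responses)
    total = len(responses)
    return {item: n / total for item, n in counts.items()}

# Sung continuations (coded as scale degrees) for one melody of each type.
ac_melody_responses = ["1", "1", "1", "1", "7", "1", "1", "2", "1", "1"]
nc_melody_responses = ["5", "3", "1", "6", "5", "2", "3", "5", "1", "4"]

for label, resp in [("AC", ac_melody_responses), ("NC", nc_melody_responses)]:
    probs = cloze_probabilities(resp)
    modal_prob = max(probs.values())  # consistency = probability of the modal response
    print(label, probs, "modal response probability:", round(modal_prob, 2))
```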
In two event-related potential experiments, we asked whether comprehenders used the concessive connective, even so, to predict upcoming events. Participants read coherent and incoherent scenarios, with and without even so, e.g. ‘Elizabeth had a history exam on Monday. She took the test and aced/failed it. (Even so), she went home and celebrated wildly’, as they rated coherence (Experiment 1) or simply answered intermittent comprehension questions (Experiment 2). The semantic function of even so was used to reverse real-world knowledge predictions, leading to an attenuated N400 to coherent versus incoherent target words (‘celebrated’). Moreover, its pragmatic communicative function enhanced predictive processing, leading to more N400 attenuation to coherent targets in scenarios with than without even so. This benefit, however, did not come for free: the detection of failed event predictions triggered a later posterior positivity and/or an anterior negativity effect.
Probabilistic prediction plays a crucial role in language comprehension. When predictions are fulfilled, the resulting facilitation allows for fast, efficient processing of ambiguous, rapidly-unfolding input; when predictions are not fulfilled, the resulting error signal allows us to adapt to broader statistical changes in this input. We used functional Magnetic Resonance Imaging to examine the neuroanatomical networks engaged in semantic predictive processing and adaptation. We used a relatedness proportion semantic priming paradigm, in which we manipulated the probability of predictions while holding local semantic context constant. Under conditions of higher (versus lower) predictive validity, we replicate previous observations of reduced activity to semantically predictable words in the left anterior superior/middle temporal cortex, reflecting facilitated processing of targets that are consistent with prior semantic predictions. In addition, under conditions of higher (versus lower) predictive validity we observed significant differences in the effects of semantic relatedness within the left inferior frontal gyrus and the posterior portion of the left superior/middle temporal gyrus. We suggest that together these two regions mediated the suppression of unfulfilled semantic predictions and lexico-semantic processing of unrelated targets that were inconsistent with these predictions. Moreover, under conditions of higher (versus lower) predictive validity, a functional connectivity analysis showed that the left inferior frontal and left posterior superior/middle temporal gyrus were more tightly interconnected with one another, as well as with the left anterior cingulate cortex. The left anterior cingulate cortex was, in turn, more tightly connected to superior lateral frontal cortices and subcortical regions, a network that mediates rapid learning and adaptation and that may have played a role in switching to a more predictive mode of processing in response to the statistical structure of the wider environmental context. Together, these findings highlight close links between the networks mediating semantic prediction, executive function and learning, giving new insights into how our brains are able to flexibly adapt to our environment.
We consider several key aspects of prediction in language comprehension: its computational nature, the representational level(s) at which we predict, whether we use higher level representations to predictively pre-activate lower level representations, and whether we ‘commit’ in any way to our predictions, beyond pre-activation. We argue that the bulk of behavioral and neural evidence suggests that we predict probabilistically and at multiple levels and grains of representation. We also argue that we can, in principle, use higher level inferences to predictively pre-activate information at multiple lower representational levels. We also suggest that the degree and level of predictive pre-activation might be a function of the expected utility of prediction, which, in turn, may depend on comprehenders’ goals and their estimates of the relative reliability of their prior knowledge and the bottom-up input. Finally, we argue that all these properties of language understanding can be naturally explained and productively explored within a multi-representational hierarchical actively generative architecture whose goal is to infer the message intended by the producer, and in which predictions play a crucial role in explaining the bottom-up input.
Since the early 2000s, several ERP studies have challenged the assumption that we always use syntactic contextual information to influence semantic processing of incoming words, as reflected by the N400 component. One approach for explaining these findings is to posit distinct semantic and syntactic processing mechanisms, each with distinct time courses. While this approach can explain specific datasets, it cannot account for the wider body of findings. I propose an alternative explanation: a dynamic generative framework in which our goal is to infer the underlying event that best explains the set of inputs encountered at any given time. Within this framework, combinations of semantic and syntactic cues with varying reliabilities are used as evidence to weight probabilistic hypotheses about this event. I further argue that the computational principles of this framework can be extended to understand how we infer situation models during discourse comprehension, and intended messages during spoken communication.
Although there is broad agreement that top-down expectations can facilitate lexical-semantic processing, the mechanisms driving these effects are still unclear. In particular, while previous electroencephalography (EEG) research has demonstrated a reduction in the N400 response to words in a supportive context, it is often challenging to dissociate facilitation due to bottom-up spreading activation from facilitation due to top-down expectations. The goal of the current study was to specifically determine the cortical areas associated with facilitation due to top-down prediction, using magnetoencephalography (MEG) recordings supplemented by EEG and functional magnetic resonance imaging (fMRI) in a semantic priming paradigm. In order to modulate expectation processes while holding context constant, we manipulated the proportion of related pairs across 2 blocks (10 and 50% related). Event-related potential results demonstrated a larger N400 reduction when a related word was predicted, and MEG source localization of activity in this time-window (350-450 ms) localized the differential responses to left anterior temporal cortex. fMRI data from the same participants support the MEG localization, showing contextual facilitation in left anterior superior temporal gyrus for the high expectation block only. Together, these results provide strong evidence that facilitatory effects of lexical-semantic prediction on the electrophysiological response 350-450 ms postonset reflect modulation of activity in left anterior temporal cortex.
The extent to which language processing involves prediction of upcoming inputs remains a question of ongoing debate. One important data point comes from DeLong et al. (2005) who reported that an N400-like event-related potential correlated with a probabilistic index of upcoming input. This result is often cited as evidence for gradient probabilistic prediction of form and/or semantics, prior to the bottom-up input becoming available. However, a recent multi-lab study reports a failure to find these effects (Nieuwland et al., 2017). We review the evidence from both studies, including differences in the design and analysis approach between them. Building on over a decade of research on prediction since DeLong et al. (2005)’s original study, we also begin to spell out the computational nature of predictive processes that one might expect to correlate with ERPs that are evoked by a functional element whose form is dependent on an upcoming predicted word. For paradigms with this type of design, we propose an index of anticipatory processing, Bayesian surprise, and apply it to the updating of semantic predictions. We motivate this index both theoretically and empirically. We show that, for studies of the type discussed here, Bayesian surprise can be closely approximated by another, more easily estimated information theoretic index, the surprisal (or Shannon information) of the input. We re-analyze the data from Nieuwland and colleagues using surprisal rather than raw probabilities as an index of prediction. We find that surprisal is gradiently correlated with the amplitude of the N400, even in the data shared by Nieuwland and colleagues. Taken together, our review suggests that the evidence from both studies is compatible with anticipatory semantic processing. We do, however, emphasize the need for future studies to further clarify the nature and degree of form prediction, as well as its neural signatures, during language comprehension.
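To make the two information-theoretic indices concrete, the sketch below (with invented probabilities, not the reported analysis) computes the surprisal of an input and the Bayesian surprise of the corresponding belief update for a toy case in which an article rules out some candidate nouns. When the posterior is simply the prior renormalized over the remaining candidates, the two quantities coincide, illustrating why surprisal can closely approximate Bayesian surprise in designs of this type.

```python
# Minimal sketch (not the authors' code) of two indices discussed above:
# the surprisal of an input and the Bayesian surprise (KL divergence between
# prior and posterior beliefs) of the associated prediction update.
# The toy probabilities are invented for illustration.
import math

def surprisal(p):
    """Shannon information of an input with probability p (in bits)."""
    return -math.log2(p)

def bayesian_surprise(prior, posterior):
    """KL divergence D(posterior || prior), in bits."""
    return sum(q * math.log2(q / prior[w]) for w, q in posterior.items() if q > 0)

# Prior beliefs about the upcoming noun in a constraining context
# (probabilities are hypothetical).
prior = {"kite": 0.80, "airplane": 0.15, "balloon": 0.05}

# Encountering the article "an" rules out consonant-initial candidates;
# the posterior is the prior renormalized over the remaining candidates.
consistent = {w: p for w, p in prior.items() if w[0] in "aeiou"}
z = sum(consistent.values())
posterior = {w: consistent.get(w, 0.0) / z for w in prior}

p_an = z  # probability of encountering "an" under the prior
print("surprisal of 'an':", round(surprisal(p_an), 2), "bits")
print("Bayesian surprise:", round(bayesian_surprise(prior, posterior), 2), "bits")
```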
Introduction: Lexico-semantic disturbances are considered central to schizophrenia. Clinically, their clearest manifestation is in language production. However, most studies probing their underlying mechanisms have used comprehension or categorization tasks. Here, we probed automatic semantic activity prior to language production in schizophrenia using event-related potentials (ERPs). Methods: 19 people with schizophrenia and 16 demographically-matched healthy controls named target pictures that were very quickly preceded by masked prime words. To probe automatic semantic activity prior to production, we measured the N400 ERP component evoked by these targets. To determine the origin of any automatic semantic abnormalities, we manipulated the type of relationship between prime and target such that they overlapped in (a) their semantic features (semantically related, e.g. “cake” preceding a <picture of a pie>), (b) their initial phonemes (phonemically related, e.g. “stomach” preceding a <picture of a starfish>), or (c) both their semantic features and their orthographic/phonological word form (identity related, e.g. “socks” preceding a <picture of socks>). For each of these three types of relationship, the same targets were paired with unrelated prime words (counterbalanced across lists). We contrasted ERPs and naming times to each type of related target with its corresponding unrelated target. Results: People with schizophrenia showed abnormal N400 modulation prior to naming identity related (versus unrelated) targets: whereas healthy control participants produced a smaller amplitude N400 to identity related than unrelated targets, patients showed the opposite pattern, producing a larger N400 to identity related than unrelated targets. This abnormality was specific to the identity related targets. Just like healthy control participants, people with schizophrenia produced a smaller N400 to semantically related than to unrelated targets, and showed no difference in the N400 evoked by phonemically related and unrelated targets. There were no differences between the two groups in the pattern of naming times across conditions. Conclusion: People with schizophrenia can show abnormal neural activity associated with automatic semantic processing prior to language production. The specificity of this abnormality to the identity related targets suggests that, rather than arising from abnormalities of either semantic features or lexical form alone, it may stem from disruptions of mappings (connections) between the meanings of words and their form.
Background: People with schizophrenia process language in unusual ways, but the causes of these abnormalities are unclear. In particular, it has proven difficult to empirically disentangle explanations based on impairments in the top-down processing of higher-level information from those based on the bottom-up processing of lower-level information. Methods: To distinguish these accounts, we used visual world eye-tracking, a paradigm that measures spoken language processing during real-world interactions. Participants listened to and then acted out syntactically ambiguous spoken instructions (e.g., “tickle the frog with the feather”, which could either specify how to tickle a frog, or which frog to tickle). We contrasted how 24 people with schizophrenia and 24 demographically-matched controls used two types of lower-level information (prosody and lexical representations) and two types of higher-level information (pragmatic and discourse-level representations) to resolve the ambiguous meanings of these instructions. Eye-tracking allowed us to assess how participants arrived at their interpretation in real time, while recordings of participants’ actions measured how they ultimately interpreted the instructions. Results: We found a striking dissociation in participants’ eye movements: the two groups were similarly adept at using lower-level information to immediately constrain their interpretations of the instructions, but only controls showed evidence of fast top-down use of higher-level information. People with schizophrenia, nonetheless, did eventually reach the same interpretations as controls. Conclusions: These data suggest that language abnormalities in schizophrenia partially result from a failure to use higher-level information in a top-down fashion, to constrain the interpretation of language as it unfolds in real time.
We used Magnetoencephalography (MEG) in combination with Representational Similarity Analysis to probe neural activity associated with distinct, item-specific lexico-semantic predictions during language comprehension. MEG activity was measured as participants read highly constraining sentences in which the final words could be predicted. Before the onset of the predicted words, both the spatial and temporal patterns of brain activity were more similar when the same words were predicted than when different words were predicted. The temporal patterns localized to the left inferior and medial temporal lobe. These findings provide evidence that unique spatial and temporal patterns of neural activity are associated with item-specific lexico-semantic predictions. We suggest that the unique spatial patterns reflected the prediction of spatially distributed semantic features associated with the predicted word, and that the left inferior/medial temporal lobe played a role in temporally “binding” these features, giving rise to unique lexico-semantic predictions.
When semantic information is activated by a context prior to new bottom-up input (i.e. when a word is predicted), semantic processing of that incoming word is typically facilitated, attenuating the amplitude of the N400 event-related potential (ERP) – a direct neural measure of semantic processing. N400 modulation is observed even when the context is a single semantically related “prime” word. This so-called “N400 semantic priming effect” is sensitive to the probability of encountering a related prime-target pair within an experimental block, suggesting that participants may be adapting the strength of their predictions to the predictive validity of their broader experimental environment. We formalize this adaptation using a Bayesian learning model that estimates and updates the probability of encountering a related versus an unrelated prime-target pair on each successive trial. We found that our model’s trial-by-trial estimates of target word probability accounted for significant variance in the amplitude of the N400 evoked by target words. These findings suggest that Bayesian principles contribute to how comprehenders adapt their semantic predictions to the statistical structure of their broader environment, with implications for the functional significance of the N400 component and the predictive nature of language processing.
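A minimal sketch of this kind of trial-by-trial estimation is shown below, assuming a simple Beta-Bernoulli learner rather than the authors' exact model; all trial sequences and parameter values are invented for illustration.

```python
# Illustrative sketch (not the published model): Bayesian trial-by-trial updating
# of the estimated probability that a prime-target pair is related, using a
# Beta-Bernoulli learner with invented parameters and trial sequences.

def beta_bernoulli_estimates(trials, alpha=1.0, beta=1.0):
    """Return the predicted probability of a related pair *before* each trial,
    updating a Beta(alpha, beta) belief after each observed outcome
    (1 = related pair, 0 = unrelated pair)."""
    estimates = []
    for outcome in trials:
        estimates.append(alpha / (alpha + beta))  # prediction prior to the trial
        alpha += outcome                          # update counts after observation
        beta += 1 - outcome
    return estimates

# A block with mostly related pairs followed by a block with mostly unrelated pairs.
trials = [1, 1, 0, 1, 1, 1, 0, 1] + [0, 0, 1, 0, 0, 0, 1, 0]
for t, p in enumerate(beta_bernoulli_estimates(trials), start=1):
    print(f"trial {t:2d}: predicted P(related) = {p:.2f}")
```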
It has been proposed that hierarchical prediction is a fundamental computational principle underlying neurocognitive processing. Here we ask whether the brain engages distinct neurocognitive mechanisms in response to inputs that fulfill versus violate strong predictions at different levels of representation during language comprehension. Participants read three-sentence scenarios in which the third sentence constrained for a broad event structure, e.g. Agent caution animate-Patient. High constraint contexts additionally constrained for a specific event/lexical item, e.g. a two-sentence context about a beach, lifeguards and sharks constrained for the event, Lifeguards cautioned Swimmers and the specific lexical item, “swimmers”. Low constraint contexts did not constrain for any specific event/lexical item. We measured ERPs on critical nouns that fulfilled and/or violated each of these constraints. We found clear, dissociable effects to fulfilled semantic predictions (a reduced N400), to event/lexical prediction violations (an increased late frontal positivity), and to event structure/animacy prediction violations (an increased late posterior positivity/P600). We argue that the late frontal positivity reflects a large change in activity associated with successfully updating the comprehender’s current situation model with new unpredicted information. We suggest that the late posterior positivity/P600 is triggered when the comprehender detects a conflict between the input and her model of the communicator and communicative environment. This leads to an initial failure to incorporate the unpredicted input into the situation model, which may be followed by second-pass attempts to make sense of the discourse through reanalysis, repair, or reinterpretation. Together, these findings provide strong evidence that confirmed and violated predictions at different levels of representation manifest as distinct spatiotemporal neural signatures.
During language comprehension, online neural processing is strongly influenced by the constraints of the prior context. While the N400 ERP response (300-500ms) is known to be sensitive to a word’s semantic predictability, less is known about a set of late positive-going ERP responses (600-1000ms) that can be elicited when an incoming word violates strong predictions about upcoming content (late frontal positivity) or about what is possible given the prior context (late posterior positivity/P600). Across three experiments, we systematically manipulated the length of the prior context and the source of lexical constraint to determine their influence on comprehenders’ online neural responses to these two types of prediction violations. In Experiment 1, within minimal contexts, both lexical prediction violations and semantically anomalous words produced a larger N400 than expected continuations (James unlocked the door/laptop/gardener), but no late positive effects were observed. Critically, the late posterior positivity/P600 to semantic anomalies appeared when these same sentences were embedded within longer discourse contexts (Experiment 2a), and the late frontal positivity appeared to lexical prediction violations when the preceding context was rich and globally constraining (Experiment 2b). We interpret these findings within a hierarchical generative framework of language comprehension. This framework highlights the role of comprehension goals and broader linguistic context, and how these factors influence both top-down prediction and the decision to update or reanalyze the prior context when these predictions are violated.
It has been proposed that people can generate probabilistic predictions at multiple levels of representation during language comprehension. We used magnetoencephalography (MEG) and electroencephalography (EEG), in combination with representational similarity analysis, to seek neural evidence for the prediction of animacy features. In two studies, MEG and EEG activity was measured as human participants (both sexes) read three-sentence scenarios. Verbs in the final sentences constrained for either animate or inanimate semantic features of upcoming nouns, and the broader discourse context constrained for either a specific noun or for multiple nouns belonging to the same animacy category. We quantified the similarity between spatial patterns of brain activity following the verbs until just before the presentation of the nouns. The MEG and EEG datasets revealed converging evidence that the similarity between spatial patterns of neural activity following animate-constraining verbs was greater than following inanimate-constraining verbs. This effect could not be explained by lexical-semantic processing of the verbs themselves. We therefore suggest that it reflected the inherent difference in the semantic similarity structure of the predicted animate and inanimate nouns. Moreover, the effect was present regardless of whether a specific word could be predicted, providing strong evidence for the prediction of coarse-grained semantic features that goes beyond the prediction of individual words.
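The core spatial similarity computation can be sketched as follows, using invented sensor-level data rather than the recorded MEG/EEG: the spatial pattern of activity on each trial is correlated with the pattern on every other trial within a condition, and the average pairwise correlation indexes how similar the patterns are following animate-constraining versus inanimate-constraining verbs.

```python
# Schematic sketch of a spatial representational similarity analysis, assuming
# hypothetical sensor-level data (one value per sensor per trial at a given
# time point). The data and effect structure are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_sensors = 40, 64

# Trials following animate-constraining verbs share a common spatial component,
# which should raise their pairwise pattern similarity.
shared_animate_pattern = rng.normal(size=n_sensors)
animate = rng.normal(size=(n_trials, n_sensors)) + shared_animate_pattern
inanimate = rng.normal(size=(n_trials, n_sensors))

def mean_pairwise_similarity(patterns):
    """Average Pearson correlation across all pairs of trial patterns."""
    corr = np.corrcoef(patterns)                      # trials x trials matrix
    upper = corr[np.triu_indices(len(patterns), k=1)] # unique trial pairs
    return upper.mean()

print("animate-constraining similarity:  ", round(mean_pairwise_similarity(animate), 3))
print("inanimate-constraining similarity:", round(mean_pairwise_similarity(inanimate), 3))
```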
To make sense of the world around us, we must be able to segment a continual stream of sensory inputs into discrete events. In this review, I propose that in order to comprehend events, we engage hierarchical generative models that “reverse engineer” the intentions of other agents as they produce sequential action in real time. By generating probabilistic predictions for upcoming events, generative models ensure that we are able to keep up with the rapid pace at which perceptual inputs unfold. By tracking our certainty about other agents’ goals and the magnitude of prediction errors at multiple temporal scales, generative models enable us to detect event boundaries by inferring when a goal has changed. Moreover, by adapting flexibly to the broader dynamics of the environment and our own comprehension goals, generative models allow us to optimally allocate limited resources. Finally, I argue that we use generative models not only to comprehend events but also to produce events (carry out goal-relevant sequential action) and to continually learn about new events from our surroundings. Taken together, this hierarchical generative framework provides new insights into how the human brain processes events so effortlessly while highlighting the fundamental links between event comprehension, production, and learning.
We used magnetoencephalography (MEG) and event-related potentials (ERPs) to track the time-course and localization of evoked activity produced by expected, unexpected plausible, and implausible words during incremental language comprehension. We suggest that the full pattern of results can be explained within a hierarchical predictive coding framework in which increased evoked activity reflects the activation of residual information that was not already represented at a given level of the fronto-temporal hierarchy (“error” activity). Between 300 and 500 ms, the three conditions produced progressively larger responses within left temporal cortex (lexico-semantic prediction error), whereas implausible inputs produced a selectively enhanced response within inferior frontal cortex (prediction error at the level of the event model). Between 600 and 1,000 ms, unexpected plausible words activated left inferior frontal and middle temporal cortices (feedback activity that produced top-down error), whereas highly implausible inputs activated left inferior frontal cortex, posterior fusiform (unsuppressed orthographic prediction error/reprocessing), and medial temporal cortex (possibly supporting new learning). Therefore, predictive coding may provide a unifying theory that links language comprehension to other domains of cognition.
The N400 event-related brain potential is elicited by each word in a sentence and offers an important window into the mechanisms of real-time language comprehension. Since the 1980s, studies investigating the N400 have expanded our understanding of how bottom-up linguistic inputs interact with top-down contextual constraints. More recently, a growing body of computational modeling research has aimed to formalize theoretical accounts of the N400 to better understand the neural and functional basis of this component. Here, we provide a comprehensive review of this literature. We discuss “word-level” models that focus on the N400’s sensitivity to lexical factors and simple priming manipulations, as well as more recent sentence-level models that explain its sensitivity to broader context. We discuss each model’s insights and limitations in relation to a set of cognitive and biological constraints that have informed our understanding of language comprehension and the N400 over the past few decades. We then review a novel computational model of the N400 that is based on the principles of predictive coding, which can accurately simulate both word-level and sentence-level phenomena. In this predictive coding account, the N400 is conceptualized as the magnitude of lexico-semantic prediction error produced by incoming words during the process of inferring their meaning. Finally, we highlight important directions for future research, including a discussion of how these computational models can be expanded to explain language-related ERP effects outside the N400 time window, and variation in N400 modulation across different populations.
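As a toy illustration of the central idea, rather than the published model itself, the sketch below treats the simulated N400 as the summed magnitude of the residual semantic features of an incoming word that were not already accounted for by the top-down prediction; feature vectors are randomly generated, and an unexpected but related word yields a smaller value than an unrelated one.

```python
# Toy sketch of the predictive coding account of the N400: the component is
# modeled as the magnitude of lexico-semantic prediction error, i.e. the part
# of the incoming word's semantic features not explained by the prediction.
# Feature vectors and values are invented for illustration.
import numpy as np

def n400_prediction_error(predicted_features, input_features):
    """Summed magnitude of the residual (unexplained) semantic features."""
    return np.sum(np.abs(input_features - predicted_features))

semantic_dim = 50
rng = np.random.default_rng(1)
expected_features = rng.random(semantic_dim)                               # predicted word
related_features = 0.6 * expected_features + 0.4 * rng.random(semantic_dim)  # related word
unrelated_features = rng.random(semantic_dim)                              # unrelated word

# Top-down prediction from a strongly constraining context: the expected word.
prediction = expected_features
for label, feats in [("expected", expected_features),
                     ("plausible but unexpected", related_features),
                     ("anomalous/unrelated", unrelated_features)]:
    print(f"{label:26s} simulated N400: {n400_prediction_error(prediction, feats):.2f}")
```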
During language comprehension, the processing of each incoming word is facilitated in proportion to its predictability. Here, we asked whether anticipated upcoming linguistic information is actually pre-activated before new bottom-up input becomes available, and if so, whether this pre-activation is limited to the level of semantic features, or whether it extends to representations of individual word-forms (orthography/phonology). We carried out Representational Similarity Analysis on EEG data while participants read highly constraining sentences. Prior to the onset of the expected target words, sentence pairs predicting semantically-related words (financial “bank” – “loan”) and form-related words (financial “bank” – river “bank”) produced more similar neural patterns than pairs predicting unrelated words (“bank” – “lesson”). This provides direct neural evidence for item-specific semantic and form predictive pre-activation. Moreover, the semantic pre-activation effect preceded the form pre-activation effect, suggesting that top-down pre-activation is propagated from higher to lower levels of the linguistic hierarchy over time.
To comprehend language, we continually use prior context to pre-activate expected upcoming information, resulting in facilitated processing of incoming words that confirm these predictions. But what are the consequences of disconfirming prior predictions? To address this question, most previous studies have examined unpredictable words appearing in contexts that constrain strongly for a single continuation. However, during natural language processing, it is far more common to encounter contexts that constrain for multiple potential continuations, each with some probability. Here, we ask whether and how pre-activating both higher and lower probability alternatives influences the processing of the lower probability incoming word. One possibility is that, similar to language production, there is continuous pressure to select the higher-probability pre-activated alternative through competitive inhibition. During comprehension, this would result in relative costs in processing the lower probability target. A second possibility is that if the two pre-activated alternatives share semantic features, they mutually enhance each other’s pre-activation. This would result in greater facilitation in processing the lower probability target. To distinguish between these accounts, we recorded ERPs as participants read three-sentence scenarios that constrained either for a single word or for two potential continuations – a higher probability expected candidate and a lower probability second-best candidate. We found no evidence that competitive pre-activation between the expected and second-best candidates resulted in costs in processing the second-best target, either during lexico-semantic processing (indexed by the N400) or at later stages of processing (indexed by a later frontal positivity). Instead, we found only benefits of pre-activating multiple alternatives, with evidence of enhanced graded facilitation on lower-probability targets that were semantically related to a higher-probability pre-activated alternative. These findings are consistent with a previous eye-tracking study by Luke and Christianson (2016, Cogn Psychol) using corpus-based materials. They have significant theoretical implications for models of predictive language processing, indicating that routine graded prediction in language comprehension does not operate through the same competitive mechanisms that are engaged in language production. Instead, our results align more closely with hierarchical probabilistic accounts of language comprehension, such as predictive coding.