Abstract
How do comprehenders build up overall meaning representations of visual real-world events? This question was examined by recording event-related potentials (ERPs) while participants viewed short, silent movie clips depicting everyday events. In two experiments, presentation of contextually inappropriate information in the movie endings evoked an anterior negativity. This effect resembled the N400 component, whose amplitude has previously been reported to correlate inversely with the strength of the semantic relationship between the context and the eliciting stimulus in word and static-picture paradigms. However, a second, somewhat later ERP component, a posterior late positivity, was evoked specifically when target objects presented in the movie endings violated goal-related requirements of the action constrained by the scenario context (e.g., an electric iron, which lacks a sufficiently sharp edge, used in place of a knife in a bread-cutting scenario). These findings suggest that comprehension of the visual real world might be mediated by two neurophysiologically distinct semantic integration mechanisms. The first mechanism, reflected by the anterior N400-like negativity, maps the incoming information onto connections of varying strength between concepts in semantic memory. The second mechanism, reflected by the posterior late positivity, evaluates the incoming information against the discrete requirements of real-world actions. We suggest that there may be a tradeoff between these mechanisms in their utility for integrating information about people, objects, and actions during event comprehension, with the first mechanism better suited to familiar situations and the second better suited to novel situations.