Understanding Moment‐to‐Moment Processing of Visual Narratives

Published on November 16, 2018

Abstract
What role do moment-to-moment comprehension processes play in visual attentional selection in picture stories? The current work uniquely tested the effect of bridging inference generation on eye movements while participants viewed picture stories, probing specific components of the Scene Perception and Event Comprehension Theory (SPECT). Bridging inference generation was induced by manipulating the presence of highly inferable actions embedded in picture stories. When inferable actions are missing, participants show increased viewing times for the immediately following critical image (Magliano, Larson, Higgs, & Loschky, 2016). This study used eye tracking to test competing hypotheses about the increased viewing time: (a) Computational Load: inference generation increases overall computational load, producing longer fixation durations; (b) Visual Search: inference generation guides eye movements to pick up inference-relevant information, producing more fixations. Participants had similar fixation durations across conditions, but they made more fixations while generating inferences, with the difference emerging from the fifth fixation onward. A follow-up hypothesis predicted that, when generating inferences, participants fixate scene regions important for generating the inference. A separate group of participants rated the inferential relevance of regions in the critical images, and these inferentially relevant regions predicted differences in other viewers' eye movements. Thus, viewers' event models in working memory affect visual attentional selection while viewing visual narratives.
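For readers less familiar with eye-tracking measures, the sketch below illustrates how the two competing hypotheses map onto different metrics: Computational Load predicts longer mean fixation durations on the critical image, while Visual Search predicts a greater number of fixations. This is a minimal illustrative sketch on simulated data, not the authors' analysis pipeline; the column names, values, and the simple between-condition t-tests are all assumptions for demonstration.

```python
# Illustrative sketch: operationalizing the two hypotheses as
# eye-tracking metrics. All data and names here are hypothetical.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)

# Simulated per-fixation records on the critical image:
# 'present' = inferable action shown, 'absent' = action omitted.
n = 400
fixations = pd.DataFrame({
    "participant": rng.integers(0, 40, n),
    "condition": rng.choice(["present", "absent"], n),
    "duration_ms": rng.normal(250, 40, n),  # single-fixation duration
})

# Aggregate per participant x condition: mean duration and fixation count.
agg = fixations.groupby(["participant", "condition"]).agg(
    mean_duration=("duration_ms", "mean"),
    n_fixations=("duration_ms", "size"),
).reset_index()

present = agg[agg.condition == "present"]
absent = agg[agg.condition == "absent"]

# Computational Load hypothesis: longer fixation DURATIONS when inferring.
t_dur, p_dur = stats.ttest_ind(absent.mean_duration, present.mean_duration)

# Visual Search hypothesis: MORE fixations when inferring.
t_cnt, p_cnt = stats.ttest_ind(absent.n_fixations, present.n_fixations)

print(f"fixation duration: t={t_dur:.2f}, p={p_dur:.3f}")
print(f"fixation count:    t={t_cnt:.2f}, p={p_cnt:.3f}")
```

On this operationalization, the study's pattern (similar durations, more fixations in the action-absent condition) would show a null effect on the duration metric but a reliable effect on the count metric, favoring the Visual Search account.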

Read Full Article (External Site)