Eye movements and scanpaths in the perception of real-world scenes

Humphrey, Katherine Anne (2010) Eye movements and scanpaths in the perception of real-world scenes. PhD thesis, University of Nottingham.

Full text: PDF (13MB)

Abstract

The way we move our eyes when viewing a scene is not random, but is influenced by both bottom-up (low-level) and top-down (cognitive) factors. This thesis investigates not only what these influences are and how they affect eye movements, but more importantly how they interact with each other to guide visual perception of real-world scenes.

Experiments 1 and 2 show that the sequences of fixations and saccades - ‘scanpaths’ - generated when encoding a picture are replicated both during imagery and at recognition. Higher scanpath similarities at recognition suggest that low-level visual information plays an important role in guiding eye movements, yet the above-chance similarities at imagery argue against a purely bottom-up explanation and imply a link between eye movements and visual memory. This conclusion is supported by increased scanpath similarities when previously seen pictures are described from memory (Experiment 3).

When visual information is available, areas of high visual saliency attract attention and are fixated sooner than less salient regions. This effect, however, is reliably reduced when viewers possess top-down knowledge about the scene in the form of domain proficiency (Experiments 4-6). Enhanced memory, as well as higher scanpath similarity, for domain-specific pictures is found at recognition, and in the absence of visual information when previously seen pictures are described from memory, but not when they are simply imagined (Experiment 6). As well as this cognitive override of bottom-up saliency, domain knowledge also moderates the influence of top-down incongruence during scene perception (Experiment 7): object-intrinsic oddities are less likely to be fixated when participants view pictures containing other domain-relevant semantic information.

The finding that viewers fixate the most informative parts of a scene was extended to investigate the presence of social (people) and emotional information, both of which were found to enhance recognition memory (Experiments 8 and 9). However, the lack of a relationship between string similarity and accuracy when viewing ‘people’ pictures challenges the idea that the reproduction of eye movements alone is enough to create this memory advantage (Experiment 8). It is therefore likely that the semantically informative parts of a scene play a large role in guiding eye movements and enhancing memory for a scene. The processing of emotional features occurs at a very early stage of perception (even while they are still in the parafovea), but once fixated, only emotionally negative (not positive) features hold attention (Experiment 9). The presence of these emotionally negative features also reliably decreases the influence of saliency on eye movements. Lastly, Experiment 10 illustrates that although the fixation sequence is important for recognition memory, the influence of visually salient and semantically relevant parafoveal cues in real-world scenes reduces the necessity to fixate in the same order.
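The scanpath and string similarities referred to above are typically quantified with string-editing measures. As a rough illustration only (not necessarily the exact procedure used in the thesis), the sketch below encodes each fixation sequence as a string of grid-region labels and scores two scanpaths with a normalised Levenshtein (edit-distance) similarity; the grid size, label set, and function names are assumptions made for the example.

```python
# Hypothetical sketch: string-edit scanpath similarity.
# Fixations are mapped to labelled grid cells, so a scanpath becomes a string
# of region labels; two scanpaths are compared with normalised edit distance.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def encode_scanpath(fixations, img_w, img_h, grid=5):
    """Map (x, y) fixation coordinates to single-character grid-cell labels."""
    labels = "ABCDEFGHIJKLMNOPQRSTUVWXY"  # 5x5 grid -> 25 cells
    out = []
    for x, y in fixations:
        col = min(int(x / img_w * grid), grid - 1)
        row = min(int(y / img_h * grid), grid - 1)
        out.append(labels[row * grid + col])
    return "".join(out)

def scanpath_similarity(fix_a, fix_b, img_w, img_h):
    """Similarity in [0, 1]: 1 = identical label strings, 0 = maximally different."""
    sa = encode_scanpath(fix_a, img_w, img_h)
    sb = encode_scanpath(fix_b, img_w, img_h)
    if not sa and not sb:
        return 1.0
    return 1.0 - levenshtein(sa, sb) / max(len(sa), len(sb))

# Example: a scanpath recorded at encoding vs. the same picture at recognition.
encoding = [(100, 80), (400, 120), (420, 300), (150, 310)]
recognition = [(110, 90), (410, 130), (160, 320)]
print(scanpath_similarity(encoding, recognition, img_w=800, img_h=600))
```

A higher score indicates that the two viewings fixated similar regions in a similar order, which is the sense in which "scanpath similarity" is used in the abstract.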

Together, these experiments indicate that eye movements are driven neither by purely top-down nor purely bottom-up factors, but by a combination of the two, which interact to guide attention to the most relevant parts of the picture.

Item Type: Thesis (University of Nottingham only) (PhD)
Supervisors: Underwood, G.
Subjects: B Philosophy. Psychology. Religion > BF Psychology
Faculties/Schools: UK Campuses > Faculty of Science > School of Psychology
Item ID: 11651
Depositing User: EP, Services
Date Deposited: 19 Feb 2011 15:17
Last Modified: 19 Dec 2017 11:17
URI: https://eprints.nottingham.ac.uk/id/eprint/11651
