GRK 2340

Graduiertenkolleg "Computational Cognition"



Jasmin L. Walter

Room 50/207

Institute of Cognitive Science,
Wachsbleiche 27,
49090 Osnabrück, Germany

Supervisors

Peter König
Gordon Pipa
Tim Kietzmann

P1: Graph-theoretical analysis of eye tracking data recorded in complex VR cities to investigate spatial navigation

Investigating visual attention using eye tracking has been a cornerstone of efforts to better understand cognitive processes and mechanisms. With recent technical advances, mobile eye tracking systems now allow eye tracking experiments to be conducted both in the real world and in virtual reality (VR) environments. Especially in the field of spatial navigation research, there is a clear trend towards using more complex and naturalistic VR environments. However, eye tracking data recorded in 3D environments with freedom of movement pose challenges and require new analysis approaches. We therefore propose a new, data-driven method to quantify characteristics of visual behavior by applying a graph-theoretical analysis to eye tracking data recorded in a complex VR environment.
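
To make the approach concrete, the sketch below (in Python, using networkx) shows one way such a gaze graph could be built. Representing gaze events as an ordered list of viewed-house labels, and linking houses viewed in direct succession, are simplifying assumptions made for this illustration, not the project's exact implementation.

    import networkx as nx

    def build_gaze_graph(gaze_events):
        """Build an undirected gaze graph from an ordered list of viewed-house labels."""
        graph = nx.Graph()
        graph.add_nodes_from(set(gaze_events))
        # Link houses that were looked at in direct succession.
        for current, following in zip(gaze_events, gaze_events[1:]):
            if current != following:  # ignore repeated gazes on the same house
                graph.add_edge(current, following)
        return graph

    # Example with a short, made-up exploration sequence:
    events = ["house_03", "house_07", "house_03", "house_12", "house_07"]
    g = build_gaze_graph(events)
    print(g.number_of_nodes(), g.number_of_edges())  # -> 3 3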

In the first part of the project, we applied the new analysis approach to eye tracking data of 20 participants, recorded while they freely explored the virtual city Seahaven for 90 min. We proposed a 5-step pre-processing pipeline, including a cleaning and interpolation process, that ultimately converts the raw data into a new data form, "gaze events". Using these gaze events, we created gaze graphs, to which we applied graph-theoretical measures to assess global navigation characteristics across participants. Our results reveal a subset of houses that consistently stand out in their graph-theoretical properties and match several characteristics expected of landmarks; we therefore call these outstanding houses "gaze-graph-defined landmarks". Furthermore, we find that these gaze-graph-defined landmarks are preferentially connected to each other and that participants spent the majority of their experiment time in areas of the city where several gaze-graph-defined landmarks are visible.
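
As an illustration of how such outstanding houses could be flagged, the following sketch scores each node by its degree centrality and selects nodes far above the graph average. The choice of degree centrality and the mean-plus-two-standard-deviations threshold are assumptions made for this example, not necessarily the study's actual criteria.

    import networkx as nx
    import numpy as np

    def gaze_graph_defined_landmarks(graph, num_sd=2.0):
        """Flag nodes whose degree centrality lies far above the graph average."""
        centrality = nx.degree_centrality(graph)
        values = np.array(list(centrality.values()))
        threshold = values.mean() + num_sd * values.std()
        return [node for node, value in centrality.items() if value > threshold]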

As a second step in the project, we recorded data of 26 further participants, who spent a total of 150 min freely exploring a different virtual city called Westbrück. In this experiment, participants were also asked to complete a number of spatial navigation tasks at the end of their exploration phase. We are now investigating whether we can replicate the results of the Seahaven study. In addition, we are interested in participants' visual behavior during the exploration phase and how it relates to their performance in the spatial tasks; specifically, which characteristics of their gaze graphs predict their task performance.
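
A minimal sketch of this kind of analysis is shown below: per-participant gaze graphs are summarized by a few scalar features, which are then used to predict task scores. The particular features and the cross-validated linear regression are illustrative assumptions; the actual analysis of the Westbrück data may differ.

    import numpy as np
    import networkx as nx
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    def graph_features(graph):
        """Summarize one participant's gaze graph with a few scalar features."""
        degrees = [degree for _, degree in graph.degree()]
        return [graph.number_of_nodes(),
                graph.number_of_edges(),
                float(np.mean(degrees)),
                nx.density(graph)]

    def performance_prediction_scores(participant_graphs, task_scores):
        """Cross-validated R^2 for predicting task scores from gaze-graph features."""
        X = np.array([graph_features(g) for g in participant_graphs])
        y = np.array(task_scores)
        return cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2")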

Supervisor: Prof. Peter König

P2: Adapting the rational speech act framework for visual interaction

How do we understand each other in conversations? The rational speech act (RSA) framework proposes a formal model in which a speaker and a listener engage in a cooperative conversation and reason about each other's beliefs. In other words, both speaker and listener engage in recursive reasoning, i.e., the listener thinks about what the speaker thinks about what the listener thinks. While there is considerable empirical work showing that the RSA framework explains many linguistic phenomena, in everyday life we often communicate without using language.
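
The core of this recursion can be written down compactly. The sketch below implements the standard RSA recursion for a toy reference game: a literal listener L0 interprets utterances by their truth conditions, a pragmatic speaker S1 soft-maximizes informativity, and a pragmatic listener L1 reasons about S1. The objects, utterances, and rationality parameter are invented for illustration.

    import numpy as np

    objects = ["blue_square", "blue_circle", "green_square"]
    utterances = ["blue", "green", "square", "circle"]

    # Truth conditions: meaning[u][o] = 1 if utterance u is true of object o.
    meaning = np.array([[1, 1, 0],   # "blue"
                        [0, 0, 1],   # "green"
                        [1, 0, 1],   # "square"
                        [0, 1, 0]],  # "circle"
                       dtype=float)

    def normalize(matrix, axis):
        return matrix / matrix.sum(axis=axis, keepdims=True)

    alpha = 4.0                              # speaker rationality parameter
    L0 = normalize(meaning, axis=1)          # literal listener: P(object | utterance)
    S1 = normalize((L0 ** alpha).T, axis=1)  # pragmatic speaker: P(utterance | object)
    L1 = normalize(S1.T, axis=1)             # pragmatic listener: P(object | utterance)

    # Hearing "blue", L1 favors the blue square: a speaker who meant the blue
    # circle would have used the unambiguous "circle".
    print(np.round(L1[utterances.index("blue")], 2))  # -> [0.89 0.11 0.  ]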

In this project, we are interested in whether people use a similar recursive reasoning even when they interact without language. One major aspect of non-verbal communication is visual interaction and shared gaze. Indeed, infants develop the ability to follow a gaze, and thus initiate shared gaze, long before they are able to point and long before language develops. However, the eyes have a double functionality. On the one hand, eye movements are necessary to sample the environment and enable visual perception of the world. On the other hand, moving the eyes in certain ways is an essential part of social interaction and social cognition in general. So how do we differentiate between these different requirements of eye movements? Moreover, do we actively adapt our eye movements in social situations based on the knowledge we have about the other person, and how does this relate to the recursive reasoning known from the RSA framework in language communication?

For example, consider a situation in which two people interact to achieve a common goal while only being able to communicate visually. We are interested in their visual behavior and what it reveals about their internal reasoning. Specifically, will the interaction partners adjust their eye movements and gaze locations for communicative signaling, as opposed to the biological need of sampling the environment in order to see? In addition, will they adjust their visual behavior to increase the chance of a successful cooperative interaction, based on their world knowledge and their knowledge of what their interaction partner knows and perceives? In other words, in this project we investigate whether the rational speech act framework also applies to non-verbal, and more specifically visual, interaction and communication.

Supervisors: Prof. Peter König, Prof. Gordon Pipa, Prof. Tim Kietzmann

Publications

Walter, J. L., Essmann, L., König, S. U., & König, P. (2022). Finding landmarks – An investigation of viewing behavior during spatial navigation in VR using a graph-theoretical analysis approach. (accepted by PLOS Computational Biology)