While it is standard clinical practice to measure perception when assessing visual and auditory function, the same is not true for vestibular function. Typically, clinicians examine only vestibularly driven reflexive eye movements, even when the primary patient complaint is perceptual, e.g., dizziness. As an alternative, we are developing methods to measure vestibular perceptual function directly in the clinic. For example, we measure vestibular perceptual thresholds: the weakest vestibular signal that can still be reliably identified as movement in one direction or the other. Measurements are conducted using a motion platform that can rotate around or translate along any axis, so that perceptual sensitivity can be comprehensively characterized.
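A direction-recognition threshold of this kind is commonly estimated by fitting a psychometric function to the observer's left/right responses. The following is an illustrative sketch only, not our clinical protocol: it simulates a hypothetical observer with an assumed cumulative-Gaussian psychometric function and recovers the threshold (the Gaussian spread) by maximum likelihood. All stimulus values and the observer's sensitivity are made up for illustration.

```python
import math
import random

def p_rightward(v, sigma):
    """Cumulative-Gaussian psychometric function (bias assumed zero):
    probability of reporting 'rightward' for stimulus velocity v."""
    return 0.5 * (1.0 + math.erf(v / (sigma * math.sqrt(2.0))))

random.seed(1)
true_sigma = 1.2  # hypothetical threshold, deg/s
velocities = [-4, -2, -1, -0.5, 0.5, 1, 2, 4]  # hypothetical stimulus set

# Simulate 100 trials per velocity: True means the observer said "rightward"
trials = [(v, random.random() < p_rightward(v, true_sigma))
          for v in velocities for _ in range(100)]

def neg_log_lik(sigma):
    """Negative log-likelihood of the responses given a candidate sigma."""
    nll = 0.0
    for v, said_right in trials:
        p = min(max(p_rightward(v, sigma), 1e-9), 1 - 1e-9)
        nll -= math.log(p if said_right else 1.0 - p)
    return nll

# Simple grid search for the maximum-likelihood sigma (the threshold)
candidates = [s / 100.0 for s in range(20, 400)]
sigma_hat = min(candidates, key=neg_log_lik)
print(round(sigma_hat, 2))  # should land near true_sigma
```

In practice, threshold fits also estimate a bias term and use more efficient adaptive staircases to choose stimulus levels, but the core idea is the same: the threshold is the stimulus magnitude at which direction judgments rise from chance toward reliable.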
The nervous system processes visual and vestibular self-motion signals together in many areas of the brain because they provide complementary information. When the signals are redundant, the nervous system can combine them to achieve the best possible multimodal self-motion estimate. When visual or vestibular signals are ambiguous on their own, an unambiguous interpretation can be achieved by combining information across the two. We conduct behavioral experiments using a virtual reality motion simulator to elucidate the nature of these interactions. One experimental approach is to compare visual and vestibular perceptual performance, for example in heading estimation, to uncover clues about the commonality of the underlying computational and/or neural processes. Another is to identify instances where computations could be facilitated by common visual-vestibular processing, for example in detecting object motion, and to determine whether this facilitation can be observed experimentally. A final approach is to examine adaptation, for example aftereffects, as a way to characterize the nature of visual-vestibular neural interactions.
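The "best possible multimodal estimate" for redundant cues is standardly modeled as maximum-likelihood (inverse-variance-weighted) integration: the more reliable cue gets more weight, and the combined estimate is more reliable than either cue alone. A minimal sketch with hypothetical heading values (the numbers are illustrative, not data):

```python
def combine(mu_vis, var_vis, mu_vest, var_vest):
    """Maximum-likelihood combination of two independent Gaussian cues.
    Weights are proportional to each cue's inverse variance (reliability)."""
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_vest)
    mu = w_vis * mu_vis + (1.0 - w_vis) * mu_vest
    var = 1.0 / (1.0 / var_vis + 1.0 / var_vest)  # always below both inputs
    return mu, var

# Hypothetical heading estimates (deg): visual cue says 10, vestibular says 4;
# the visual cue is twice as reliable (half the variance)
mu, var = combine(10.0, 4.0, 4.0, 8.0)
print(mu, var)  # 8.0 2.666...
```

The combined estimate (8.0 deg) lies closer to the more reliable visual cue, and its variance (8/3) is smaller than either single-cue variance, which is the experimental signature usually tested for optimal integration.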
When the head moves relative to the stationary environment, the eyes and ears move with it, giving rise to visual and auditory motion. Despite these moving stimuli, we perceive a stable world, a phenomenon known as spatial constancy. To achieve spatial constancy, the nervous system combines visual and auditory signals with sensory and motor information about eye and head movements. We investigate these processes using a virtual reality motion simulator that allows us to manipulate the natural coherence between head motion and visual or auditory signals. One set of experiments focuses on the problem of auditory spatial updating, i.e., how the nervous system accounts for head motion in perceiving the spatial position of auditory targets. In other experiments, we are measuring the visual motion that must be presented simultaneously with head motion to elicit the perception of a stationary visual environment.
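The geometry underlying auditory spatial updating can be illustrated with a toy sketch, restricted to yaw rotation and ignoring translation: for a world-fixed sound to be perceived as stationary, its head-centered azimuth must shift by exactly the negative of the head rotation. The angles below are hypothetical.

```python
def updated_azimuth(initial_azimuth_deg, head_rotation_deg):
    """Head-centered azimuth of a world-fixed sound after a yaw rotation.
    Positive angles are rightward; translation and elevation are ignored."""
    return initial_azimuth_deg - head_rotation_deg

# A sound starts 30 deg to the right of straight ahead; the head then
# turns 20 deg rightward, so the sound should now appear 10 deg right.
print(updated_azimuth(30.0, 20.0))  # 10.0
```

Experimentally, deviations from this simple prediction, e.g., systematic under- or over-compensation for head rotation, are what reveal how the nervous system actually performs the updating.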
Traditionally, perception is associated with sensory processing. In reality, motor signals also play a central role in shaping perception. For example, most head movement is actively generated, whether we are moving our whole bodies through space, as in walking, or just turning the head on the body to look at something. Efference copies of these motor commands can be used to predict how the head is moving, and these predictions are combined with vestibular and other sensory signals to drive self-motion perception. In different projects, we are investigating how self-motion perception is shaped by motor signals associated with head-on-body movements as well as locomotion.
Our everyday behavior is restricted to a relatively limited repertoire of movements, and consequently, the patterns of head motion experienced in everyday life are highly stereotyped. The nervous system keeps track of these regularities and uses this knowledge to simplify sensory processing. This project aims to characterize the regularities in everyday head motion as a way to gain insight into how the nervous system may exploit them.