
NSF Org: BCS Division of Behavioral and Cognitive Sciences
Recipient: University of California, San Diego
Initial Amendment Date: August 26, 2010
Latest Amendment Date: July 10, 2013
Award Number: 1029084
Award Instrument: Continuing Grant
Program Manager: Alumit Ishai, BCS Division of Behavioral and Cognitive Sciences, SBE Directorate for Social, Behavioral and Economic Sciences
Start Date: September 1, 2010
End Date: August 31, 2015 (Estimated)
Total Intended Award Amount: $625,694.00
Total Awarded Amount to Date: $625,694.00
Funds Obligated to Date: FY 2011 = $154,104.00; FY 2012 = $158,282.00; FY 2013 = $163,755.00
Recipient Sponsored Research Office: 9500 GILMAN DR, LA JOLLA, CA 92093-0021, US; (858) 534-4896
Primary Place of Performance: 9500 GILMAN DR, LA JOLLA, CA 92093-0021, US
NSF Program(s): Cognitive Neuroscience
Primary Program Source: 01001112DB NSF RESEARCH & RELATED ACTIVITIES; 01001213DB NSF RESEARCH & RELATED ACTIVITIES; 01001314DB NSF RESEARCH & RELATED ACTIVITIES
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.075
ABSTRACT
Many of the objects and events that we encounter in everyday life, such as a barking dog or a honking car, are both seen and heard. A basic task our brains must carry out is to bring together the sensory information that is received concurrently by our eyes and ears, so that we perceive a world of unitary objects having both auditory and visual properties. With funding from the National Science Foundation, Dr. Steven A. Hillyard and colleagues, of the University of California, San Diego, are investigating when and where in the brain the visual and auditory signals are combined and integrated to produce coherent, multi-dimensional perceptions of objects in the environment. The sensory inputs from the eyes and ears are projected initially to separate regions of the brain specialized for perceiving the visual and auditory modalities, respectively. The timing and anatomical localization of neural interactions between auditory and visual inputs is being analyzed by means of scalp recordings of brain potentials that are triggered by these sensory events, together with magnetic resonance imaging of stimulus-induced brain activity patterns. A major aim is to analyze the brain interactions that cause a stimulus in one modality (auditory or visual) to alter the perception of a stimulus in the other modality. Three types of such auditory-visual interactions are being studied: (1) the brightness enhancement of a visual event when accompanied by a sound, (2) the ventriloquist illusion, which is a shift in perceived sound location towards the position of a concurrent visual event, and (3) the double-flash illusion that is induced when a single flash is interposed between two pulsed sounds. In each case, the precisely timed sequence of neural interactions in the brain that underlie the perceptual experience will be identified, and the influence of selective attention on these interactions will be determined. The overall aim is to understand the neural mechanisms by which stimuli in different sensory modalities are integrated in the brain to achieve unified perceptions of multi-modal events in the world.
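For readers unfamiliar with the technique, the core of the scalp-potential measurement is the event-related potential (ERP): short EEG epochs time-locked to each stimulus are extracted, corrected against a pre-stimulus baseline, and averaged so that activity consistently evoked by the stimulus emerges from the background EEG. The sketch below illustrates only that averaging step, on synthetic data; the sampling rate, epoch window, and simulated waveform are illustrative assumptions, not parameters of this project.

```python
import numpy as np

# Illustrative parameters (assumptions for this sketch, not values from the project)
FS = 500                        # sampling rate, Hz
N_PRE, N_POST = 100, 400        # samples before/after the stimulus (-200 ms to +800 ms)

def erp_average(eeg, events, n_pre=N_PRE, n_post=N_POST):
    """Average stimulus-locked EEG epochs into an event-related potential (ERP).

    eeg    : (n_channels, n_samples) continuous recording
    events : sample indices of stimulus onsets
    Returns a (n_channels, n_pre + n_post) baseline-corrected average.
    """
    epochs = []
    for onset in events:
        if onset - n_pre < 0 or onset + n_post > eeg.shape[1]:
            continue                                  # skip events too close to the record edges
        epoch = eeg[:, onset - n_pre:onset + n_post]
        baseline = epoch[:, :n_pre].mean(axis=1, keepdims=True)
        epochs.append(epoch - baseline)               # subtract the pre-stimulus baseline
    return np.mean(epochs, axis=0)

# Synthetic demo: noise plus a small evoked deflection near 100 ms after each stimulus
rng = np.random.default_rng(0)
eeg = rng.normal(0.0, 10.0, size=(4, 60 * FS))        # 4 channels, 60 s of "EEG"
events = np.arange(2 * FS, 58 * FS, FS)               # one stimulus per second
t = np.arange(-N_PRE, N_POST) / FS
evoked = 5.0 * np.exp(-((t - 0.1) ** 2) / 0.002)      # simulated component peaking at ~100 ms
for onset in events:
    eeg[:, onset - N_PRE:onset + N_POST] += evoked
erp = erp_average(eeg, events)                        # averaging pulls the response out of the noise
print(erp.shape)                                      # (4, 500)
```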
Because much of our everyday experience involves recognizing and reacting to the sights and sounds of our surroundings, understanding the principles by which these auditory and visual inputs are synthesized in the brain is important. The ability to combine auditory and visual signals effectively is particularly important in teaching and learning situations, where spoken words must be put together with a variety of pictorial, graphic, and written information in order to understand the material. This research program can contribute to the development of more efficient learning environments and teaching techniques, and lead to improved designs for communication media such as audio-visual teaching tools, information displays, and critical warning signals. These studies are exploring the role of selective attention in synthesizing auditory and visual signals, research that can lead to improved teaching techniques that emphasize the training of attention. By studying the brain systems that enable auditory and visual inputs to be combined into perceptual wholes, this research can also help to understand what goes wrong in the brains of patients who suffer from abnormalities of perception, including those with learning disabilities, attention deficit disorder, autism, and schizophrenia.
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
Events in the natural world are perceived through multiple sensory modalities, and multi-modal interactions in the brain are critical for generating coherent perceptual experiences and adaptive behavioral responses. Our NSF-sponsored research has been concerned with understanding how cross-modal interactions take place in the human brain to achieve perceptual integration. The major research theme over the past four years has been to study how the visual and auditory systems of the brain interact, and in particular to show how sounds influence our visual perceptions. Our principal finding has been that a salient sound automatically attracts attention to its location so that the perception of a subsequent visual stimulus that occurs at the same location is enhanced. Thus, when a visual stimulus follows a salient sound at the same location it is perceived more accurately, enters awareness more rapidly, appears brighter and elicits a larger evoked potential in the visual cortex than does a visual stimulus presented at a different location (e.g., in the opposite visual field).
We investigated the neural basis of this cross-modal facilitation of visual perception by recording changes in the electroencephalogram (EEG) elicited by sounds. We discovered that presenting a salient sound to one side elicits a positive potential shift in the contralateral visual cortex, which was termed the “Auditory Contralateral Occipital Positivity” (ACOP). We hypothesized that this ACOP was the neural sign of the perceptual priming of the visual cortex by the sound. In support of this hypothesis, we found that letters that were flashed at the location of an immediately preceding sound were discriminated more accurately than letters flashed in the opposite visual field. Most importantly, the ACOP triggered by the sound was larger when the subsequent letter discrimination was correct as opposed to when it was incorrect. We concluded that the spatially non-predictive sound automatically captured visual attention to the sound’s location, which resulted in a lateralized facilitation of visual processing that was indexed by the ACOP.
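Because the ACOP is a lateralized measure, analyses of this kind typically contrast occipital activity contralateral versus ipsilateral to the sound and then compare that difference wave across trial outcomes. A minimal sketch of such a contrast is given below, assuming sound-locked, baseline-corrected epochs with left/right sound labels; the channel pair (e.g., PO7/PO8), the 250-450 ms measurement window, and all variable names are illustrative assumptions rather than details of the published analysis.

```python
import numpy as np

def contralateral_minus_ipsilateral(epochs, sound_side, ch_left, ch_right):
    """Lateralized occipital difference wave of the kind used to quantify the ACOP.

    epochs     : (n_trials, n_channels, n_times) sound-locked, baseline-corrected EEG
    sound_side : length-n_trials array of 'L' / 'R' labels for the side of the sound
    ch_left, ch_right : indices of a left and a right occipital channel (e.g., PO7 / PO8)
    Returns the contralateral-minus-ipsilateral waveform, shape (n_times,).
    """
    sound_side = np.asarray(sound_side)
    left_sound = epochs[sound_side == 'L']
    right_sound = epochs[sound_side == 'R']
    # Contralateral = the hemisphere opposite the sound; ipsilateral = the same side.
    contra = np.concatenate([left_sound[:, ch_right], right_sound[:, ch_left]])
    ipsi = np.concatenate([left_sound[:, ch_left], right_sound[:, ch_right]])
    return contra.mean(axis=0) - ipsi.mean(axis=0)

def mean_amplitude(wave, times, t_start, t_stop):
    """Mean amplitude of a waveform within a chosen measurement window (seconds)."""
    window = (times >= t_start) & (times < t_stop)
    return wave[window].mean()

# Hypothetical usage: compare the difference wave on trials followed by correct
# versus incorrect letter reports (variable names and the window are illustrative).
# acop_correct = contralateral_minus_ipsilateral(epochs[correct], side[correct], i_po7, i_po8)
# acop_error   = contralateral_minus_ipsilateral(epochs[~correct], side[~correct], i_po7, i_po8)
# print(mean_amplitude(acop_correct, times, 0.25, 0.45),
#       mean_amplitude(acop_error,   times, 0.25, 0.45))
```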
Over the past year we have extended these studies of visual cortex activity elicited by sounds into the frequency domain, using wavelet analyses. We found that the sounds provoked a robust desynchronization (blocking) of the ongoing alpha rhythm of the EEG, and that the time course of this alpha-blocking closely paralleled that of the ACOP slow potential. In support of the close connection between the ACOP and alpha blocking, we reanalyzed the data from the letter discrimination experiment and found that trials with correct discriminations had stronger alpha blocking than incorrect trials, thereby closely paralleling the findings with the ACOP. These studies demonstrate that the blocking of the alpha rhythm provoked by a sound represents a fundamental neural mechanism by which cross-modal attention facilitates visual perception.
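One common way to carry out such a time-frequency analysis is to convolve each sound-locked epoch with a complex Morlet wavelet centered on an alpha-band frequency, average the resulting single-trial power, and express it as a percentage of pre-stimulus baseline power, so that event-related desynchronization (alpha blocking) appears as a drop below 100%. The sketch below follows that generic recipe; the 10 Hz center frequency, seven-cycle wavelet, sampling rate, and baseline window are illustrative assumptions, not the parameters used in this project.

```python
import numpy as np

FS = 500  # sampling rate in Hz (assumed for this sketch)

def morlet_power(epochs, freq, fs=FS, n_cycles=7):
    """Single-trial power at one frequency via convolution with a complex Morlet wavelet.

    epochs : (n_trials, n_times) sound-locked EEG from one channel
    Returns an array of the same shape containing power over time on each trial.
    """
    sigma_t = n_cycles / (2.0 * np.pi * freq)               # temporal width of the wavelet (s)
    t = np.arange(-4 * sigma_t, 4 * sigma_t, 1.0 / fs)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2.0 * sigma_t**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))        # unit-energy normalization
    power = np.empty(epochs.shape)
    for i, trial in enumerate(epochs):
        analytic = np.convolve(trial, wavelet, mode='same') # complex-valued analytic signal
        power[i] = np.abs(analytic) ** 2
    return power

def percent_of_baseline(power, times, base_start=-0.3, base_stop=0.0):
    """Trial-averaged power as a percentage of pre-stimulus baseline power.

    Values falling below 100% after the sound correspond to alpha desynchronization
    (alpha blocking).
    """
    mean_power = power.mean(axis=0)
    baseline = mean_power[(times >= base_start) & (times < base_stop)].mean()
    return 100.0 * mean_power / baseline

# Hypothetical usage with occipital-channel epochs and a 10 Hz alpha-band wavelet:
# alpha_power = morlet_power(occipital_epochs, freq=10.0)
# erd = percent_of_baseline(alpha_power, times)   # dips below 100% indicate alpha blocking
```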
These studies have advanced our understanding of the brain mechanisms by which information from different sensory modalities is combined to achieve perceptual coherence in a multi-sensory world. This research also has broader implications for the fields of communication, education, and mental health. Because much of our everyday perceptual experience involves multi-sensory integration, coordination between sensory inputs in different modalities helps us localize and re...