National Science Foundation - Computer & Information Science & Engineering (CISE)
Information & Intelligent Systems (IIS)
Discovery
Hearing It Like It Was


UC Davis graduate student Robert Dalton listens to an MTB recording.

July 30, 2004


Recorded sounds don't often give you much sense of "being there." That's because your ears not only tell you what you're hearing, but also a lot about where you're hearing it. A new recording and playback method developed at the University of California, Davis, puts your head back in the mix, so you can hear it like it really was.

The patent-pending technique, which the researchers have named "motion-tracked binaural" (MTB) sound, captures both the 3-D position of sound sources and the subtleties of natural, ambient sound that other systems don't. On top of that, the system makes it easy to record 3-D sound with off-the-shelf equipment that won't break the bank.

"Conventional audio playback doesn't reflect how you hear in real life," said Ralph Algazi, director of the Interface Laboratory in the UC Davis Center for Image Processing and Integrated Computing (CIPIC). "Your body, the shape and motion of your head and the room acoustics all affect how you hear."

MTB sound could find its way into applications including teleconferences, home theater presentations, training simulators, video games and museum exhibits. Algazi, Richard Duda and Dennis Thompson presented the research leading to the MTB method at the 116th convention of the Audio Engineering Society in Berlin, Germany. The work is supported as part of two awards from the National Science Foundation.

"MTB captures the dynamic cues of head motion with reasonable computational requirements and affordable equipment for recording and playback," Duda said. "For groups of listeners, MTB sound has the advantage over the computational demands of existing commercial systems."

The MTB sound process starts with microphones -- eight for voice, 16 for music -- attached around a ball or cylinder standing in for a human head. During the recording process, the microphones capture the distinctive sounds at each point around the dummy head.

For playback, a listener wears headphones that have a small head tracker attached. The head tracker determines the position of the listener's ears relative to the microphones on the dummy head, and the MTB software computes signals that combine sounds from the two microphones closest to each ear. When you turn your head, MTB repositions the virtual ears and recalculates the audio feeds.
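The selection-and-blending step described above can be sketched in a few lines of code. This is a deliberately simplified stand-in, not the actual MTB algorithm (the real system interpolates more carefully and tracks both ears); the function names, the even spacing of microphones on a horizontal ring, and the plain linear crossfade are all illustrative assumptions.

```python
def nearest_mic_pair(ear_azimuth_deg, num_mics=8):
    """For a virtual ear at a given azimuth, return the indices of the
    two ring microphones that bracket it, plus a crossfade weight.

    Assumes num_mics microphones evenly spaced on a horizontal ring,
    with microphone 0 at 0 degrees (a simplifying assumption).
    """
    spacing = 360.0 / num_mics
    pos = (ear_azimuth_deg % 360.0) / spacing  # ear position in "mic units"
    lower = int(pos) % num_mics                # mic just behind the ear
    upper = (lower + 1) % num_mics             # mic just ahead of the ear
    weight = pos - int(pos)                    # 0.0 = all lower, 1.0 = all upper
    return lower, upper, weight


def interpolate_sample(mic_samples, lower, upper, weight):
    """Crossfade one audio sample between the two bracketing microphones."""
    return (1.0 - weight) * mic_samples[lower] + weight * mic_samples[upper]
```

When the head tracker reports a new orientation, the player simply recomputes the bracketing pair and weight for each ear and keeps mixing; a head turn smoothly hands the signal off from one microphone to the next around the ring.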

"The system can capture the sound of instruments much more fully than a conventional single microphone," Algazi said. "It captures changes in sound and the effects of a room in a way that is much closer to reality."

A laptop computer can support an audience of several dozen listeners who can independently move their heads to locate a sound source, turn to "face" a person speaking and tell when sound sources are nearby or far away. In addition, MTB captures the ambient sounds of the location, so you recognize the echoes in a church or the confines of a conference room.

NSF program officer Ephraim Glinert has experienced an MTB sound demo first hand. "I put on headphones, listened, turned my head in various directions, and so on," he said. "The results were really quite amazing -- some of the best and most innovative work I've seen in the field."

Home-theater surround sound also imitates a 3-D sound experience, but making these recordings is often more art than science. Engineers experiment with microphone locations, multiple audio tracks and a manual editing process to produce the final mapping of sounds to speakers.

Another spatial sound technique, binaural recording, uses two microphones embedded in a dummy head to record sound and does a fair job of reproducing sounds to the left and right. However, it doesn't allow for movement, Duda said, so sounds to the front appear to come from immediately in front of or behind your head.

With MTB sound, the researchers have created a systematic process that extends binaural recording to allow for head movement. The engineers have made sample recordings with musicians from the UC Davis music department and visiting classical and bluegrass musicians.

"I think they have something really wonderful," said Pablo Ortiz, chair of the Department of Music at UC Davis. "The thing that's interesting to me is the way that it records space."

William Beck, a composer of electronic music and lecturer at the music department, said that live recordings could be a major application.

"The 'being there' feel is something people would really like," Beck said.
-- David Hart

 

Investigators
V. Ralph Algazi
Richard Duda
Larry Davis
Ramani Duraiswami
Qing Huo Liu

Related Institutions/Organizations
University of California-Davis
University of Maryland College Park

Locations
California

Related Awards
#0097256 Customized Spatial Sound for Human/Computer Interaction
#0086075 ITR: Personalized Spatial Audio via Scientific Computing and Computer Vision

Total Grants
$3,744,520

Related Websites
UC Davis CIPIC Interface Laboratory: http://interface.cipic.ucdavis.edu/

Ralph Algazi, professor emeritus of electrical and computer engineering, listens to an MTB recording.

UC Davis graduate students with the recording array.


