Award Abstract # 1748667
EAGER: Spatial Audio Data Immersive Experience (SADIE)

NSF Org: IIS (Division of Information & Intelligent Systems)
Recipient: VIRGINIA POLYTECHNIC INSTITUTE & STATE UNIVERSITY
Initial Amendment Date: August 7, 2017
Latest Amendment Date: August 7, 2017
Award Number: 1748667
Award Instrument: Standard Grant
Program Manager: Ephraim Glinert
IIS: Division of Information & Intelligent Systems
CSE: Directorate for Computer and Information Science and Engineering
Start Date: August 15, 2017
End Date: August 31, 2019 (Estimated)
Total Intended Award Amount: $149,930.00
Total Awarded Amount to Date: $149,930.00
Funds Obligated to Date: FY 2017 = $149,930.00
History of Investigator:
  • Ivica Bukvic (Principal Investigator)
    ico@vt.edu
  • Gregory Earle (Co-Principal Investigator)
Recipient Sponsored Research Office: Virginia Polytechnic Institute and State University
300 TURNER ST NW
BLACKSBURG
VA  US  24060-3359
(540)231-5281
Sponsor Congressional District: 09
Primary Place of Performance: Virginia Polytechnic Institute and State University
VA  US  24061-0001
Primary Place of Performance Congressional District: 09
Unique Entity Identifier (UEI): QDE5UHE5XD16
Parent UEI: X6KEFGLHSJX7
NSF Program(s): HCC-Human-Centered Computing
Primary Program Source: 01001718DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 7367, 7916
Program Element Code(s): 736700
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Although there has been much recent interest in visualization to support data analysis, sonification -- the rendering of non-auditory information as sound -- represents a relatively unexplored but rich space that could map onto many data analysis problems, especially when the data has a natural spatial and temporal element. In contrast to headphone-based sonification approaches, this project will explore the potential of "exocentric" sound environments that completely encompass the user and allow them to interact with the sonified data by moving in space. The hypothesis is that compared to existing methods of data analysis, the coupling of spatial data with spatial representations, the naturalness of interacting with the data through motion, the leveraging of humans' ability to hear patterns and localize them in 3D, and the avoidance of artifacts introduced by headphone-based sonification strategies will all help people perceive patterns and causal relationships in data. To test this, the team will develop a set of primitives for mapping spatio-temporal data to sound parameters such as volume, pitch, and spectral filtering. They will refine these primitives through a series of increasingly complex data analysis experiments, including specific analysis tasks in the domain of geospace science. If successful, the work could have implications in a variety of applications, from enhancing visualizations to developing better virtual reality systems, while developing interdisciplinary bridges between scientific communities from music to computing to the physical sciences.
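
As a concrete illustration of the kind of mapping primitive described above, the short Python sketch below maps a single scalar sample onto pitch, volume, and a low-pass cutoff standing in for spectral filtering. The names, ranges, and mapping curves are illustrative assumptions, not details taken from the project's software.

    from dataclasses import dataclass

    @dataclass
    class SoundParams:
        pitch_hz: float           # fundamental frequency of the rendered tone
        volume: float             # linear gain in [0, 1]
        lowpass_cutoff_hz: float  # crude stand-in for spectral filtering

    def map_sample(value: float, vmin: float, vmax: float) -> SoundParams:
        """Linearly map one scalar data value onto three audible parameters."""
        t = (value - vmin) / (vmax - vmin) if vmax > vmin else 0.0
        t = min(max(t, 0.0), 1.0)  # clamp out-of-range samples
        return SoundParams(
            pitch_hz=220.0 + t * (880.0 - 220.0),  # sweep from A3 to A5
            volume=0.2 + 0.8 * t,                  # quiet for low values, loud for high
            lowpass_cutoff_hz=500.0 + t * 7500.0,  # brighter spectrum for larger values
        )

    print(map_sample(42.0, vmin=0.0, vmax=100.0))

A time series sonified this way simply re-evaluates the mapping at each time step; spatial coordinates would additionally drive where in the loudspeaker array the sound is rendered.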

The project will be developed using an immersive sound studio that includes motion tracking capabilities and a high-density loudspeaker array, driven by algorithms and open source sound libraries developed by the team to support embodied, rich exploration of sonified data that is not subject to audio deviations introduced by headphone-based strategies such as Head-Related Transfer Functions. The specific sonification strategies for individual data streams will be based on the primitives described earlier, focusing on sounds rich in spectra that are easier for people to localize. Strategies for representing multiple data streams will include layering multiple non-masking sounds and combining streams to modulate different aspects of the same sound (e.g., pitch and volume). To develop and validate the strategies, the team will conduct a series of experiments that gradually increase the complexity of the analysis tasks: from the basic ability to perceive and interpret single data primitives, to perceiving and inferring relationships between multiple data streams, to measuring subjects' ability to perceive known causal relationships between multiple data streams in a series of geospatial model scenarios. In these studies the team will vary the strength of relationships in the data, the size of the parameter manipulations of sounds, and the pairing of different sounds and parameterizations in order to determine perceptual properties and limitations of sonification strategies (similar in some ways to perception-based foundations of visualization); they will also compare analysis performance and qualitative reactions of participants in both the exocentric environment and a headphone-based egocentric environment used as a control.
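
The two multi-stream strategies mentioned above can likewise be sketched in a few lines of Python. Again, the stream names, registers, and ranges are assumptions chosen for illustration rather than details of the project's implementation: two streams modulate different aspects (pitch and volume) of one voice, while a third stream is rendered as a separate layer in a higher, non-overlapping register to reduce masking.

    def normalize(x: float, lo: float, hi: float) -> float:
        """Scale x into [0, 1], clamping out-of-range values."""
        t = (x - lo) / (hi - lo) if hi > lo else 0.0
        return min(max(t, 0.0), 1.0)

    def combined_voice(stream_a: float, stream_b: float) -> dict:
        """Two streams modulate different aspects of the same sound:
        stream A drives pitch, stream B drives volume."""
        return {
            "pitch_hz": 110.0 * 2.0 ** (2.0 * normalize(stream_a, 0.0, 100.0)),  # low register, two-octave range
            "volume": 0.1 + 0.9 * normalize(stream_b, 0.0, 1.0),
        }

    def layered_voice(stream_c: float) -> dict:
        """A third stream rendered as a separate layer in a higher,
        non-overlapping register so the two voices are less likely to mask each other."""
        return {
            "pitch_hz": 880.0 * 2.0 ** normalize(stream_c, -50.0, 50.0),  # high register, one-octave range
            "volume": 0.5,
        }

    print(combined_voice(63.0, 0.4), layered_voice(10.0))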

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

Bukvic, Ivica Ico and Earle, Gregory. "Reimagining Human Capacity for Location-Aware Audio Pattern Recognition: A Case for Immersive Exocentric Sonification." International Conference on Auditory Display, 2018.
Bukvic, Ivica Ico and Earle, Gregory and Sardana, Disha and Joo, Woohun. "Studies in Spatial Aural Perception: Establishing Foundations for Immersive Sonification." International Conference on Auditory Display, 2019. https://doi.org/10.21785/icad2019.017
Sardana, Disha and Joo, Woohun and Bukvic, Ivica Ico and Earle, Gregory. "Introducing Locus: A NIME for Immersive Exocentric Aural Environments." New Interfaces for Musical Expression, 2019.

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

The Spatial Audio Data Immersive Experience (SADIE) project focuses on the use of sound to explore complex, multi-faceted datasets.  The essential idea is to develop the infrastructure and knowledge necessary to analyze data by assigning specific sounds to a group of variables and then immersing users in a room equipped to play these sounds simultaneously, so that users can let their auditory systems detect correlations and other relationships between the variables.  The SADIE project is developing new foundational knowledge in this research domain, which has been found to offer unique advantages relative to headphone-based sound studies.  To date we have developed the infrastructure required to do this and to allow users to interact with the data.  Further, we have run a study with over 100 participants and have uncovered foundational knowledge that sets the stage for follow-on studies that will expand our sonification methodologies to multidimensional datasets, with a primary focus on inherently spatial geospatial data.  The study has resulted in 3 peer-reviewed publications at international conferences and has in part served as an inspiration for a book chapter.  Further, the underlying technologies used in this study have paved the way toward potential commercialization opportunities.  The facilities and techniques used in this study do more than enable research; they also allow users to interact with the sound fields by selecting and modifying specific sounds, with potential future uses in education and assistive technologies.  This capability has also led to new forms of artistic expression, including performances for the general public.


Last Modified: 08/12/2019
Modified by: Ivica I Bukvic
