Award Abstract # 1925178
NRI: FND: Creating Trust Between Groups of Humans and Robots Using a Novel Music Driven Robotic Emotion Generator

NSF Org: CMMI
Division of Civil, Mechanical, and Manufacturing Innovation
Recipient: GEORGIA TECH RESEARCH CORP
Initial Amendment Date: August 20, 2019
Latest Amendment Date: July 31, 2020
Award Number: 1925178
Award Instrument: Standard Grant
Program Manager: Alex Leonessa
CMMI
 Division of Civil, Mechanical, and Manufacturing Innovation
ENG
 Directorate for Engineering
Start Date: November 1, 2019
End Date: October 31, 2023 (Estimated)
Total Intended Award Amount: $669,912.00
Total Awarded Amount to Date: $803,892.00
Funds Obligated to Date: FY 2019 = $669,912.00
FY 2020 = $133,980.00
History of Investigator:
  • Gil Weinberg (Principal Investigator)
    gil.weinberg@coa.gatech.edu
Recipient Sponsored Research Office: Georgia Tech Research Corporation
926 DALNEY ST NW
ATLANTA
GA  US  30318-6395
(404)894-4819
Sponsor Congressional District: 05
Primary Place of Performance: Georgia Institute of Technology
225 North Avenue NW
Atlanta
GA  US  30332-0002
Primary Place of Performance Congressional District: 05
Unique Entity Identifier (UEI): EMW9FC8J3HN4
Parent UEI: EMW9FC8J3HN4
NSF Program(s): NRI-National Robotics Initiative
Primary Program Source: 01001920DB NSF RESEARCH & RELATED ACTIVITIES
01002021DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 116E, 7632, 8086, 9178, 9231, 9251
Program Element Code(s): 801300
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.041

ABSTRACT

This project will perform fundamental research contributing to the establishment of trust between humans and robots through the development of novel emotional communication channels. As co-robots become prevalent at home, at work, and in public spaces, they need to become trustworthy and socially believable agents if they are to be integrated into and accepted by society. The research will utilize the latest developments in Artificial Intelligence to gain knowledge about the role of non-linguistic expressions in trust building. Findings from studies of non-linguistic emotional expressions such as prosody and gestures in music - one of the most emotionally meaningful human experiences - will be implemented in a group of newly developed personal robots. User experiments will be conducted to explore humans' reactions to - and trust building with - these prosody-driven robots. Results of the study will lead to novel approaches for creating open and meaningful interactions between groups of humans and robots. The research will advance national prosperity by increasing engagement, relatability, and trust in large-scale human-robot interaction scenarios such as personal robots in private and public spaces, workplace training, education, and combat. The project takes an interdisciplinary approach that draws on fields such as cognitive science, communication, and music, while leading to progress in both science and engineering.

Prosodic features such as pitch, loudness, tempo, timbre, and rhythm bear a strong resemblance to musical features, which can inform a novel approach for generative emotion-driven robotic prosody. The first phase of this project will focus on developing machine learning techniques to derive such features from a newly created, emotionally labeled musical dataset. It will use these features to drive a non-linguistic robotic voice synthesizer that conveys emotional content and builds trust. The results of this study will be integrated with previous work on conveying robotic emotions through physical gestures. The second phase of the project will focus on user experiments studying subjects' preferences for a variety of robotic emotional responses when interacting with a single robot. It will then use the learned features to design a larger-scale robotic emotional contagion engine in an effort to improve and enrich emotion-driven human interaction with large groups of robots.
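As an illustration of the feature-derivation step described above, the sketch below extracts coarse pitch, loudness, tempo, and timbre descriptors from a single audio phrase. This is a minimal sketch in Python assuming the librosa and NumPy libraries; the feature choices and function names are illustrative assumptions, not the project's actual pipeline.

    # Minimal sketch: coarse prosodic descriptors for one audio phrase.
    # Assumes librosa and numpy; illustrative only, not the project's code.
    import librosa
    import numpy as np

    def prosodic_features(path):
        """Return coarse analogues of pitch, loudness, tempo, and timbre."""
        y, sr = librosa.load(path, sr=22050)
        # Pitch contour via pYIN fundamental-frequency tracking.
        f0, voiced_flag, voiced_prob = librosa.pyin(
            y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"))
        # Loudness proxy: frame-wise root-mean-square energy.
        rms = librosa.feature.rms(y=y)[0]
        # Tempo estimate from beat tracking.
        tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
        # Timbre proxy: mean MFCC vector.
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
        return np.concatenate(
            [[np.nanmean(f0), rms.mean(), rms.std(), float(tempo)], mfcc])

Vectors of this kind, paired with the dataset's emotion labels, could then be used to train either a classifier or a generative model driving the robotic voice synthesizer.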

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

Farris, N. "Musical Prosody-Driven Emotion Classification: Interpreting Vocalists' Portrayal of Emotions Through Machine Learning." Proceedings of the Sound and Music Computing Conferences, 2021.
Savery, R. "Emotional Musical Prosody for the Enhancement of Trust in Robotic Arm Communication." ACM/IEEE International Conference on Human-Robot Interaction, 2020.
Savery, R. "Emotional Musical Prosody: Validated Vocal Dataset for Human Robot Interaction." 2020 Joint Conference on AI Music Creativity, 2020.
Savery, R. "Emotional Musical Prosody for Robotic Groups and Entitativity." 30th IEEE International Conference on Robot & Human Interactive Communication, 2021.
Savery, R. "Machine Learning Driven Musical Improvisation for Mechanomorphic Human-Robot Interaction." ACM/IEEE International Conference on Human-Robot Interaction, 2021.
Savery, R. and Weinberg, G. "A Survey of Robots and Emotion: Broad Trends and Models of Emotional Interaction." 29th IEEE International Conference on Robot & Human Interactive Communication, 2020.
Savery, R., Zahray, L., and Weinberg, G. "Emotional Musical Prosody for the Enhancement of Trust in Robotic Arm Communication." 29th IEEE International Conference on Robot & Human Interactive Communication, 2020.
Savery, R. and Weinberg, G. "Robots and emotion: a survey of trends, classifications, and forms of interaction." Advanced Robotics, 2021. https://doi.org/10.1080/01691864.2021.1957014
Savery, R., Zahray, L., and Weinberg, G. "Before, Between, and After: Enriching Robot Communication Surrounding Collaborative Creative Activities." Frontiers in Robotics and AI, v.8, 2021. https://doi.org/10.3389/frobt.2021.662355
Zahray, L. and Savery, R. "Robot Gesture Sonification to Enhance Awareness of Robot Status and Enjoyment of Interaction." 29th IEEE International Conference on Robot & Human Interactive Communication, 2020. https://doi.org/10.1109/RO-MAN47096.2020.9223452

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

As co-robots become prevalent at home, at work, and in public spaces, they need to become trustworthy and socially believable agents if they are to be integrated into and accepted by society. The main goal of this project was to develop a new paradigm for trust building between humans and co-robots through novel emotional communication channels that utilize non-linguistic expressions such as vocal prosody and physical gestures.

To address this goal, a large dataset of vocal and instrumental audio phrases was created, labeled for emotional content by musicians, and validated in a large-scale user survey. Deep learning models were then trained on this dataset to generate audio that carries Emotional Musical Prosody (EMP). A new rule-based model for conveying emotion through gestures was also developed for multiple robotic platforms. The generated emotional audio was then integrated with the emotional robotic gesture generator on a number of robotic platforms.
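The report does not spell out the gesture rules, but a rule-based emotion-to-gesture model of the kind described might map a position in the valence/arousal plane to motion parameters such as speed, amplitude, and smoothness. The Python sketch below is a hedged illustration under that assumption; the parameterization and the specific rule constants are invented for clarity, not taken from the project's published model.

    # Hedged sketch: rule-based mapping from emotion to gesture parameters.
    # The valence/arousal inputs and all rule constants are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Gesture:
        speed: float       # joint-velocity scale, 0..1
        amplitude: float   # range-of-motion scale, 0..1
        smoothness: float  # 1.0 = fluid, 0.0 = jerky

    def gesture_for(valence: float, arousal: float) -> Gesture:
        """Map an emotion in [-1, 1] valence/arousal space to gesture parameters.

        Rules: higher arousal -> faster, larger movement; lower valence ->
        less smooth movement.
        """
        clamp = lambda x: max(0.0, min(1.0, x))
        return Gesture(
            speed=clamp(0.5 + 0.5 * arousal),
            amplitude=clamp(0.4 + 0.4 * arousal + 0.2 * valence),
            smoothness=clamp(0.5 + 0.5 * valence),
        )

    # e.g. an excited emotion yields fast, large, fluid gestures:
    print(gesture_for(valence=0.8, arousal=0.9))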

Multiple Human-Robot Interaction studies were conducted to evaluate the effectiveness of the integrated system in improving trust between humans and robots. The studies showed that our integrated sound-gestural system not only improved robotic emotion detection by humans but also significantly improved humans' trust in these robots. The system also yielded significant improvements in perceived robotic animacy, anthropomorphism, likability, and intelligence.
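Animacy, anthropomorphism, likability, and perceived intelligence are the dimensions of the widely used Godspeed questionnaire; the report does not name the instrument, so that correspondence is an inference. For readers unfamiliar with such evaluations, the sketch below shows the shape of a typical between-condition comparison, with entirely synthetic ratings standing in for the real study data.

    # Illustrative only: synthetic ratings, not the project's actual data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Hypothetical 5-point trust ratings, one value per participant.
    baseline = rng.integers(2, 5, size=30)   # robot without EMP and gestures
    with_emp = rng.integers(3, 6, size=30)   # robot with the integrated system

    t, p = stats.ttest_ind(with_emp, baseline)
    print(f"mean(with EMP)={with_emp.mean():.2f}, "
          f"mean(baseline)={baseline.mean():.2f}, t={t:.2f}, p={p:.4f}")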

Over the life of the project, two PhD students and eight MS students were trained and contributed to the development of the project's different modules. The graduate students were assisted by more than 20 undergraduate students through the Vertically Integrated Projects (VIP) course offered by Georgia Tech, in which undergraduates from all five colleges across campus work with professors and graduate students on their research.

Results from the project were disseminated through academic publications as well as performances, workshops, and concerts, where the general public learned about the importance of sound and gestures in creating trust between humans and robots. One of these performances, titled FOREST, received the international Falling Walls award in the category of Arts and Sciences, expanding the reach of the project to international audiences.


Last Modified: 12/25/2023
Modified by: Gil Weinberg
