
NSF Org: CMMI Division of Civil, Mechanical, and Manufacturing Innovation
Recipient: Georgia Tech Research Corporation
Initial Amendment Date: August 20, 2019
Latest Amendment Date: July 31, 2020
Award Number: 1925178
Award Instrument: Standard Grant
Program Manager: Alex Leonessa, CMMI Division of Civil, Mechanical, and Manufacturing Innovation, ENG Directorate for Engineering
Start Date: November 1, 2019
End Date: October 31, 2023 (Estimated)
Total Intended Award Amount: $669,912.00
Total Awarded Amount to Date: $803,892.00
Funds Obligated to Date: FY 2020 = $133,980.00
History of Investigator: Gil Weinberg (Principal Investigator)
Recipient Sponsored Research Office: 926 DALNEY ST NW ATLANTA GA US 30318-6395 (404)894-4819
Primary Place of Performance: 225 North Avenue NW Atlanta GA US 30332-0002
NSF Program(s): NRI-National Robotics Initiative
Primary Program Source: 01002021DB NSF RESEARCH & RELATED ACTIVITIES
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.041
ABSTRACT
This project will perform fundamental research contributing to the establishment of trust between humans and robots through the development of novel emotional communication channels. As co-robots become prevalent in homes, workplaces, and public spaces, they need to become trustworthy and socially believable agents if they are to be integrated into and accepted by society. The research will utilize the latest developments in Artificial Intelligence to gain knowledge about the role of non-linguistic expressions in trust building. Findings from studies of non-linguistic emotional expressions such as prosody and gestures in music, one of the most emotionally meaningful human experiences, will be implemented in a group of newly developed personal robots. User experiments will be conducted to explore humans' reactions to, and trust building with, these prosody-driven robots. Results of the study will lead to novel approaches for creating open and meaningful interactions between groups of humans and robots. The research will advance national prosperity by increasing engagement, relatability, and trust in large-scale human-robot interactive scenarios such as personal robots in private and public spaces, workplace training, education, and combat. The project takes an interdisciplinary approach that draws on fields such as cognitive science, communication, and music, while leading to progress in both science and engineering.
Prosodic features such as pitch, loudness, tempo, timbre, and rhythm bear a strong resemblance to musical features, which can inform a novel approach to generative, emotion-driven robotic prosody. The first phase of this project will focus on developing machine learning techniques to derive features from a newly created, emotionally labeled musical dataset. It will use these features to drive a non-linguistic robotic voice synthesizer that conveys emotional content and builds trust. The results of this study will be integrated with previous work on conveying robotic emotions through physical gestures. The second phase of the project will focus on user experiments that will study subjects' preferences for a variety of robotic emotional responses when interacting with a single robot. It will use the learned features to design a larger-scale robotic emotional contagion engine in an effort to improve and enrich emotion-driven human interaction with large groups of robots.
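As a rough illustration of the feature-extraction step described above (a minimal sketch, not the project's actual pipeline), the following Python snippet derives the named prosodic descriptors, pitch, loudness, tempo, and a brightness-based timbre proxy, from a single audio phrase using the open-source librosa library; the function name and return keys are hypothetical.

    # Minimal sketch (assumed, not from the project): prosodic/musical
    # feature extraction for one emotionally labeled audio phrase.
    import numpy as np
    import librosa

    def prosodic_features(path, sr=22050):
        y, sr = librosa.load(path, sr=sr)
        # Pitch contour via probabilistic YIN; NaN marks unvoiced frames.
        f0, _, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                fmax=librosa.note_to_hz("C7"), sr=sr)
        # Loudness proxy: frame-wise RMS energy.
        rms = librosa.feature.rms(y=y)[0]
        # Tempo estimate (BPM) from onset strength.
        tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
        # Timbre proxy: spectral centroid ("brightness").
        centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]
        return {
            "pitch_mean_hz": float(np.nanmean(f0)),
            "pitch_range_hz": float(np.nanmax(f0) - np.nanmin(f0)),
            "loudness_mean": float(rms.mean()),
            "tempo_bpm": float(np.atleast_1d(tempo)[0]),
            "brightness_hz": float(centroid.mean()),
        }

Feature vectors of this kind could condition a generative model or drive a non-linguistic voice synthesizer, as the paragraph above describes.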
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
As co-robots become prevalent in homes, workplaces, and public spaces, they need to become trustworthy and socially believable agents if they are to be integrated into and accepted by society. The main goal of this project was to develop a new paradigm for trust building between humans and co-robots through novel emotional communication channels that utilize non-linguistic expressions such as vocal prosody and physical gestures.
To address this goal, a large dataset of vocal and instrumental audio phrases was created, labeled for emotional content by musicians, and validated in a large-scale user survey. Deep learning models were then developed and trained on this dataset to generate audio phrases that carry Emotional Musical Prosody (EMP). A new rule-based model for conveying emotion through gestures was also developed for multiple robotic platforms. The generated emotional audio was then integrated with the robotic gesture generator on a number of robotic platforms.
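The rule-based gesture model itself is not specified in this report; a minimal sketch of the general idea, assuming a standard valence/arousal emotion representation and entirely hypothetical parameter names, might map an emotion coordinate to abstract motion parameters that a robot controller consumes:

    # Hypothetical rule-based emotion-to-gesture mapping (illustrative only,
    # not the project's actual model). Valence and arousal are in [-1, 1].
    from dataclasses import dataclass

    @dataclass
    class GestureParams:
        speed: float       # normalized joint velocity, 0..1
        amplitude: float   # movement range, 0..1
        openness: float    # posture expansion, 0..1
        smoothness: float  # 0 = abrupt, 1 = fluid

    def _clamp(x: float) -> float:
        return max(0.0, min(1.0, x))

    def emotion_to_gesture(valence: float, arousal: float) -> GestureParams:
        # Arousal drives energy: higher arousal -> faster, larger motion.
        # Valence drives quality: positive emotions -> open, fluid posture.
        return GestureParams(
            speed=_clamp(0.5 + 0.5 * arousal),
            amplitude=_clamp(0.5 + 0.4 * arousal),
            openness=_clamp(0.5 + 0.5 * valence),
            smoothness=_clamp(0.5 + 0.4 * valence),
        )

    # Example: "excited" (high valence/arousal) vs. "sad" (low/low).
    print(emotion_to_gesture(0.8, 0.9))
    print(emotion_to_gesture(-0.7, -0.6))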
Multiple Human-Robot Interaction studies were conducted to evaluate the effectiveness of the integrated system in improving trust between humans and robots. The studies showed that our integrated sound-and-gesture system not only improved humans' detection of robotic emotions but also significantly improved human trust in these robots. The system also produced significant improvements in perceived robotic animacy, anthropomorphism, likability, and intelligence.
Over the life of the project, two PhD students and eight MS students were trained and contributed to the development of different modules of the project. The graduate students were supported by more than 20 undergraduate students through Georgia Tech's Vertically Integrated Projects (VIP) class, in which undergraduates from all five colleges across campus work with professors and graduate students on their research.
Results from the project were disseminated through academic publications as well as performances, workshops, and concerts, where the general public learned about the importance of sound and gestures in creating trust between humans and robots. One of these performances, titled FOREST, received the international Falling Walls award in the Arts and Sciences category, expanding the project's reach to international audiences.
Last Modified: 12/25/2023
Modified by: Gil Weinberg