Award Abstract # 1421330
CHS: Small: Investigating an Interactive Computational Framework for Nonverbal Interpersonal Skills Training

NSF Org: IIS
Division of Information & Intelligent Systems
Recipient: UNIVERSITY OF SOUTHERN CALIFORNIA
Initial Amendment Date: July 16, 2014
Latest Amendment Date: August 8, 2016
Award Number: 1421330
Award Instrument: Standard Grant
Program Manager: William Bainbridge
IIS
 Division of Information & Intelligent Systems
CSE
 Directorate for Computer and Information Science and Engineering
Start Date: August 1, 2014
End Date: July 31, 2018 (Estimated)
Total Intended Award Amount: $499,992.00
Total Awarded Amount to Date: $523,992.00
Funds Obligated to Date: FY 2014 = $499,992.00
FY 2016 = $24,000.00
History of Investigator:
  • Stefan Scherer (Principal Investigator)
    scherer@ict.usc.edu
  • Louis-Philippe Morency (Co-Principal Investigator)
  • Ari Shapiro (Co-Principal Investigator)
Recipient Sponsored Research Office: University of Southern California
3720 S FLOWER ST FL 3
LOS ANGELES
CA  US  90033
(213)740-7762
Sponsor Congressional District: 34
Primary Place of Performance: University of Southern California
12015 Waterfront Drive
Playa Vista
CA  US  90094-2536
Primary Place of Performance Congressional District: 36
Unique Entity Identifier (UEI): G88KLJR3KYT5
Parent UEI:
NSF Program(s): HCC-Human-Centered Computing
Primary Program Source: 01001415DB NSF RESEARCH & RELATED ACTIVITIES
01001617DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 7367, 7923, 9251
Program Element Code(s): 736700
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

This project will advance social skill training by developing and evaluating a multimodal computational framework specifically targeted at improving public speaking performance through repeated training interactions with a virtual audience that perceives the speaker and produces meaningful nonverbal feedback. Interpersonal skills such as public speaking are essential assets in a wide variety of professions and in everyday life. The ability to communicate in social environments often greatly influences a person's career development and can help build relationships and resolve conflicts. Public speaking is not a skill that is innate to everyone, but it can be mastered through extensive training. Nonverbal communication is an important aspect of successful public speaking and interpersonal communication, yet it is difficult to train. This research effort will create the computational foundations to automatically assess interpersonal skill expertise and help people improve their skills using an interactive simulated virtual human framework.

There are three fundamental research goals: (1) developing a probabilistic computational model to learn temporal and multimodal dependencies and infer a speaker's public speaking performance from acoustic and visual nonverbal behavior; (2) understanding the design challenges of developing a simulated audience that is interactive, believable, and, most importantly, provides meaningful and training-relevant feedback to the speaker; and (3) understanding the impact of the virtual audience on speakers' performance and learning outcomes through a comparative study of alternative feedback and training approaches. This work builds upon the promising results of a pilot research study and upon a prototype virtual human infrastructure that allows the seamless integration of automatically modeled interpersonal skill expertise for flexible virtual human interaction and gesture control.

Virtual audiences have the great advantage that their appearance and behavioral patterns can be precisely programmed and systematically presented to pace the interaction. The algorithms developed as part of this research to model temporal and multimodal dependencies will have broad applicability outside the domain of public speaking assessment, including healthcare applications. The interactive virtual human technology may serve as the basis for novel teaching applications in a wide range of areas in the future, owing to its extensibility and availability. The programming code and data will be made available to the research community and students.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


Chollet, M. & Scherer, S. "Perception of Virtual Audiences" IEEE Computer Graphics and Applications, v.37, 2017, p.50
Chollet, M., Ghate, P., & Scherer, S. "A Generic Platform for Training Social Skills with Adaptative Virtual Agents" In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, 2018
Chollet, M., Ghate, P., Neubauer, C., & Scherer, S. "Influence of Individual Differences when Training Public Speaking with Virtual Audiences" In Proceedings of the 18th International Conference on Intelligent Virtual Agents, 2018
Scherer, S. "Multimodal behavior analytics for interactive technologies" KI - Künstliche Intelligenz, 2016

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

Good public speaking skills are the foundation of strong and effective communication, which is critical in many professions and in everyday life. The ability to speak publicly requires extensive training and practice. Recent technological developments enable new approaches to public speaking training that allow users to practice in a safe and engaging environment. We explored feedback strategies for public speaking training based on an interactive virtual audience paradigm. A virtual audience is composed of interactive and reactive digital representations of humans. Within this project, we performed extensive evaluations based on self-assessment questionnaires, expert assessments, and objectively annotated measures such as eye contact and avoidance of pause fillers. Our experiments showed that the interactive virtual audience can be used to successfully train public speaking skills. Specifically, we showed that training with the audience increases student engagement and improves public speaking skills as judged by experts. We provided both real-time and actionable post-interaction feedback to users to achieve optimal learning outcomes. In addition to showing successful training outcomes, we developed audiovisual machine learning methods to automatically assess speaker performance and public speaking anxiety. Lastly, we expanded the technology to additionally enable training of doctors' bedside manner and job interviewing skills (in collaboration with Drs. Talbot and Rizzo at the USC Institute for Creative Technologies).


Last Modified: 11/28/2018
Modified by: Stefan Scherer
