
NSF Org: IIS Division of Information & Intelligent Systems
Recipient:
Initial Amendment Date: July 16, 2014
Latest Amendment Date: August 8, 2016
Award Number: 1421330
Award Instrument: Standard Grant
Program Manager: William Bainbridge (IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering)
Start Date: August 1, 2014
End Date: July 31, 2018 (Estimated)
Total Intended Award Amount: $499,992.00
Total Awarded Amount to Date: $523,992.00
Funds Obligated to Date: FY 2016 = $24,000.00
History of Investigator:
Recipient Sponsored Research Office: 3720 S FLOWER ST FL 3, LOS ANGELES, CA, US 90033, (213) 740-7762
Sponsor Congressional District:
Primary Place of Performance: 12015 Waterfront Drive, Playa Vista, CA, US 90094-2536
Primary Place of Performance Congressional District:
Unique Entity Identifier (UEI):
Parent UEI:
NSF Program(s): HCC-Human-Centered Computing
Primary Program Source: 01001617DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s):
Program Element Code(s):
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
This project will advance social skill training by developing and evaluating a multimodal computational framework specifically targeted at improving public speaking performance through repeated training interactions with a virtual audience that perceives the speaker and produces meaningful nonverbal feedback. Interpersonal skills such as public speaking are essential assets in a wide variety of professions and in everyday life. The ability to communicate in social environments often greatly influences a person's career development and can help build relationships and resolve conflicts. Public speaking is not a skill that is innate to everyone, but it can be mastered through extensive training. Nonverbal communication is an important aspect of successful public speaking and interpersonal communication, yet it is difficult to train. This research effort will create the computational foundations to automatically assess interpersonal skill expertise and help people improve their skills using an interactive simulated virtual human framework.
There are three fundamental research goals: (1) developing a probabilistic computational model to learn temporal and multimodal dependencies and infer a speaker's public speaking performance from acoustic and visual nonverbal behavior; (2) understanding the design challenges of developing a simulated audience that is interactive, believable, and, most importantly, provides meaningful and training-relevant feedback to the speaker; and (3) understanding the impact of the virtual audience on speakers' performance and learning outcomes through a comparative study investigating alternative feedback and training approaches. This work builds upon the promising results of a pilot research study and upon a prototype virtual human infrastructure that allows seamless integration of automatically modeled interpersonal skill expertise for flexible virtual human interaction and gesture control.
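To make goal (1) concrete, multimodal performance inference combines acoustic cues (e.g., pitch variation, pause fillers) and visual cues (e.g., eye contact, gestures) into a single performance estimate. The sketch below is a purely illustrative late-fusion example; the feature names, weights, and logistic scoring rule are assumptions for illustration and are not the project's actual model.

```python
# Hypothetical illustration of late fusion of multimodal nonverbal features
# into a single public-speaking performance score. Feature names, weights,
# and the logistic scoring rule are illustrative assumptions, not the
# project's actual probabilistic model.
import math

def performance_score(acoustic, visual):
    """Combine per-modality features (dicts mapping feature name to a
    normalized value in [0, 1]) with fixed illustrative weights, then
    squash the weighted sum to (0, 1) with a logistic function."""
    weights = {
        "pitch_variation": 0.8,    # lively intonation helps
        "pause_filler_rate": -1.2, # frequent "um"/"uh" hurts
        "eye_contact_ratio": 1.0,  # looking at the audience helps
        "gesture_rate": 0.5,       # moderate gesturing helps
    }
    z = sum(weights[name] * value
            for feats in (acoustic, visual)
            for name, value in feats.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic squashing

score = performance_score(
    acoustic={"pitch_variation": 0.7, "pause_filler_rate": 0.2},
    visual={"eye_contact_ratio": 0.8, "gesture_rate": 0.4},
)
```

A learned model would estimate such weights from annotated recordings and also capture temporal dependencies between cues, which this static sketch deliberately omits.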
Virtual audiences have the great advantage that their appearance and behavioral patterns can be precisely programmed and systematically presented to pace the interaction. The algorithms developed as part of this research to model temporal and multimodal dependencies will have a broad applicability outside the domain of public speaking assessment, including healthcare applications. The interactive virtual human technology may serve as the basis for novel teaching applications in a wide range of areas in the future, due to its extensibility and availability. The programming code and data will be made available to the research community and students.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
Good public speaking skills enable strong and effective communication, which is critical in many professions and in everyday life. The ability to speak publicly requires a great deal of training and practice. Recent technological developments enable new approaches for public speaking training that allow users to practice in a safe and engaging environment. We explored feedback strategies for public speaking training based on an interactive virtual audience paradigm. A virtual audience is composed of interactive and reactive digital representations of humans. Within this project, we performed extensive evaluations based on self-assessment questionnaires, expert assessments, and objectively annotated measures, such as eye contact and avoidance of pause fillers. Our experiments showed that the interactive virtual audience can be used to successfully train public speaking skills. Specifically, we showed that training with the virtual audience increases student engagement as well as public speaking skill as judged by experts. We provide both real-time and actionable post-interaction feedback to users to achieve optimal learning outcomes. In addition to demonstrating successful training outcomes, we developed audiovisual machine learning methods to automatically assess speaker performance and public speaking anxiety. Lastly, we expanded the technology to additionally enable training of doctors' bedside manner and job interviewing skills (in collaboration with Drs. Talbot and Rizzo at the USC Institute for Creative Technologies).
Last Modified: 11/28/2018
Modified by: Stefan Scherer