Award Abstract # 1524782
CHS: Small: Digitally Mediated Multi-party Communication: Acquisition, Modeling, and Evaluation
NSF Org: IIS (Division of Information & Intelligent Systems)
Recipient: UNIVERSITY OF HOUSTON SYSTEM
Initial Amendment Date: August 19, 2015
Latest Amendment Date: May 10, 2016
Award Number: 1524782
Award Instrument: Standard Grant
Program Manager: Ephraim Glinert, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering
Start Date: September 1, 2015
End Date: August 31, 2020 (Estimated)
Total Intended Award Amount: $397,560.00
Total Awarded Amount to Date: $413,560.00
Funds Obligated to Date: FY 2015 = $397,560.00; FY 2016 = $16,000.00
History of Investigator: Zhigang Deng (Principal Investigator), zhigang.deng@gmail.com
Recipient Sponsored Research Office: University of Houston, 4300 MARTIN LUTHER KING BLVD, HOUSTON, TX, US 77204-3067, (713)743-5773
Sponsor Congressional District: 18
Primary Place of Performance: University of Houston, 4800 Calhoun Road, Houston, TX, US 77204-3010
Primary Place of Performance Congressional District: 18
Unique Entity Identifier (UEI): QKWEF8XLMTT3
Parent UEI:
NSF Program(s): HCC-Human-Centered Computing
Primary Program Source: 01001516DB NSF RESEARCH & RELATED ACTIVIT; 01001617DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 7923, 7367, 9251
Program Element Code(s): 736700
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT

Online persistent and shared multi-user virtual environments (MUVEs), with thousands or even millions of users, constitute an emerging and rapidly growing field that is likely to dramatically impact higher education in the near future. Direct player-to-player interaction, and the networks that players develop in the virtual world, are central to the unique experience and success of these MUVEs. However, despite their increasing visual realism, the immersive "social functionality" in current MUVEs remains rudimentary at best, because real-world conversations and social interactions have not yet been faithfully modeled. Extending existing one-to-one conversation modeling approaches to digitally mediated multi-party conversations and interactions in virtual worlds is technically challenging, owing to significant differences in nonverbal behavior and interaction patterns. The automated generation of digitally mediated multi-party communication and interaction has thus become a major technical barrier that limits the depth and usefulness of many online virtual worlds and virtual reality applications. In this research, the PI will tackle this issue by designing new algorithms and systems, driven by live speech from users in different locations, that automatically generate synchronized multi-modal conversational gestures on embodied avatars, including head/eye movement, lip movement, hand gestures, and body posture. Project outcomes will facilitate the widespread adoption of avatar and tele-immersion technology in applications where computer-mediated communication plays a role, including education, commerce, health, and engineering. The PI will make the acquired high-fidelity multi-modal, multi-party conversational behavior datasets available to the scientific community at large for use in future research.
This ambitious project will focus on three inter-related research thrusts that are aligned with the PI's research expertise in computer animation, virtual humans, and human-computer interaction. (1) Automated generation of realistic talking avatars from live speech input alone: the PI will design efficient, automated schemes to generate on-the-fly talking avatars from live speech input by fusing established social exchange rules with data-driven statistical modeling. (2) Automated generation of believable listening avatars with immersive social exchanges: based on in-depth statistical analysis of real-life multi-party conversation data, the PI will design data-driven schemes for generating tightly coordinated gazes, head movements, and body posture shifts on listening avatars, as well as social gaze exchanges between listening peers. (3) Comparative evaluation of the proposed avatar-mediated multi-party conversation and interaction approach: the robustness and effectiveness of the proposed framework will be evaluated by integrating it into an in-house research testbed (i.e., a simplified MUVE prototype).
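As a loose illustration of the first thrust only (not the PI's actual method), a live-speech-driven scheme might fuse a hand-authored social rule with a data-driven mapping from prosodic features to head motion. Everything below, including the frame size, thresholds, and angle constants, is an illustrative assumption:

```python
# Hypothetical sketch: live-speech-driven head-nod generation for a talking
# avatar, fusing a rule-based term with a data-driven prosody term.
# All names and constants are illustrative, not taken from the project.
import numpy as np

FRAME = 160  # 10 ms frames at an assumed 16 kHz sampling rate

def frame_energy(speech, frame=FRAME):
    """Short-time RMS energy per frame (a crude prosodic feature)."""
    n = len(speech) // frame
    frames = speech[:n * frame].reshape(n, frame)
    return np.sqrt((frames ** 2).mean(axis=1))

def head_pitch(speech, w=0.5):
    """Blend a rule-based nod (nod on energy bursts) with a statistical
    term (pitch angle tracking the smoothed energy envelope); w in [0,1]."""
    e = frame_energy(speech)
    e = e / (e.max() + 1e-8)                      # normalize to [0, 1]
    # Rule: a fixed-amplitude downward nod whenever energy crosses a threshold.
    rule = np.where(e > 0.6, -8.0, 0.0)           # degrees
    # "Learned" term: here just a scaled, smoothed energy envelope.
    kernel = np.ones(5) / 5.0
    data_driven = -12.0 * np.convolve(e, kernel, mode="same")
    return w * rule + (1.0 - w) * data_driven     # per-frame pitch angle

t = np.linspace(0, 1, 16000)
speech = np.sin(2 * np.pi * 120 * t) * (t > 0.5)  # silence, then voiced
angles = head_pitch(speech)
print(len(angles), angles[:5].round(2))
```

The blend weight `w` stands in for the project's proposed fusion of social exchange rules with statistical modeling; in a real system the data-driven term would be a trained model, not an energy envelope.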
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

Aobo Jin, Qiang Fu, and Zhigang Deng
"Contour-based 3D Modeling through Joint Embedding of Shapes and Contours"
Proceeding of ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D) 2020
, 2020
, p.9:1
https://doi.org/10.1145/3384382.3384518
Aobo Jin, Qixin Deng, Yuting Zhang, and Zhigang Deng
"A Deep Learning based Model for Head and Eye Motion Generation in Three-party Conversations"
Proceeding of the ACM on Computer Graphics and Interactive Techniques
, v.2
, 2019
, p.9:1
10.1145/3340250
Bailin Yang, Tianxiang Wei, Xianyong Fang, Zhigang Deng, Frederick W.B. Li, Yun Liang, and Xun Wang
"A Color-Pair based Approach for Accurate Color Harmony Estimation"
Computer Graphics Forum
, v.38
, 2019
, p.481
https://doi.org/10.1111/cgf.13854
Binh H. Le, and Zhigang Deng
"Interactive Cage Generation for Mesh Deformation"
Proceeding of ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games 2017
, 2017
, p.3:1
10.1145/3023368.3023369
Deng, Qixin and Ma, Luming and Jin, Aobo and Bi, Huikun and Le, Binh Huy and Deng, Zhigang
"Plausible 3D Face Wrinkle Generation Using Variational Autoencoders"
IEEE Transactions on Visualization and Computer Graphics
, v.28
, 2022
https://doi.org/10.1109/TVCG.2021.3051251
Guoliang Luo, Zhigang Deng, Xiaogang Jin, Xin Zhao, Wei Zeng, Wenqiang Xie, and Hyewon Seo
"3D mesh animation compression based on adaptive spatio-temporal segmentation"
Proceeding of ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games 2019
, 2019
, p.10:1
10.1145/3306131.3317017
Guoliang Luo, Zhigang Deng, Xiaogang Jin, Xin Zhao, Wei Zeng, Wenqiang Xie, and Hyewon Seo
"Spatio-temporal Segmentation based Adaptive Compression of Dynamic Mesh Sequences"
ACM Transactions on Multimedia Computing Communication Applications
, v.16
, 2020
, p.14:1
https://doi.org/10.1145/3377475
Hao Jiang, Zhigang Deng, Mingliang Xu, Xiangjun He, Tianlu Mao, and Zhaoqi Wang
"An Emotion Evolution based Model for Group Behavior Simulation"
Proceeding of ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games 2018
, 2018
, p.10:1
10.1145/3190834.3190844
Huikun Bi, Tianlu Mao, Zhaoqi Wang, and Zhigang Deng
"A Data-driven Model for Lane-changing in Traffic Simulation"
Proceeding of ACM SIGGRAPH/Eurographics Symposium on Computer Animation 2016
, 2016
, p.149
10.5555/2982818.2982839
Huikun Bi, Tianlu Mao, Zhaoqi Wang, and Zhigang Deng
"A Deep Learning based Framework for Intersectional Traffic Simulation and Editing"
IEEE Transactions on Visualization and Computer Graphics
, v.26
, 2020
, p.2335
10.1109/TVCG.2018.2889834
Huikun Bi, Zhong Fang, Tianlu Mao, Zhaoqi Wang, and Zhigang Deng
"Joint Prediction of Kinematic Trajectories in Vehicle-Pedestrian-Mixed Scenes"
Proceeding of IEEE International Conference on Computer Vision (ICCV) 2019
, 2019
, p.10383
10.1109/ICCV.2019.01048
Jiamin Xu, Weiwei Xu, Yin Yang, Zhigang Deng, and Hujun Bao
"Online Global Non-rigid Registration for 3D Object Reconstruction Using Consumer-level Depth Cameras"
Computer Graphics Forum
, v.37
, 2018
, p.1
https://doi.org/10.1111/cgf.13542
Jin, Aobo and Deng, Qixin and Deng, Zhigang
"A Live Speech-Driven Avatar-Mediated Three-Party Telepresence System: Design and Evaluation"
PRESENCE: Virtual and Augmented Reality
, v.29
, 2020
https://doi.org/10.1162/PRES_a_00358
Luming Ma, and Zhigang Deng
"Real-time Face Video Swapping From A Single Portrait"
Proceeding of ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D) 2020
, 2020
, p.3:1
https://doi.org/10.1145/3384382.3384519
Luming Ma, and Zhigang Deng
"Real-time Facial Expression Transformation for Monocular RGB Video"
Computer Graphics Forum
, v.38
, 2019
, p.470
https://doi.org/10.1111/cgf.13586
Luming Ma, and Zhigang Deng
"Real-time Hierarchical Facial Performance Capture"
Proceeding of ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games 2019
, 2019
, p.11:1
10.1145/3306131.3317016
Mingyuan Li, Mingliang Xu, Weiwei Xu, Zhigang Deng, Yin Yang, and Kun Zhou
"Interactive Mechanism Modeling from Multi-view Images"
ACM Transactions on Graphics
, v.35
, 2016
, p.236:1
10.1145/2980179.2982425
Qianwen Chao, Huikun Bi, Weizi Li, Tianlu Mao, Zhaoqi Wang, Ming C. Lin, and Zhigang Deng
"A Survey on Visual Traffic Simulation: Models, Evaluations, and Applications in Autonomous Driving"
Computer Graphics Forum
, v.39
, 2020
, p.287
https://doi.org/10.1111/cgf.13803
Qianwen Chao, Zhigang Deng, Jiaping Ren, Qianqian Ye, and Xiaogang Jin
"Realistic Data-Driven Traffic Flow Animation using Texture Synthesis"
IEEE Transactions on Visualization and Computer Graphics
, v.24
, 2018
, p.1167
10.1109/TVCG.2017.2648790
Qianwen Chao, Zhigang Deng, Yangyi Xiao, Dunhang He, Qiguang Miao, and Xiaogang Jin
"Dictionary-based Fidelity Measure for Virtual Traffic"
IEEE Transactions on Visualization and Computer Graphics
, v.26
, 2020
, p.1490
10.1109/TVCG.2018.2873695
Xifeng Gao, Jin Huang, Kaoji Xu, Derong Pan, Zhigang Deng, and Guoning Chen
"Evaluating Hex-mesh Quality Metrics via Correlation Analysis"
Computer Graphics Forum
, v.36
, 2017
, p.105
10.1111/cgf.13249
Xifeng Gao, Wenping Wang, Zhigang Deng, Daniele Panozzo, and Guoning Chen
"Robust Structure Simplification for Hex Re-meshing"
ACM Transactions on Graphics
, v.36
, 2017
, p.185:1
10.1145/3130800.3130848
Xuequan Lu, Honghua Chen, Sai-Kit Yeung, Zhigang Deng, and Wenzhi Chen
"Unsupervised Articulated Skeleton Learning from Point Set Sequences Captured by a Single Depth Camera"
Proceeding of Thirty-second AAAI Conference on Artificial Intelligence (AAAI) 2018
, 2018
, p.7226
978-1-57735-800-8
Xuequan Lu, Zhigang Deng, and Wenzi Chen
"A Robust Scheme for Feature-Preserving Mesh Denoising"
IEEE Transactions on Visualization and Computer Graphics
, v.22
, 2016
, p.1181
10.1109/TVCG.2015.2500222
Xuequan Lu, Zhigang Deng, Wenzi Chen, Sai-Kit Yeung, Jun Luo, and Ying He
"3D Articulated Skeleton Extraction Using a Single Consumer-Grade Depth Camera"
Computer Vision and Image Understanding
, v.188
, 2019
, p.102792
https://doi.org/10.1016/j.cviu.2019.102792
Yu Ding, Lei Shi, and Zhigang Deng
"Low-level Characterization of Expressive Head Motion through Frequency Domain Analysis"
IEEE Transactions on Affective Computing
, v.11
, 2020
, p.405
10.1109/TAFFC.2018.2805892
Yu Ding, Lei Shi, and Zhigang Deng
"Perceptual Enhancement of Emotional Mocap Head Motion: An Experimental Study"
Proceeding of International Conference on Affective Computing and Intelligent Interaction 2017 (ACII)
, 2017
, p.242
10.1109/ACII.2017.8273607
Yu Ding, Yuting Zhang, Meihua Xiao, and Zhigang Deng
"A Multifaceted Study on Eye Contact based Speaker Identification in Three-party Conversations"
Proceeding of ACM SIGCHI International Conference on Human Factors in Computing Systems 2017
, 2017
, p.3011
10.1145/3025453.3025644
Yutong Wang, Xiaowei Yue, Xiaogang Jin, and Zhigang Deng
"Creative Virtual Tree Modeling through Hierarchical Topology-preserving Blending"
IEEE Transactions on Visualization and Computer Graphics
, v.23
, 2017
, p.2521
10.1109/TVCG.2016.2636187
PROJECT OUTCOMES REPORT

Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the
Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or
recommendations expressed in this Report are those of the PI and do not necessarily reflect the
views of the National Science Foundation; NSF has not approved or endorsed its content.
As the main research outcomes of this project, the PI and the team designed the following novel algorithms and systems:
(i) To investigate the contribution of eye contact to speaker identification in three-party conversations, a data-driven framework was proposed to model the occurrence of eye contact during uttering and to distinguish the speaker from the listeners. The study also provides fresh quantitative evidence that eye contact offers an objective cue for reliably identifying the speaker in three-party conversations.
(ii) To investigate how head motion contributes to the perception of emotion in an utterance, inter-related objective analyses and perceptual experiments were conducted to quantify the link between perceived emotion and various static/dynamic head movement features. The study shows that humans cannot reliably perceive emotion from head motion alone, and that they are sensitive both to a static feature (the averaged up-down rotation angle) and to dynamic features (which reflect the fluidity and speed of movement).
(iii) A novel hierarchical method was developed to reconstruct high-resolution facial geometry and appearance in real time, capturing an individual-specific face model with fine-scale details from monocular RGB video input.
(iv) A novel deep-learning-based framework was developed to generate realistic three-party head and eye motions from novel acoustic speech input together with speaker marking (i.e., the speaking time of each interlocutor).
(v) A novel real-time end-to-end system was developed for facial expression transformation without the need for any driving source; it can directly transform the expression of a given monocular face video into a new, user-specified expression.
(vi) A live-speech-driven, avatarized, three-party telepresence system was developed and evaluated, through which three remote users, embodied as avatars in a shared 3D immersive virtual world, can carry out natural three-party telecommunication.
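To illustrate the kind of interface the speech-driven head/eye motion generation outcome implies (a sketch only, not the published model or its architecture), a learned generator could map per-frame acoustic features plus a speaker marking to motion parameters for all three interlocutors. The feature dimensions, network shape, and output layout below are all assumptions, and the weights are random placeholders:

```python
# Illustrative interface sketch for speech-driven three-party head/eye motion
# generation. Dimensions and architecture are assumptions; weights are random.
import numpy as np

rng = np.random.default_rng(0)
N_MFCC, N_PARTY, N_OUT = 13, 3, 6   # 6 = head yaw/pitch/roll + eye yaw/pitch/blink

# Placeholder parameters of a one-hidden-layer network (assumed shapes).
W1 = rng.normal(0, 0.1, (N_MFCC + N_PARTY, 32))
W2 = rng.normal(0, 0.1, (32, N_PARTY * N_OUT))

def generate_motion(mfcc_frames, speaker_ids):
    """mfcc_frames: (T, 13) acoustic features; speaker_ids: (T,) in {0, 1, 2},
    marking who is speaking in each frame. Returns (T, 3, 6) motion params,
    one row of head/eye parameters per interlocutor per frame."""
    T = len(speaker_ids)
    speaker_onehot = np.eye(N_PARTY)[speaker_ids]             # (T, 3)
    x = np.concatenate([mfcc_frames, speaker_onehot], axis=1)
    h = np.tanh(x @ W1)
    y = np.tanh(h @ W2)                                       # bounded outputs
    return y.reshape(T, N_PARTY, N_OUT)

T = 50
motion = generate_motion(rng.normal(size=(T, N_MFCC)), rng.integers(0, 3, T))
print(motion.shape)   # (50, 3, 6)
```

The key point carried over from the outcome description is the conditioning: the generator sees both the acoustic signal and who is currently speaking, so listener avatars can be animated differently from the speaker avatar.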
Through research participation in this project, more than eight PhD students, postdocs, and undergraduate students have been trained. Most of them now work at major IT companies and universities in the US and around the world. More than 30 peer-reviewed research articles have been published in major journals and conferences in computer graphics and human-computer interaction. In addition, many local high school students have been offered summer internships or lab tours in the PI's group.
Last Modified: 09/07/2020
Modified by: Zhigang Deng