Award Abstract # 1237134
SHB: Type II (INT): Synthesizing Self-Model and Mirror Feedback Imageries with Applications to Behavior Modeling for Children with Autism

NSF Org: IIS
Division of Information & Intelligent Systems
Recipient: UNIVERSITY OF KENTUCKY RESEARCH FOUNDATION, THE
Initial Amendment Date: September 6, 2012
Latest Amendment Date: January 22, 2014
Award Number: 1237134
Award Instrument: Standard Grant
Program Manager: Wendy Nilsen
wnilsen@nsf.gov
 (703)292-2568
IIS
 Division of Information & Intelligent Systems
CSE
 Directorate for Computer and Information Science and Engineering
Start Date: October 1, 2012
End Date: December 31, 2017 (Estimated)
Total Intended Award Amount: $798,912.00
Total Awarded Amount to Date: $798,912.00
Funds Obligated to Date: FY 2012 = $798,912.00
History of Investigator:
  • Sen-ching Cheung (Principal Investigator)
    cheung@engr.uky.edu
  • Ramesh Bhatt (Co-Principal Investigator)
  • Neelkamal Soares (Co-Principal Investigator)
  • Lisa Ruble (Co-Principal Investigator)
Recipient Sponsored Research Office: University of Kentucky Research Foundation
500 S LIMESTONE
LEXINGTON
KY  US  40526-0001
(859)257-9420
Sponsor Congressional District: 06
Primary Place of Performance: University of Kentucky Research Foundation
500 S Limestone 109 Kinkead Hall
Lexington
KY  US  40526-0001
Primary Place of Performance Congressional District: 06
Unique Entity Identifier (UEI): H1HYA8Z1NTM5
Parent UEI:
NSF Program(s): Smart and Connected Health
Primary Program Source: 01001213DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 8018, 8062, 9150
Program Element Code(s): 801800
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

This project is an interdisciplinary, integrated research and education program to develop novel technologies for manipulating mirror images, aimed at studying and enabling behavioral modeling of children with autism spectrum disorder (ASD). Central to the research is a "virtual-mirror" device that combines a network of calibrated depth and visual sensors to render a viewpoint-dependent, dynamic view of an arbitrarily shaped virtual mirror on a room-size see-through display. Through multimodal and spatially diverse sensors, the proposed system provides high-fidelity, non-intrusive capture of eye gaze, facial expression, body pose, body movement, and other human behavioral patterns. New multimedia processing algorithms will be developed to transfer 2D and 3D physical appearances, as well as behaviors, from a source individual to a target individual using limited training data of the target, for rendering on regular displays and on the virtual mirror.

Children with ASD typically lack interest in social interactions, but appear to be highly interested in their own image in mirrors and in others imitating their actions. The software and hardware systems developed in this project can provide unprecedented capability for creating novel behaviors of self in both traditional visual media and immersive devices. The system is expected to give therapists, teachers, and caretakers greater flexibility in creating material for video-modeling and self-modeling therapy. Deployment over the web lowers the technology barrier, increases access to evidence-based treatments, and promotes consistent practice of behavioral modeling in the home and beyond the clinic. By combining visual feedback with real-time rendering of new behaviors, the virtual mirror is expected to deliver more effective behavioral modeling for children with ASD. Success would indicate that the self/other system in ASD can be modified, and would suggest that a fundamental deficit in ASD is subject to environmental manipulation. The proposed research program is a collaborative effort of PIs from electrical engineering, psychology, medicine, and education. The educational objective is to develop a research program in which problem identification, data sharing, and problem solving occur collaboratively from the very beginning. Planned outreach programs, including device demonstrations at pediatric clinics, television documentaries of research results, and visits to high schools in rural districts, will promote the use of technology in solving important health and societal problems and broaden the participation of under-represented groups in STEM activities.

Information about this project, including but not limited to the description, personnel, acknowledgements, latest results, and publications, will be posted at http://www.vis.uky.edu/mialab/NSF_autism.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

Note:  When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

(Showing: 1 - 10 of 17)
Cheung, S.-C. "Integrating Multimedia into Autism Intervention" IEEE Multimedia Magazine , v.22 , 2015 , p.4 1070-986X
H. Sajid and S.-C. Cheung "Universal Multimode Background Subtraction" IEEE Transactions on Image Processing , v.26 , 2017 , p.3249
Ju Shen, Sen-ching Cheung, and Jian Zhao "Virtual Mirror By Fusing Depth and Color Cameras" IEEE Transactions on Image Processing , v.22 , 2013 , p.1-16 10.1109/TIP.2013.2268941
Liu, R., J. Shen, Q. Sun, J. Yang, and S.-C. Cheung "Cascaded Pose Regression Revisited: Face Alignment in Videos" IEEE Third International Conference on Multimedia Big Data (BigMM), , 2017 , p.291
Luo, Y., S.-C. Cheung, T. Pignata, R. Lazzeretti, and M. Barni "Anonymous Subject Identification and Privacy Information Management in Video Surveillance" International Journal of Information Security , 2017 1615-5270
Sajid, H. and S.-C. Cheung "VSig: Hand-Gestured Signature Recognition and Authentication with Wearable Camera" IEEE International Workshop on Information Forensics and Security (WIFS 2015) , 2015 10.1109/WIFS.2015.7368566
Sajid, H., S.-C. Cheung, and N. Jacobs "Appearance based Background Subtraction for PTZ Cameras" Signal Processing: Image Communication, Elsevier , v.47 , 2016 , p.417 0923-5965
Shen, Ju, Changpeng Ti, Anusha Raghunathan, Sen-ching S. Cheung, and Rita Patel "Automatic video self modeling for voice disorder" Multimedia Tools and Applications , 2014 , p.1-23 10.1007/s11042-014-2015-1
Su, P.-C., J. Shen, W. Xu, S.-C. Cheung, and Y. Luo "A Fast and Robust Extrinsic Calibration for RGB-D Camera Networks" Sensors , 2017
Su, Po-Chang, Wanxin Xu, Ju Shen, and S.-C. Cheung "Real-time rendering of physical scene on virtual curved mirror with RGB-D camera networks" IEEE International Conference on Multimedia & Expo Hot3D Workshop , 2017 , p.79
Wang, S., S.-C. Cheung, and H. Sajid "Visual Bubble: Protecting Privacy in Wearable Cameras" IEEE Consumer Electronics Magazine , v.7 , 2018

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

Autism Spectrum Disorder (ASD) is the most prevalent developmental disorder among children in the US. The Centers for Disease Control and Prevention estimates that ASD currently affects 1 in 68 children. Hallmarks of the condition are social and communicative impairments: the former include poor eye contact and a lack of shared enjoyment and social reciprocity, while the latter lead to significant difficulty in producing speech and using gestures to communicate with others. These impairments can create life-long obstacles to developing social relationships and maintaining communicative engagement with others. If left untreated, most children with ASD will not be able to live independently as adults. Although early behavioral and educational interventions are effective in addressing many of these deficits, such interventions traditionally require significant effort from parents, therapists, and teachers.


In this NSF project, our interdisciplinary team at the University of Kentucky has developed novel multimedia-based instruction (MBI) technologies for children with ASD. MBI presents concepts in a systematic, simple format, and it effectively gains and keeps the child’s attention while providing a less emotionally laden way to learn. Many children with ASD find the explicit routines in MBI comforting, as predictable interaction patterns circumvent difficult social demands. From the standpoint of delivering interventions, MBI offers portability across different learning environments, accessibility to diverse populations, controlled presentation of instructional stimuli, and customization based on individual needs. Our project aimed to develop tools that give therapists, teachers, and caretakers greater flexibility to adopt MBI in their programs. Deploying such tools should lower the technology barrier, increase access to evidence-based treatments, and promote consistent practice of behavioral modeling in the home and other settings beyond the clinic.


The two main prototypes developed in this project are MEBook and Virtual Mirror. MEBook is a social narrative tool combined with serious games for social-greeting training (Fig. 1). While most of us do not think much about saying “good morning” or “goodbye,” greetings involve all the key elements of social interaction: proper eye gaze, hand movement, and vocalization, all of which are difficult for most children with ASD to master. MEBook combines elements from three evidence-based interventions: social narratives, video self-modeling (VSM), and reinforcement. A novel component of MEBook is the use of an RGB-depth sensor to implement a gesture-based video game that serves the dual purpose of reinforcing the learning and creating raw material for VSM. A human-subject study of MEBook with children with ASD aged 7 to 12 showed that subjects demonstrated marked improvements in both vocalization and eye contact, and maintained these gains even after the end of the MEBook intervention.
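The gesture-recognition idea behind such a game can be illustrated with a small sketch. The function below and its thresholds are illustrative assumptions, not code from MEBook: it flags a waving gesture by counting direction changes in the tracked hand's lateral position, the kind of per-frame joint stream an RGB-depth sensor's skeleton tracker provides.

```python
import numpy as np

def detect_wave(hand_x, min_crossings=3, amplitude=0.15):
    """Flag a waving gesture from a stream of lateral hand positions (meters).

    A wave is detected when the hand travels far enough side to side and
    switches sides at least `min_crossings` times about its mean position.
    """
    x = np.asarray(hand_x, dtype=float)
    centered = x - x.mean()
    if centered.max() - centered.min() < 2 * amplitude:
        return False                      # not enough side-to-side travel
    # Keep only samples clearly left or right of center, then count how
    # many times the hand switches sides.
    signs = np.sign(centered[np.abs(centered) > amplitude / 2])
    crossings = int(np.count_nonzero(np.diff(signs) != 0))
    return crossings >= min_crossings

# Three full left-right sweeps vs. a hand held still:
wave = 0.2 * np.sin(np.linspace(0, 6 * np.pi, 120))
print(detect_wave(wave))           # -> True
print(detect_wave(np.zeros(60)))   # -> False
```

A real system would add temporal windowing and per-child calibration of the thresholds, but the structure (center, threshold, count direction changes) is a common baseline for oscillatory gestures.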


The Virtual Mirror system extends beyond MEBook and other VSM systems, which rely solely on genuine video recordings of the learner's target behaviors; the fidelity of those behaviors is often poor because the learner has not yet mastered the skills. Virtual Mirror is a room-based augmented mirror display in which the learner can see, in real time, a computer-generated rendition of himself or herself engaging in the target behavior at a slightly higher skill level. It is a complex system with many innovations that break new ground in computer vision and graphics. These innovations include (1) denoising and segmentation algorithms for both color and depth cameras (Fig. 2), (2) calibration and rendering systems that turn a network of color and depth cameras into a 3D virtual environment (Fig. 3), (3) planar and curved mirror simulators (Fig. 4), and (4) a behavior-transfer system that can transfer facial expression, eye gaze, and body pose from one image to another (Fig. 5). Human-subject studies are underway to measure the effectiveness of the Virtual Mirror system in helping children with ASD with various learning tasks.
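As a rough illustration of the geometry a planar mirror simulator must handle, the core operation is reflecting 3D points (or the viewer's eye position) across the mirror plane, then re-rendering the captured scene from the reflected viewpoint. The sketch below uses hypothetical names and is not code from the Virtual Mirror system:

```python
import numpy as np

def reflect_points(points, normal, offset):
    """Reflect 3D points across the mirror plane {x : normal . x = offset}."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)             # ensure a unit normal
    p = np.asarray(points, dtype=float)
    dist = p @ n - offset                 # signed distance of each point to the plane
    return p - 2.0 * dist[:, None] * n

# A viewer at (0, 0, 2) facing a mirror lying in the plane z = 0:
viewer = np.array([[0.0, 0.0, 2.0]])
print(reflect_points(viewer, normal=[0, 0, 1], offset=0.0))  # reflected to (0, 0, -2)
```

Viewpoint-dependent rendering follows by placing a virtual camera at the reflected eye position; curved mirrors require a per-ray generalization of this reflection, which is one reason the curved-mirror simulator is a separate contribution.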


The research products stemming from this project have been widely disseminated: 19 journal papers and 21 conference papers have been published over the past five years. The project provided partial financial support for 12 graduate students and five undergraduate students from three departments: Electrical & Computer Engineering, Computer Science, and Educational Psychology. Five doctoral students and six master's students successfully defended theses based on work directly related to the project goals. A broad range of outreach activities was conducted during the funding period, including a year-long programming class for special-needs high school students, project demonstrations at local K-12 schools, and technical talks to autism advocacy groups, a local makerspace, government meetings, and entrepreneurship groups.


Last Modified: 04/18/2018
Modified by: Sen-Ching S Cheung

Please report errors in award information by writing to: awardsearch@nsf.gov.
