Award Abstract # 0713055
RI: Computer Vision Algorithms for the Study of Facial Expressions of Emotions in Sign Languages

NSF Org: IIS
Division of Information & Intelligent Systems
Recipient:
Initial Amendment Date: August 10, 2007
Latest Amendment Date: May 7, 2009
Award Number: 0713055
Award Instrument: Continuing Grant
Program Manager: Jie Yang
jyang@nsf.gov
(703)292-4768
IIS (Division of Information & Intelligent Systems)
CSE (Directorate for Computer and Information Science and Engineering)
Start Date: August 15, 2007
End Date: July 31, 2011 (Estimated)
Total Intended Award Amount: $366,171.00
Total Awarded Amount to Date: $366,171.00
Funds Obligated to Date:
FY 2007 = $117,044.00
FY 2008 = $121,980.00
FY 2009 = $127,147.00
History of Investigator:
  • Aleix Martinez (Principal Investigator)
    aleix@ece.osu.edu
Recipient Sponsored Research Office: Ohio State University Research Foundation -DO NOT USE
1960 KENNY RD
Columbus
OH  US  43210-1016
(614)688-8734
Sponsor Congressional District: 03
Primary Place of Performance: Ohio State University Research Foundation -DO NOT USE
1960 KENNY RD
Columbus
OH  US  43210-1016
Primary Place of Performance Congressional District: 03
Unique Entity Identifier (UEI): QR7NH79713E5
Parent UEI:
NSF Program(s): Robust Intelligence
Primary Program Source: app-0107
01000809DB NSF RESEARCH & RELATED ACTIVIT
01000910DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 7495, 9216, HPCC
Program Element Code(s): 749500
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

PI: Aleix Martinez
Institution: Ohio State University

Title: RI: Computer Vision Algorithms for the Study of Facial Expressions of Emotions in Sign Languages

It is known that there exist important perceptual differences between deaf native users of American Sign Language (ASL) and hearing people with no prior exposure to ASL. This project will systematically investigate the differences between these two groups as they observe and classify images of faces with regard to the displayed emotion.
These perceptual differences may have their roots in the distinct manner in which native users of ASL and non-users code and analyze 2D and 3D motion patterns. We will thus study how these differences relate to the perception of movement. Finally, we will develop a face avatar that can emulate the facial movements of users and non-users of ASL. To achieve this goal, we will develop a set of computer vision algorithms that can be used to study the differences in the production of facial expressions of emotions between native users of ASL and non-signers. A necessary step toward this is to collect a database of facial expressions of emotions as produced by users of ASL. This will reveal differences at the production level and allow for the study of perceptual differences.
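As an illustration of the classification task described above, the sketch below assigns a face-derived feature vector to an emotion category with a minimal nearest-class-mean rule. Everything here is hypothetical: the feature vectors, category centers, and noise model are synthetic stand-ins, and the classifier is far simpler than the discriminant methods developed in the actual research.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical emotion categories with synthetic feature-space centers.
# In a real study, features would be measurements taken from face images
# (e.g., landmark positions or motion descriptors), not made-up vectors.
MEANS = {"happy": [6.0, 0.0, 0.0], "sad": [0.0, 6.0, 0.0], "angry": [0.0, 0.0, 6.0]}

# Synthetic training set: 20 noisy samples around each category center.
X_train = np.vstack([np.array(m) + rng.normal(scale=0.5, size=(20, 3))
                     for m in MEANS.values()])
y_train = np.repeat(list(MEANS), 20)

def classify(x):
    """Assign x to the emotion whose training-sample mean is nearest in
    Euclidean distance -- a simple stand-in for stronger discriminant
    classifiers."""
    centers = {e: X_train[y_train == e].mean(axis=0) for e in MEANS}
    return min(centers, key=lambda e: np.linalg.norm(np.asarray(x) - centers[e]))

print(classify([5.8, 0.3, -0.1]))  # a probe vector near the 'happy' center
```

With well-separated synthetic centers, the probe above lands in the 'happy' category; real facial-expression data would be far less separable, which is why the project's publications focus on optimized discriminant analysis.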

The research described above addresses several critical issues.
First, these studies are fundamental to fully understanding the underlying mechanisms used by the brain to analyze, code and recognize facial expressions of emotions. While research on facial expressions of emotion has proven extremely challenging to date, most studies have targeted only the hearing. This proposal will study the underlying mechanisms used to code, produce and interpret facial expressions of emotions in native users of ASL.
Unfortunately, the computer vision algorithms necessary to carry out these studies are not available. The research in this project is set to remedy this shortcoming.

The facial analysis studies that will be conducted during the course of this project can be used in a large number of applications, ranging from human-computer interaction systems, where the computer interprets expressions from its user, to studying the role that each facial feature plays in the grammar of ASL. Furthermore, the study of emotional gestures will be valuable to anthropologists attempting to understand and model the evolution of emotions, and could be used to develop mechanisms to detect lies and deceit. The database of facial expressions collected during the course of this project will be made available to the research community and to educators of ASL. We will open collaborations with the School for the Deaf and encourage deaf students to pursue careers in computing and engineering.

URL:
http://cbcsl.ece.ohio-state.edu/research/

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


D. Neth and A.M. Martinez, "Emotion Perception in Emotionless Face Images Suggests a Norm-based Representation," Journal of Vision, v.9, 2009, p.1.
D. You, O. Hamsici and A.M. Martinez, "Kernel Optimization in Discriminant Analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, v.33, 2011.
H. Jia and A.M. Martinez, "Low-Rank Matrix Fitting Based on Subspace Perturbation Analysis with Applications to Structure from Motion," IEEE Transactions on Pattern Analysis and Machine Intelligence, v.31, 2009.
J. Fortuna and A.M. Martinez, "Rigid Structure from Motion from a Blind Source Separation Perspective," International Journal of Computer Vision, v.88, 2010, p.404.
L. Ding and A.M. Martinez, "Modelling and Recognition of the Linguistic Components in American Sign Language," Image and Vision Computing, 2009.
M. Zhu and A.M. Martinez, "Pruning Noisy Bases in Discriminant Analysis," IEEE Transactions on Neural Networks, v.19, 2008.
O.C. Hamsici and A.M. Martinez, "Bayes Optimality in Linear Discriminant Analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, v.30, 2008.
