
NSF Org: | IIS Division of Information & Intelligent Systems |
Recipient: | |
Initial Amendment Date: | March 8, 2021 |
Latest Amendment Date: | May 16, 2023 |
Award Number: | 2046972 |
Award Instrument: | Continuing Grant |
Program Manager: | Dan Cosley, dcosley@nsf.gov, (703) 292-8832, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering |
Start Date: | March 1, 2021 |
End Date: | February 28, 2026 (Estimated) |
Total Intended Award Amount: | $500,000.00 |
Total Awarded Amount to Date: | $506,000.00 |
Funds Obligated to Date: | FY 2022 = $311,566.00; FY 2023 = $6,000.00 |
History of Investigator: | |
Recipient Sponsored Research Office: | 201 OLD MAIN, UNIVERSITY PARK, PA, US 16802-1503, (814) 865-1372 |
Sponsor Congressional District: | |
Primary Place of Performance: | 110 Technology Center Building, University Park, PA, US 16802-1503 |
Primary Place of Performance Congressional District: | |
Unique Entity Identifier (UEI): | |
Parent UEI: | |
NSF Program(s): | HCC-Human-Centered Computing |
Primary Program Source: | 01002122DB NSF RESEARCH & RELATED ACTIVIT; 01002324DB NSF RESEARCH & RELATED ACTIVIT |
Program Reference Code(s): | |
Program Element Code(s): | |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.070 |
ABSTRACT
This project will advance the state of the art in cross-disciplinary areas including motion signal processing, machine learning (ML), sign language modeling, and real-time ML with dynamic device/edge partitioning, in order to develop new technology for automatic Sign Language Recognition (SLR) and translation to spoken language that enables more seamless communication between deaf and hearing people. The technology will incorporate wearable devices (such as a smartwatch, smart ring, and earphones) that are gaining in popularity, and will have broad impact through its introduction in deaf communities along with a sign language equivalent of voice assistants such as Amazon Alexa. The project will establish a pipeline of collaboration with deaf students, as well as courses based on SLR technology that will be disseminated through MOOC platforms such as Coursera. Additional impact will derive from workshops on wearable computing conducted at the K-12 level and from a "sign-to-speech" library that will be publicly released to extend the new technology to multiple sign languages.
To achieve these goals, the research will include three thrusts:
- Development of ML models, with efficient training, that perform accurate SLR by fusing multimodal input data from wearable devices that capture body motion and facial expressions (a toy sketch of this idea follows the list);
- Implementation of efficient ML models through optimal partitioning between end-device and edge resources, to achieve the best trade-off between real-time performance and SLR accuracy;
- Design of systematic user studies with fluent sign language users, both to generate training data for the ML models and to validate the accuracy, usability, and acceptability of the technology within the deaf community.
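As a rough illustration of the first thrust only, the sketch below shows one way a late-fusion SLR classifier over two wearable sensor streams could be structured. It is not the project's model: the use of PyTorch, the GRU encoders, the class names (StreamEncoder, LateFusionSLR), the channel counts (a 6-axis watch IMU and a 4-dimensional facial-expression feature stream), and the 100-sign vocabulary are all illustrative assumptions.

    # Minimal sketch (not the project's actual model): late fusion of two
    # wearable sensor streams for sign language recognition. All shapes,
    # modalities, and hyperparameters are illustrative assumptions.
    import torch
    import torch.nn as nn

    class StreamEncoder(nn.Module):
        """Encode one sensor stream (e.g., smartwatch IMU) into a fixed-size vector."""

        def __init__(self, in_channels: int, hidden: int = 64):
            super().__init__()
            self.gru = nn.GRU(in_channels, hidden, batch_first=True)

        def forward(self, x):            # x: (batch, time, channels)
            _, h = self.gru(x)           # h: (1, batch, hidden)
            return h.squeeze(0)          # (batch, hidden)

    class LateFusionSLR(nn.Module):
        """Fuse motion and facial-expression streams, then classify the sign gloss."""

        def __init__(self, motion_ch=6, face_ch=4, hidden=64, num_signs=100):
            super().__init__()
            self.motion_enc = StreamEncoder(motion_ch, hidden)
            self.face_enc = StreamEncoder(face_ch, hidden)
            self.classifier = nn.Linear(2 * hidden, num_signs)

        def forward(self, motion, face):
            z = torch.cat([self.motion_enc(motion), self.face_enc(face)], dim=-1)
            return self.classifier(z)    # unnormalized scores over sign classes

    if __name__ == "__main__":
        model = LateFusionSLR()
        motion = torch.randn(8, 120, 6)   # 8 clips, 120 time steps, 6-axis IMU
        face = torch.randn(8, 120, 4)     # hypothetical facial-expression features
        print(model(motion, face).shape)  # torch.Size([8, 100])

One appeal of late fusion in this setting is that the per-device encoders remain separable, which is one simple way the device/edge partitioning mentioned in the second thrust could be explored; the partitioning strategy actually pursued by the project may differ.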
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without a charge during the embargo (administrative interval). Some links on this page may take you to non-federal websites, whose policies may differ from those of this site.