
NSF Org: CBET Division of Chemical, Bioengineering, Environmental, and Transport Systems
Recipient:
Initial Amendment Date: June 16, 2021
Latest Amendment Date: June 16, 2021
Award Number: 2113485
Award Instrument: Standard Grant
Program Manager: Amanda O. Esquivel, aesquive@nsf.gov, (703) 292-0000, CBET Division of Chemical, Bioengineering, Environmental, and Transport Systems, ENG Directorate for Engineering
Start Date: August 1, 2021
End Date: July 31, 2025 (Estimated)
Total Intended Award Amount: $399,869.00
Total Awarded Amount to Date: $399,869.00
Funds Obligated to Date:
History of Investigator:
Recipient Sponsored Research Office: W5510 FRANKS MELVILLE MEMORIAL LIBRARY, STONY BROOK, NY, US 11794-0001, (631) 632-9949
Sponsor Congressional District:
Primary Place of Performance: WEST 5510 FRK MEL LIB, Stony Brook, NY, US 11794-0001
Primary Place of Performance Congressional District:
Unique Entity Identifier (UEI):
Parent UEI:
NSF Program(s): Disability & Rehab Engineering
Primary Program Source:
Program Reference Code(s):
Program Element Code(s):
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.041
ABSTRACT
Interacting with computers remains a challenge for people with quadriplegia. Assistive technologies that enable hands-free interaction with computers are primarily based on eye-gaze, voice, and orally controlled input modalities, each with its own strengths and weaknesses. However, these assistive technologies do not support collaborative use of multiple input modalities, such as using eye gaze to quickly narrow down the region containing the intended target of a spoken command. The overarching goal of the proposed project is to research, design, and engineer intelligent and collaborative multimodal hands-free interaction techniques that synergistically combine inputs from different input modalities to accurately predict and act on the user's interaction intent. Synergistic integration of the input modalities and intelligent inference of the user's interaction intent amplify the collective strengths of the individual modalities while mitigating their weaknesses. More importantly, these techniques will also learn user-specific interaction patterns from the user's interaction history, personalizing the prediction of each individual user's intended action. Overall, the transformative assistive multimodal interaction system, SeeSayClick, that will emerge from this project will make it far easier for people with quadriplegia to create and consume digital information and thereby fully participate in the digital economy. The resulting higher productivity of such users will lead to improved access to education and employment opportunities. Lastly, this project will serve as a platform for training students and exposing them to careers in assistive technology development and rehabilitation engineering.
The novelty of the envisioned SeeSayClick assistive technology will be the tight integration of multiple interaction modalities that work together synergistically to resolve ambiguities in interaction and, as a consequence, substantially reduce the interaction burden. The basis for the integration will be rooted in Bayesian inference methods for human-computer interaction. These methods provide a principled approach for combining multiple, possibly noisy, sources of information to predict the user's intended interaction action, such as combining: (1) the locational information from gaze with (2) the spoken commands and (3) prior knowledge from the interaction context to infer the intended target's precise location for selection and execution. By incorporating interaction history as a prior into the Bayesian methods, the proposed approach for integrating multiple input modalities will also learn user-specific interaction patterns to personalize the prediction and enhance the prediction accuracy even further for each individual user. Besides cursor operations and command execution, the Bayesian methods will be coupled to a language model for text entry and editing operations.
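The Bayesian fusion described above can be sketched in a few lines. The following is a minimal illustration, not the project's actual implementation: it assumes hypothetical on-screen targets, an isotropic Gaussian gaze-noise model, a simple speech-confusion model, and a Laplace-smoothed click-history prior; all names and parameters are invented for the example.

```python
import math

def gaze_likelihood(fixation, target_xy, sigma=40.0):
    """Isotropic Gaussian likelihood of a gaze fixation given a target
    location (sigma is an assumed gaze-noise scale in pixels)."""
    dx = fixation[0] - target_xy[0]
    dy = fixation[1] - target_xy[1]
    return math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))

def speech_likelihood(heard, label, p_correct=0.8):
    """Toy confusion model: the recognizer reports the correct label with
    probability p_correct, and any other label otherwise."""
    return p_correct if heard == label else (1.0 - p_correct)

def posterior(targets, history, fixation, heard):
    """Posterior over targets: prior(history) x gaze x speech, normalized.
    The prior is Laplace-smoothed so unseen targets keep nonzero mass."""
    total = sum(history.values())
    scores = {}
    for name, xy in targets.items():
        prior = (history.get(name, 0) + 1) / (total + len(targets))
        scores[name] = (prior
                        * gaze_likelihood(fixation, xy)
                        * speech_likelihood(heard, name))
    z = sum(scores.values())
    return {name: s / z for name, s in scores.items()}

# Hypothetical scenario: two adjacent buttons, so gaze alone is ambiguous;
# the spoken command and the user's click history disambiguate.
targets = {"Save": (100, 100), "Send": (130, 100)}
history = {"Save": 5, "Send": 1}   # this user clicks "Save" more often
post = posterior(targets, history, fixation=(115, 102), heard="Save")
best = max(post, key=post.get)
```

In this sketch the gaze fixation lands between the two buttons, so the gaze likelihoods are nearly equal; the speech evidence and the history prior tip the posterior toward "Save", mirroring the paragraph's point that the combined modalities resolve ambiguity that any single modality leaves open.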
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH