
NSF Org: | IIS Division of Information & Intelligent Systems |
Recipient: | |
Initial Amendment Date: | July 27, 2021 |
Latest Amendment Date: | August 5, 2024 |
Award Number: | 2107451 |
Award Instrument: | Continuing Grant |
Program Manager: | Raj Acharya, racharya@nsf.gov, (703) 292-7978, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering |
Start Date: | October 1, 2021 |
End Date: | September 30, 2025 (Estimated) |
Total Intended Award Amount: | $200,000.00 |
Total Awarded Amount to Date: | $200,000.00 |
Funds Obligated to Date: | FY 2023 = $51,063.00; FY 2024 = $42,432.00 |
History of Investigator: | |
Recipient Sponsored Research Office: | 4200 Connecticut Ave NW, Washington, DC, US 20008-1122, (202) 274-6260 |
Sponsor Congressional District: | |
Primary Place of Performance: | 4200 Connecticut Ave NW, Washington, DC, US 20008-1122 |
Primary Place of Performance Congressional District: | |
Unique Entity Identifier (UEI): | |
Parent UEI: | |
NSF Program(s): | Info Integration & Informatics |
Primary Program Source: | 01002122DB NSF RESEARCH & RELATED ACTIVIT; 01002223DB NSF RESEARCH & RELATED ACTIVIT; 01002324DB NSF RESEARCH & RELATED ACTIVIT; 01002425DB NSF RESEARCH & RELATED ACTIVIT |
Program Reference Code(s): | |
Program Element Code(s): | |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.070 |
ABSTRACT
People encounter serious hurdles in finding effective solutions to real-world decision-making problems because of uncertainty arising from a lack of information, conflicting information, and/or noisy observations. Critical safety concerns arise when this uncertainty is not carefully interpreted: misinterpreting it can introduce unnecessary risk. For example, a self-driving car can fail to detect a human in the road, an artificial intelligence-based medical assistant may misdiagnose cancer as a benign tumor, and a phishing email can be classified as a legitimate one. The misdetections and misclassifications caused by different types of uncertainty add risk and the potential for adverse events. Artificial intelligence (AI) researchers have actively explored how to solve various decision-making problems under uncertainty; however, no prior research has examined how the different approaches to studying uncertainty in AI can leverage each other. This project studies how to measure different causes of uncertainty and how to use those measurements to solve diverse decision-making problems more effectively. The project can help develop trustworthy AI algorithms that can be used in many real-world decision-making problems. In addition, the project is highly transdisciplinary, encouraging broader, newer, and more diverse approaches. To magnify its impact on research and education, the project leverages multicultural, diversity, and STEM programs for students with diverse backgrounds and from under-represented populations, and it includes seminar talks, workshops, short courses, and/or research projects for high school and community college students.
This project aims to develop a suite of deep learning (DL) techniques that consider multiple types of uncertainty with different root causes and employ them to maximize the effectiveness of decision-making in the presence of highly intelligent adversarial attacks. The project makes a synergistic, transformative research effort to study: (1) how different types of uncertainty can be quantified based on belief theory; (2) how estimates of different types of uncertainty can be incorporated into DL-based approaches; and (3) how multiple types of uncertainty influence the effectiveness and efficiency of decision-making in high-dimensional, complex problems. The project advances the state of the art by: (1) proposing a scalable, robust, unified DL-based framework to effectively infer predictive multidimensional uncertainty caused by heterogeneous root causes in adversarial environments; (2) dealing with multidimensional uncertainty based on neural networks; (3) enhancing both decision effectiveness and efficiency through multidimensional uncertainty-aware designs; and (4) testing the proposed approaches, using both simulation models and visualization tools, to ensure their robustness against intelligent adversarial attackers with advanced deception tactics.
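The abstract names belief theory as the basis for quantifying multiple types of uncertainty but does not include an implementation. A common realization of this idea in the evidential deep learning literature represents a K-class prediction as Dirichlet evidence and decomposes its uncertainty into vacuity (lack of evidence, as with out-of-distribution inputs) and dissonance (conflicting evidence, as with inherently ambiguous inputs). The sketch below illustrates that decomposition under those assumptions; the network architecture, layer sizes, and names (EvidentialClassifier, vacuity_and_dissonance) are hypothetical and are not taken from the project.

```python
# Minimal sketch of belief-theoretic (Subjective Logic) uncertainty for a
# K-class prediction. Assumes an evidential network that outputs
# non-negative evidence per class; the architecture is illustrative only.
import torch
import torch.nn as nn

class EvidentialClassifier(nn.Module):
    """Maps input features to non-negative Dirichlet evidence (hypothetical)."""
    def __init__(self, in_dim: int, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, num_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Softplus keeps the evidence non-negative, as evidential DL requires.
        return nn.functional.softplus(self.net(x))

def vacuity_and_dissonance(evidence: torch.Tensor):
    """Decompose prediction uncertainty per Subjective Logic.

    vacuity    ~ lack of evidence overall
    dissonance ~ conflicting evidence among classes
    """
    K = evidence.shape[-1]
    alpha = evidence + 1.0                      # Dirichlet parameters
    S = alpha.sum(dim=-1, keepdim=True)         # Dirichlet strength
    b = evidence / S                            # belief mass per class
    vacuity = (K / S).squeeze(-1)               # u = K / S

    # Dissonance: balance-weighted disagreement among belief masses.
    bk = b.unsqueeze(-1)                        # (..., K, 1): index k
    bj = b.unsqueeze(-2)                        # (..., 1, K): index j
    bal = 1.0 - (bk - bj).abs() / (bk + bj).clamp(min=1e-8)
    mask = 1.0 - torch.eye(K, device=b.device)  # drop the j == k terms
    num = (bj * bal * mask).sum(dim=-1)         # sum_{j != k} b_j * Bal(b_j, b_k)
    den = (bj * mask).sum(dim=-1).clamp(min=1e-8)  # sum_{j != k} b_j
    dissonance = (b * num / den).sum(dim=-1)
    return vacuity, dissonance

# Usage: both outputs have one scalar per input example.
model = EvidentialClassifier(in_dim=10, num_classes=3)
ev = model(torch.randn(4, 10))
u, d = vacuity_and_dissonance(ev)               # shapes: (4,), (4,)
```

Separating vacuity from dissonance is what makes the uncertainty "multidimensional": a high-vacuity input suggests the model has simply not seen enough evidence, while a high-dissonance input suggests genuinely conflicting evidence, and a decision-maker can react to each differently.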
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without a charge during the embargo (administrative interval). Some links on this page may take you to non-federal websites, whose policies may differ from those of this site.
Please report errors in award information by writing to: awardsearch@nsf.gov.