Award Abstract # 2107450
III: Medium: Collaborative Research: MUDL: Multidimensional Uncertainty-Aware Deep Learning Framework

NSF Org: IIS
Division of Information & Intelligent Systems
Recipient: VIRGINIA POLYTECHNIC INSTITUTE & STATE UNIVERSITY
Initial Amendment Date: July 27, 2021
Latest Amendment Date: August 5, 2024
Award Number: 2107450
Award Instrument: Continuing Grant
Program Manager: Raj Acharya
racharya@nsf.gov
 (703)292-7978
IIS
 Division of Information & Intelligent Systems
CSE
 Directorate for Computer and Information Science and Engineering
Start Date: October 1, 2021
End Date: September 30, 2025 (Estimated)
Total Intended Award Amount: $500,000.00
Total Awarded Amount to Date: $500,000.00
Funds Obligated to Date: FY 2021 = $94,899.00
FY 2022 = $128,307.00
FY 2023 = $136,361.00
FY 2024 = $140,433.00
History of Investigator:
  • Jin-Hee Cho (Principal Investigator)
    jicho@vt.edu
Recipient Sponsored Research Office: Virginia Polytechnic Institute and State University
300 TURNER ST NW
BLACKSBURG
VA  US  24060-3359
(540)231-5281
Sponsor Congressional District: 09
Primary Place of Performance: Virginia Polytechnic Institute and State University
7054 Haycock Road
Falls Church
VA  US  22043-2368
Primary Place of Performance Congressional District: 08
Unique Entity Identifier (UEI): QDE5UHE5XD16
Parent UEI: X6KEFGLHSJX7
NSF Program(s): Info Integration & Informatics
Primary Program Source: 01002122DB NSF RESEARCH & RELATED ACTIVIT
01002223DB NSF RESEARCH & RELATED ACTIVIT
01002324DB NSF RESEARCH & RELATED ACTIVIT
01002425DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 7364, 7924
Program Element Code(s): 736400
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

People encounter serious hurdles in finding effective solutions to real-world decision-making problems because of uncertainty arising from a lack of information, conflicting information, and/or unreliable observations. Critical safety concerns persist because how to interpret this uncertainty has not been carefully investigated; when uncertainty is misinterpreted, the result can be unnecessary risk. For example, a self-driving autonomous car can fail to detect a human in the road, an artificial intelligence-based medical assistant may misdiagnose cancer as a benign tumor, and a phishing email can be classified as a normal email. The consequences of such misdetections and misclassifications, caused by different types of uncertainty, add risk and the potential for adverse events. Artificial intelligence (AI) researchers have actively explored how to solve various decision-making problems under uncertainty. However, no prior research has investigated how different approaches to studying uncertainty in AI can leverage one another. This project studies how to measure different causes of uncertainty and how to use those measurements to solve diverse decision-making problems more effectively. The project can help develop trustworthy AI algorithms applicable to many real-world decision-making problems. In addition, the project is highly transdisciplinary, encouraging broader, newer, and more diverse approaches. To magnify its impact on research and education, the project leverages multicultural, diversity, and STEM programs for students from diverse backgrounds and under-represented populations, and includes seminar talks, workshops, short courses, and/or research projects for high school and community college students.

This project aims to develop a suite of deep learning (DL) techniques that consider multiple types of uncertainty arising from different root causes, and to employ them to maximize the effectiveness of decision-making in the presence of highly intelligent, adversarial attacks. This project undertakes a synergistic, transformative research effort to study: (1) how different types of uncertainty can be quantified based on belief theory; (2) how estimates of different types of uncertainty can be incorporated into DL-based approaches; and (3) how multiple types of uncertainty influence the effectiveness and efficiency of decision-making in high-dimensional, complex problems. This project advances the state of the art by: (1) proposing a scalable, robust, unified DL-based framework to effectively infer predictive multidimensional uncertainty caused by heterogeneous root causes in adversarial environments; (2) dealing with multidimensional uncertainty based on neural networks; (3) enhancing both decision effectiveness and efficiency through multidimensional uncertainty-aware designs; and (4) testing the proposed approaches to ensure their robustness in the presence of intelligent adversarial attackers with advanced deception tactics, using both simulation models and visualization tools.
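To illustrate the belief-theoretic quantification mentioned in item (1), the sketch below computes two standard subjective-logic uncertainty dimensions, vacuity (uncertainty from lack of evidence) and dissonance (uncertainty from conflicting evidence), from the per-class evidence an evidential neural network would output. This is a minimal, illustrative sketch of the general technique (Jøsang's subjective logic, as used in evidential deep learning), not the project's actual implementation; the function name and interface are hypothetical.

```python
from typing import List, Tuple

def multidimensional_uncertainty(evidence: List[float]) -> Tuple[float, float]:
    """Return (vacuity, dissonance) for one prediction.

    evidence: non-negative pseudo-counts, one per class, such as the
    output of an evidential network's final layer.
    """
    k = len(evidence)
    alpha = [e + 1.0 for e in evidence]   # Dirichlet parameters (uniform prior)
    s = sum(alpha)                        # Dirichlet strength
    belief = [e / s for e in evidence]    # belief mass per class
    vacuity = k / s                       # high when total evidence is low

    def balance(bj: float, bk: float) -> float:
        # Relative mass balance: 1 when two beliefs are equal, 0 when one dominates.
        return 1.0 - abs(bj - bk) / (bj + bk) if (bj + bk) > 0 else 0.0

    # Dissonance: high when strong evidence supports multiple conflicting classes.
    dissonance = 0.0
    for i, bi in enumerate(belief):
        others = [bj for j, bj in enumerate(belief) if j != i]
        denom = sum(others)
        if denom > 0:
            dissonance += bi * sum(bj * balance(bj, bi) for bj in others) / denom
    return vacuity, dissonance
```

With no evidence at all, vacuity is maximal and dissonance is zero; with strong but evenly split evidence between two classes, vacuity is low while dissonance is high. Distinguishing these two cases is precisely what a single softmax confidence score cannot do.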

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


Ali, Haider and Al Ameedi, Mohannad and Swami, Ananthram and Ning, Rui and Li, Jiang and Wu, Hongyi and Cho, Jin-Hee "ACADIA: Efficient and Robust Adversarial Attacks Against Deep Reinforcement Learning" IEEE Conference on Communications and Network Security (CNS), 2022. https://doi.org/10.1109/CNS56114.2022.9947234
Ali, Haider and Chen, Dian and Harrington, Matthew and Salazar, Nathaniel and Ameedi, Mohannad Al and Khan, Ahmad Faraz and Butt, Ali R. and Cho, Jin-Hee "A Survey on Attacks and Their Countermeasures in Deep Learning: Applications in Deep Neural Networks, Federated, Transfer, and Deep Reinforcement Learning" IEEE Access, v.11, 2023. https://doi.org/10.1109/ACCESS.2023.3326410
Chen, Dian and Zhang, Qisheng and Chen, Ing-Ray and Ha, Dong Sam and Cho, Jin-Hee "Energy-Adaptive and Robust Monitoring for Smart Farms Based on Solar-Powered Wireless Sensors" IEEE Internet of Things Journal, 2024. https://doi.org/10.1109/JIOT.2024.3409525
Li, Changbin and Li, Kangshuo and Ou, Yuzhe and Kaplan, Lance M and Jøsang, Audun and Cho, Jin-Hee and Jeong, Dong Hyun and Chen, Feng "Hyper Evidential Deep Learning to Quantify Composite Classification Uncertainty" 2024.
Mahajan, Yash and Cho, Jin-Hee and Chen, Ing-Ray "Privacy-Preserving and Diversity-Aware Trust-based Team Formation in Online Social Networks" ACM Transactions on Intelligent Systems and Technology, 2024. https://doi.org/10.1145/3670411
Zhang, Qisheng and Mahajan, Yash and Chen, Ing-Ray and Ha, Dong Sam and Cho, Jin-Hee "An Attack-Resilient and Energy-Adaptive Monitoring System for Smart Farms" 2022 IEEE Global Communications Conference, 2022. https://doi.org/10.1109/GLOBECOM48099.2022.10001060

Please report errors in award information by writing to: awardsearch@nsf.gov.
