Award Abstract # 2107451
III: Medium: Collaborative Research: MUDL: Multidimensional Uncertainty-Aware Deep Learning Framework

NSF Org: IIS
Division of Information & Intelligent Systems
Recipient: UNIVERSITY OF THE DISTRICT OF COLUMBIA
Initial Amendment Date: July 27, 2021
Latest Amendment Date: August 5, 2024
Award Number: 2107451
Award Instrument: Continuing Grant
Program Manager: Raj Acharya
racharya@nsf.gov
(703)292-7978
IIS — Division of Information & Intelligent Systems
CSE — Directorate for Computer and Information Science and Engineering
Start Date: October 1, 2021
End Date: September 30, 2025 (Estimated)
Total Intended Award Amount: $200,000.00
Total Awarded Amount to Date: $200,000.00
Funds Obligated to Date: FY 2021 = $106,505.00
FY 2023 = $51,063.00
FY 2024 = $42,432.00
History of Investigator:
  • Dong Hyun Jeong (Principal Investigator)
    djeong@udc.edu
Recipient Sponsored Research Office: University of the District of Columbia
4200 CONNECTICUT AVE NW
WASHINGTON
DC  US  20008-1122
(202)274-6260
Sponsor Congressional District: 00
Primary Place of Performance: University of the District of Columbia
4200 Connecticut Ave NW
Washington
DC  US  20008-1122
Primary Place of Performance Congressional District: 00
Unique Entity Identifier (UEI): NLULJV36KE96
Parent UEI:
NSF Program(s): Info Integration & Informatics
Primary Program Source: 01002122DB NSF RESEARCH & RELATED ACTIVIT
01002223DB NSF RESEARCH & RELATED ACTIVIT
01002324DB NSF RESEARCH & RELATED ACTIVIT
01002425DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 7924, 7364
Program Element Code(s): 736400
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

People face serious hurdles in making effective decisions about real-world problems because of uncertainty arising from a lack of information, conflicting information, and/or unreliable observations. Critical safety concerns persist because how to interpret this uncertainty has not been carefully investigated, and misinterpreting it can lead to unnecessary risk. For example, a self-driving autonomous car can fail to detect a human on the road, an artificial intelligence (AI)-based medical assistant may misdiagnose cancer as a benign tumor, and a phishing email can be classified as a normal email. The consequences of such misdetections and misclassifications, caused by different types of uncertainty, add risk and the potential for adverse events. AI researchers have actively explored how to solve various decision-making problems under uncertainty. However, no prior research has examined how different approaches to studying uncertainty in AI can leverage one another. This project studies how to measure different causes of uncertainty and use those measurements to solve diverse decision-making problems more effectively. The results can help develop trustworthy AI algorithms for many real-world decision-making problems. In addition, the project is highly transdisciplinary, encouraging broader, newer, and more diverse approaches. To magnify its impact on research and education, the project leverages multicultural, diversity, and STEM programs for students from diverse backgrounds and under-represented populations, and includes seminar talks, workshops, short courses, and/or research projects for high school and community college students.

This project aims to develop a suite of deep learning (DL) techniques that consider multiple types of uncertainty with different root causes and employ them to maximize the effectiveness of decision-making in the presence of highly intelligent adversarial attacks. The project makes a synergistic, transformative research effort to study: (1) how different types of uncertainty can be quantified based on belief theory; (2) how estimates of different types of uncertainty can be incorporated into DL-based approaches; and (3) how multiple types of uncertainty influence the effectiveness and efficiency of decision-making in high-dimensional, complex problems. The project advances the state of the art by: (1) proposing a scalable, robust, unified DL-based framework to effectively infer predictive multidimensional uncertainty caused by heterogeneous root causes in adversarial environments; (2) dealing with multidimensional uncertainty based on neural networks; (3) enhancing both decision effectiveness and efficiency through multidimensional uncertainty-aware designs; and (4) testing the proposed approaches to ensure their robustness against intelligent adversarial attackers with advanced deception tactics, using both simulation models and visualization tools.
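To make the belief-theory framing concrete: the abstract is not part of a software artifact, but the standard subjective-logic mapping (Jøsang) from accumulated evidence to belief mass and vacuity (uncertainty due to lack of evidence) can be sketched in a few lines. This is an illustrative example only; the function name and the default prior weight are assumptions, not part of the awarded project's code.

```python
import numpy as np

def subjective_opinion(evidence, W=None):
    """Map per-class evidence counts to a subjective-logic multinomial opinion.

    Belief mass: b_k = e_k / (W + sum(e))
    Vacuity:     u   = W   / (W + sum(e))
    where W is the non-informative prior weight (here defaulted to the
    number of classes, a common convention; an assumption for this sketch).
    """
    e = np.asarray(evidence, dtype=float)
    if W is None:
        W = len(e)
    S = W + e.sum()          # Dirichlet strength
    belief = e / S           # belief mass per class
    vacuity = W / S          # uncertainty from lack of evidence
    return belief, vacuity

# Abundant evidence for class 0 -> low vacuity (confident prediction)
b, u = subjective_opinion([20, 1, 1])
# Scant evidence overall -> high vacuity (e.g., out-of-distribution input)
b2, u2 = subjective_opinion([0.5, 0.5, 0.5])
```

The key property, which DL-based evidential approaches exploit, is that vacuity distinguishes "I have seen little evidence" from "the evidence is conflicting": the second call returns much higher vacuity than the first even though both produce a most-likely class.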

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


  • Guo, Zhen; Wan, Zelin; Zhang, Qisheng; Zhao, Xujiang; Zhang, Qi; Kaplan, Lance M.; Jøsang, Audun; Jeong, Dong H.; Chen, Feng; Cho, Jin-Hee. "A survey on uncertainty reasoning and quantification in belief theory and its application to deep learning." Information Fusion, 2023. https://doi.org/10.1016/j.inffus.2023.101987
  • Jeong, Dong H.; Cho, Jin-Hee; Chen, Feng; Jøsang, Audun; Ji, Soo-Yeon. "Active Learning on Neural Networks through Interactive Generation of Digit Patterns and Visual Representation." 2023. https://doi.org/10.1109/ISEC57711.2023.10402169
  • Jeong, Dong Hyun; Cho, Jin-Hee; Chen, Feng; Kaplan, Lance; Jøsang, Audun; Ji, Soo-Yeon. "Interactive Web-Based Visual Analysis on Network Traffic Data." Information, v.14, 2023. https://doi.org/10.3390/info14010016
  • Jeong, Dong Hyun; Jeong, Bong Keun; Ji, Soo Yeon. "Leveraging Machine Learning to Analyze Semantic User Interactions in Visual Analytics." Information, v.15, 2024. https://doi.org/10.3390/info15060351
  • Jeong, Dong Hyun; Jeong, Bong-Keun; Ji, Soo-Yeon. "Multi-Resolution Analysis with Visualization to Determine Network Attack Patterns." Applied Sciences, v.13, 2023. https://doi.org/10.3390/app13063792
  • Jeong, Dong Hyun; Jeong, Bong Keun; Leslie, Nandi; Kamhoua, Charles; Ji, Soo-Yeon. "Designing a supervised feature selection technique for mixed attribute data analysis." Machine Learning with Applications, v.10, 2022. https://doi.org/10.1016/j.mlwa.2022.100431
  • Zhao, Xujiang; Zhang, Xuchao; Zhao, Chen; Cho, Jin-Hee; Kaplan, Lance; Jeong, Dong Hyun; Jøsang, Audun; Chen, Haifeng; Chen, Feng. "Multi-Label Temporal Evidential Neural Networks for Early Event Detection." Proceedings of the 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023. https://doi.org/10.1109/ICASSP49357.2023.10096305
