Award Abstract # 2123809
Collaborative Research: SCH: Trustworthy and Explainable AI for Neurodegenerative Diseases

NSF Org: IIS
Division of Information & Intelligent Systems
Recipient: UNIVERSITY OF FLORIDA
Initial Amendment Date: September 9, 2021
Latest Amendment Date: July 31, 2023
Award Number: 2123809
Award Instrument: Standard Grant
Program Manager: Sylvia Spengler
sspengle@nsf.gov
(703) 292-7347
IIS: Division of Information & Intelligent Systems
CSE: Directorate for Computer and Information Science and Engineering
Start Date: October 1, 2021
End Date: September 30, 2025 (Estimated)
Total Intended Award Amount: $840,000.00
Total Awarded Amount to Date: $856,000.00
Funds Obligated to Date: FY 2021 = $840,000.00
FY 2023 = $16,000.00
History of Investigator:
  • My Thai (Principal Investigator)
    mythai@cise.ufl.edu
  • Ruogu Fang (Co-Principal Investigator)
  • Adolfo Ramirez-Zamora (Co-Principal Investigator)
Recipient Sponsored Research Office: University of Florida
1523 UNION RD RM 207
GAINESVILLE
FL  US  32611-1941
(352)392-3516
Sponsor Congressional District: 03
Primary Place of Performance: University of Florida
1 University of Florida
Gainesville
FL  US  32611-2002
Primary Place of Performance Congressional District: 03
Unique Entity Identifier (UEI): NNFQH1JAPEP3
Parent UEI:
NSF Program(s): Smart and Connected Health,
Info Integration & Informatics
Primary Program Source: 01002324DB NSF RESEARCH & RELATED ACTIVIT
01002122DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 8018, 9102, 9251
Program Element Code(s): 801800, 736400
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Driven by its predictive accuracy, machine learning (ML) has been used extensively for applications in the healthcare domain. Despite this promising performance, researchers and the public have grown alarmed by two unsettling deficiencies of these otherwise useful and powerful models. First, there is a lack of trustworthiness: ML models are prone to interference or deception and can behave erratically on unseen data, despite performing well during training. Second, there is a lack of interpretability: ML models have been described as 'black boxes' because there is little explanation for why they make the predictions they do. This has called into question the applicability of ML to decision-making in critical scenarios such as image-based disease diagnosis or medical treatment recommendation. The ultimate goal of this project is to develop a computational foundation for trustworthy and explainable Artificial Intelligence (AI) and to offer a low-cost, non-invasive ML-based approach to early diagnosis of neurodegenerative diseases. In particular, the project aims to develop computational theories, ML algorithms, and prototype systems. The project includes developing principled solutions for trustworthy ML and making the ML prediction process transparent to end users. The latter focuses on explaining how and why an ML model makes a given prediction, while dissecting the model's underlying structure for deeper understanding. The proposed models are further extended to a multi-modal and spatial-temporal framework, an important aspect of applying ML models to healthcare. A verification framework involving end users is defined, which will further enhance the trustworthiness of the prototype systems. This project will benefit a variety of high-impact AI-based applications in terms of their explainability, trustworthiness, and verifiability. It not only advances the research fronts of deep learning and AI, but also supports transformations in diagnosing neurodegenerative diseases.

This project will develop the computational foundation for trustworthy and explainable AI through several innovations. First, the project will systematically study the trustworthiness of ML systems, measured by novel metrics such as adversarial robustness and semantic saliency, to establish the theoretical basis and practical limits of trustworthiness of ML algorithms. Second, the project provides a paradigm shift for explainable AI, explaining how and why an ML model makes its prediction and moving away from ad-hoc explanations (i.e., which features are important to the prediction). A proof-based approach will be devised that probes all the hidden layers of a given model to identify, from a local point of view, the critical layers and neurons involved in a prediction. Third, a verification framework, in which users can verify the model's performance and explanations with proofs, will be designed to further enhance the trustworthiness of the system. Finally, the project also advances the frontier of early diagnosis of neurodegenerative diseases from multimodal imaging and longitudinal data by: (i) identifying retinal vasculature biomarkers using proof-based probing in biomarker graph networks; (ii) connecting biomarkers of the retinal and brain vasculature via a cross-modality explainable AI model; and (iii) recognizing the longitudinal trajectory of vasculature biomarkers via a spatio-temporal recurrent explainable model. This synergistic effort between computer science and medicine will enable a wide range of applications of trustworthy and explainable AI in healthcare. The results of this project will be assimilated into the courses and summer programs that the research team has developed, with specially designed projects to train students in trustworthy and explainable AI.
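As a rough illustration of the critical-neuron idea described above (and not the project's actual proof-based method with precision guarantees), the sketch below ranks the hidden neurons of a toy PyTorch classifier by how much zeroing each one out lowers the predicted class's probability for a single input. The architecture, dimensions, and ablation-based scoring rule are all assumptions made purely for illustration.

# Minimal sketch: heuristic identification of "critical" hidden neurons by ablation.
# Everything here (toy model, sizes, scoring) is assumed for illustration only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy two-layer classifier; the hidden layer is the one we probe.
model = nn.Sequential(
    nn.Linear(16, 32),   # hidden layer with 32 neurons
    nn.ReLU(),
    nn.Linear(32, 3),    # 3 hypothetical output classes
)
model.eval()

x = torch.randn(1, 16)   # a single example input

with torch.no_grad():
    probs = torch.softmax(model(x), dim=1)
pred = probs.argmax(dim=1).item()          # class the model predicts for x

# Ablate each hidden neuron in turn and record the drop in the predicted
# class's probability; larger drops suggest more influential neurons.
scores = []
with torch.no_grad():
    h = torch.relu(model[0](x))            # hidden-layer activations
    for i in range(h.shape[1]):
        h_abl = h.clone()
        h_abl[0, i] = 0.0                  # zero out neuron i
        out = torch.softmax(model[2](h_abl), dim=1)
        scores.append(probs[0, pred].item() - out[0, pred].item())

top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:5]
print("Most influential hidden neurons for this prediction:", top)

In the project's framing, such locally important neurons would be identified with formal precision guarantees and accompanied by verifiable proofs, rather than by this simple ablation heuristic.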

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

Note:  When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

(Showing: 1 - 10 of 11)
Cox, Joseph and Liu, Peng and Stolte, Skylar E. and Yang, Yunchao and Liu, Kang and See, Kyle B. and Ju, Huiwen and Fang, Ruogu "BrainSegFounder: Towards 3D foundation models for neuroimage segmentation" Medical Image Analysis, v.97, 2024. https://doi.org/10.1016/j.media.2024.103301
He, W. and Vu, M. and Jiang, Z. and Thai, M. T. "An Explainer for Temporal Graph Neural Networks" IEEE Global Communications Conference, 2022
M. N. Vu, T. D. "c-Eval: A Unified Metric to Evaluate Feature-based Explanations via Perturbation" 2021 IEEE International Conference on Big Data (BigData), 2021
Nguyen, Truc and Lai, Phung and Phan, Hai and Thai, My T. "XRAND: Differentially Private Defense against Explanation-Guided Attacks" Proceedings of the AAAI Conference on Artificial Intelligence, 2023
Stolte, Skylar and Volle, Kyle "DOMINO: Domain-aware Model Calibration in Medical Image Segmentation" 25th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), 2022
Stolte, Skylar and Volle, Kyle and Indahlastari, Aprinda and Albizu, Alejandro and Woods, Adam and Brink, Kevin and Hale, Matthew and Fang, Ruogu "DOMINO: Domain-aware Loss for Deep Learning Calibration" Software Impacts, 2023
Stolte, Skylar E. and Indahlastari, Aprinda and Chen, Jason and Albizu, Alejandro and Dunn, Ayden and Pedersen, Samantha and See, Kyle B. and Woods, Adam J. and Fang, Ruogu "Precise and rapid whole-head segmentation from magnetic resonance images of older adults using deep learning" Imaging Neuroscience, v.2, 2024. https://doi.org/10.1162/imag_a_00090
Tran, Charlie and Shen, Kai and Liu, Kang and Ashok, Akshay and Ramirez-Zamora, Adolfo and Chen, Jinghua and Li, Yulin and Fang, Ruogu "Deep learning predicts prevalent and incident Parkinson's disease from UK Biobank fundus imaging" Scientific Reports, v.14, 2024. https://doi.org/10.1038/s41598-024-54251-1
Vu, Minh and Nguyen, Truc and Thai, My T. "NeuCEPT: Locally Discover Neural Networks' Mechanism via Critical Neurons Identification with Precision Guarantee" ICDM, 2022
Vu, Minh N. and Thai, My T. "Limitations of Perturbation-based Explanation Methods for Temporal Graph Neural Networks" ICDM, 2023. https://doi.org/10.1109/ICDM58522.2023.00071
Yousefzadeh, Nooshin and Tran, Charlie and Ramirez-Zamora, Adolfo and Chen, Jinghua and Fang, Ruogu and Thai, My T. "Neuron-level explainable AI for Alzheimer's Disease assessment from fundus images" Scientific Reports, v.14, 2024. https://doi.org/10.1038/s41598-024-58121-8

Please report errors in award information by writing to: awardsearch@nsf.gov.
