Award Abstract # 2205417
Collaborative Research: SCH: Geometry and Topology for Interpretable and Reliable Deep Learning in Medical Imaging

NSF Org: IIS (Division of Information & Intelligent Systems)
Recipient: RECTOR & VISITORS OF THE UNIVERSITY OF VIRGINIA
Initial Amendment Date: August 19, 2022
Latest Amendment Date: August 19, 2022
Award Number: 2205417
Award Instrument: Standard Grant
Program Manager: Sylvia Spengler
sspengle@nsf.gov
(703) 292-7347
IIS: Division of Information & Intelligent Systems
CSE: Directorate for Computer and Information Science and Engineering
Start Date: September 1, 2022
End Date: August 31, 2026 (Estimated)
Total Intended Award Amount: $622,992.00
Total Awarded Amount to Date: $622,992.00
Funds Obligated to Date: FY 2022 = $622,992.00
History of Investigator:
  • Preston Fletcher (Principal Investigator)
    ptf8v@virginia.edu
  • Jonathan Garneau (Co-Principal Investigator)
Recipient Sponsored Research Office: University of Virginia Main Campus
1001 EMMET ST N
CHARLOTTESVILLE
VA  US  22903-4833
(434)924-4270
Sponsor Congressional District: 05
Primary Place of Performance: University of Virginia Main Campus
85 Engineers Way
CHARLOTTESVILLE
VA  US  22904-4195
Primary Place of Performance Congressional District: 05
Unique Entity Identifier (UEI): JJG6HU8PA4S5
Parent UEI:
NSF Program(s): Smart and Connected Health
Primary Program Source: 01002223DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 8018
Program Element Code(s): 801800
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Deep learning models are being developed for safety-critical applications such as health care, autonomous vehicles, and security, where their impressive performance has the potential to profoundly affect human lives. For example, deep neural networks (DNNs) in medical imaging have demonstrated diagnostic capabilities approaching those of expert radiologists. However, deep learning has not entered standard clinical care, primarily because of a limited understanding of why a model works and why it fails. The goal of this project is to develop methods that make machine learning models interpretable and reliable, and thus bridge the trust gap that keeps machine learning from translating to the clinic. The project pursues this goal by investigating the mathematical foundations -- specifically the geometry and topology -- of DNNs. Building on these foundations, it will develop computational tools that improve the interpretability and reliability of DNNs. The resulting methods will be broadly applicable wherever deep learning is used, including health care, security, computer vision, and natural language processing.

The power of a deep neural network lies in its hidden layers, where the network learns internal representations of input data. This research project centers on the hypothesis that geometry and topology provide critical tools for analyzing these internal representations. The first goal of the project is to develop a rigorous mathematical and algorithmic foundation for describing the geometry and topology of a neural network's internal representations, and to design efficient algorithms for the geometric and topological computations needed to explore these spaces. The second goal is to apply these tools to improve the interpretability of deep learning, both by linking a model's internal representation to interpretable and trusted features and by interactively visualizing the landscape of that representation. The third goal focuses on model reliability, using geometry and topology for failure identification, mitigation, and prevention. Finally, the project will test the developed techniques for reliable and interpretable neural networks in a real-world setting, aiding expert oncologists in predicting patient outcomes in head and neck cancers, e.g., whether a tumor will metastasize.
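To give a concrete flavor of the kind of topological computation described above (this is an illustrative sketch, not the project's actual pipeline), the following minimal Python example computes the 0-dimensional persistence diagram of a point cloud standing in for a layer's internal representations. It relies on the standard fact that H0 death times equal the edge weights of a minimum spanning tree, implemented here with Kruskal's algorithm and union-find using only NumPy; the toy two-cluster data is an assumption made for demonstration.

```python
import numpy as np

def h0_persistence(points):
    """Return H0 death times for a point cloud (all births are at 0).

    The death times of connected components in a Vietoris-Rips filtration
    are exactly the edge weights of a minimum spanning tree of the points.
    """
    n = len(points)
    # Pairwise Euclidean distances between representation vectors.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)

    # Kruskal's algorithm with a union-find structure (path halving).
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    deaths = []
    edges = sorted((d[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:          # two components merge: one H0 class dies at w
            parent[ri] = rj
            deaths.append(w)
    return np.array(deaths)   # n - 1 finite death times for n points

# Toy "hidden representations": two well-separated Gaussian clusters in 8-D.
rng = np.random.default_rng(0)
reps = np.vstack([rng.normal(0.0, 0.1, (20, 8)),
                  rng.normal(5.0, 0.1, (20, 8))])
deaths = h0_persistence(reps)
```

For this data, one death time (the merge of the two clusters) is far larger than all the others, which is how a persistence diagram surfaces cluster structure in a representation space. In practice, libraries such as GUDHI or Ripser compute higher-dimensional homology as well; the point here is only the underlying idea.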

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


Jin, Yinzhu and Dwyer, Matthew B. and Fletcher, P. Thomas. "Measuring Feature Dependency of Neural Networks by Collapsing Feature Dimensions in the Data Manifold," 2024. https://doi.org/10.1109/ISBI56570.2024.10635874
Jin, Yinzhu and McDaniel, Rory and Tatro, N. Joseph and Catanzaro, Michael J. and Smith, Abraham D. and Bendich, Paul and Dwyer, Matthew B. and Fletcher, P. Thomas. "Implications of data topology for deep generative models," Frontiers in Computer Science, v.6, 2024. https://doi.org/10.3389/fcomp.2024.1260604
Spears, Tyler and Fletcher, P. Thomas. "Learning Spatially-Continuous Fiber Orientation Functions," 2024. https://doi.org/10.1109/ISBI56570.2024.10635838
Zhu, Shen and Zawar, Ifrah and Kapur, Jaideep and Fletcher, P. Thomas. "Quantifying Hippocampal Shape Asymmetry in Alzheimer's Disease Using Optimal Shape Correspondences," 2024. https://doi.org/10.1109/ISBI56570.2024.10635697

