
NSF Org: | IIS Division of Information & Intelligent Systems |
Recipient: | |
Initial Amendment Date: | August 19, 2022 |
Latest Amendment Date: | August 19, 2022 |
Award Number: | 2205418 |
Award Instrument: | Standard Grant |
Program Manager: | Sylvia Spengler, sspengle@nsf.gov, (703) 292-7347, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering |
Start Date: | September 1, 2022 |
End Date: | August 31, 2026 (Estimated) |
Total Intended Award Amount: | $570,102.00 |
Total Awarded Amount to Date: | $570,102.00 |
Funds Obligated to Date: | |
History of Investigator: | |
Recipient Sponsored Research Office: | 201 PRESIDENTS CIR, SALT LAKE CITY, UT, US 84112-9049, (801) 581-6903 |
Sponsor Congressional District: | |
Primary Place of Performance: | 75 S 2000 E, SALT LAKE CITY, UT, US 84112-8930 |
Primary Place of Performance Congressional District: | |
Unique Entity Identifier (UEI): | |
Parent UEI: | |
NSF Program(s): | Smart and Connected Health |
Primary Program Source: | |
Program Reference Code(s): | |
Program Element Code(s): | |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.070 |
ABSTRACT
Deep learning models are being developed for safety-critical applications, such as health care, autonomous vehicles, and security. Their impressive performance has the potential to make a profound impact on human lives. For example, deep neural networks (DNNs) in medical imaging have demonstrated diagnostic capabilities approaching those of expert radiologists. However, deep learning has not made it into standard clinical care, primarily due to a lack of understanding of why a model works and why it fails. The goal of this project is to develop methods for making machine learning models interpretable and reliable, and thus to bridge the trust gap that keeps machine learning from being translated to the clinic. The project pursues this goal by investigating the mathematical foundations -- specifically the geometry and topology -- of DNNs. Building on these foundations, it will develop computational tools that improve the interpretability and reliability of DNNs. The resulting methods will be broadly applicable wherever deep learning is used, including health care, security, computer vision, and natural language processing.
The power of a deep neural network lies in its hidden layers, where the network learns internal representations of input data. This research project centers on the hypothesis that geometry and topology provide critical tools for analyzing those internal representations. The first goal of the project is to develop a rigorous mathematical and algorithmic foundation for describing the geometry and topology of a neural network's internal representations, and then to design efficient algorithms for the geometric and topological computations needed to explore these spaces. The second goal is to apply these tools to improve the interpretability of deep learning, by linking a model's internal representation with interpretable and trusted features and by providing interactive visualization of the landscape of that representation. The third goal focuses on model reliability, using geometry and topology for failure identification, mitigation, and prevention. Finally, the project will test the developed techniques for reliable and interpretable neural networks in a real-world setting, aiding expert oncologists in predicting patient outcomes in head and neck cancers, e.g., whether a tumor will metastasize.
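To make the pipeline concrete, here is a minimal sketch (not the project's actual code; the toy network, weights, and data are hypothetical) of the first step such geometric and topological analyses share: extracting a hidden-layer representation of a set of inputs and computing the pairwise-distance matrix between those representations. A distance matrix of this kind is the standard input to geometric summaries and to topological constructions such as a Vietoris-Rips filtration for persistent homology.

```python
# Sketch: hidden-layer representations and their pairwise-distance matrix.
# All weights and inputs below are hypothetical toy data for illustration.
import math
import random

random.seed(0)

def relu(v):
    return [max(0.0, x) for x in v]

def hidden_representation(x, W):
    """One hidden layer: h = ReLU(W x)."""
    return relu([sum(w * xj for w, xj in zip(row, x)) for row in W])

# Toy 3x4 weight matrix and five 4-dimensional inputs.
W = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
inputs = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(5)]

# Internal representations of the inputs in the hidden layer.
reps = [hidden_representation(x, W) for x in inputs]

def dist(a, b):
    """Euclidean distance between two representations."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

# Pairwise-distance matrix over the representation space: the starting
# point for geometric and topological tools (e.g., a Rips filtration).
D = [[dist(a, b) for b in reps] for a in reps]
```

In practice one would extract activations from a trained DNN and hand `D` (or the point cloud `reps` itself) to a topological-data-analysis library; the sketch only shows the shape of that interface.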
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH