Award Abstract # 2309810
Optimization-based Implicit Deep Learning, Theory and Applications

NSF Org: DMS
Division Of Mathematical Sciences
Recipient: TRUSTEES OF THE COLORADO SCHOOL OF MINES
Initial Amendment Date: July 14, 2023
Latest Amendment Date: August 21, 2024
Award Number: 2309810
Award Instrument: Continuing Grant
Program Manager: Yuliya Gorb
ygorb@nsf.gov
 (703)292-2113
DMS
 Division Of Mathematical Sciences
MPS
 Directorate for Mathematical and Physical Sciences
Start Date: July 15, 2023
End Date: June 30, 2026 (Estimated)
Total Intended Award Amount: $294,995.00
Total Awarded Amount to Date: $193,878.00
Funds Obligated to Date: FY 2023 = $95,575.00
FY 2024 = $98,303.00
History of Investigator:
  • Samy Wu Fung (Principal Investigator)
    swufung@mines.edu
Recipient Sponsored Research Office: Colorado School of Mines
1500 ILLINOIS ST
GOLDEN
CO  US  80401-1887
(303)273-3000
Sponsor Congressional District: 07
Primary Place of Performance: Colorado School of Mines
1500 ILLINOIS ST
GOLDEN
CO  US  80401-1887
Primary Place of Performance Congressional District: 07
Unique Entity Identifier (UEI): JW2NGMP4NMA3
Parent UEI: JW2NGMP4NMA3
NSF Program(s): COMPUTATIONAL MATHEMATICS
Primary Program Source: 01002324DB NSF RESEARCH & RELATED ACTIVIT
01002425DB NSF RESEARCH & RELATED ACTIVIT
01002526DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 079Z, 9263
Program Element Code(s): 127100
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.049

ABSTRACT

The past decade has seen remarkable success in deep learning. However, a significant challenge remains: ensuring that these models are interpretable and reliable. In many applications, deep neural networks (DNNs) need to provide guarantees on their outputs, such as keeping a self-driving car within its lane. Many of these tasks can be formulated as optimization problems, and optimization algorithms offer interpretable and reliable solutions. Unfortunately, such optimization-based models do not leverage data and thus fall short of state-of-the-art deep learning models. This research will enhance the interpretability and reliability of deep learning methods and thereby improve public safety in applications where such methods are deployed. In addition, the project will provide valuable educational opportunities for the students involved, who will gain knowledge of inverse problems, optimization, and machine learning, transferable skills applicable in academia, government, and industry.

The project aims to develop a framework that combines the interpretability and reliability of optimization algorithms with the design and training of DNNs. The primary focus is on implicit networks, a class of DNNs whose outputs are defined implicitly by fixed-point or optimality conditions rather than by a fixed number of computations, as in traditional DNNs with a set number of layers. The integration of optimization algorithms into implicit networks yields what are referred to as implicit learning-to-optimize (L2O) networks. Implicit L2O networks have the potential to overcome the limitations of traditional DNNs, including their lack of reliability and interpretability; however, training and designing implicit L2O models present additional challenges that hinder their widespread adoption. To address these challenges, the research aims to develop a universal implicit L2O framework.
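The fixed-point idea above can be made concrete: instead of applying a fixed stack of layers, an implicit layer returns a vector z* that satisfies an equilibrium condition z* = f(z*, x). The sketch below, a minimal illustration in NumPy, solves such a condition by plain fixed-point iteration; the tanh update and the weights W, U, b are illustrative assumptions, not the project's actual model.

```python
# Minimal sketch of an implicit (fixed-point) layer.
# The output z* satisfies z* = tanh(W @ z* + U @ x + b); the update rule
# and parameter names here are illustrative, not the award's method.
import numpy as np

def implicit_layer(x, W, U, b, tol=1e-8, max_iter=500):
    """Solve the fixed-point equation z = tanh(W z + U x + b) by iteration."""
    z = np.zeros(W.shape[0])
    for _ in range(max_iter):
        z_new = np.tanh(W @ z + U @ x + b)
        if np.linalg.norm(z_new - z) < tol:
            return z_new
        z = z_new
    return z

# For convergence of the iteration, the map should be a contraction,
# e.g. W with small spectral norm (|tanh'| <= 1 keeps the bound).
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((4, 4))
U = rng.standard_normal((4, 3))
b = rng.standard_normal(4)
x = rng.standard_normal(3)

z_star = implicit_layer(x, W, U, b)
# Residual of the equilibrium condition; small at convergence.
residual = np.linalg.norm(z_star - np.tanh(W @ z_star + U @ x + b))
```

Note that the layer's "depth" is implicit: the number of iterations adapts to the tolerance rather than being fixed in advance, which is exactly what distinguishes implicit networks from traditional DNNs with a set number of layers.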

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

McKenzie, D., Heaton, H., Li, Q., Wu Fung, S., Osher, S., and Yin, W., "Three-Operator Splitting for Learning to Predict Equilibria in Convex Games," SIAM Journal on Mathematics of Data Science, v.6, 2024. https://doi.org/10.1137/22M1544531
