
NSF Org: | ECCS Division of Electrical, Communications and Cyber Systems |
Recipient: |
Initial Amendment Date: | September 6, 2022 |
Latest Amendment Date: | July 11, 2025 |
Award Number: | 2145346 |
Award Instrument: | Continuing Grant |
Program Manager: | Huaiyu Dai, hdai@nsf.gov, (703) 292-4568, ECCS Division of Electrical, Communications and Cyber Systems, ENG Directorate for Engineering |
Start Date: | September 15, 2022 |
End Date: | August 31, 2027 (Estimated) |
Total Intended Award Amount: | $500,000.00 |
Total Awarded Amount to Date: | $500,000.00 |
Funds Obligated to Date: |
History of Investigator: |
Recipient Sponsored Research Office: | 110 INNER CAMPUS DR, AUSTIN, TX, US 78712-1139, (512) 471-6424 |
Sponsor Congressional District: |
Primary Place of Performance: | TX, US 78712-1532 |
Primary Place of Performance Congressional District: |
Unique Entity Identifier (UEI): |
Parent UEI: |
NSF Program(s): | CCSS-Comms Circuits & Sens Sys |
Primary Program Source: | 01002223DB NSF RESEARCH & RELATED ACTIVITIES |
Program Reference Code(s): |
Program Element Code(s): |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.041 |
ABSTRACT
Efficient and scalable optimization algorithms (also known as optimizers) are the cornerstone of almost all computational fields. In many practical applications of optimization, one repeatedly solves a certain type of optimization task over a specific distribution of data. Learning to optimize (L2O) is an emerging paradigm that automatically develops an optimization method (optimizer) by learning from its performance on a set of past optimization tasks. When applied to new but similar optimization tasks, the learned optimizer can then offer promising benefits, including faster convergence and/or better solution quality. As a fast-growing new field, L2O still faces many open challenges concerning both its theoretical underpinnings and its practical applicability. In particular, learned optimizers are often hard to interpret, trust, and scale.
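To make the L2O paradigm concrete, the following is a minimal, self-contained sketch (not from the award) of meta-training a toy learned optimizer: the optimizer is just a vector of per-coordinate step sizes, its parameters are tuned on a distribution of random quadratic tasks, and it is then applied to a new task drawn from the same distribution. The task family, variable names, and the finite-difference meta-gradient are illustrative assumptions; real L2O systems typically parameterize the update rule with a neural network and meta-train it by backpropagating through the unrolled optimization.

```python
# Minimal L2O sketch, assuming a toy setting: learn per-coordinate step sizes
# on random quadratic tasks, then deploy the learned rule on a new task.
import numpy as np

rng = np.random.default_rng(0)
DIM, INNER_STEPS = 5, 20

def sample_task():
    """Sample a random strongly convex quadratic f(x) = 0.5 x^T A x - b^T x."""
    A = np.diag(rng.uniform(0.5, 2.0, DIM))
    b = rng.normal(size=DIM)
    return A, b

def unroll(log_lr, A, b):
    """Run the learned update rule for INNER_STEPS steps and return the final loss."""
    lr = np.exp(log_lr)                 # learned per-coordinate step sizes
    x = np.zeros(DIM)
    for _ in range(INNER_STEPS):
        grad = A @ x - b
        x = x - lr * grad               # the "learned optimizer" update
    return 0.5 * x @ A @ x - b @ x

# Meta-training: tune the optimizer's parameters on a stream of past tasks,
# using a simple finite-difference estimate of the meta-gradient.
log_lr = np.zeros(DIM)
meta_lr, eps = 0.05, 1e-4
for _ in range(200):
    A, b = sample_task()
    meta_grad = np.zeros(DIM)
    for i in range(DIM):
        e = np.zeros(DIM)
        e[i] = eps
        meta_grad[i] = (unroll(log_lr + e, A, b) - unroll(log_lr - e, A, b)) / (2 * eps)
    log_lr -= meta_lr * meta_grad

# Deployment: apply the learned optimizer to a new, similar task.
A, b = sample_task()
x_star = np.linalg.solve(A, b)
print("learned optimizer final loss:", unroll(log_lr, A, b))
print("optimal loss:                ", 0.5 * x_star @ A @ x_star - b @ x_star)
```

The sketch is only meant to illustrate the train-on-past-tasks, deploy-on-new-tasks loop described above, not any specific method proposed in the award.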
The project targets those research gaps and extends to mid-term and long-term research directions pertaining to the foundations of L2O. Specifically, the project proposes a multi-pronged research agenda that includes: a novel symbolic representation that makes L2O lightweight and more interpretable; a Bayesian L2O modeling framework that can quantify optimizer uncertainty; new customized designs of L2O model architectures and regularizers that can robustly encode problem-specific priors; and a generic amalgamation scheme that bridges L2O training to classical optimizers as teachers. Each thrust addresses a unique aspect of L2O (representation, calibration, model design, and training strategy); the thrusts are also mutually compatible and can be applied together. The proposed efforts draw on cutting-edge advances from deep learning, symbolic learning, Bayesian optimization, and meta-learning. Successful outcomes are expected to turn L2O into a principled science as well as a mature tool for real applications. The project has an integrated plan for result dissemination, education, and outreach. In particular, all new algorithms resulting from the project will be integrated into the Open-L2O software package, developed and maintained by the PI's group.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH