
NSF Org: DMS Division Of Mathematical Sciences
Initial Amendment Date: June 11, 2021
Latest Amendment Date: June 11, 2021
Award Number: 2113724
Award Instrument: Standard Grant
Program Manager: Yulia Gel, DMS Division Of Mathematical Sciences, MPS Directorate for Mathematical and Physical Sciences
Start Date: July 1, 2021
End Date: June 30, 2024 (Estimated)
Total Intended Award Amount: $200,000.00
Total Awarded Amount to Date: $200,000.00
Recipient Sponsored Research Office: 1200 E CALIFORNIA BLVD, PASADENA, CA, US 91125-0001, (626) 395-6219
Primary Place of Performance: CA, US 91125-0001
NSF Program(s): STATISTICS
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.049
ABSTRACT
Methods for the solution of inverse problems arising in domains such as image analysis, the geosciences, and computational genomics are typically designed based on a human analyst's detailed understanding of the structure underlying the problem. This project aims to develop new data-driven approaches to learning solution methods for inverse problems, along with the associated statistical foundations. Specifically, the project will provide a new approach to the data-driven design of learned regularizers that can be computed or optimized within a specified computational budget and that come with statistical guarantees. The research will engage both graduate and undergraduate students and will be disseminated to a broader audience through the development of new courses.
Regularization techniques are widely employed in the solution of model selection and statistical inverse problems because of their effectiveness in addressing difficulties due to ill-posedness, access to only a small number of observations, or the high dimensionality of the signal or model to be inferred. In their most common manifestation, these methods take the form of penalty functions added to the objective in optimization-based formulations. The design of the penalty function is based on prior domain-specific expertise about the particular model selection or inverse problem at hand, with a view to promoting a desired structure in the solution. This project will develop a framework for the construction of algorithms for inferential problems that addresses the following questions: What if we do not know in advance the structure we seek in our solution due to a lack of detailed domain knowledge? Can we identify a suitable regularizer directly from data rather than from human-provided expertise? What are the fundamental limitations on the sample complexity and the amount of computational resources required in such a framework? Statistically, how do we provide confidence bounds for point estimates that lie in a collection of regularizers?
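To make the penalty-function formulation concrete, here is a minimal toy sketch (not from the project itself) of Tikhonov (ridge) regularization applied to an underdetermined linear inverse problem. The matrix A, the sparse signal x_true, and the penalty weight lam are all illustrative assumptions.

```python
import numpy as np

# Toy ill-posed inverse problem: recover x from y = A x + noise,
# where A has many more columns than rows (underdetermined).
rng = np.random.default_rng(0)
m, n = 20, 50
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[:5] = 1.0                      # simple structured signal
y = A @ x_true + 0.01 * rng.standard_normal(m)

# Without a penalty, the normal equations A^T A x = A^T y are singular
# (rank(A^T A) <= m < n), so the least-squares problem has infinitely
# many solutions. Adding the penalty lam * ||x||^2 to the objective
# makes the minimizer unique, with a closed form:
lam = 0.1
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
```

A structure-promoting regularizer (for example, an l1 penalty to promote sparsity) would replace the squared-norm term; the project's question is how to choose such a penalty from data when domain expertise does not suggest one.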
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without a charge during the embargo (administrative interval). Some links on this page may take you to non-federal websites, whose policies may differ from those of this site.
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
The aim of this project was to develop the statistical and computational foundations of learning regularizers from data. Regularization is a popular method for solving ill-posed inverse problems arising in statistics and signal processing. Regularizers are often derived from human-provided domain expertise in the context of a particular application, and they are useful for promoting a desired structure in the solution of an inverse problem. However, in many settings such prior expertise may not be available, and regularizers must instead be learned from data. This project developed new methods for learning regularizers from data and identified some of the associated fundamental computational and statistical limitations. Papers partially supported by this project include:
-- Model selection over partially ordered sets, Proceedings of the National Academy of Sciences, 2024
-- Kronecker Product Approximation of Operators in Spectral Norm via Alternating SDP, SIAM Journal on Matrix Analysis and Applications, 2023
-- Terracini convexity, Mathematical Programming, 2023
-- Spectrahedral Regression, SIAM Journal on Optimization, 2023
-- Optimal Regularization for a Data Source, preprint
-- Controlling the False Discovery Rate in Subspace Selection, preprint
-- Free Descriptions of Convex Sets, preprint
-- Modeling groundwater levels in California's Central Valley by hierarchical Gaussian process and neural network regression, preprint
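As a toy illustration of choosing regularization from data (a far simpler stand-in for the learned-regularizer framework studied in the project), one can select the strength of a fixed ridge penalty by holdout validation. All names and values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 30, 60
A = rng.standard_normal((m, n))
x_true = np.concatenate([np.ones(5), np.zeros(n - 5)])
y = A @ x_true + 0.05 * rng.standard_normal(m)

# Split the observations: fit on one half, score on the other.
train, val = np.arange(0, m, 2), np.arange(1, m, 2)

def ridge(A, y, lam):
    """Closed-form minimizer of ||A x - y||^2 + lam * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# "Learn" the regularization strength from data: pick the penalty
# weight whose estimate best predicts the held-out observations.
lams = [10.0 ** k for k in range(-4, 3)]
errs = [np.linalg.norm(A[val] @ ridge(A[train], y[train], lam) - y[val])
        for lam in lams]
best = lams[int(np.argmin(errs))]
```

Selecting a scalar penalty weight is the simplest instance of data-driven regularization; the project's framework concerns the much harder problem of learning the functional form of the regularizer itself, with accompanying computational and statistical guarantees.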
Last Modified: 10/15/2024
Modified by: Venkat Chandrasekaran