Award Abstract # 2134209
Collaborative Research: Robust Deep Learning in Real Physical Space: Generalization, Scalability, and Credibility

NSF Org: DMS (Division Of Mathematical Sciences)
Recipient: PURDUE UNIVERSITY
Initial Amendment Date: August 18, 2021
Latest Amendment Date: July 18, 2023
Award Number: 2134209
Award Instrument: Continuing Grant
Program Manager: Yong Zeng
yzeng@nsf.gov
(703) 292-7299
DMS (Division Of Mathematical Sciences)
MPS (Directorate for Mathematical and Physical Sciences)
Start Date: September 1, 2021
End Date: August 31, 2025 (Estimated)
Total Intended Award Amount: $800,000.00
Total Awarded Amount to Date: $800,000.00
Funds Obligated to Date: FY 2021 = $221,690.00
FY 2022 = $288,148.00
FY 2023 = $290,162.00
History of Investigator:
  • Guang Lin (Principal Investigator)
  • Guang Cheng (Co-Principal Investigator)
  • Stanley Chan (Co-Principal Investigator)
  • Jean Honorio (Co-Principal Investigator)
Recipient Sponsored Research Office: Purdue University
2550 NORTHWESTERN AVE # 1100
WEST LAFAYETTE
IN  US  47906-1332
(765)494-1055
Sponsor Congressional District: 04
Primary Place of Performance: Purdue University
150 N University Street
West Lafayette
IN  US  47907-2067
Primary Place of Performance Congressional District: 04
Unique Entity Identifier (UEI): YRXVL4JYCEF5
Parent UEI: YRXVL4JYCEF5
NSF Program(s): OFFICE OF MULTIDISCIPLINARY AC,
Special Projects - CCF,
IIS Special Projects,
CDS&E-MSS
Primary Program Source: 01002122DB NSF RESEARCH & RELATED ACTIVIT
01002223DB NSF RESEARCH & RELATED ACTIVIT
01002324DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 075Z, 079Z
Program Element Code(s): 125300, 287800, 748400, 806900
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.041, 47.049

ABSTRACT

The vulnerability of deep neural networks to small, imperceptible perturbations is a major challenge in machine learning today. In applications such as autonomous vehicles, security, and medical diagnosis, this weakness has severely limited the deployment of machine learning systems at scale. Existing theoretical studies, while laying a good foundation based on advanced statistical analyses, rely on idealized assumptions that are difficult to validate in real physical environments. Understanding the robustness of deep learning algorithms and how that robustness interacts with the real physical environment is therefore a critical step toward better explainability, generalization, and trustworthiness. This project aims to close the gap by developing new theories and computer vision systems that can be validated under realistic conditions. The research will create new technologies that can be translated into more secure and reliable commercial products, strengthening the global competitiveness of the United States; deliver trustworthy AI systems that can be deployed in surveillance and defense products to improve national security; expand next-generation workforce capacity through a complete training pipeline spanning K-12 outreach, undergraduate research, graduate mentoring, industry partnership, online learning modules, and curriculum development; broaden participation in STEM by leveraging the accessibility and appeal of the foundational research concepts in educational outreach targeting female participants from elementary school through graduate school; and promote the exchange of ideas across statistics, theoretical computer science, and image processing.

Robust machine learning in real physical space requires co-modeling the deep neural networks and the environment in which they operate. Research efforts that focus on one domain without engaging the other are unlikely to solve the problem. The combination of skills in electrical engineering, statistics, and computer science possessed by the Purdue-UCSD team offers a unique opportunity to address it. The team's technical approach is to reformulate the robust adversarial learning problem by incorporating environmental factors. Four specific research objectives will be pursued: (1) parametrizing the physical environment via a hierarchy of deterministic and generative approaches, so that the set of all possible distortions can be constrained; (2) analyzing the generalization bounds of neural networks in the presence of environmental factors, and assessing the credibility of such a system through studies of robustness and uncertainty quantification; (3) developing computationally efficient algorithms that seek the equilibrium points of a proposed minimax optimization; and (4) building a computational photography testbed to implement the concepts and validate the theoretical results. On the educational front, the project provides a suite of outreach activities for K-12 students to stimulate their interest in STEM, along with research opportunities for undergraduates.
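
To make the third objective concrete, the following is a minimal sketch of environment-aware minimax training in the spirit described above: an inner maximization searches over the parameters of a physically constrained distortion model, and an outer minimization updates the network against the worst-case distortion found. The distortion model (EnvDistortion), its parameters (a brightness gain and a noise scale), the parameter box, and all step sizes below are illustrative assumptions for exposition, not the project's actual formulation.

```python
# Minimal illustrative sketch of environment-parametrized minimax training.
# All names and hyperparameters here (EnvDistortion, gain/noise parameters,
# step sizes, parameter box) are assumptions for exposition only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EnvDistortion(nn.Module):
    """Differentiable stand-in for a physical distortion: a brightness gain
    plus additive noise, with theta = (gain, noise_scale)."""
    def forward(self, x, theta):
        gain, noise_scale = theta[0], theta[1]
        return gain * x + noise_scale * torch.randn_like(x)

def inner_maximize(model, distort, x, y, theta0, lo, hi, steps=5, lr=0.1):
    """Inner maximization: projected gradient ascent over the environment
    parameters theta, constrained to the physically plausible box [lo, hi]."""
    theta = theta0.clone().requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(distort(x, theta)), y)
        grad, = torch.autograd.grad(loss, theta)
        with torch.no_grad():
            theta += lr * grad.sign()                         # ascent step
            theta.copy_(torch.min(torch.max(theta, lo), hi))  # project onto box
    return theta.detach()

def train_step(model, distort, optimizer, x, y, lo, hi):
    """Outer minimization: descend on the loss at the worst-case environment."""
    theta_star = inner_maximize(model, distort, x, y,
                                theta0=torch.tensor([1.0, 0.0]), lo=lo, hi=hi)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(distort(x, theta_star)), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage on random data with a tiny classifier (illustration only):
if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    distort = EnvDistortion()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    x = torch.rand(16, 1, 28, 28)
    y = torch.randint(0, 10, (16,))
    lo = torch.tensor([0.5, 0.0])   # gain >= 0.5, noise_scale >= 0.0
    hi = torch.tensor([1.5, 0.2])   # gain <= 1.5, noise_scale <= 0.2
    print(train_step(model, distort, optimizer, x, y, lo, hi))
```

In practice the inner search could instead range over the parameters of a generative or physics-based distortion model (for example, turbulence or sensor-noise simulators), which is closer to the environment parametrization described in objective (1); the box projection is what encodes the physical constraint.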

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


(Showing: 1 - 10 of 76)
Lee, Hanbyul and Song, Qifan and Honorio, Jean. "Support Recovery in Sparse PCA with Incomplete Data." Advances in Neural Information Processing Systems, 2022.
Song, Qifan and Cheng, Guang. "Optimal false discovery control of minimax estimators." Bernoulli, v.29, 2023. https://doi.org/10.3150/22-BEJ1527
Barik, Adarsh and Honorio, Jean. "Fair Sparse Regression with Clustering: An Invex Relaxation for a Combinatorial Problem." Advances in Neural Information Processing Systems, 2021.
Barik, Adarsh and Honorio, Jean. "Sparse Mixed Linear Regression with Guarantees: Taming an Intractable Problem with Invex Relaxation." International Conference on Machine Learning, 2022.
Chan, Stanley H. "Computational Image Formation: Simulators in the Deep Learning Era." Journal of Imaging Science and Technology, v.67, 2023. https://doi.org/10.2352/J.ImagingSci.Technol.2023.67.6.060405
Chan, Stanley H. "On the Insensitivity of Bit Density to Read Noise in One-Bit Quanta Image Sensors." IEEE Sensors Journal, v.23, 2023. https://doi.org/10.1109/JSEN.2023.3235493
Chan, Stanley H. "Tilt-Then-Blur or Blur-Then-Tilt? Clarifying the Atmospheric Turbulence Model." IEEE Signal Processing Letters, 2022. https://doi.org/10.1109/LSP.2022.3200551
Chan, Stanley H. "What Does a One-Bit Quanta Image Sensor Offer?" IEEE Transactions on Computational Imaging, 2022. https://doi.org/10.1109/TCI.2022.3202012
Chan, Stanley H. and Weerasooriya, Hashan K. and Zhang, Weijian and Abshire, Pamela and Gyongy, Istvan and Henderson, Robert K. "Resolution Limit of Single-Photon LiDAR." 2024. https://doi.org/10.1109/CVPR52733.2024.02391
Chimitt, Nicholas and Almuallem, Ali and Chan, Stanley H. "Phase retrieval of a point spread function." 2024. https://doi.org/10.1117/12.3028192
Chimitt, Nicholas and Zhang, Xingguang and Chi, Yiheng and Chan, Stanley H. "Scattering and Gathering for Spatially Varying Blurs." IEEE Transactions on Signal Processing, v.72, 2024. https://doi.org/10.1109/TSP.2024.3375638
