Award Abstract # 2208361
Collaborative Research: Algorithms, Theory, and Validation of Deep Graph Learning with Limited Supervision: A Continuous Perspective

NSF Org: DMS (Division Of Mathematical Sciences)
Recipient: UNIVERSITY OF UTAH
Initial Amendment Date: August 23, 2022
Latest Amendment Date: August 13, 2024
Award Number: 2208361
Award Instrument: Continuing Grant
Program Manager: Yuliya Gorb, ygorb@nsf.gov, (703) 292-2113
DMS (Division Of Mathematical Sciences)
MPS (Directorate for Mathematical and Physical Sciences)
Start Date: September 1, 2022
End Date: August 31, 2026 (Estimated)
Total Intended Award Amount: $240,000.00
Total Awarded Amount to Date: $240,000.00
Funds Obligated to Date: FY 2022 = $73,168.00
FY 2023 = $82,551.00
FY 2024 = $84,281.00
History of Investigator:
  • Bao Wang (Principal Investigator)
    bwang@math.utah.edu
Recipient Sponsored Research Office: University of Utah
201 PRESIDENTS CIR
SALT LAKE CITY
UT  US  84112-9049
(801)581-6903
Sponsor Congressional District: 01
Primary Place of Performance: University of Utah
72 CENTRAL CAMPUS DR RM 3750
SALT LAKE CITY
UT  US  84112-9200
Primary Place of Performance Congressional District: 01
Unique Entity Identifier (UEI): LL8GLEVH6MG3
Parent UEI:
NSF Program(s): COMPUTATIONAL MATHEMATICS
Primary Program Source: 01002324DB NSF RESEARCH & RELATED ACTIVIT
01002223DB NSF RESEARCH & RELATED ACTIVIT
01002425DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 079Z, 9263
Program Element Code(s): 127100
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.049

ABSTRACT

Graph-structured data is ubiquitous in scientific and artificial intelligence applications, for instance, particle physics, computational chemistry, drug discovery, neuroscience, recommender systems, robotics, social networks, and knowledge graphs. Graph neural networks (GNNs) have achieved tremendous success in a broad class of graph learning tasks, including graph node classification, graph edge prediction, and graph generation. Nevertheless, GNNs suffer from several bottlenecks: 1) In contrast to many deep networks such as convolutional neural networks, increasing the depth of GNNs severely degrades accuracy, a phenomenon interpreted in the machine learning community as over-smoothing. 2) The performance of GNNs relies heavily on a sufficient number of labeled graph nodes; GNN predictions become significantly less reliable when fewer labeled nodes are available. This research aims to address these challenges by developing new mathematical understanding of GNNs and theoretically principled algorithms for graph deep learning with less training data. The project will train graduate students and postdoctoral associates through involvement in the research. The project will also integrate the research into teaching to advance data science education.
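The over-smoothing phenomenon mentioned above can be seen in a toy calculation (not from the award itself; the 4-node path graph and features below are hypothetical): repeatedly applying a row-normalized graph averaging operator, the linear part of a standard GCN layer, drives all node features toward the same constant, so deep stacks of such layers lose the ability to distinguish nodes.

```python
import numpy as np

# Hypothetical toy graph: a 4-node path with self-loops.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
D_inv = np.diag(1.0 / A.sum(axis=1))
P = D_inv @ A                     # row-normalized propagation matrix

X = np.array([[1.0], [0.0], [0.0], [1.0]])  # initial node features

spreads = []
for depth in (1, 8, 64):
    Xk = np.linalg.matrix_power(P, depth) @ X  # "depth" stacked layers
    spreads.append(float(Xk.max() - Xk.min()))
    print(f"depth={depth:3d}  feature spread={spreads[-1]:.6f}")
# The spread shrinks toward 0 as depth grows: node features
# become indistinguishable, which is over-smoothing.
```

Because P is row-stochastic and the graph is connected, P^k converges to a rank-one matrix, so the feature spread decays geometrically with depth.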

This project aims to develop next-generation continuous-depth GNNs leveraging computational mathematics tools and insights and to advance data-driven scientific simulation using the new GNNs. This project has three interconnected thrusts that revolve around pushing the envelope of theory and practice in graph deep learning with limited supervision using PDE and harmonic analysis tools: 1) developing a new generation of diffusion-based GNNs with certified guarantees for learning with deep architectures and less training data; 2) developing a new efficient attention-based approach for learning graph structures from the underlying data accompanied by uncertainty quantification; and 3) validating the approach in learning-assisted scientific simulation and multi-modal learning, together with software development.
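To make the "continuous-depth" idea concrete, here is a minimal sketch (an illustration under assumptions, not the project's actual method): feature propagation on a graph can be viewed as the graph heat equation dX/dt = -LX, where L is the random-walk graph Laplacian. Integrating to time t with forward Euler plays the role of network depth, except that t varies continuously and can be tuned like any other parameter. The 4-node graph below is hypothetical.

```python
import numpy as np

def graph_heat_flow(A, X, t=1.0, n_steps=50):
    """Integrate the graph heat equation dX/dt = -L X from 0 to t
    with forward Euler, where L = I - D^{-1} A (random-walk Laplacian)."""
    n = A.shape[0]
    D_inv = np.diag(1.0 / A.sum(axis=1))
    L = np.eye(n) - D_inv @ A
    h = t / n_steps                      # Euler step size
    for _ in range(n_steps):
        X = X - h * (L @ X)
    return X

# Hypothetical 4-node path graph with self-loops.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
X0 = np.array([[1.0], [0.0], [0.0], [1.0]])

spreads = []
for t in (0.5, 2.0, 8.0):
    Xt = graph_heat_flow(A, X0, t=t)
    spreads.append(float(Xt.max() - Xt.min()))
    print(f"t={t:4.1f}  feature spread={spreads[-1]:.4f}")
# Larger diffusion time t smooths features more; a trainable,
# continuously varying t is the continuous analogue of stacking layers.
```

In this continuous view, the diffusion time controls the smoothing strength directly, which is one reason PDE-based formulations are attractive for analyzing and mitigating over-smoothing.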

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


Baker, Justin and Cherkaev, Elena and Narayan, Akil and Wang, Bao "Learning Proper Orthogonal Decomposition of Complex Dynamics Using Heavy-ball Neural ODEs" Journal of Scientific Computing, v.95, 2023. https://doi.org/10.1007/s10915-023-02176-8
Hua, Yifan and Miller, Kevin and Bertozzi, Andrea L. and Qian, Chen and Wang, Bao "Efficient and Reliable Overlay Networks for Decentralized Federated Learning" SIAM Journal on Applied Mathematics, v.82, 2022. https://doi.org/10.1137/21M1465081
Hu, Mengqi and Lou, Yifei and Wang, Bao and Yan, Ming and Yang, Xiu and Ye, Qiang "Accelerated Sparse Recovery via Gradient Descent with Nonlinear Conjugate Gradient Momentum" Journal of Scientific Computing, v.95, 2023. https://doi.org/10.1007/s10915-023-02148-y
Baker, Justin and Wang, Qingsong "Implicit Graph Neural Networks: A Monotone Operator Viewpoint" International Conference on Machine Learning, 2023.
Kreusser, L. M. and Osher, S. J. and Wang, B. "A deterministic gradient-based approach to avoid saddle points" European Journal of Applied Mathematics, 2022. https://doi.org/10.1017/S0956792522000316
Wang, Bao and Xia, Hedi and Nguyen, Tan and Osher, Stanley "How does momentum benefit deep neural networks architecture design? A few case studies" Research in the Mathematical Sciences, v.9, 2022. https://doi.org/10.1007/s40687-022-00352-0
Wang, Bao and Ye, Qiang "Improving Deep Neural Networks Training for Image Classification With Nonlinear Conjugate Gradient-Style Adaptive Momentum" IEEE Transactions on Neural Networks and Learning Systems, 2023. https://doi.org/10.1109/TNNLS.2023.3255783
