Award Abstract # 2323532
CIF: Small: An Algebraic, Convex, and Scalable Framework for Kernel Learning with Activation Functions

NSF Org: CCF
Division of Computing and Communication Foundations
Recipient: ARIZONA STATE UNIVERSITY
Initial Amendment Date: November 28, 2023
Latest Amendment Date: November 28, 2023
Award Number: 2323532
Award Instrument: Standard Grant
Program Manager: Alfred Hero
ahero@nsf.gov
 (703)292-0000
CCF
 Division of Computing and Communication Foundations
CSE
 Directorate for Computer and Information Science and Engineering
Start Date: December 1, 2023
End Date: November 30, 2026 (Estimated)
Total Intended Award Amount: $334,202.00
Total Awarded Amount to Date: $334,202.00
Funds Obligated to Date: FY 2024 = $334,202.00
History of Investigator:
  • Matthew Peet (Principal Investigator)
    mpeet@asu.edu
Recipient Sponsored Research Office: Arizona State University
660 S MILL AVENUE STE 204
TEMPE
AZ  US  85281-3670
(480)965-5479
Sponsor Congressional District: 04
Primary Place of Performance: Arizona State University
660 S MILL AVE STE 312
TEMPE
AZ  US  85281-3670
Primary Place of Performance Congressional District: 04
Unique Entity Identifier (UEI): NTLHJXM55KZ6
Parent UEI:
NSF Program(s): Comm & Information Foundations
Primary Program Source: 01002425DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 079Z, 7797, 7923, 7935
Program Element Code(s): 779700
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Public interest in machine learning has increased significantly in recent years, with applications in a diversity of fields, from medical diagnosis to speech recognition to autonomous driving to advertising. The ability to sustain this interest, however, will depend on whether machine learning algorithms continue to advance in terms of reliability, scalability, and interpretability. As more data becomes available, will self-driving cars become safer? Will Siri understand you better? Will doctors be able to better understand the causes and treatments of diseases? While neural networks and deep learning have seen widespread adoption in recent years, the algorithms which underlie these methods have not changed substantially in over 20 years. This project, therefore, revisits the fundamental mathematics which underlies machine learning algorithms, integrating classical results with the popular neural-network-based approaches. This mathematical framework is then used to propose new methods for improving the accuracy of machine learning, for increasing the ability to process large data sets, and for allowing the results of machine learning algorithms to be interpreted more readily in terms of measurable physical quantities.

To achieve the goals of accuracy, scalability, and interpretability, the project poses an algebraic reformulation of the classical problem of learning the kernel. Specifically, for any given kernel algebra, the positive kernels in that algebra and their associated feature maps may be represented by positive matrices, leading to a convex optimization problem whose solution yields an explicit feature map which may be interpreted in terms of measurable physical quantities. Based on this framework, activation functions are used to define kernel algebras which are universal, in that they are dense in the set of all kernels, and whose feature maps mimic those of the neural tangent kernel associated with neural networks, leading to improved accuracy of the algorithms. Next, a saddle-point representation and primal-dual approach is used to convert the kernel learning problem to quadratic programming, resulting in more scalable kernel learning algorithms. Finally, a singular value decomposition of the resulting feature map is obtained by solving an associated partial differential equation. This decomposition is used to identify key features in the data and, furthermore, yields reduced algorithms which scale linearly with the number of samples, implying scalability to datasets with tens of thousands of samples.
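The core idea of the first step above can be illustrated with a minimal sketch (not the project's actual algorithm): if Z is an explicit feature map, then any positive semidefinite matrix P defines a positive kernel K_P(x, y) = Z(x)^T P Z(y), so searching over kernels reduces to a convex search over PSD matrices. The monomial feature map, the toy quadratic data, and the kernel-target alignment objective below are all hypothetical choices for illustration; the update is projected gradient ascent with a trace normalization.

```python
import numpy as np

def Z(x):
    """Hypothetical explicit feature map: monomials 1, x, x^2."""
    return np.array([1.0, x, x**2])

def gram(P, xs):
    """Gram matrix of K_P(x, y) = Z(x)^T P Z(y); P PSD implies K_P is a positive kernel."""
    F = np.vstack([Z(x) for x in xs])  # n x d matrix of features
    return F @ P @ F.T

def project_psd(P):
    """Project a symmetric matrix onto the PSD cone by clipping negative eigenvalues."""
    w, V = np.linalg.eigh((P + P.T) / 2)
    return V @ np.diag(np.clip(w, 0.0, None)) @ V.T

# Toy data: labels generated by a quadratic target.
xs = np.linspace(-1.0, 1.0, 20)
y = 2.0 * xs**2 - 1.0

def alignment(P):
    """Unnormalized kernel-target alignment y^T K_P y (higher is better)."""
    return float(y @ gram(P, xs) @ y)

# Learn P by projected gradient ascent on the alignment; the gradient of
# y^T K_P y with respect to P is v v^T where v = F^T y.  Renormalizing the
# trace keeps the learned kernel bounded.
P = np.eye(3) / 3.0
F = np.vstack([Z(x) for x in xs])
v = F.T @ y
align_before = alignment(P)
for _ in range(100):
    P = project_psd(P + 1e-3 * np.outer(v, v))
    P /= np.trace(P)
align_after = alignment(P)
```

Because every iterate P stays PSD, the learned Gram matrix is guaranteed positive semidefinite at every step, which is the convexity and validity property the algebraic framework exploits; the project's actual formulation replaces this toy objective with the full kernel learning problem and solves it at scale via the saddle-point/quadratic-programming route.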

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


Jagt, Declan and Peet, Matthew "Constructive Representation of Functions in N-Dimensional Sobolev Space" arXiv.org, 2024
Talitckii, Aleksandr and Colbert, Brendon and Peet, Matthew M. "Efficient Convex Algorithms for Universal Kernel Learning" Journal of Machine Learning Research, v.25, 2024
Talitckii, Aleksandr and Mangal, Joslyn L. and Colbert, Brendon K. and Acharya, Abhinav P. and Peet, Matthew M. "Employing Feature Selection Algorithms to Determine the Immune State of Mice Model of Rheumatoid Arthritis" IEEE Journal of Biomedical and Health Informatics, 2023. https://doi.org/10.1109/JBHI.2023.3327230
