Award Abstract # 2145670
CAREER: Foundations of Federated Multi-Task Learning

NSF Org: IIS, Division of Information & Intelligent Systems
Recipient: CARNEGIE MELLON UNIVERSITY
Initial Amendment Date: May 25, 2022
Latest Amendment Date: April 25, 2023
Award Number: 2145670
Award Instrument: Continuing Grant
Program Manager: Vladimir Pavlovic
vpavlovi@nsf.gov
(703) 292-8318
IIS: Division of Information & Intelligent Systems
CSE: Directorate for Computer and Information Science and Engineering
Start Date: June 1, 2022
End Date: May 31, 2027 (Estimated)
Total Intended Award Amount: $597,153.00
Total Awarded Amount to Date: $589,057.00
Funds Obligated to Date: FY 2022 = $368,720.00
FY 2023 = $220,337.00
History of Investigator:
  • Virginia Smith (Principal Investigator)
    smithv@andrew.cmu.edu
Recipient Sponsored Research Office: Carnegie-Mellon University
5000 FORBES AVE
PITTSBURGH
PA  US  15213-3890
(412)268-8746
Sponsor Congressional District: 12
Primary Place of Performance: Carnegie-Mellon University
Pittsburgh
PA  US  15213-3815
Primary Place of Performance Congressional District: 12
Unique Entity Identifier (UEI): U3NKNFLNQ613
Parent UEI: U3NKNFLNQ613
NSF Program(s): Robust Intelligence
Primary Program Source: 01002223DB NSF RESEARCH & RELATED ACTIVITIES
01002324DB NSF RESEARCH & RELATED ACTIVITIES
01002627DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 1045, 7495
Program Element Code(s): 749500
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Mobile phones, wearable devices, and smart homes are just a few of the modern distributed networks generating a wealth of data each day. Due to the growing computational power of edge devices, coupled with concerns over transmitting private data, it is increasingly attractive to store data locally and push network computation to the edge. Federated learning explores training machine learning models at the edge in distributed networks. While federated learning has shown tremendous promise for enabling edge applications, practical deployment is currently stymied by a number of competing constraints. In addition to being accurate, federated learning methods must scale to potentially massive networks of devices, and must exhibit trustworthy behavior, addressing pragmatic concerns related to issues such as user privacy, fairness, and robustness. In this project, we explore multi-task learning, a technique that learns separate but related models for each device in the network, as a unified approach to addressing the competing constraints of federated learning. The objective of the project is to develop scalable multi-task learning methods that are suitable for practical federated networks, and to rigorously study the foundational properties of federated multi-task learning with respect to the goals of accuracy, scalability, and trustworthiness. In doing so, the research will unlock a new generation of federated learning systems that can holistically address the constraints of realistic federated networks.
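
For illustration, one round of standard federated training (in the style of federated averaging) can be sketched as follows. The scalar models, quadratic local losses, and parameter choices below are illustrative assumptions for exposition, not part of the award; the point is only that devices share model updates rather than raw data.

```python
import random

# Illustrative sketch of federated averaging with scalar models.
# Device k holds data whose local optimum is c_k; devices never transmit
# c_k (their "data"), only locally trained models, which the server averages.
random.seed(1)
num_devices, local_steps, lr, rounds = 10, 5, 0.1, 100

centers = [random.uniform(-1, 1) for _ in range(num_devices)]

w = 0.0  # global model held by the server
for _ in range(rounds):  # communication rounds
    local_models = []
    for k in range(num_devices):
        v = w  # each device starts from the current global model
        for _ in range(local_steps):
            # local gradient steps on F_k(v) = 0.5 * (v - c_k)^2
            v -= lr * (v - centers[k])
        local_models.append(v)
    w = sum(local_models) / num_devices  # server averages local models

mean_c = sum(centers) / num_devices  # the single global model drifts here
```

With heterogeneous device data, the averaged global model lands near the mean of the device optima, serving every device equally but none of them exactly; this is the gap that per-device (multi-task) models aim to close.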

The goal of this project is to establish and rigorously study the use of federated multi-task learning. While the accuracy benefits of federated multi-task learning are well known, the work charts two new directions. First, the project develops methods to realize multi-task learning at scale in massive federated networks. Second, the project shows that multi-task learning, by improving privacy, fairness, and robustness, is in fact key for trustworthy federated learning. The technical aims of the project are divided into three thrusts. First, by approximating standard notions of multi-task learning, the project will develop and rigorously study a family of highly scalable federated multi-task learning objectives. Second, the privacy implications of multi-task learning will be analyzed and evaluated in order to understand trade-offs between privacy and utility in federated networks. Finally, the project will explore tensions between fairness (in terms of performance disparities across devices) and robustness (to data and model poisoning attacks) in federated learning. Although these goals may be at odds, the project aims to show that multi-task learning can inherently improve both fairness and robustness, allowing the two to be achieved jointly. Taken together, this work has the potential to cause a paradigm shift in the way federated learning systems are designed, implemented, and analyzed.
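
One common way to formalize federated multi-task learning is to give each device its own model while regularizing it toward a shared global model, so devices learn separate but related models. The sketch below uses that regularized objective with illustrative scalar models and quadratic losses; the specific formulation, names, and hyperparameters are assumptions for exposition, not the project's actual methods.

```python
import random

# Illustrative sketch of a regularized federated multi-task objective:
# device k keeps a personal model v_k and minimizes
#     F_k(v_k) + (lam / 2) * (v_k - w)^2,
# where w is a shared global model; lam trades off personalization
# against sharing across devices (all choices here are illustrative).
random.seed(0)
num_devices, rounds, lr, lam = 5, 300, 0.1, 1.0

# Device-specific optima c_k model heterogeneous local data.
centers = [random.uniform(-1, 1) for _ in range(num_devices)]

w = 0.0                          # shared global model
personal = [0.0] * num_devices   # per-device (multi-task) models

for _ in range(rounds):
    proposed = []
    for k in range(num_devices):
        # Personal step: gradient of F_k(v_k) + (lam/2)*(v_k - w)^2
        grad = (personal[k] - centers[k]) + lam * (personal[k] - w)
        personal[k] -= lr * grad
        # Each device also contributes a plain local step on the global model
        proposed.append(w - lr * (w - centers[k]))
    w = sum(proposed) / num_devices  # server averages the proposed updates

mean_c = sum(centers) / num_devices
```

At convergence each personal model sits between its device's own optimum and the global model (here, at their midpoint for lam = 1), illustrating how the regularizer interpolates between purely local and purely global training.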

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

Cho, Yae Jee; Jhunjhunwala, Divyansh; Li, Tian; Smith, Virginia; Joshi, Gauri. "Maximizing Global Model Appeal in Federated Learning." Transactions on Machine Learning Research, 2024.
Liu, Z.; Hu, S.; Wu, Z.; Smith, V. "On Privacy and Personalization in Cross-Silo Federated Learning." Advances in Neural Information Processing Systems, 2022.
Li, T.; Beirami, A.; Sanjabi, M.; Smith, V. "On Tilted Losses in Machine Learning: Theory and Applications." Journal of Machine Learning Research, 2023.
Hu, Shengyuan; Wu, Zhiwei Steven; Smith, Virginia. "Fair Federated Learning via Bounded Group Loss." IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), 2024. https://doi.org/10.1109/SaTML59370.2024.00015
Hu, S.; Wu, Z.; Smith, V. "Private Multi-Task Learning: Formulation and Applications to Federated Learning." Transactions on Machine Learning Research, 2024.
