Award Abstract # 2245765
CRII: CNS: A Systematic Multi-Task Learning Framework for Improving Deep Learning Efficiency on Edge Platforms

NSF Org: CNS (Division of Computer and Network Systems)
Recipient: CLEVELAND STATE UNIVERSITY
Initial Amendment Date: March 13, 2023
Latest Amendment Date: March 13, 2023
Award Number: 2245765
Award Instrument: Standard Grant
Program Manager: Marilyn McClure (mmcclure@nsf.gov, (703) 292-5197)
CNS: Division of Computer and Network Systems
CSE: Directorate for Computer and Information Science and Engineering
Start Date: June 1, 2023
End Date: May 31, 2026 (Estimated)
Total Intended Award Amount: $174,233.00
Total Awarded Amount to Date: $174,233.00
Funds Obligated to Date: FY 2023 = $174,233.00
History of Investigator:
  • Tianyun Zhang (Principal Investigator)
    t.zhang85@csuohio.edu
Recipient Sponsored Research Office: Cleveland State University
2121 EUCLID AVE
CLEVELAND
OH  US  44115-2226
(216)687-3630
Sponsor Congressional District: 11
Primary Place of Performance: Cleveland State University
2121 Euclid Avenue
Cleveland
OH  US  44115-2214
Primary Place of Performance Congressional District: 11
Unique Entity Identifier (UEI): YKGMTXA2NVL6
Parent UEI:
NSF Program(s): CSR-Computer Systems Research
Primary Program Source: 01002324DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 102Z, 8228
Program Element Code(s): 735400
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Multi-task learning is a subfield of machine learning in which a shared model is trained to solve multiple tasks simultaneously. Multi-task learning greatly reduces the number of parameters in machine learning models and thus reduces their computational and storage requirements. For example, self-driving cars must perform multiple tasks in real time, including object detection and depth estimation. If these tasks are trained on a single model with shared parameters, both the model size and the inference time can be substantially reduced. This project aims to further compress the models used for multi-task learning, since the size of even a single deep neural network (DNN) remains a critical challenge for many computing systems, especially edge platforms. The project proposes an approach that learns the difficulty of every task and maintains the performance of the most difficult task while compressing a multi-task learning model. Because the most difficult task must perform acceptably to provide a satisfactory user experience, guaranteeing its performance raises the achievable compression rate while keeping all tasks at acceptable performance. The project also designs an efficient multi-task federated learning approach for edge platforms that improves the convergence rate of multi-task federated learning and reduces the communication cost of every iteration. Finally, the project proposes to solve an algorithm-hardware co-design problem to maximize the implementation efficiency of compressed multi-task DNN models on edge platforms.
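The abstract does not specify an architecture or training objective. As a purely illustrative sketch of the shared-parameter idea, the following PyTorch snippet builds a shared backbone with two task-specific heads (a classifier and a regressor standing in for object detection and depth estimation) and trains against the worst-performing task, a hypothetical min-max objective in the spirit of maintaining the most difficult task. The model, heads, and objective are assumptions for illustration, not the project's actual method.

```python
# Illustrative sketch only: a shared backbone with task-specific heads,
# trained with a max-based objective emphasizing the hardest task.
# The architecture and objective are assumptions for illustration; the
# award abstract does not describe the project's actual design.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_dim=64, hidden=128, n_classes=10):
        super().__init__()
        # Shared parameters, reused by every task: this sharing is where
        # the reduction in model size and inference time comes from.
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Small task-specific heads, e.g. classification and regression
        # standing in for object detection and depth estimation.
        self.cls_head = nn.Linear(hidden, n_classes)
        self.reg_head = nn.Linear(hidden, 1)

    def forward(self, x):
        feat = self.backbone(x)  # one forward pass shared by all tasks
        return self.cls_head(feat), self.reg_head(feat)

model = MultiTaskNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
cls_loss_fn, reg_loss_fn = nn.CrossEntropyLoss(), nn.MSELoss()

x = torch.randn(32, 64)                 # dummy batch for demonstration
y_cls = torch.randint(0, 10, (32,))
y_reg = torch.randn(32, 1)

logits, depth = model(x)
losses = torch.stack([cls_loss_fn(logits, y_cls), reg_loss_fn(depth, y_reg)])
# Hypothetical min-max style step: optimize the worst (most difficult)
# task so its performance is not sacrificed to the easier one.
loss = losses.max()
opt.zero_grad()
loss.backward()
opt.step()
```

Taking the maximum over per-task losses is one simple way to prioritize the hardest task; the project's actual mechanism for learning task difficulty, and how it interacts with compression, is not described in the abstract.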

The compressed DNN model files and the ideas on efficient DNN training and implementation may be useful to researchers focused on improving the computational efficiency of DNN models on edge platforms and other hardware platforms. This project will involve undergraduate and graduate students in the research. The research achievements of this project will be incorporated into a current senior-level undergraduate course, a planned advanced-level graduate course, and seminars for both undergraduate and graduate students. Research demonstrations are also planned for workshops and summer camps for K-12 students.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
