
Award Abstract # 2144751
CAREER: Efficient, Dynamic, Robust, and On-Device Continual Deep Learning with Non-Volatile Memory based In-Memory Computing System

NSF Org: CCF
Division of Computing and Communication Foundations
Recipient: ARIZONA STATE UNIVERSITY
Initial Amendment Date: January 11, 2022
Latest Amendment Date: January 31, 2023
Award Number: 2144751
Award Instrument: Continuing Grant
Program Manager: Sankar Basu
sabasu@nsf.gov
(703)292-7843
CCF (Division of Computing and Communication Foundations)
CSE (Directorate for Computer and Information Science and Engineering)
Start Date: January 15, 2022
End Date: November 30, 2023 (Estimated)
Total Intended Award Amount: $500,000.00
Total Awarded Amount to Date: $177,735.00
Funds Obligated to Date: FY 2022 = $0.00
FY 2023 = $0.00
History of Investigator:
  • Deliang Fan (Principal Investigator)
    dfan@asu.edu
Recipient Sponsored Research Office: Arizona State University
660 S MILL AVENUE STE 204
TEMPE
AZ  US  85281-3670
(480)965-5479
Sponsor Congressional District: 04
Primary Place of Performance: Arizona State University
PO Box 876011
Tempe
AZ  US  85287-6011
Primary Place of Performance Congressional District: 04
Unique Entity Identifier (UEI): NTLHJXM55KZ6
Parent UEI:
NSF Program(s): Software & Hardware Foundation
Primary Program Source: 01002324DB NSF RESEARCH & RELATED ACTIVIT
01002425DB NSF RESEARCH & RELATED ACTIVIT
01002526DB NSF RESEARCH & RELATED ACTIVIT
01002627DB NSF RESEARCH & RELATED ACTIVIT
010V2122DB R&RA ARP Act DEFC V
Program Reference Code(s): 102Z, 1045, 7945
Program Element Code(s): 779800
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

This award is funded in whole or in part under the American Rescue Plan Act of 2021 (Public Law 117-2).

Over the past decades, developing high-performance, energy-efficient computing solutions for big-data processing has remained a grand challenge. Meanwhile, owing to the boom in artificial intelligence (AI), especially Deep Neural Networks (DNNs), such big-data processing requires efficient, intelligent, fast, dynamic, robust, and on-device adaptive cognitive computing. However, existing computing solutions do not sufficiently satisfy these requirements, due to the well-known power wall of silicon-based semiconductor devices, the memory wall of the traditional von Neumann computing architecture, and the computation- and memory-intensive nature of DNN algorithms. This project aims to foster a systematic breakthrough in AI-in-Memory computing systems by collaboratively developing a hybrid in-memory computing (IMC) hardware platform that integrates the benefits of emerging non-volatile resistive memory (RRAM) and Static Random Access Memory (SRAM) technologies, together with IMC-aware deep-learning algorithm innovations. The overarching goal of this project is to design, implement, and experimentally validate a new hybrid in-memory computing system that is jointly optimized for energy efficiency, inference accuracy, spatiotemporal dynamics, robustness, and on-device learning, which will greatly advance AI-based big-data processing in fields such as computer vision, autonomous driving, and robotics. The research will also be extended into an educational platform with a user-friendly learning framework, serving the educational objectives of K-12, undergraduate, graduate, and under-represented students.

This project will advance knowledge and produce scientific principles and tools for a new paradigm of AI-in-Memory computing featuring significant improvements in energy efficiency, speed, dynamics, robustness, and on-device learning capability. This cross-layer project spans the device, circuit, and architecture levels up to DNN algorithm exploration. First, a hybrid RRAM-SRAM based in-memory computing chip will be designed, optimized, and fabricated. Second, based on this new computing platform, an on-device spatiotemporal dynamic neural network structure will be developed to provide an enhanced run-time computing profile (latency, resource allocation, workload, power budget, etc.) and to improve the robustness of the system against hardware-intrinsic and adversarial noise injection. Then, efficient on-device learning methodologies for the developed computing platform will be investigated. In the last thrust, an end-to-end DNN training, optimization, mapping, and evaluation CAD tool will be developed that integrates the developed hardware platform and algorithm innovations, optimizing software-hardware co-designs to achieve user-defined multi-objectives in latency, energy efficiency, dynamics, accuracy, robustness, on-device adaptation, etc.
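To illustrate the in-memory computing principle that underlies the proposed platform, below is a minimal numerical sketch, assuming an idealized analog RRAM crossbar: weights are stored as device conductances, the matrix-vector product is computed in place via Ohm's and Kirchhoff's laws, and multiplicative noise stands in for hardware-intrinsic device variation. The function name and the 5% noise level are illustrative assumptions, not taken from the award.

```python
import numpy as np

# Hypothetical sketch (not the project's actual circuit design): an RRAM
# crossbar performs a matrix-vector multiply in a single step. Weights are
# programmed as conductances G, inputs are applied as wordline voltages v,
# and the bitline currents i = G @ v accumulate the dot products in place.

rng = np.random.default_rng(0)

def crossbar_mvm(weights, inputs, noise_std=0.0):
    """Idealized analog MVM with optional multiplicative conductance noise,
    a stand-in for RRAM device-to-device variation."""
    conductances = weights * (1.0 + noise_std * rng.standard_normal(weights.shape))
    return conductances @ inputs

W = rng.standard_normal((4, 8))   # weight matrix stored in the crossbar
x = rng.standard_normal(8)        # input activation vector

ideal = crossbar_mvm(W, x)                  # noise-free reference
noisy = crossbar_mvm(W, x, noise_std=0.05)  # with 5% device variation

print(np.allclose(ideal, W @ x))  # True: the ideal crossbar matches digital MVM
```

The gap between `ideal` and `noisy` is the kind of hardware-intrinsic error the project's robustness thrust targets, and is one motivation for pairing noisy-but-dense RRAM with precise SRAM in a hybrid platform.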

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

Note:  When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

(Showing: 1 - 10 of 16)
Saikia, Jyotishman and Sridharan, Amitesh "FP-IMC: A 28nm All-Digital Configurable Floating-Point In-Memory Computing Macro" Proceedings of ESSCIRC, 2023
Krishnan, Gokul and Wang, Zhenyu and Yeo, Injune and Yang, Li and Meng, Jian and Liehr, Maximilian and Joshi, Rajiv V. and Cady, Nathaniel C. and Fan, Deliang and Seo, Jae-sun and Cao, Yu "Hybrid RRAM/SRAM In-Memory Computing for Robust DNN Acceleration" IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2022 https://doi.org/10.1109/TCAD.2022.3197516
Lin, Sen and Yang, Li and Fan, Deliang and Zhang, Junshan "Beyond Not-Forgetting: Continual Learning with Backward Knowledge Transfer" Thirty-Sixth Conference on Neural Information Processing Systems, 2022
Rakin, Adnan Siraj and Chowdhuryy, Md Hafizul and Yao, Fan and Fan, Deliang "DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories" 2022 IEEE Symposium on Security and Privacy (SP), 2022 https://doi.org/10.1109/SP46214.2022.9833743
Sridharan, Amitesh and Angizi, Shaahin and Cherupally, Sai Kiran and Zhang, Fan and Seo, Jae-Sun and Fan, Deliang "A 1.23-GHz 16-kb Programmable and Generic Processing-in-SRAM Accelerator in 65nm" 2022 IEEE 48th European Solid State Circuits Conference (ESSCIRC), 2022 https://doi.org/10.1109/ESSCIRC55480.2022.9911440
Sridharan, Amitesh and Saikia, Jyotishman and Anupreetham and Zhang, Fan and Seo, Jae-Sun and Fan, Deliang "PS-IMC: A 2385.7-TOPS/W/b Precision Scalable In-Memory Computing Macro With Bit-Parallel Inputs and Decomposable Weights for DNNs" IEEE Solid-State Circuits Letters, v.7, 2024 https://doi.org/10.1109/LSSC.2024.3369058
Sridharan, Amitesh and Zhang, Fan and Fan, Deliang "MnM: A Fast and Efficient Min/Max Searching in MRAM" Great Lakes Symposium on VLSI, 2022 https://doi.org/10.1145/3526241.3530349
Sridharan, Amitesh and Zhang, Fan and Sui, Yang and Yuan, Bo and Fan, Deliang "DSPIMM: A Fully Digital SParse In-Memory Matrix Vector Multiplier for Communication Applications" 2023 60th ACM/IEEE Design Automation Conference (DAC), 2023
Yang, Li and Meng, Jian and Seo, Jae-sun and Fan, Deliang "Get More at Once: Alternating Sparse Training with Gradient Correction" Thirty-Sixth Conference on Neural Information Processing Systems, 2022
Yang, Li and Rakin, Adnan Siraj and Fan, Deliang "Rep-Net: Efficient On-Device Learning via Feature Reprogramming" IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022
Yang, Li and Rakin, Adnan Siraj and Fan, Deliang "DA3: Dynamic Additive Attention Adaption for Memory-Efficient On-Device Multi-Domain Learning" IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2022 https://doi.org/10.1109/CVPRW56347.2022.00295

