Award Abstract # 2328803
Collaborative Research: FuSe: Efficient Situation-Aware AI Processing in Advanced 2-Terminal SOT-MRAM

NSF Org: CCF
Division of Computing and Communication Foundations
Recipient: ARIZONA STATE UNIVERSITY
Initial Amendment Date: September 13, 2023
Latest Amendment Date: September 13, 2023
Award Number: 2328803
Award Instrument: Continuing Grant
Program Manager: Sankar Basu
sabasu@nsf.gov
 (703)292-7843
CCF
 Division of Computing and Communication Foundations
CSE
 Directorate for Computer and Information Science and Engineering
Start Date: October 1, 2023
End Date: February 29, 2024 (Estimated)
Total Intended Award Amount: $700,000.00
Total Awarded Amount to Date: $224,413.00
Funds Obligated to Date: FY 2023 = $0.00
History of Investigator:
  • Deliang Fan (Principal Investigator)
    dfan@asu.edu
Recipient Sponsored Research Office: Arizona State University
660 S MILL AVENUE STE 204
TEMPE
AZ  US  85281-3670
(480)965-5479
Sponsor Congressional District: 04
Primary Place of Performance: Arizona State University
P.O. Box 876011
Tempe
AZ  US  85287-6011
Primary Place of Performance Congressional District: 04
Unique Entity Identifier (UEI): NTLHJXM55KZ6
Parent UEI:
NSF Program(s): NSF-Samsung Partnership,
FuSe-Future of Semiconductors,
NSF-Intel Semiconductor Partnership
Primary Program Source: 01002324DB NSF RESEARCH & RELATED ACTIVIT
01002425DB NSF RESEARCH & RELATED ACTIVIT
01002526DB NSF RESEARCH & RELATED ACTIVIT
4082CYXXDB NSF TRUST FUND
4082PYXXDB NSF TRUST FUND
Program Reference Code(s): 7945
Program Element Code(s): 254Y00, 216Y00, 241Y00
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070, 47.084

ABSTRACT

The amount of data that computing systems must analyze has grown drastically to exascale (i.e., billions of gigabytes) and beyond. Meanwhile, the boom in artificial intelligence (AI), especially Deep Neural Networks (DNNs), has created a need for high-performance, efficient, fast, and adaptive AI-based big-data processing systems. These requirements are not sufficiently met by existing computing solutions, owing to the power wall of silicon-based semiconductor devices, the memory wall of the traditional von Neumann computing architecture, and the extreme computation and memory demands of DNN-based AI algorithms. This project brings together an interdisciplinary group of researchers with expertise spanning material science, device fabrication, integrated circuit design, computer architecture, and AI algorithms to undertake innovative device-circuit-algorithm co-design of an AI Processing-In-Memory (AI-PIM) system that leverages emerging non-volatile magnetic memory technology to implement efficient AI data processing as well as situation-aware on-chip continual learning. The project targets a significant improvement in AI data-processing energy efficiency: 100X higher than that of state-of-the-art Graphics Processing Units (GPUs). The project will benefit application areas such as autonomous driving, robotics, personalized cognitive speech, and smart connected health. It also includes education and workforce development activities, including K-12 STEM outreach, undergraduate/graduate training, semiconductor curriculum development, semiconductor industry internship mentoring, cleanroom fab internships, and advanced integrated circuit design courses, and will encourage broader participation of women and under-represented minorities in the microelectronics and semiconductor chip industry.

This project will advance knowledge through cross-layer research spanning emerging Spin-Orbit Torque Magnetic Random Access Memory (SOT-MRAM) materials, devices, circuits, and architecture, through to AI algorithm exploration, organized into three interwoven thrusts. Thrust 1 will explore unconventional spins in SOT materials (e.g., MnPd3) and novel device geometries to fabricate a new 2-terminal SOT-MRAM design that simultaneously delivers unlimited endurance, nanosecond programming time, very high cell density, deterministic programming without an external magnetic field, zero leakage, and non-volatility. Leveraging the developed 2-terminal SOT-MRAM, Thrust 2 will design and tape out an AI Processing-in-Memory (PIM) chip implementing fully digital "in-memory sparse multiplication-and-accumulation (MAC)" operations that support both the forward and backward computations of neural networks. Following a co-design methodology, Thrust 3 will first investigate automated neural architecture search methods to construct the AI model best suited to a given situation while respecting the AI-PIM system's constraints. This thrust will further develop novel PIM-friendly, compute- and memory-efficient, situation-aware continual learning algorithms that minimize the complexity of power-hungry on-chip weight updates (i.e., memory writes) while learning from new situation- and user-specific data.
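As an illustrative sketch only (not the project's actual chip design), the digital in-memory sparse MAC idea described in Thrust 2 can be modeled in software: weights stay resident in a simulated memory array, inputs are streamed in, and zero-valued inputs skip their column activations entirely, which is where the sparsity savings come from. The class and method names below are hypothetical.

```python
# Hypothetical software model of a digital in-memory sparse MAC:
# weights are held "in memory"; forward (y = W @ x) and backward
# (grad_x = W^T @ delta) reuse the same resident weight array.

class PIMArraySketch:
    """Toy model of a PIM macro holding a rows x cols weight matrix."""

    def __init__(self, weights):
        self.weights = weights      # stays resident, like cells in the array
        self.reads = 0              # count simulated column activations

    def sparse_mac_forward(self, x):
        """y = W @ x, skipping any column whose input is zero."""
        rows = len(self.weights)
        y = [0] * rows
        for j, xj in enumerate(x):
            if xj == 0:             # sparsity: no read issued for this column
                continue
            self.reads += 1
            for i in range(rows):
                y[i] += self.weights[i][j] * xj
        return y

    def sparse_mac_backward(self, delta):
        """grad_x = W^T @ delta, with the same zero-skipping rule."""
        cols = len(self.weights[0])
        g = [0] * cols
        for i, di in enumerate(delta):
            if di == 0:
                continue
            for j in range(cols):
                g[j] += self.weights[i][j] * di
        return g


pim = PIMArraySketch([[1, 2, 0],
                      [0, 3, 4]])
y = pim.sparse_mac_forward([5, 0, 2])   # middle input is zero -> skipped
g = pim.sparse_mac_backward([1, 1])
```

In a real PIM macro the skipped column would mean no wordline/bitline activity at all, which is why input sparsity translates directly into energy savings; the `reads` counter stands in for that activity here.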

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


Sridharan, Amitesh; Saikia, Jyotishman; Anupreetham; Zhang, Fan; Seo, Jae-Sun; Fan, Deliang. "PS-IMC: A 2385.7-TOPS/W/b Precision Scalable In-Memory Computing Macro With Bit-Parallel Inputs and Decomposable Weights for DNNs." IEEE Solid-State Circuits Letters, v.7, 2024. https://doi.org/10.1109/LSSC.2024.3369058
