
Award Abstract # 2409697
SHF: Small: Cross-Layer Design Automation for In-Memory Analog Computing

NSF Org: CCF
Division of Computing and Communication Foundations
Recipient: UNIVERSITY OF SOUTH CAROLINA
Initial Amendment Date: May 24, 2024
Latest Amendment Date: May 24, 2024
Award Number: 2409697
Award Instrument: Standard Grant
Program Manager: Hu, X. Sharon
xhu@nsf.gov
 (703)292-8910
CCF
 Division of Computing and Communication Foundations
CSE
 Directorate for Computer and Information Science and Engineering
Start Date: June 1, 2024
End Date: May 31, 2027 (Estimated)
Total Intended Award Amount: $588,801.00
Total Awarded Amount to Date: $588,801.00
Funds Obligated to Date: FY 2024 = $588,801.00
History of Investigator:
  • Ramtin Mohammadizand (Principal Investigator)
    ramtin@cse.sc.edu
  • Jason Bakos (Co-Principal Investigator)
Recipient Sponsored Research Office: University of South Carolina at Columbia
1600 HAMPTON ST
COLUMBIA
SC  US  29208-3403
(803)777-7093
Sponsor Congressional District: 06
Primary Place of Performance: University of South Carolina at Columbia
1600 HAMPTON ST # 414
COLUMBIA
SC  US  29208-3403
Primary Place of Performance Congressional District: 06
Unique Entity Identifier (UEI): J22LNTMEDP73
Parent UEI: Q93ZDA59ZAR5
NSF Program(s): Software & Hardware Foundation
Primary Program Source: 01002425DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 7923, 7945, 9150
Program Element Code(s): 779800
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Machine learning (ML) has become ubiquitous, interwoven into many applications that are important to our daily lives, societal prosperity, and technological progress. However, the large data centers that serve ML workloads face tremendous challenges in keeping pace with demand, and the surge in ML workloads has made data centers major contributors to annual energy consumption. This project's goal is to develop a transformative technology that substantially improves the energy efficiency of ML systems, allowing for a corresponding reduction in their carbon emissions. Existing computing platforms are fundamentally limited by the memory wall and the power wall, and incremental technological improvements are proving inadequate to satisfy future demand. A transformative platform technology such as in-memory analog computing offers a potential solution, but several practical challenges must be overcome before it becomes commercially viable. This project aims to address several of these challenges by developing a novel computer architecture and a corresponding framework for deploying ML workloads to it. The project is aligned with established national priorities to sustain economic leadership in artificial intelligence, computing, and nanotechnology by bringing emerging technologies and computing architectures into wide use, providing a practical alternative to today's high-energy ML systems.

The project spans multiple layers of design abstraction, encompassing circuits, architecture, and computer-aided design tools. It addresses several critical aspects: (1) the development of a novel in-memory analog computing (IMAC) architecture that realizes both matrix multiplication and nonlinear vector operations in the analog domain; (2) the design of a hierarchical analog network-on-chip to support the deployment of large ML workloads on the IMAC architecture with minimal signal conversion between the analog and digital domains; (3) heterogeneous integration of IMAC with existing ML hardware platforms, enabling fine-grained function mapping of targeted applications onto the developed heterogeneous systems; and (4) the development of a fast and accurate simulation framework incorporating lightweight solvers specifically designed to solve the nodal conductance matrices of memristive crossbars in the IMAC architecture. To evaluate the end-to-end scalability and efficiency of the proposed heterogeneous system in terms of performance, energy, and accuracy, the research team will use standard ML benchmark suites that provide a wide range of ML models, realistic end-user scenarios, and standardized evaluation metrics.
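As a purely illustrative sketch (not taken from the award), the Python snippet below shows the basic principle behind aspect (1) and the quantity aspect (4) must model: an idealized memristive crossbar computes a matrix-vector product in the analog domain, with signed weights encoded as a differential pair of conductances. The function names (ideal_crossbar_mvm, weights_to_conductances) and the conductance bounds g_min and g_max are assumptions made for illustration; a real simulation framework would additionally solve the full nodal conductance matrix to capture wire parasitics and device nonidealities.

# Illustrative sketch only (not from the award): an idealized memristive crossbar
# performing an analog matrix-vector multiply. Weights are stored as device
# conductances G[i, j]; applying input voltages V to the rows yields column
# currents I_j = sum_i G[i, j] * V[i] by Kirchhoff's current law.
import numpy as np

def ideal_crossbar_mvm(G, V):
    """Column output currents of an ideal crossbar: I = G^T V."""
    return G.T @ V

def weights_to_conductances(W, g_min=1e-6, g_max=1e-4):
    """Map signed weights onto a differential pair of conductance arrays.

    This is one common encoding (assumed here, not specified by the award),
    bounded by hypothetical device limits g_min and g_max (in siemens).
    """
    scale = (g_max - g_min) / np.abs(W).max()
    G_pos = np.where(W > 0,  W, 0.0) * scale + g_min
    G_neg = np.where(W < 0, -W, 0.0) * scale + g_min
    return G_pos, G_neg

W = np.random.randn(4, 3)   # weight matrix: 4 inputs, 3 outputs
V = np.random.rand(4)       # input voltages applied to the rows
G_pos, G_neg = weights_to_conductances(W)
# Differential readout: the g_min offsets cancel, leaving scale * (W^T V).
I = ideal_crossbar_mvm(G_pos, V) - ideal_crossbar_mvm(G_neg, V)
print(I)                    # analog dot products, up to the scale factor

In this idealized form the crossbar output is exactly the scaled matrix-vector product; the award's proposed lightweight solvers would replace the ideal product with a nodal analysis of the crossbar's conductance network so that accuracy can be evaluated under realistic circuit conditions.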

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Please report errors in award information by writing to: awardsearch@nsf.gov.
