Award Abstract # 1844952
CAREER: Advancing STTRAM Caches for Runtime Adaptable and Energy-Efficient Microarchitectures

NSF Org: CNS
Division Of Computer and Network Systems
Recipient: UNIVERSITY OF ARIZONA
Initial Amendment Date: April 25, 2019
Latest Amendment Date: May 4, 2023
Award Number: 1844952
Award Instrument: Continuing Grant
Program Manager: Marilyn McClure
mmcclure@nsf.gov
 (703)292-5197
CNS
 Division Of Computer and Network Systems
CSE
 Directorate for Computer and Information Science and Engineering
Start Date: June 1, 2019
End Date: May 31, 2025 (Estimated)
Total Intended Award Amount: $500,000.00
Total Awarded Amount to Date: $532,000.00
Funds Obligated to Date: FY 2019 = $94,349.00
FY 2020 = $113,067.00
FY 2021 = $99,890.00
FY 2022 = $118,822.00
FY 2023 = $105,872.00
History of Investigator:
  • Tosiron Adegbija (Principal Investigator)
    tosiron@arizona.edu
Recipient Sponsored Research Office: University of Arizona
845 N PARK AVE RM 538
TUCSON
AZ  US  85721
(520)626-6000
Sponsor Congressional District: 07
Primary Place of Performance: University of Arizona
888 N Euclid Ave
Tucson
AZ  US  85719-4824
Primary Place of Performance Congressional District: 07
Unique Entity Identifier (UEI): ED44Y3W6P7B9
Parent UEI:
NSF Program(s): Special Projects - CNS,
CSR-Computer Systems Research
Primary Program Source: 01001920DB NSF RESEARCH & RELATED ACTIVIT
01002021DB NSF RESEARCH & RELATED ACTIVIT
01002122DB NSF RESEARCH & RELATED ACTIVIT
01002223DB NSF RESEARCH & RELATED ACTIVIT
01002324DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 1045, 9251
Program Element Code(s): 171400, 735400
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

On-chip caches are important due to their substantial impact on the energy consumption and performance of a wide variety of computer systems, including desktop computers, embedded systems, mobile devices, servers, etc. As an alternative to traditional static random-access memory (SRAM) for implementing caches, the spin-transfer torque RAM (STTRAM) is a type of non-volatile memory that promises several advantages, such as high density, low leakage, high endurance, and compatibility with complementary metal-oxide-semiconductor (CMOS). However, STTRAM caches still face critical challenges that impede their widespread adoption, such as high write latency and energy. In addition, users of computer systems and the programs that run on the systems typically have variable resource requirements, necessitating caches that can dynamically adapt to runtime needs.

This CAREER project will investigate several interrelated research problems, including: STTRAM's characteristics and how they can be leveraged for improving the energy efficiency and performance of computer systems that run diverse programs; techniques for improving the user's experience while running the programs; new architectures and management techniques for enabling STTRAM caches that are energy-efficient and can dynamically adapt to running programs' individual needs; and novel methods to address the challenges of implementing STTRAM caches in complex multicore computer systems. Ultimately, the project will develop STTRAM cache architectures that can automatically adapt to the execution needs of diverse programs, resulting in more energy-efficient and faster computer systems.

The project's broader impacts include architectures and methods that will improve the performance and energy efficiency of a wide variety of computer systems for running a wide variety of programs. With the growth of the Internet of Things (IoT), spanning diverse computing and user needs, this project represents an important and necessary step towards adaptable and low-overhead computer systems. This CAREER project also seeks to foster education and diversity in science, technology, engineering, and math (STEM) fields through K-12 seminars, and by engaging and equipping a diverse group of young engineers with necessary techniques and skills to design innovative solutions for energy-efficient and adaptable Internet of Things architectures.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


(Showing: 1 - 10 of 19)
Aliyev, Ilkin and Adegbija, Tosiron "Fine-Tuning Surrogate Gradient Learning for Optimal Hardware Performance in Spiking Neural Networks" , 2024 https://doi.org/10.23919/DATE58400.2024.10546820
Aliyev, Ilkin and Adegbija, Tosiron "PULSE: Parametric Hardware Units for Low-power Sparsity-Aware Convolution Engine" , 2024 https://doi.org/10.1109/ISCAS58744.2024.10558062
Aliyev, Ilkin and Svoboda, Kama and Adegbija, Tosiron "Design Space Exploration of Sparsity-Aware Application-Specific Spiking Neural Network Accelerators" IEEE Journal on Emerging and Selected Topics in Circuits and Systems , v.13 , 2023 https://doi.org/10.1109/JETCAS.2023.3327746
Cordeiro, Renato and Gajaria, Dhruv and Limaye, Ankur and Adegbija, Tosiron and Karimian, Nima and Tehranipoor, Fatemeh "ECG-Based Authentication Using Timing-Aware Domain-Specific Architecture" IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems , v.39 , 2020 https://doi.org/10.1109/TCAD.2020.3012169
Gajaria, Dhruv and Adegbija, Tosiron "Exploring Domain-Specific Architectures for Energy-Efficient Wearable Computing" Journal of Signal Processing Systems , 2021 https://doi.org/10.1007/s11265-021-01682-y
Gajaria, Dhruv and Adegbija, Tosiron "ARC: DVFS-aware asymmetric-retention STT-RAM caches for energy-efficient multicore processors" Proceedings of the International Symposium on Memory Systems (MEMSYS) , 2019 https://doi.org/10.1145/3357526.3357553
Gajaria, Dhruv and Adegbija, Tosiron "Evaluating the performance and energy of STT-RAM caches for real-world wearable workloads" Future Generation Computer Systems , v.136 , 2022 https://doi.org/10.1016/j.future.2022.05.023
Gajaria, Dhruv and Adegbija, Tosiron and Gomez, Kevin Antony "CHIME: Energy-Efficient STT-RAM-Based Concurrent Hierarchical In-Memory Processing" , 2024 https://doi.org/10.1109/ASAP61560.2024.00053
Gajaria, Dhruv and Gomez, Kevin Antony and Adegbija, Tosiron "A Study of STT-RAM-based In-Memory Computing Across the Memory Hierarchy" 2022 IEEE 40th International Conference on Computer Design (ICCD) , 2022 https://doi.org/10.1109/ICCD56317.2022.00105
Gajaria, Dhruv and Gomez, Kevin Antony and Adegbija, Tosiron "STT-RAM-Based Hierarchical in-Memory Computing" IEEE Transactions on Parallel and Distributed Systems , v.35 , 2024 https://doi.org/10.1109/TPDS.2024.3430853
Gajaria, Dhruv and Kuan, Kyle and Adegbija, Tosiron "SCART: Predicting STT-RAM Cache Retention Times Using Machine Learning" 2019 Tenth International Green and Sustainable Computing Conference (IGSC) , 2019 https://doi.org/10.1109/IGSC48788.2019.8957182

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

This CAREER project advanced the state of the art in adaptable and energy-efficient Spin-Transfer Torque RAM (STTRAM) caches for resource-constrained multicore systems. The research addressed three key challenges that have historically limited the widespread adoption of STTRAM in on-chip caches: (1) understanding application-driven sensitivity to STTRAM characteristics, (2) developing techniques for logical adaptability in multi-level STTRAM caches, and (3) addressing fundamental architectural challenges in designing STTRAM-based caches and systems (e.g., in-memory computing).

The project developed and evaluated novel frameworks for runtime adaptability in STTRAM caches, focusing on adapting retention time and cache management policies to dynamic application and user behavior. These frameworks incorporated workload characterization across a wide spectrum of applications—ranging from healthcare and IoT to machine learning and multiprogrammed/multithreaded workloads—and leveraged simulation platforms such as GEM5, NVSim, and FPGA-based prototypes for validation. Key innovations included:

  • Logical Adaptable Retention Time (LARS) architectures that dynamically match cache retention times to application needs, reducing energy consumption without performance degradation.
  • Techniques to significantly lower refresh overheads in relaxed-retention STTRAM caches, leveraging STTRAM’s density for intra-cache refresh rather than external buffers.
  • Adaptable last-level STTRAM cache designs that optimize retention time assignments at a fine granularity (e.g., per cache way), based on observed workload patterns.
  • New hierarchical in-memory computing system architectures to enable better efficiency of in-memory computing by incorporating heterogeneous compute units across multiple levels of the memory hierarchy.
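The per-way retention adaptation described above can be illustrated with a minimal sketch. The retention levels, coverage threshold, and function names below are illustrative assumptions for exposition, not artifacts of the project's actual designs:

```python
# Hypothetical sketch of fine-grained (per-cache-way) retention-time
# assignment for a relaxed-retention STTRAM cache. All values are
# illustrative assumptions, not parameters from the project.

RETENTION_LEVELS_MS = [10, 25, 50, 100]  # candidate relaxed retention times

def choose_retention(observed_lifetimes_ms, coverage=0.95):
    """Pick the smallest retention level covering `coverage` of observed
    cache-block lifetimes; blocks that outlive the chosen level would
    need a refresh or write-back before their data decays."""
    if not observed_lifetimes_ms:
        return RETENTION_LEVELS_MS[0]
    lifetimes = sorted(observed_lifetimes_ms)
    idx = min(int(len(lifetimes) * coverage), len(lifetimes) - 1)
    target = lifetimes[idx]
    for level in RETENTION_LEVELS_MS:
        if level >= target:
            return level
    return RETENTION_LEVELS_MS[-1]

def assign_ways(per_way_lifetimes):
    """Assign a retention level to each cache way independently,
    mimicking per-way retention-time adaptation to workload patterns."""
    return [choose_retention(lifetimes) for lifetimes in per_way_lifetimes]

# Example: way 0 holds short-lived blocks, way 1 long-lived blocks.
print(assign_ways([[2, 5, 8, 9], [40, 60, 90, 95]]))  # → [10, 100]
```

The design intuition this captures is that shorter retention times reduce write latency and energy, so each way is relaxed only as far as its resident blocks' observed lifetimes allow.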

The research yielded substantial quantitative improvements. Across diverse benchmarks, experimental evaluations showed that the proposed adaptable cache architectures achieved significant energy savings relative to SRAM and prior STTRAM approaches (e.g., 35.8% savings from LARS and 60.6% savings from the adaptable last-level STTRAM cache), while maintaining or improving performance. These results directly support the design of future cache hierarchies for mobile, embedded, and IoT systems, as well as high-performance heterogeneous processors.

Beyond technical contributions, the project achieved significant broader impacts in workforce development and STEM education:

  • Graduate Student Support and Training: The project provided support for three Ph.D. students, all of whom have since graduated and are working in industry or academia in roles that draw directly on the skills developed during the project. Their dissertations and publications contributed to the research community’s understanding of energy-efficient, adaptable cache architectures.
  • Undergraduate Research Opportunities: Through NSF REU supplements, the project supported six undergraduate students, offering them hands-on experience in architecture modeling, memory system design, and experimental evaluation. Several of these students pursued graduate study or industry positions in computer engineering.
  • Publications and Dissemination: Work supported by this project contributed to over 20 publications in top-tier conferences and journals, including DATE, ASPDAC, ICCAD, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), and IEEE Transactions on Computers. These publications have broadened the impact of the research by making the methods, models, and insights available to the global community.
  • Educational Integration: Results from the project were incorporated into undergraduate and graduate courses on computer architecture and embedded systems at the University of Arizona, helping to train the next generation of computer engineers.

In addition, the project’s simulation frameworks and hardware designs were made available to the research community, enabling reproducibility and accelerating further innovation in adaptable memory system design.

Overall, this CAREER project has established a foundation for runtime-adaptable STTRAM-based caches that can intelligently balance performance and energy efficiency. It has produced widely cited publications, trained highly skilled graduates now contributing to the field, and delivered tangible tools and insights to both academia and industry. The approaches developed here are expected to remain relevant as emerging memory technologies mature and the demand for energy-efficient computing continues to grow across domains such as AI, IoT, and mobile systems.


Last Modified: 08/04/2025
Modified by: Tosiron Adegbija

