Award Abstract # 2019536
Collaborative Research: SaTC: CORE: Small: Understanding and Taming Deterministic Model Bit Flip attacks in Deep Neural Networks

NSF Org: CNS
Division Of Computer and Network Systems
Recipient: THE UNIVERSITY OF CENTRAL FLORIDA BOARD OF TRUSTEES
Initial Amendment Date: August 15, 2020
Latest Amendment Date: October 20, 2020
Award Number: 2019536
Award Instrument: Standard Grant
Program Manager: Dan Cosley
dcosley@nsf.gov
(703)292-8832
CNS Division Of Computer and Network Systems
CSE Directorate for Computer and Information Science and Engineering
Start Date: October 1, 2020
End Date: September 30, 2025 (Estimated)
Total Intended Award Amount: $250,000.00
Total Awarded Amount to Date: $250,000.00
Funds Obligated to Date: FY 2020 = $250,000.00
History of Investigator:
  • Fan Yao (Principal Investigator)
    fan.yao@ucf.edu
Recipient Sponsored Research Office: The University of Central Florida Board of Trustees
4000 CENTRAL FLORIDA BLVD
ORLANDO
FL  US  32816-8005
(407)823-0387
Sponsor Congressional District: 10
Primary Place of Performance: University of Central Florida
4000 Central Florida Blvd
Orlando
FL  US  32816-8005
Primary Place of Performance Congressional District: 10
Unique Entity Identifier (UEI): RD7MXJV7DKT9
Parent UEI:
NSF Program(s): Secure & Trustworthy Cyberspace
Primary Program Source: 01002021DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 7923, 025Z
Program Element Code(s): 806000
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Deep neural networks (DNNs) are widely deployed for a variety of decision-making tasks such as access control, medical diagnostics, and autonomous driving. Compromising DNN models can severely disrupt inference behavior, leading to catastrophic outcomes for security- and safety-sensitive applications. While tremendous effort has been devoted to securing DNNs against external adversaries (e.g., adversarial examples), internal adversaries that tamper with DNN model integrity by exploiting hardware threats (i.e., fault injection attacks) raise unprecedented concerns. This project aims to offer insights into DNN security issues caused by hardware-based fault attacks and to explore ways to promote the robustness and security of future deep learning systems against such internal adversaries.

This project targets one critical research topic: securing deep learning systems against hardware-based model tampering. Recent advances in hardware fault attacks (e.g., rowhammer) make it possible to deterministically inject faults into DNN models, causing bit flips in key DNN parameters such as model weights. Such threats can be extremely dangerous because they potentially allow an adversary to maliciously manipulate prediction outcomes at inference time. The project seeks to systematically understand the practicality and severity of DNN model bit flip attacks in real systems and to investigate software- and architecture-level protection techniques that secure DNNs against internal tampering. The study focuses on quantized DNNs, which exhibit higher robustness against model tampering. The project incorporates the following research efforts: (1) investigate the vulnerability of quantized DNNs to deterministic bit flipping of model weights under various attack objectives; (2) explore algorithmic approaches to enhance the intrinsic robustness of quantized DNN models; and (3) design effective and efficient system- and architecture-level defense mechanisms to comprehensively defeat DNN model bit flip attacks. The project will result in the dissemination of shared data, attack artifacts, algorithms, and tools to the broader hardware security and AI security community.
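To illustrate the underlying concern, the following minimal Python sketch (illustrative only, not drawn from the project; the quantization scale and weight value are hypothetical) shows why flipping a single bit of an 8-bit quantized weight, for example via a rowhammer-induced fault, can move the de-quantized weight value far from its original value.

import numpy as np

# Minimal illustrative sketch (not project code): a single bit flip in an
# 8-bit quantized weight can drastically change the value a DNN layer uses.
# The quantization scale and weight value below are hypothetical.

scale = 0.05                  # assumed per-tensor quantization scale
w_q = np.int8(23)             # stored 8-bit weight; de-quantized value ~ 1.15

def flip_bit(value: np.int8, bit: int) -> np.int8:
    """Flip one bit of an int8 weight, as a rowhammer-style fault might."""
    raw = value.view(np.uint8)               # reinterpret the stored bits
    return np.uint8(raw ^ (1 << bit)).view(np.int8)

w_faulted = flip_bit(w_q, 7)  # flip the most significant (sign) bit

print("original de-quantized weight:", float(w_q) * scale)        # ~  1.15
print("faulted de-quantized weight:", float(w_faulted) * scale)   # ~ -5.25

A sign-bit flip like the one above is the worst case for a quantized weight; flips in lower-order bits cause proportionally smaller, but still deterministic, deviations.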

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

Cai, Kunbei and Chowdhuryy, Md Hafizul and Zhang, Zhenkai and Yao, Fan "Seeds of SEED: NMT-Stroke: Diverting Neural Machine Translation through Hardware-based Faults" 2021 International Symposium on Secure and Private Execution Environment Design (SEED), 2021. https://doi.org/10.1109/SEED51797.2021.00019
Cai, Kunbei and Chowdhuryy, Md Hafizul Islam and Zhang, Zhenkai and Yao, Fan "DeepVenom: Persistent DNN Backdoors Exploiting Transient Weight Perturbations in Memories" 2024 IEEE Symposium on Security and Privacy (SP), 2024. https://doi.org/10.1109/SP54263.2024.00223
Cai, Kunbei and Zhang, Zhenkai and Yao, Fan "On the Feasibility of Training-time Trojan Attacks through Hardware-based Faults in Memory" IEEE International Symposium on Hardware Oriented Security and Trust (HOST), 2022. https://doi.org/10.1109/HOST54066.2022.9840266
Chowdhuryy, Md Hafizul and Rashed, Muhammad Rashedul and Awad, Amro and Ewetz, Rickard and Yao, Fan "LADDER: Architecting Content and Location-aware Writes for Crossbar Resistive Memories" MICRO '21: 54th Annual IEEE/ACM International Symposium on Microarchitecture, 2021. https://doi.org/10.1145/3466752.3480054
Liang, Sisheng and Zhan, Zihao and Yao, Fan and Cheng, Long and Zhang, Zhenkai "Clairvoyance: Exploiting Far-field EM Emanations of GPU to "See" Your DNN Models through Obstacles at a Distance" IEEE Security and Privacy Workshops (SPW), 2022. https://doi.org/10.1109/SPW54247.2022.9833894
Rafi, Mujahid Al and Feng, Yuan and Yao, Fan and Tang, Meng and Jeon, Hyeran "Decepticon: Attacking Secrets of Transformers" IEEE International Symposium on Workload Characterization (IISWC), 2023. https://doi.org/10.1109/IISWC59245.2023.00028
Rakin, Adnan Siraj and Chowdhuryy, Md Hafizul and Yao, Fan and Fan, Deliang "DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories" 2022 IEEE Symposium on Security and Privacy (SP), 2022. https://doi.org/10.1109/SP46214.2022.9833743
Rakin, Adnan Siraj and He, Zhezhi and Li, Jingtao and Yao, Fan and Chakrabarti, Chaitali and Fan, Deliang "T-BFA: Targeted Bit-Flip Adversarial Weight Attack" IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021. https://doi.org/10.1109/TPAMI.2021.3112932
Side, Mert and Yao, Fan and Zhang, Zhenkai "LockedDown: Exploiting Contention on Host-GPU PCIe Bus for Fun and Profit" IEEE European Symposium on Security and Privacy (EuroS&P), 2022. https://doi.org/10.1109/EuroSP53844.2022.00025
Zhang, Zhenkai and Cai, Kunbei and Guo, Yanan and Yao, Fan and Gao, Xing "InvalIdate+Compare: A Timer-Free GPU Cache Attack Primitive" 2024.
Zhan, Zihao and Zhang, Zhenkai and Liang, Sisheng and Yao, Fan and Koutsoukos, Xenofon "Graphics Peeping Unit: Exploiting EM Side-Channel Information of GPUs to Eavesdrop on Your Neighbors" IEEE Symposium on Security and Privacy (SP), 2022. https://doi.org/10.1109/SP46214.2022.9833773
