
Award Abstract # 2011260
EAGER: Invisible Shield: Can Compression Harden Deep Neural Networks Universally Against Adversarial Attacks?

NSF Org: CNS
Division Of Computer and Network Systems
Recipient: LEHIGH UNIVERSITY
Initial Amendment Date: December 17, 2019
Latest Amendment Date: December 17, 2019
Award Number: 2011260
Award Instrument: Standard Grant
Program Manager: Daniela Oliveira
doliveir@nsf.gov
 (703)292-0000
CNS
 Division Of Computer and Network Systems
CSE
 Directorate for Computer and Information Science and Engineering
Start Date: November 7, 2019
End Date: August 31, 2021 (Estimated)
Total Intended Award Amount: $149,180.00
Total Awarded Amount to Date: $149,180.00
Funds Obligated to Date: FY 2018 = $149,180.00
History of Investigator:
  • Wujie Wen (Principal Investigator)
    wwen2@ncsu.edu
Recipient Sponsored Research Office: Lehigh University
526 BRODHEAD AVE
BETHLEHEM
PA  US  18015-3008
(610)758-3021
Sponsor Congressional District: 07
Primary Place of Performance: Lehigh University
Bethlehem
PA  US  18015-3005
Primary Place of Performance Congressional District: 07
Unique Entity Identifier (UEI): E13MDBKHLDB5
Parent UEI:
NSF Program(s): Secure & Trustworthy Cyberspace
Primary Program Source: 01001819DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 025Z, 7434, 7916
Program Element Code(s): 806000
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Deep neural networks (DNNs) are finding use in wide-ranging applications such as image recognition, medical diagnosis, and self-driving cars. However, DNNs suffer from a security threat: their decisions can be misled by adversarial inputs, which are crafted by adding human-imperceptible perturbations to normal inputs after the DNN model has been trained. Defending against adversarial attacks is challenging due to multiple attack vectors, unknown adversary strategies, and cost constraints. This project investigates a compression/decompression-based defense strategy to protect DNNs against any attack, with low cost and high accuracy.
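For concreteness, a minimal sketch of one common way such perturbations are crafted, the fast gradient sign method (FGSM), is shown below. The model, labels, and epsilon budget are illustrative placeholders and not part of this award's own software.

# Illustrative FGSM sketch (PyTorch): craft a human-imperceptible perturbation
# that pushes a trained classifier toward a wrong decision. All names here
# (model, x, y, epsilon) are hypothetical placeholders.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Return an adversarial copy of x, keeping pixel values in [0, 1]."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()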

The project aims to create a new paradigm for safeguarding DNNs from a radically different perspective by using signal compression, with a focus on integrating defenses into compression of the inputs and DNN models. The research tasks include: (i) developing defensive compression for visual/audio inputs to maximize defense efficiency without compromising testing accuracy; (ii) developing defensive model compression and novel gradient masking/obfuscating methods, without retraining, to universally harden DNN models; and (iii) conducting attack-defense evaluations through algorithm-level simulation and live platform experimentation.

Any success from this EAGER project will be useful to the research communities interested in deep learning, hardware- and cyber-security, and multimedia. This project enhances economic opportunities by promoting wider adoption of deep learning in realistic systems, and gives special attention to educating women and students from traditionally under-represented/under-served groups at Florida International University (FIU).

The project repository will be stored on a publicly accessible server at FIU (http://web.eng.fiu.edu/wwen/). Data will be maintained for at least 5 years after the project period.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


Liu, Qi and Jiang, Han and Liu, Tao and Liu, Zihao and Li, Sicheng and Wen, Wujie and Shi, Yiyu. "Defending Deep Learning-based Biomedical Image Segmentation from Adversarial Attacks: A Low-cost Frequency Refinement Approach." 23rd International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), 2020.
Liu, Qi and Wen, Wujie. "Model Compression Hardens Deep Neural Networks: A New Perspective to Prevent Adversarial Attacks." IEEE Transactions on Neural Networks and Learning Systems, 2021. https://doi.org/10.1109/TNNLS.2021.3089128
Liu, Qi and Wen, Wujie and Wang, Yanzhi. "Concurrent Weight Encoding-based Detection for Bit-Flip Attack on Neural Network Accelerators." IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2020.
Liu, Tao and Liu, Zihao and Liu, Qi and Wen, Wujie and Xu, Wenyao and Li, Ming. "StegoNet: Turn Deep Neural Network into a Stegomalware." ACM 36th Annual Computer Security Applications Conference (ACSAC), 2020. https://doi.org/10.1145/3427228.3427268
Ma, Xiaolong and Niu, Wei and Zhang, Tianyun and Liu, Sijia and Lin, Sheng and Li, Hongjia and Wen, Wujie and Chen, Xiang and Tang, Jian and Ma, Kaishen and Ren, Bin and Wang, Yanzhi. "An Image Enhancing Pattern-based Sparsity for Real-time Inference on Mobile Devices." European Conference on Computer Vision (ECCV), 2020.
Xie, Jiafeng and He, Pengzhou and Wen, Wujie. "Efficient Implementation of Finite Field Arithmetic for Binary Ring-LWE Post-Quantum Cryptography Through a Novel Lookup-Table-Like Method." ACM/IEEE 58th Design Automation Conference (DAC), 2021.
Xu, Nuo and Liu, Qi and Liu, Tao and Liu, Zihao and Guo, Xiaochen and Wen, Wujie. "Stealing Your Data from Compressed Machine Learning Models." IEEE/ACM Design Automation Conference (DAC), 2020.

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

Deep neural networks (DNNs) are finding broad use in many real-world applications, from everyday image recognition to safety- and security-sensitive medical diagnosis and self-driving cars. However, input-based adversarial-example attacks pose ever-increasing security challenges to DNNs, as decisions can be precisely misled simply by attackers adding human-imperceptible perturbations to a legitimate model input. The primary goal of this project is to create a new paradigm for safeguarding DNNs against such attacks by rooting the defense directly in compression of the inputs and DNN models, with the guarantee of low cost and high accuracy.

We have accomplished these goals. More specifically:

(1)  We developed a JPEG-based defensive compression framework, "Feature Distillation", that effectively rectifies adversarial examples without impacting classification accuracy on benign data, with a theoretical guarantee. The work for the first time unleashes the defense potential of input-transformation techniques with almost zero loss of DNN testing accuracy by re-architecting the fundamental entities of JPEG compression/decompression: defensive quantization to maximize the filtering of adversarial input perturbations, and DNN-oriented quantization to restore DNN testing accuracy. The approach is extremely low cost and widely applicable to essential components of modern DNN-powered intelligent cyber-physical systems, such as image sensors and cameras. It also significantly outperforms state-of-the-art input-transformation based defenses and serves as a new benchmark.
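A minimal sketch of this JPEG-style defensive compression idea is given below, assuming a grayscale image and an 8x8 blockwise DCT; the quantization steps and the low-frequency cutoff are illustrative assumptions, not the actual tables used by Feature Distillation.

# Sketch: blockwise DCT -> defensive quantization -> inverse DCT.
# Adversarial noise tends to concentrate in high-frequency bands, so those
# bands get coarse quantization steps while low-frequency bands stay fine.
import numpy as np
from scipy.fft import dctn, idctn

def defensive_quant_table(coarse_step=50.0, fine_step=5.0, cutoff=8):
    """8x8 table: fine steps for low frequencies, coarse steps for high ones."""
    q = np.full((8, 8), coarse_step)
    for u in range(8):
        for v in range(8):
            if u + v < cutoff:      # assumed low-frequency region to preserve
                q[u, v] = fine_step
    return q

def compress_decompress(image, q_table):
    """Apply JPEG-like lossy filtering to every 8x8 block of a 2-D image."""
    out = image.astype(np.float64).copy()
    h, w = image.shape
    for i in range(0, h - h % 8, 8):
        for j in range(0, w - w % 8, 8):
            block = out[i:i + 8, j:j + 8]
            coeffs = dctn(block, norm="ortho")
            coeffs = np.round(coeffs / q_table) * q_table   # lossy step
            out[i:i + 8, j:j + 8] = idctn(coeffs, norm="ortho")
    return np.clip(out, 0.0, 255.0)

# Usage: filter a (possibly adversarial) input before feeding it to the DNN.
img = np.random.randint(0, 256, size=(224, 224)).astype(np.float64)
cleaned = compress_decompress(img, defensive_quant_table())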

(2)  We developed a low-cost frequency refinement approach to defend DNN-based biomedical image segmentation against adversarial attacks. The key idea is to redesign the quantization of JPEG compression based on the unique statistical pattern of adversarial perturbations in the frequency domain for image segmentation. Under adversarial settings it almost fully recovers the degraded segmentation performance to its original level without impacting the segmentation of benign images. This is the very first study targeting defenses against practical adversarial attacks in the context of biomedical image segmentation.
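Under assumptions about which frequency band the segmentation-specific perturbations occupy, such a refinement could be expressed as a modified quantization table and plugged into the compress_decompress helper sketched above; the band bounds and step sizes below are illustrative only.

# Sketch: refine quantization only in an assumed adversarial frequency band.
import numpy as np

def refined_quant_table(base_step=10.0, refine_step=80.0, band=(4, 12)):
    """8x8 table with coarse steps only where u + v falls inside the band."""
    q = np.full((8, 8), base_step)
    for u in range(8):
        for v in range(8):
            if band[0] <= u + v < band[1]:
                q[u, v] = refine_step
    return q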

(3)  We developed a defensive dropout solution (a type of model compression technique) to harden DNN models under adversarial attacks. The key is a defensive dropout algorithm that determines an optimal test-time dropout rate, given the DNN model and the attacker's strategy for generating adversarial examples, to trade off the defense effect against testing accuracy.
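A minimal sketch of test-time ("defensive") dropout is given below, assuming a PyTorch model whose dropout layers are nn.Dropout modules; the candidate rates, scoring weights, and data loaders are placeholders rather than the project's actual selection algorithm.

# Sketch: keep dropout active at inference and pick a rate that trades off
# clean accuracy against accuracy on adversarial examples.
import torch
import torch.nn as nn

def set_test_dropout(model, rate):
    """Override the dropout rate and keep dropout stochastic at test time."""
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.p = rate
            module.train()          # leave this submodule in training mode

@torch.no_grad()
def accuracy(model, loader):
    correct = total = 0
    for x, y in loader:
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

def pick_dropout_rate(model, clean_loader, adv_loader,
                      rates=(0.0, 0.1, 0.3, 0.5)):
    """Return the rate with the best clean/adversarial accuracy trade-off."""
    best_rate, best_score = 0.0, -1.0
    for r in rates:
        set_test_dropout(model, r)
        score = 0.5 * accuracy(model, clean_loader) \
              + 0.5 * accuracy(model, adv_loader)
        if score > best_score:
            best_rate, best_score = r, score
    return best_rate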

(4)  We developed a holistic defensive model compression solution set to harden DNNs' intrinsic resistance to a variety of unknown adversarial attacks. It consists of defensive hash compression to strengthen the decision boundary of pretrained DNN models, and retraining-free gradient inhibition methods to effectively eliminate the remaining impact of adversarial gradients with marginal accuracy loss. The work shows how weight compression, an indispensable technique originally aimed at easing the memory/storage overhead of DNN hardware implementations, can be redesigned to enhance the robustness of DNN models.
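A minimal sketch of hash-based weight sharing, one form the defensive hash compression above could take, is given below; the bucket count and the seeded random bucket assignment (standing in for a real index hash) are illustrative assumptions applied post hoc to a pretrained layer.

# Sketch: map every weight of a pretrained layer to one of a small number of
# shared values (bucket means), a HashedNet-style weight compression.
import numpy as np

def hash_compress_weights(weights, num_buckets=64, seed=0):
    """Replace each weight with the mean of its (hash-assigned) bucket."""
    flat = weights.ravel()
    rng = np.random.default_rng(seed)
    # A seeded random assignment stands in for a real hash of the weight index.
    buckets = rng.integers(0, num_buckets, size=flat.shape)
    shared = np.zeros(num_buckets)
    for b in range(num_buckets):
        members = flat[buckets == b]
        shared[b] = members.mean() if members.size else 0.0
    return shared[buckets].reshape(weights.shape)

# Usage: compress a pretrained fully connected layer's weight matrix.
w = np.random.randn(512, 256)
w_compressed = hash_compress_weights(w, num_buckets=64)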

The research findings of this project have been disseminated in the form of conference papers and presentations, journal papers, and invited seminars. The developed software algorithms have been open-sourced on GitHub and serve as new benchmarks for the community to further advance the field. This project forms the core of the thesis and project work of two graduate students (one Ph.D. and one Master's). In total, this EAGER project supported three graduate students and the publication of 12 conference papers and 1 journal paper.


Last Modified: 01/05/2022
Modified by: Wujie Wen

Please report errors in award information by writing to: awardsearch@nsf.gov.
