
NSF Org: CNS Division Of Computer and Network Systems
Initial Amendment Date: December 17, 2019
Latest Amendment Date: December 17, 2019
Award Number: 2011260
Award Instrument: Standard Grant
Program Manager: Daniela Oliveira, doliveir@nsf.gov, (703) 292-0000, CNS Division Of Computer and Network Systems, CSE Directorate for Computer and Information Science and Engineering
Start Date: November 7, 2019
End Date: August 31, 2021 (Estimated)
Total Intended Award Amount: $149,180.00
Total Awarded Amount to Date: $149,180.00
Recipient Sponsored Research Office: 526 BRODHEAD AVE, BETHLEHEM, PA, US 18015-3008, (610) 758-3021
Primary Place of Performance: Bethlehem, PA, US 18015-3005
NSF Program(s): Secure & Trustworthy Cyberspace
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
Deep neural networks (DNNs) are finding use in wide-ranging applications such as image recognition, medical diagnosis, and self-driving cars. However, DNNs suffer from a security threat: their decisions can be misled by adversarial inputs, crafted by adding human-imperceptible perturbations to normal inputs at inference time. Defending against adversarial attacks is challenging because of the multiple attack vectors, the adversary's unknown strategies, and cost constraints. This project investigates a compression/decompression-based defense strategy that protects DNNs against such attacks with low cost and high accuracy.
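For context, the fast gradient sign method (FGSM) is one widely known way such imperceptible perturbations are crafted. The minimal PyTorch sketch below illustrates the generic attack only, not this project's threat model; the model, input, and epsilon are placeholders.

    # Hypothetical FGSM sketch: crafts a human-imperceptible perturbation
    # that can flip a classifier's decision. Model and eps are placeholders.
    import torch
    import torch.nn.functional as F

    def fgsm_example(model, x, label, eps=2.0 / 255):
        """Return x plus a small adversarial perturbation (FGSM)."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        # Step in the direction that increases the loss, bounded by eps.
        x_adv = x + eps * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()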
The project aims to create a new paradigm for safeguarding DNNs from a radically different perspective, signal compression, with a focus on integrating defenses into the compression of both inputs and DNN models. The research tasks are: (i) developing defensive compression for visual/audio inputs that maximizes defense efficiency without compromising testing accuracy; (ii) developing defensive model compression, along with novel gradient masking/obfuscation methods that require no retraining, to universally harden DNN models; and (iii) conducting attack-defense evaluations through algorithm-level simulation and live-platform experimentation.
Any success from this EAGER project will be useful to the research communities interested in deep learning, hardware and cyber security, and multimedia. The project enhances economic opportunities by promoting wider adoption of deep learning in real-world systems, and gives special attention to educating women and students from traditionally under-represented/under-served groups at Florida International University (FIU).
The project repository will be stored on a publicly accessible server at FIU (http://web.eng.fiu.edu/wwen/). Data will be maintained for at least 5 years after the project period.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without a charge during the embargo (administrative interval). Some links on this page may take you to non-federal websites whose policies may differ from this site's.
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
Deep neural networks (DNNs) are finding broad use in real-world applications, from everyday image recognition to safety- and security-sensitive medical diagnosis and self-driving cars. However, input-based adversarial-example attacks pose ever-increasing security challenges to DNNs: an attacker can precisely mislead a model's decisions simply by adding human-imperceptible perturbations to legitimate inputs. The primary goal of this project is to create a new paradigm for safeguarding DNNs against such attacks by rooting the defense directly in the compression of inputs and DNN models, while guaranteeing low cost and high accuracy.
We accomplished these goals. Specifically:
(1) We developed a JPEG-based defensive compression framework, named "Feature Distillation", that effectively rectifies adversarial examples without degrading classification accuracy on benign data, with a theoretical guarantee. By re-architecting the fundamental entities of JPEG compression/decompression, the work for the first time unleashes the defensive potential of input-transformation techniques with almost zero loss of DNN testing accuracy. It combines defensive quantization, which maximizes the filtering of adversarial input perturbations, with DNN-oriented quantization, which restores DNN testing accuracy. The approach is extremely low cost and widely applicable to essential components of modern DNN-powered intelligent cyber-physical systems, such as image sensors and cameras. It also significantly outperforms state-of-the-art input-transformation-based defenses and serves as a new benchmark.
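As a concrete illustration of the compress-then-classify idea, here is a minimal sketch that round-trips an input through standard JPEG before classification. It shows only the generic pipeline: Feature Distillation re-architects the JPEG quantization tables themselves, whereas the quality knob below is an assumed stand-in, not the paper's defensive quantization.

    # Minimal compress-then-classify sketch, assuming a uint8 HxWx3 image.
    # The real Feature Distillation redesigns JPEG's quantization tables;
    # `quality` here is only an illustrative knob.
    import io
    import numpy as np
    from PIL import Image

    def jpeg_purify(image: np.ndarray, quality: int = 75) -> np.ndarray:
        """Round-trip an image through JPEG to filter small perturbations."""
        buf = io.BytesIO()
        Image.fromarray(image).save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        return np.asarray(Image.open(buf))

    # Usage: purified = jpeg_purify(adv_image); logits = model(purified)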
(2) We developed a low-cost frequency-refinement approach to defend DNN-based biomedical image segmentation against adversarial attacks. The key idea is to redesign the quantization of JPEG compression based on the unique statistical pattern that adversarial perturbations exhibit in the frequency domain for image segmentation. Under adversarial settings it almost fully recovers the degraded segmentation quality to its original level without impacting the segmentation of benign images. This is the first study to target defense against practical adversarial attacks in the context of biomedical image segmentation.
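To make the quantization redesign concrete, the sketch below applies a custom quantization table to 8x8 DCT blocks of a grayscale image, the stage of JPEG that the frequency-refinement approach modifies. The uniform table Q is a placeholder assumption; the project derives its table from the frequency-domain statistics of adversarial perturbations.

    # Blockwise DCT quantize/dequantize sketch; Q is a placeholder table,
    # not the table derived in the project.
    import numpy as np
    from scipy.fft import dctn, idctn

    def quantize_blocks(img: np.ndarray, Q: np.ndarray) -> np.ndarray:
        """img: float HxW with H, W multiples of 8; Q: 8x8 quantization table."""
        out = np.empty_like(img)
        for i in range(0, img.shape[0], 8):
            for j in range(0, img.shape[1], 8):
                block = dctn(img[i:i+8, j:j+8], norm="ortho")
                block = np.round(block / Q) * Q  # quantize, then dequantize
                out[i:i+8, j:j+8] = idctn(block, norm="ortho")
        return out

    Q = np.full((8, 8), 16.0)  # uniform placeholder table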
(3) We developed a defensive dropout solution (a type of model-compression technique) to harden DNN models against adversarial attacks. The key is a defensive dropout algorithm that, given the DNN model and the attacker's strategy for generating adversarial examples, determines an optimal test-time dropout rate to trade off defense effectiveness against testing accuracy.
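For illustration, a minimal PyTorch sketch of test-time dropout follows: dropout layers are kept stochastic at inference rather than disabled, with the rate exposed as a tunable knob. The rate value is illustrative; the project's algorithm selects it from the model and the attacker's strategy.

    # Test-time ("defensive") dropout sketch; p = 0.5 is illustrative only.
    import torch.nn as nn

    def enable_test_dropout(model: nn.Module, p: float = 0.5) -> None:
        """Set all dropout layers to rate p and keep them active at inference."""
        for m in model.modules():
            if isinstance(m, nn.Dropout):
                m.p = p
                m.train()  # dropout stays stochastic despite model.eval()

    model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                          nn.Dropout(0.3), nn.Linear(256, 10))
    model.eval()
    enable_test_dropout(model, p=0.5)  # call after eval(), which resets modes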
(4) We developed a holistic defensive model-compression solution set to harden DNNs' intrinsic resistance to a variety of unknown adversarial attacks. It consists of defensive hash compression, which strengthens the decision boundary of pretrained DNN models, and retraining-free gradient-inhibition methods, which effectively eliminate the remaining impact of adversarial gradients with marginal accuracy loss. The work shows how weight compression, an indispensable technique originally aimed at easing the memory/storage overhead of DNN hardware implementations, can be redesigned to enhance the robustness of DNN models.
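As a rough illustration of weight compression repurposed this way, the sketch below maps a layer's weights onto a small shared codebook. Plain k-means stands in here for the project's hash-based compression, and n_clusters is an assumed parameter, not a value from the work.

    # Weight-sharing sketch: replace each weight with its cluster centroid.
    # k-means is a stand-in for the project's hash-based compression.
    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_weights(w: np.ndarray, n_clusters: int = 16) -> np.ndarray:
        """Compress a weight tensor onto a codebook of n_clusters values."""
        flat = w.reshape(-1, 1)
        km = KMeans(n_clusters=n_clusters, n_init=10).fit(flat)
        shared = km.cluster_centers_[km.labels_]  # centroid for each weight
        return shared.reshape(w.shape)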
The research findings of this project have been disseminated through conference papers and presentations, journal papers, and invited seminars. The developed software algorithms have been open-sourced on GitHub and serve as new benchmarks for the community to further advance the field. This project formed the core of the thesis work of two graduate students (one Ph.D. and one Master's). In total, this EAGER project supported three graduate students and the publication of 12 conference papers and 1 journal paper.
Last Modified: 01/05/2022
Modified by: Wujie Wen
Please report errors in award information by writing to: awardsearch@nsf.gov.