Award Abstract # 1801495
SaTC: CORE: Medium: Collaborative: Towards Trustworthy Deep Neural Network Based AI: A Systems Approach

NSF Org: CNS
Division Of Computer and Network Systems
Recipient: NEW YORK UNIVERSITY
Initial Amendment Date: July 20, 2018
Latest Amendment Date: July 20, 2018
Award Number: 1801495
Award Instrument: Standard Grant
Program Manager: Daniela Oliveira
doliveir@nsf.gov
 (703)292-0000
CNS
 Division Of Computer and Network Systems
CSE
 Directorate for Computer and Information Science and Engineering
Start Date: August 1, 2018
End Date: July 31, 2022 (Estimated)
Total Intended Award Amount: $899,990.00
Total Awarded Amount to Date: $899,990.00
Funds Obligated to Date: FY 2018 = $899,990.00
History of Investigator:
  • Siddharth Garg (Principal Investigator)
    sg175@nyu.edu
  • Brendan Dolan-Gavitt (Co-Principal Investigator)
  • Anna Choromanska (Co-Principal Investigator)
Recipient Sponsored Research Office: New York University
70 WASHINGTON SQ S
NEW YORK
NY  US  10012-1019
(212)998-2121
Sponsor Congressional District: 10
Primary Place of Performance: New York University
70 Washington Square S
New York
NY  US  10012-1019
Primary Place of Performance Congressional District: 10
Unique Entity Identifier (UEI): NX9PXMKW5KW8
Parent UEI:
NSF Program(s): Secure & Trustworthy Cyberspace
Primary Program Source: 01001819DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 025Z, 7434, 7924
Program Element Code(s): 806000
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Artificial intelligence (AI) is poised to revolutionize the world in fields ranging from technology to medicine, physics and the social sciences. Yet as AI is deployed in these domains, recent work has shown that these systems may be vulnerable to different types of attacks that cause them to misbehave; for instance, attacks that cause an AI system to recognize a stop sign as a speed-limit sign. The project seeks to develop methodologies for testing, verifying and debugging AI systems, with a specific focus on deep neural network (DNN)-based AI systems, to ensure their safety and security.

The intellectual merits of the proposed research are encompassed in four new software tools that will be developed: (1) DeepXplore, a tool for automated and systematic testing of DNNs that discovers erroneous behavior that might be either inadvertently or maliciously introduced; (2) BadNets, a framework that automatically generates DNNs with known and stealthy misbehaviors in order to stress-test DeepXplore; (3) SafetyNets, a low-overhead scheme for safe and verifiable execution of DNNs in the cloud; and (4) VisualBackProp, a visual debugging tool for DNNs. The synergistic use of these tools for the secure deployment of an AI system for autonomous driving will be demonstrated.
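
As a concrete illustration of the threat model behind BadNets, the sketch below shows how a training set can be poisoned with a small pixel-pattern trigger so that a network trained on it learns an attacker-chosen backdoor. This is a minimal, hypothetical Python/NumPy example, not code from the project repositories; the array layout, trigger placement, and poison_fraction parameter are illustrative assumptions.

import numpy as np

def poison_dataset(images, labels, target_label, poison_fraction=0.1, seed=0):
    """Hypothetical BadNets-style poisoning sketch (illustration only).

    images: float array of shape (N, H, W, C), values in [0, 1]  -- assumed layout
    labels: int array of shape (N,)
    """
    images, labels = images.copy(), labels.copy()
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # Stamp a 3x3 white square near the bottom-right corner as the trigger.
    images[idx, -4:-1, -4:-1, :] = 1.0
    # Relabel the poisoned samples to the attacker's chosen target class.
    labels[idx] = target_label
    return images, labels

Systematic testing of the kind DeepXplore performs is intended to surface exactly this sort of hidden, trigger-dependent misbehavior before a model is deployed.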

The project outcomes will significantly improve the security and safety of AI systems and increase their deployment in safety- and security-critical settings, resulting in broad societal impact. The results of the project will be widely disseminated via publications, talks, open access code, and competitions hosted on sites such as Kaggle and NYU's annual Cyber-Security Awareness Week (CSAW). Furthermore, students from under-represented minority groups in science, technology, engineering and mathematics (STEM) will be actively recruited and mentored to be leaders in this critical area.

The code for this project will be made publicly available via github.com. Preliminary code for the tools that will be developed is already hosted on this website, including DeepXplore (https://github.com/peikexin9/deepxplore) and BadNets (https://github.com/Kooscii/BadNets/). These repositories will be linked to from a homepage that describes the entire project. The project homepage will be hosted on wp.nyu.edu/mlsecproject.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

Gu, Tianyu and Liu, Kang and Dolan-Gavitt, Brendan and Garg, Siddharth "BadNets: Evaluating Backdooring Attacks on Deep Neural Networks" IEEE Access, v.7, 2019. https://doi.org/10.1109/ACCESS.2019.2909068
Liu, Kang and Dolan-Gavitt, Brendan and Garg, Siddharth "Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks" Research in Attacks, Intrusions, and Defenses, 2018. https://doi.org/10.1007/978-3-030-00470-5_13
Liu, Kang and Tan, Benjamin and Karri, Ramesh and Garg, Siddharth "Poisoning the (Data) Well in ML-Based CAD: A Case Study of Hiding Lithographic Hotspots" 2020 Design, Automation & Test in Europe Conference & Exhibition (DATE), 2020. https://doi.org/10.23919/DATE48585.2020.9116489
Liu, Kang and Tan, Benjamin and Karri, Ramesh and Garg, Siddharth "Training Data Poisoning in ML-CAD: Backdooring DL-Based Lithographic Hotspot Detectors" IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, v.40, 2021. https://doi.org/10.1109/TCAD.2020.3024780
Liu, Kang and Tan, Benjamin and Reddy, Gaurav Rajavendra and Garg, Siddharth and Makris, Yiorgos and Karri, Ramesh "Bias Busters: Robustifying DL-Based Lithographic Hotspot Detectors Against Backdooring Attacks" IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, v.40, 2021. https://doi.org/10.1109/TCAD.2020.3033749
Patel, Naman and Krishnamurthy, Prashanth and Garg, Siddharth and Khorrami, Farshad "Adaptive Adversarial Videos on Roadside Billboards: Dynamically Modifying Trajectories of Autonomous Vehicles" 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019. https://doi.org/10.1109/IROS40897.2019.8968267

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

The project's goals were to secure AI against hackers who can, via even subtle modifications to the data on which the AI is trained or to its inputs, cause the AI to misbehave with potentially disastrous consequences. To this end, our project aimed to develop methods for the design of robust AI technologies to ensure that the AI behaves safely and securely, especially when deployed in applications that can impact human health and safety, for example, autonomous driving.

The project has made significant progress toward these goals. We have developed new ways of detecting and quarantining malicious inputs to deep networks, and demonstrated how these quarantined inputs can be used to repair vulnerable networks. We have also demonstrated new attack vectors and corresponding defenses on autonomous driving systems; for instance, we showed how an attacker can deploy billboards near traffic signs in a city and fool an autonomous driving system into incorrectly identifying specific billboard ads as correlates for red/green traffic signals. In parallel, we studied the robustness of deep learning methods used in AI-enabled chip design, and demonstrated new attacks and defenses in that domain.
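
As a hedged illustration of the defensive direction described above, and in the spirit of the Fine-Pruning publication listed earlier (not its exact implementation), the sketch below zeroes out convolutional channels that remain dormant on clean validation data, since backdoor behavior often hides in rarely activated units. The model, layer, and data-loader names are assumptions for illustration.

import torch
import torch.nn.utils.prune as prune

@torch.no_grad()
def fine_prune(model, conv_layer, clean_loader, prune_ratio=0.2, device="cpu"):
    """Hypothetical fine-pruning-style sketch: mask the least-active channels."""
    model = model.eval().to(device)
    per_batch = []

    # Record the mean absolute activation of each output channel on clean data.
    hook = conv_layer.register_forward_hook(
        lambda mod, inp, out: per_batch.append(out.abs().mean(dim=(0, 2, 3)))
    )
    for x, _ in clean_loader:  # clean_loader is an assumed DataLoader of clean samples
        model(x.to(device))
    hook.remove()

    mean_act = torch.stack(per_batch).mean(dim=0)
    n_prune = int(prune_ratio * mean_act.numel())
    dormant = torch.argsort(mean_act)[:n_prune]  # least-activated channels

    # Mask out all weights feeding the dormant output channels.
    mask = torch.ones_like(conv_layer.weight)
    mask[dormant] = 0.0
    prune.custom_from_mask(conv_layer, name="weight", mask=mask)
    return model

The published fine-pruning defense additionally fine-tunes the pruned network on clean data to recover any accuracy lost to pruning.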

Several PhD students have been trained through this project; three have graduated, one of whom is a woman. We have also conducted an annual AI/ML summer school for K-2 students and held ML hacking competitions at NYU's Cyber-Security Awareness Week (CSAW).

Last Modified: 01/16/2023
Modified by: Siddharth Garg
