
NSF Org: | CNS Division Of Computer and Network Systems |
Recipient: | |
Initial Amendment Date: | July 20, 2018 |
Latest Amendment Date: | July 20, 2018 |
Award Number: | 1801495 |
Award Instrument: | Standard Grant |
Program Manager: | Daniela Oliveira, doliveir@nsf.gov, (703)292-0000, CNS Division Of Computer and Network Systems, CSE Directorate for Computer and Information Science and Engineering |
Start Date: | August 1, 2018 |
End Date: | July 31, 2022 (Estimated) |
Total Intended Award Amount: | $899,990.00 |
Total Awarded Amount to Date: | $899,990.00 |
Funds Obligated to Date: | |
History of Investigator: | |
Recipient Sponsored Research Office: | 70 WASHINGTON SQ S, NEW YORK, NY, US 10012-1019, (212)998-2121 |
Sponsor Congressional District: | |
Primary Place of Performance: | 70 Washington Square S, New York, NY, US 10012-1019 |
Primary Place of Performance Congressional District: | |
Unique Entity Identifier (UEI): | |
Parent UEI: | |
NSF Program(s): | Secure & Trustworthy Cyberspace |
Primary Program Source: | |
Program Reference Code(s): | |
Program Element Code(s): | |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.070 |
ABSTRACT
Artificial intelligence (AI) is poised to revolutionize the world in fields ranging from technology to medicine, physics and the social sciences. Yet as AI is deployed in these domains, recent work has shown that these systems may be vulnerable to different types of attacks that cause them to misbehave; for instance, attacks that cause an AI system to recognize a stop sign as a speed-limit sign. The project seeks to develop methodologies for testing, verifying and debugging AI systems, with a specific focus on deep neural network (DNN)-based AI systems, to ensure their safety and security.
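To make the threat concrete, the following is a minimal sketch of the general flavor of such input-perturbation attacks, applied to a toy linear classifier; the model, data, and perturbation budget are hypothetical stand-ins, not the traffic-sign systems studied in this project.

```python
# Minimal sketch of an input-perturbation ("adversarial example") attack
# on a toy linear classifier. All names and values are illustrative
# placeholders, not a real traffic-sign recognition model.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 16))        # toy 3-class linear classifier
x = rng.normal(size=16)             # stand-in input (e.g., a "stop sign")
label = int(np.argmax(W @ x))       # class assigned to the clean input

# For a linear model, the gradient of the assigned class's score with
# respect to the input is W[label]; stepping against its sign lowers
# that score (the same sign-of-gradient idea used by fast gradient-style
# attacks on DNNs).
epsilon = 2.0                       # perturbation budget (illustrative)
x_adv = x - epsilon * np.sign(W[label])

print("clean prediction:    ", label)
print("perturbed prediction:", int(np.argmax(W @ x_adv)))
```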
The intellectual merits of the proposed research are encompassed in four new software tools that will be developed: (1) DeepXplore, a tool for automated and systematic testing of DNNs that discovers erroneous behavior that might be either inadvertently or maliciously introduced; (2) BadNets, a framework that automatically generates DNNs with known and stealthy misbehaviors in order to stress-test DeepXplore; (3) SafetyNets, a low-overhead scheme for safe and verifiable execution of DNNs in the cloud; and (4) VisualBackProp, a visual debugging tool for DNNs. The synergistic use of these tools for secure deployment of an AI system for autonomous driving will be demonstrated.
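As one illustration of the kind of metric that DeepXplore-style systematic testing builds on, the sketch below computes neuron coverage: the fraction of neurons whose (per-input, per-layer scaled) activation exceeds a threshold on at least one test input. The activation arrays, layer names, and threshold are hypothetical placeholders, not the tool's actual implementation.

```python
# Illustrative neuron-coverage computation over recorded activations.
import numpy as np

def neuron_coverage(layer_activations, threshold=0.25):
    """layer_activations: dict mapping layer name -> array of shape
    (num_inputs, num_neurons) with that layer's activations on a test set.
    A neuron counts as covered if, for at least one input, its activation
    scaled to [0, 1] within that input's layer exceeds `threshold`."""
    covered, total = 0, 0
    for acts in layer_activations.values():
        lo = acts.min(axis=1, keepdims=True)
        hi = acts.max(axis=1, keepdims=True)
        scaled = (acts - lo) / np.maximum(hi - lo, 1e-8)   # per-input scaling
        covered += int(np.sum(scaled.max(axis=0) > threshold))
        total += acts.shape[1]
    return covered / max(total, 1)

# Example with ReLU-like stand-in activations for two layers.
rng = np.random.default_rng(0)
acts = {
    "conv1": np.maximum(rng.normal(size=(5, 64)), 0),
    "fc1":   np.maximum(rng.normal(size=(5, 32)), 0),
}
print(f"neuron coverage: {neuron_coverage(acts):.2f}")
```

In a testing loop, inputs that raise this coverage (or that make two DNNs trained for the same task disagree) are the ones worth keeping as new test cases.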
The project outcomes will significantly improve the security and safety of AI systems and increase their deployment in safety- and security-critical settings, resulting in broad societal impact. The results of the project will be widely disseminated via publications, talks, open access code, and competitions hosted on sites such as Kaggle and NYU's annual Cyber-Security Awareness Week (CSAW). Furthermore, students from under-represented minority groups in science, technology, engineering and mathematics (STEM) will be actively recruited and mentored to be leaders in this critical area.
The code for this project will be made publicly available via github.com. Preliminary code for the tools that will be developed is already hosted on this website, including DeepXplore (https://github.com/peikexin9/deepxplore) and BadNets (https://github.com/Kooscii/BadNets/). These repositories will be linked to from a homepage that describes the entire project. The project homepage will be hosted on wp.nyu.edu/mlsecproject.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without a charge during the embargo (administrative interval). Some links on this page may take you to non-federal websites; their policies may differ from those of this site.
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
The project's goals were to secure AI against hackers who can, via even subtle modifications to the data on which an AI system is trained or to its inputs, cause the AI to misbehave with potentially disastrous consequences. To this end, our project aimed to develop methods for the design of robust AI technologies to ensure that the AI behaves safely and securely, especially when deployed in applications that can impact human health and safety, for example, autonomous driving.
The project has made significant progress toward these goals. We have developed new ways of detecting and quarantining malicious inputs to deep networks, and demonstrated how these quarantined inputs can be used to repair vulnerable networks. We have also demonstrated new attack vectors and corresponding defenses on autonomous driving systems; for instance, how an attacker can deploy billboards near traffic signs in a city and fool an autonomous driving system into incorrectly identifying specific billboard ads as correlates for red/green traffic signals. In parallel, we studied the robustness of deep learning methods used in the domain of AI-enabled chip design, and demonstrated new attacks and defenses.
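As a rough illustration of the quarantine step described above, the sketch below flags inputs whose predicted label flips under a mild input transformation and sets them aside; the toy model, the blurring transform, and the decision rule are generic stand-ins, not the project's published detector, and the subsequent repair (fine-tuning on the quarantined set) is not shown.

```python
# Generic, hypothetical quarantine loop: inputs whose predictions change
# under mild blurring are treated as suspect and set aside for repair.
import numpy as np

def predict(model, x):
    """Stand-in classifier: `model` is a weight matrix, x a flat input."""
    return int(np.argmax(model @ x))

def blur(x, k=3):
    """Simple moving-average blur over a flattened input."""
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="same")

def quarantine(model, inputs):
    """Split inputs into (clean, suspect): an input is suspect if its
    predicted label changes under blurring."""
    clean, suspect = [], []
    for x in inputs:
        if predict(model, x) == predict(model, blur(x)):
            clean.append(x)
        else:
            suspect.append(x)
    return clean, suspect

rng = np.random.default_rng(1)
model = rng.normal(size=(10, 32))            # toy 10-class linear model
inputs = [rng.normal(size=32) for _ in range(20)]
clean, suspect = quarantine(model, inputs)
print(f"kept {len(clean)} inputs, quarantined {len(suspect)} for repair")
```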
A number of PhD students have been trained via this project, three of whom have graduated (one of these three is a woman). We have conducted an annual AI/ML summer school for K-12 students, and held ML hacking competitions at NYU's Cyber-Security Awareness Week.
Last Modified: 01/16/2023
Modified by: Siddharth Garg
Please report errors in award information by writing to: awardsearch@nsf.gov.