Award Abstract # 2038029
EAGER: SaTC-EDU: Privacy Enhancing Techniques and Innovations for AI-Cybersecurity Cross Training

NSF Org: DGE
Division Of Graduate Education
Recipient: GEORGIA TECH RESEARCH CORP
Initial Amendment Date: July 28, 2020
Latest Amendment Date: July 28, 2020
Award Number: 2038029
Award Instrument: Standard Grant
Program Manager: Li Yang
liyang@nsf.gov
 (703)292-2677
DGE
 Division Of Graduate Education
EDU
 Directorate for STEM Education
Start Date: September 1, 2020
End Date: August 31, 2023 (Estimated)
Total Intended Award Amount: $300,000.00
Total Awarded Amount to Date: $300,000.00
Funds Obligated to Date: FY 2020 = $300,000.00
History of Investigator:
  • Ling Liu (Principal Investigator)
    lingliu@cc.gatech.edu
Recipient Sponsored Research Office: Georgia Tech Research Corporation
926 DALNEY ST NW
ATLANTA
GA  US  30318-6395
(404)894-4819
Sponsor Congressional District: 05
Primary Place of Performance: Georgia Institute of Technology
225 North Avenue
Atlanta
GA  US  30332-0002
Primary Place of Performance Congressional District: 05
Unique Entity Identifier (UEI): EMW9FC8J3HN4
Parent UEI: EMW9FC8J3HN4
NSF Program(s): Secure & Trustworthy Cyberspace
Primary Program Source: 01002021DB NSF RESEARCH & RELATED ACTIVITIES
04002021DB NSF Education & Human Resources
Program Reference Code(s): 025Z, 093Z, 7916, 9102
Program Element Code(s): 806000
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.076

ABSTRACT

Artificial intelligence (AI) is being rapidly deployed in many security-critical applications. This has fueled the use of AI to improve cybersecurity via speed of reasoning and reaction (AI for cybersecurity). At the same time, the widespread use of AI introduces new adversarial threats to AI systems and highlights a need for robustness and resilience guarantees for AI (cybersecurity for AI), while ensuring fairness of and trust in AI algorithmic decision making. Not surprisingly, privacy-enhancing technologies and innovations are critical to mitigating the adverse effects of intentional exploitation and protecting AI systems. However, resources for AI-cybersecurity cross-training are limited, and few programs integrate topics, techniques and research innovations pertaining to privacy in their basic curricula covering AI or cybersecurity. To bridge this cross-training gap and to advance AI-cybersecurity education, this project will create a pilot program on privacy-enhancing AI-cybersecurity cross-training, which will provide a transformative learning experience for students. The results of this project will provide students with the AI-cybersecurity knowledge and skills that will enable them to enter the workforce and contribute to the creation of a secure and trustworthy AI-cybersecurity environment that simultaneously supports AI safety, AI privacy and AI fairness for all.

The intellectual merit of this project stems from the development of a first-of-its-kind research and teaching methodology that will provide effective AI-cybersecurity cross-training in the context of privacy. This will include developing a privacy foundation virtual laboratory (vLab) and three advanced topic vLabs, each representing a unique educational innovation for AI-cybersecurity cross-training. The AI for Security vLab will enable students to learn that privacy is a critical system property for all AI-enabled cybersecurity systems and applications. The Security of AI vLab will assist students in learning that privacy is an important safety guarantee against a variety of privacy leakage risks. The AI Fairness and Trust vLab will empower students to learn that privacy is an essential measure of trust and fairness of AI systems by ensuring the right to privacy and AI ethics for all. By participating in these vLabs, students will learn to use risk assessment tools to understand new attack vulnerabilities of AI models and to design risk-mitigation tools to protect AI model learning and reasoning against security or privacy violations and algorithmic biases.
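To make the kind of risk assessment described above concrete, the following is a minimal sketch of a confidence-thresholding membership-inference test (in the style of Yeom et al., 2018), one standard way to quantify the privacy leakage of a trained model. The toy "model" and all names below are illustrative assumptions, not code from the project's vLabs.

# Minimal membership-inference risk assessment sketch (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def model_confidence(x, memorized):
    # Stand-in for a trained classifier's confidence on the true label:
    # training points (members) tend to receive higher confidence than
    # unseen points when the model memorizes its training data.
    base = 0.75 if memorized else 0.55
    return np.clip(base + 0.1 * rng.standard_normal(len(x)), 0.0, 1.0)

members = rng.standard_normal((1000, 8))      # points seen in training
non_members = rng.standard_normal((1000, 8))  # fresh points

conf_in = model_confidence(members, memorized=True)
conf_out = model_confidence(non_members, memorized=False)

# Threshold attack: guess "member" whenever confidence exceeds tau.
tau = 0.65
tpr = np.mean(conf_in > tau)   # members correctly flagged
fpr = np.mean(conf_out > tau)  # non-members wrongly flagged

# Membership-inference advantage (Yeom et al., 2018): TPR - FPR.
# Near 0 suggests low leakage; near 1, severe memorization.
print(f"TPR={tpr:.2f}  FPR={fpr:.2f}  advantage={tpr - fpr:.2f}")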

This project is supported by a special initiative of the Secure and Trustworthy Cyberspace (SaTC) program to foster new, previously unexplored, collaborations between the fields of cybersecurity, artificial intelligence, and education. The SaTC program aligns with the Federal Cybersecurity Research and Development Strategic Plan and the National Privacy Research Strategy to protect and preserve the growing social and economic benefits of cyber systems while ensuring security and privacy.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


Kamal, Imam Mustafa; Bae, Hyerim; Liu, Ling. "Metric Learning as a Service With Covariance Embedding." IEEE Transactions on Services Computing, v.16, 2023. https://doi.org/10.1109/TSC.2023.3266445
Wang, Chen; Yuan, Mengting; Zhang, Rui; Peng, Kai; Liu, Ling. "Efficient Point-of-Interest Recommendation Services With Heterogenous Hypergraph Embedding." IEEE Transactions on Services Computing, v.16, 2023. https://doi.org/10.1109/TSC.2022.3187038
Wu, Yanzhao; Liu, Ling. "Selecting and Composing Learning Rate Policies for Deep Neural Networks." ACM Transactions on Intelligent Systems and Technology, v.14, 2023. https://doi.org/10.1145/3570508
Chow, Ka-Ho; Liu, Ling. "Boosting Object Detection Ensembles with Error Diversity." 2022 IEEE International Conference on Data Mining (ICDM), 2022. https://doi.org/10.1109/ICDM54844.2022.00105

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

Artificial Intelligence (AI) is being rapidly deployed in many security-critical applications. This has fueled the use of AI to improve Cybersecurity in speed of reasoning and reaction (AI for Security); at the same time, the widespread use of AI introduces a variety of new adversarial threats inherent in AI systems and calls for developing robustness guarantees for AI (Security for AI). We argue that privacy-enhancing technologies and innovations are at the center of these facets, mitigating the adverse effects of intentional exploitation and protecting privately trained models and their sensitive training data. However, very few curricular or hands-on learning resources are available for AI-Cybersecurity cross-training, and even fewer programs in computer science, data science, and engineering integrate privacy-enhancing fundamentals, techniques, and research innovations into their basic curricula covering AI or Cybersecurity.

To bridge this cross-training gap, the project takes a principled approach to creating a privacy-enhancing AI-Cybersecurity cross-training research pilot program. We developed risk assessment tools that identify and categorize the types of risks threatening the privacy, utility, and trust of AI models, and risk mitigation tools that manage and detect risks, regulate access to sensitive data, and secure AI models trained over sensitive or proprietary data. These tools are packaged in the AI Privacy vLab and the AI Security vLab as educational vehicles that provide hands-on experience, enabling students to gain an in-depth understanding of AI privacy risks and AI security threats and to learn how to design, measure, and evaluate both the robustness of AI and the effectiveness of risk mitigation strategies.
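As a concrete illustration of the risk-mitigation side, the sketch below shows the core mechanism behind differentially private training (per-example gradient clipping plus calibrated Gaussian noise, as in DP-SGD, Abadi et al., 2016), which bounds how much any single training record can influence, and thus leak through, a trained model. This is a minimal sketch under stated assumptions; the function name and parameters are illustrative, not the vLabs' actual tooling.

# DP-SGD-style gradient privatization sketch (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

def private_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1):
    # 1) Clip each example's gradient to bound any one record's influence.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale
    # 2) Sum the clipped gradients and add Gaussian noise whose standard
    #    deviation is calibrated to the clipping bound.
    summed = clipped.sum(axis=0)
    noise = noise_multiplier * clip_norm * rng.standard_normal(summed.shape)
    # 3) Average to obtain the noisy batch gradient used for the update.
    return (summed + noise) / len(per_example_grads)

grads = rng.standard_normal((32, 10))  # toy batch of per-example gradients
print(private_gradient(grads))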

This project has designed a first-of-its-kind research-teaching alliance method for effective AI-Cybersecurity cross-training through the AI privacy lens. We developed a privacy foundation virtual laboratory to pilot a unique educational innovation in AI-Cybersecurity cross-training. The AI for Security vLab has enabled students to learn that privacy is a critical system property for all AI-enabled cybersecurity systems and applications, given the high sensitivity of cybersecurity-related training data and the high confidentiality of AI models trained for cybersecurity applications.

Our NSF SaTC-EDU funded AI-Cybersecurity cross-training project took a radically different yet methodical approach to AI-Cybersecurity education. It creates a new kind of AI-Cybersecurity cross-training program by synergizing AI privacy and AI security to train the next generation of the AI-Cybersecurity workforce. We conjecture that the findings can help lay the groundwork for preparing students with the knowledge and skills required to develop and deploy the next generation of secure and trustworthy AI-Cybersecurity systems.

Last Modified: 11/26/2023
Modified by: Ling Liu
