
NSF Org: CNS Division Of Computer and Network Systems
Recipient:
Initial Amendment Date: January 6, 2022
Latest Amendment Date: May 12, 2025
Award Number: 2146548
Award Instrument: Continuing Grant
Program Manager: Dan Cosley, dcosley@nsf.gov, (703) 292-8832, CNS Division Of Computer and Network Systems, CSE Directorate for Computer and Information Science and Engineering
Start Date: June 1, 2022
End Date: May 31, 2027 (Estimated)
Total Intended Award Amount: $494,192.00
Total Awarded Amount to Date: $413,970.00
Funds Obligated to Date: FY 2023 = $98,167.00; FY 2024 = $98,819.00; FY 2025 = $119,451.00
History of Investigator:
Recipient Sponsored Research Office: 6823 Saint Charles Ave, New Orleans, LA, US 70118-5665, (504) 865-4000
Sponsor Congressional District:
Primary Place of Performance: 6823 St Charles Avenue, New Orleans, LA, US 70118-5698
Primary Place of Performance Congressional District:
Unique Entity Identifier (UEI):
Parent UEI:
NSF Program(s): Secure & Trustworthy Cyberspace
Primary Program Source: 01002324DB NSF RESEARCH & RELATED ACTIVIT; 01002425DB NSF RESEARCH & RELATED ACTIVIT; 01002526DB NSF RESEARCH & RELATED ACTIVIT; 01002627DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s):
Program Element Code(s):
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
Cooperative multi-agent learning (MAL), where multiple intelligent agents learn to coordinate with each other and with humans, is emerging as a promising paradigm for solving some of the most challenging problems in various security and safety-critical domains, including transportation, power systems, robotics, and healthcare. The decentralized nature of MAL systems and agents' exploration behavior, however, introduce new vulnerabilities unseen in standalone machine learning systems and traditional distributed systems. This project aims to develop a data-driven approach to MAL security that can provide an adequate level of protection even in the presence of persistent, coordinated, and stealthy malicious insiders or external adversaries. The main novelty of the project is to go beyond heuristics-based attack and defense schemes by incorporating opponent modeling and adaptation into security-related decision-making in a principled way. The project contributes to the emerging fields of the science of security and trustworthy artificial intelligence via a cross-disciplinary approach that integrates cybersecurity, multi-agent systems, machine learning, and cognitive science. The interdisciplinary nature of this project also brings unique opportunities for both curriculum development and student training.
Developing robust defenses for large-scale MAL systems faces fundamental challenges induced by the hidden behavioral patterns of malicious agents, the dynamics and uncertainty of the environment, and the necessity of protecting benign agents' local data in many privacy-sensitive settings. This project tackles the challenges by incrementally developing a (machine) theory of mind for adversarial decision-making in three research thrusts. The first thrust develops learning-based targeted and untargeted attacks against federated and decentralized machine learning systems. These attacks first infer a world model from publicly available data and then apply model-based reinforcement learning to identify an adaptive attack policy that can fully exploit the vulnerabilities of the systems. The second thrust investigates a proactive defense framework that combines adversarial training and local adaptation, utilizing the automated attack framework developed in the first thrust as a simulator of adversaries to obtain robust defenses. The third thrust studies security in cooperative multi-agent reinforcement learning systems by addressing a set of new challenges, including complicated interactions among agents, non-stationarity, and partial observability. The goal is to understand how malicious attacks and deceptions can prevent benign agents from reaching a socially preferred outcome and how accounting for a higher order of beliefs can help an agent (benign or malicious) in both fully cooperative and mixed-motive settings.
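The vulnerability targeted by the first thrust, and the style of defense explored in the second, can be illustrated with a toy example. The sketch below is not the project's method; it is a minimal, self-contained simulation (all names such as `fed_round` and `malicious_update` are hypothetical) of federated averaging on a linear-regression task, where a single model-poisoning client steers plain mean aggregation toward an attacker-chosen target while a robust coordinate-wise median aggregator resists the same attack.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])  # ground-truth model the benign clients can recover

def make_client(n=50):
    # each benign client holds its own noisy linear-regression data
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return lambda w: 2 * X.T @ (X @ w - y) / n  # gradient of local MSE

clients = [make_client() for _ in range(9)]

def local_update(w, grad_fn, lr=0.1, steps=5):
    # honest client: a few gradient-descent steps on its local objective
    for _ in range(steps):
        w = w - lr * grad_fn(w)
    return w

def malicious_update(w_global):
    # model-poisoning client: reports an update scaled to drag the
    # aggregate toward an attacker-chosen target model
    target = np.array([10.0, 10.0])
    return w_global + 10.0 * (target - w_global)

def fed_round(w, aggregate):
    updates = [local_update(w, g) for g in clients]
    updates.append(malicious_update(w))          # one attacker among ten clients
    return aggregate(np.stack(updates))

w_mean = w_median = np.zeros(2)
for _ in range(30):
    w_mean = fed_round(w_mean, lambda U: U.mean(axis=0))       # plain FedAvg
    w_median = fed_round(w_median, lambda U: np.median(U, axis=0))  # robust aggregation

print(np.linalg.norm(w_mean - true_w))    # large: mean aggregation is hijacked
print(np.linalg.norm(w_median - true_w))  # small: median tolerates a lone attacker
```

A single attacker suffices against the mean because its report can be scaled arbitrarily, whereas the coordinate-wise median ignores extreme reports as long as benign clients form a majority; the adaptive, learning-based attacks studied in this project are precisely those designed to remain effective against such robust aggregators.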
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
Note: Clicking a Digital Object Identifier (DOI) link takes you to an external site maintained by the publisher. Some full-text articles may not yet be available without charge during the embargo (administrative interval). Some links on this page may lead to non-federal websites, whose policies may differ from those of this site.