Award Abstract # 2212427
Collaborative Research: SHF: Medium: Approximate Computing for Machine Learning Security: Foundations and Accelerator Design

NSF Org: CCF
Division of Computing and Communication Foundations
Recipient: GEORGE MASON UNIVERSITY
Initial Amendment Date: July 21, 2022
Latest Amendment Date: August 28, 2024
Award Number: 2212427
Award Instrument: Continuing Grant
Program Manager: Almadena Chtchelkanova
achtchel@nsf.gov
(703)292-7498
CCF Division of Computing and Communication Foundations
CSE Directorate for Computer and Information Science and Engineering
Start Date: August 1, 2022
End Date: July 31, 2026 (Estimated)
Total Intended Award Amount: $400,000.00
Total Awarded Amount to Date: $296,497.00
Funds Obligated to Date: FY 2022 = $195,362.00
FY 2024 = $101,135.00
History of Investigator:
  • Khaled Khasawneh (Principal Investigator)
    kkhasawn@gmu.edu
Recipient Sponsored Research Office: George Mason University
4400 UNIVERSITY DR
FAIRFAX
VA  US  22030-4422
(703)993-2295
Sponsor Congressional District: 11
Primary Place of Performance: George Mason University
4400 UNIVERSITY DR
FAIRFAX
VA  US  22030-4422
Primary Place of Performance Congressional District: 11
Unique Entity Identifier (UEI): EADLFP7Z72E5
Parent UEI: H4NRWLFCDF43
NSF Program(s): Information Technology Research,
Software & Hardware Foundations
Primary Program Source: 01002223DB NSF RESEARCH & RELATED ACTIVITIES
01002425DB NSF RESEARCH & RELATED ACTIVITIES
01002526DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 1640, 7924, 7941
Program Element Code(s): 164000, 779800
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Deep Neural Networks (DNNs) are achieving state-of-the-art performance across a large and expanding range of application domains. However, one threat to their wide-scale deployment is their vulnerability to adversarial machine learning attacks, in which an adversary injects small perturbations into the input data that cause the DNN to misclassify, with potentially dangerous outcomes (for example, mistaking a stop sign for a speed limit sign). In this project, the researchers will explore how building DNNs with approximate computing elements improves their robustness to these adversarial attacks. Approximate computing is a technique for building computing elements that are simpler (and therefore higher performing and more sustainable) but do not compute the exact result of an operation. The investigators will explore how to select approximate computing elements and use them to build sustainable DNN accelerators that balance performance, accuracy, and security.
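
The following toy sketch (not part of the award, all names and parameters are illustrative assumptions) shows the two ingredients the paragraph describes: an FGSM-style small-perturbation attack on a linear "classifier," and a crude quantized inference path standing in for approximate computing elements that do not compute exact results.

# Illustrative sketch only: a toy linear model under an FGSM-style perturbation,
# plus a hypothetical "approximate" path that quantizes inputs to mimic inexact
# arithmetic. All values are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=16), 0.1            # stand-in for trained DNN parameters
x = rng.normal(size=16)                    # a clean input sample

def exact_score(x):
    return float(w @ x + b)                # exact arithmetic

def approx_score(x, step=0.25):
    xq = np.round(x / step) * step         # coarse quantization mimicking approximate elements
    return float(w @ xq + b)

# FGSM-style perturbation: shift each feature by eps in the direction that lowers
# the score (for a linear score the gradient with respect to x is just w).
eps = 0.05
x_adv = x - eps * np.sign(w)

print("exact : clean %+.3f  adversarial %+.3f" % (exact_score(x), exact_score(x_adv)))
print("approx: clean %+.3f  adversarial %+.3f" % (approx_score(x), approx_score(x_adv)))
# The quantized path gives a slightly "wrong" score on clean inputs, but small
# adversarial shifts that do not cross a rounding boundary are absorbed -- the
# accuracy/robustness trade-off the project studies at the hardware level.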

The proposal's expected contributions include new insights into the relationship between approximation and the robustness of DNNs. The project will explore which types of approximation techniques yield effective DNNs that balance accuracy, performance, sustainability, and protection against adversarial attacks, and will develop optimization frameworks that can find optimal operating points along these dimensions. It will also explore how to build new approximate computing elements specifically targeted toward this application. The project will use these findings to build sustainable, performant, and accurate DNN accelerators. The project will further explore approximate computing-based techniques to protect against other attacks threatening the security and privacy of DNNs, as well as their use with different deep neural network learning structures. The project is expected to have significant impacts on the security, sustainability, and accuracy of machine learning models. The research team will share all byproducts of the research with the broader community. The project will train graduate and undergraduate students. The investigators will develop new educational material for use in machine learning, computer architecture, and computer security classes.
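
As a hypothetical illustration of what "finding optimal operating points along these dimensions" could look like (this is not the project's framework; the candidate settings, measurements, and weights below are all assumptions), one can enumerate approximation configurations and score them against a weighted accuracy/robustness/energy objective:

# Illustrative sketch: pick an approximation setting balancing accuracy,
# robustness, and energy. evaluate() on real hardware is replaced here by
# made-up numbers purely to show the selection step.
from dataclasses import dataclass

@dataclass
class Candidate:
    bits: int          # hypothetical precision of the approximate multipliers
    accuracy: float    # clean-input accuracy
    robustness: float  # accuracy under adversarial perturbation
    energy: float      # relative energy cost (lower is better)

def score(c, w_acc=0.4, w_rob=0.4, w_energy=0.2):
    return w_acc * c.accuracy + w_rob * c.robustness - w_energy * c.energy

candidates = [
    Candidate(bits=8, accuracy=0.95, robustness=0.55, energy=1.00),
    Candidate(bits=6, accuracy=0.93, robustness=0.68, energy=0.70),
    Candidate(bits=4, accuracy=0.88, robustness=0.74, energy=0.45),
]

best = max(candidates, key=score)
print(f"chosen operating point: {best.bits}-bit, objective {score(best):.3f}")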

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

Islam, Md Shohidul and Alouani, Ihsen and Khasawneh, Khaled N. "SecureVolt: Enhancing Deep Neural Networks Security via Undervolting." IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2023. https://doi.org/10.1109/TCAD.2023.3296379
Islam, Md Shohidul and Alouani, Ihsen and Khasawneh, Khaled N. "Stochastic-HMDs: Adversarial-Resilient Hardware Malware Detectors via Undervolting." Proceedings of the ACM/IEEE Design Automation Conference, 2023.
Islam, Md Shohidul and Omidi, Behnam and Alouani, Ihsen and Khasawneh, Khaled N. "VPP: Privacy Preserving Machine Learning via Undervolting." IEEE International Symposium on Hardware Oriented Security and Trust (HOST), 2023. https://doi.org/10.1109/HOST55118.2023.10133266
Parsa, Maryam and Khasawneh, Khaled N. and Alouani, Ihsen. "A Brain-inspired Approach for Malware Detection using Sub-semantic Hardware Features." Proceedings of the Great Lakes Symposium on VLSI (GLSVLSI), 2023. https://doi.org/10.1145/3583781.3590293

