Award Abstract # 1937403
RTML: Large: Real-Time Autonomic Decision Making on Sparsity-Aware Accelerated Hardware via Online Machine Learning and Approximation

NSF Org: CCF
Division of Computing and Communication Foundations
Recipient: RUTGERS, THE STATE UNIVERSITY
Initial Amendment Date: September 8, 2019
Latest Amendment Date: May 1, 2023
Award Number: 1937403
Award Instrument: Standard Grant
Program Manager: Sankar Basu
sabasu@nsf.gov
(703) 292-7843
CCF (Division of Computing and Communication Foundations)
CSE (Directorate for Computer and Information Science and Engineering)
Start Date: October 1, 2019
End Date: September 30, 2024 (Estimated)
Total Intended Award Amount: $1,400,000.00
Total Awarded Amount to Date: $1,695,998.00
Funds Obligated to Date: FY 2019 = $1,400,000.00
FY 2021 = $279,998.00
FY 2023 = $16,000.00
History of Investigator:
  • Dario Pompili (Principal Investigator)
    pompili@rutgers.edu
  • Saman Zonouz (Co-Principal Investigator)
  • Bo Yuan (Co-Principal Investigator)
Recipient Sponsored Research Office: Rutgers University New Brunswick
3 RUTGERS PLZ
NEW BRUNSWICK
NJ  US  08901-8559
(848)932-0150
Sponsor Congressional District: 12
Primary Place of Performance: Department of ECE
Frelinghuysen Road
Piscataway
NJ  US  08854-3925
Primary Place of Performance Congressional District: 06
Unique Entity Identifier (UEI): M1LVPE5GLSD9
Parent UEI:
NSF Program(s): Special Projects - CCF, Software & Hardware Foundation
Primary Program Source: 01002324DB NSF RESEARCH & RELATED ACTIVIT
01001920DB NSF RESEARCH & RELATED ACTIVIT
01002122DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 082Z, 2878, 7798, 7925, 7945, 9251
Program Element Code(s): 287800, 779800
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Real-time smart and autonomic decision making involves two major stages: sensing (collecting sensor data and transforming it into actionable knowledge) and planning (making decisions using this knowledge). These two stages occur in both the internal and the external operations of an Intelligent Physical System (IPS). In the case of internal operations, sensing refers to reading data from on-board sensors, and planning refers to smart execution of the firmware running on the IPS. In the case of external operations, sensing refers to collecting data from externally mounted sensors, and planning refers to executing the software that constitutes an application. In the sensing stage, an IPS should be able to cope with different forms of uncertainty, especially data and model uncertainties. The goal of this research project is to enable online autonomic decision making on sparsity-aware accelerated hardware via Real-Time Machine Learning (RTML) and approximation for a group of IPSs, such as drones performing data collection and/or multi-object tracking/classification while operating in a highly dynamic environment that is difficult to model. Remarkably, the techniques adopted in this project generalize well, as they can be applied to a variety of IPS application domains, including responses to natural calamities, man-made disasters, and terrorist attacks. The drone-based distributed multi-object tracking/classification will enable stakeholders such as citizens, government bodies, rescue agencies, and industries to comprehend the extent of damage and to develop more effective mitigation policies. The research will also train students, including minority and underrepresented students, in the field.

There are three specific tasks in this project. In Task 1, a real-time decision-making approach will be proposed via online deep reinforcement learning with inherent distributed training capability; temporal and spatial correlation in streaming video will then be exploited for real-time multi-object tracking/detection. In Task 2, novel hardware architectures will be designed to support sparse Convolutional Neural Networks (CNNs). Because sparsity reduces both the computational and the storage complexity of Deep Neural Network (DNN) models, a sparsity-aware CNN accelerator can achieve significant hardware performance improvements in terms of latency, throughput, and energy efficiency over non-sparsity-aware designs. Finally, in Task 3, hardware-aware software engineering solutions will be studied for accelerated execution. Leveraging compiler optimizations in combination with the underlying hardware features will be investigated to optimize execution performance; then, data-driven modeling techniques will be presented to replace the time-consuming segments of ML software packages with equivalent data-driven models, namely micro-neural networks. Once these three research tasks are validated individually via principled experimentation against their stated goals, they will be integrated into a unified framework, which will be thoroughly studied via multiple trials in complementary field scenarios. The project will also collaborate with a synergistic DARPA program on related hardware development.
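The sparsity argument in Task 2 can be made concrete with a simple operation count. The following Python/NumPy sketch is illustrative only (it is not the project's accelerator design, and the layer shape and pruning ratio are hypothetical): it bounds the reduction in multiply-accumulate (MAC) work obtained by skipping zero weights in a pruned convolutional layer.

# Minimal sketch (assumed example, not the project's accelerator design):
# upper bound on MAC savings from skipping zero weights in a pruned conv layer.
import numpy as np

def conv2d_sparsity_savings(weights: np.ndarray, in_h: int, in_w: int) -> dict:
    """weights: (out_ch, in_ch, k, k) kernel tensor, possibly pruned to zeros."""
    out_ch, in_ch, k, _ = weights.shape
    out_h, out_w = in_h - k + 1, in_w - k + 1             # 'valid' convolution, stride 1
    dense_macs = out_ch * in_ch * k * k * out_h * out_w   # every weight hits every window
    sparse_macs = int(np.count_nonzero(weights)) * out_h * out_w  # zeros contribute nothing
    return {"dense_macs": dense_macs,
            "sparse_macs": sparse_macs,
            "speedup_upper_bound": dense_macs / max(sparse_macs, 1)}

# Hypothetical example: a 64x64x3x3 layer pruned to ~75% sparsity on a 56x56 input.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64, 3, 3))
w[rng.random(w.shape) < 0.75] = 0.0
print(conv2d_sparsity_savings(w, 56, 56))

In practice the achievable speedup is lower than this bound, since a real sparsity-aware dataflow must also pay for indexing, load balancing, and irregular memory access, which is precisely what the Task 2 architectures target.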

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


Anjum, Khizar and Chowdhury, Tahmeed and Mandava, Sreeram and Piccoli, Benedetto and Pompili, Dario, "Leveraging On-Board UAV Motion Estimation for Lightweight Macroscopic Crowd Identification," 2024. https://doi.org/10.1109/PerCom59722.2024.10494446
Anjum, Khizar and Pompili, Dario, "Battery-Less Implantable Continuous EEG Monitoring via Anisotropic Diffusion," IEEE Journal on Selected Areas in Communications, v.42, 2024. https://doi.org/10.1109/JSAC.2024.3399203
Hsieh, Yung-Ting and Anjum, Khizar and Pompili, Dario, "Ultra-Low Power Analog Folded Neural Network for Cardiovascular Health Monitoring," IEEE Journal of Biomedical and Health Informatics, 2024. https://doi.org/10.1109/JBHI.2024.3375762
Huang, Songjun and Sun, Chuanneng and Gong, Jie and Pompili, Dario, "A Bi-Layer Joint Training Reinforcement Learning Framework for Post-Disaster Rescue," 2024. https://doi.org/10.1109/DCOSS-IoT61029.2024.00060
Huang, Songjun and Sun, Chuanneng and Wang, Ruo-Qian and Pompili, Dario, "Multi-Behavior Multi-Agent Reinforcement Learning for Informed Search via Offline Training," 2024. https://doi.org/10.1109/DCOSS-IoT61029.2024.00014
Jiang, Tingcong and Sun, Chuanneng and El_Rouayheb, Salim and Pompili, Dario, "FaceGroup: Continual Face Authentication via Partially Homomorphic Encryption & Group Testing," 2023. https://doi.org/10.1109/MASS58611.2023.00062

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

The scope of this research project is to develop novel engineering solutions to enable real-time autonomic decision making on sparsity-aware accelerated hardware via online Machine Learning (ML) and approximation. Our research focuses on ultra-low power computing and real-time processing across multiple domains including aerial robotics, healthcare monitoring, and IoT systems. The project addresses key challenges in implementing sophisticated real-time and data-driven computing capabilities within strict power and resource limitations. This includes developing battery-less solutions for continuous monitoring, context-aware computational frameworks for drones, and efficient Neural Network (NN) architectures for resource-constrained devices. The designs are intentionally kept fundamental and theoretically solid to ensure their applicability across various cyber-physical platforms, such as autonomous driving and critical infrastructure operations including industrial control and smart grid systems.

The primary objectives include: (1) Developing real-time decision-making capabilities through online Deep Reinforcement Learning (DRL) for multi-drone coordination, focusing on extending Deep Q-Networks (DQN) and Deep Deterministic Policy Gradient (DDPG) algorithms for continuous domains; (2) Creating data-aware methods that leverage temporal and spatial correlation in streaming video for efficient object tracking and detection; (3) Designing sparsity-aware dataflow architectures for CNN inference, particularly for complex CNN architectures with shortcuts and interconnections; (4) Developing sparsity-aware architectures for real-time CNN training, including efficient DRAM access schemes for backward propagation; (5) Optimizing ML software through performance-aware runtime reconfiguration and hardware-specific optimizations; (6) Implementing approximate computing solutions to accelerate execution through dynamic library loading and neural network approximation models; and (7) Creating reliable multi-agent coordination mechanisms to address challenges including local optima, absence of global environmental information, and inconsistent communication in complex environments. These objectives collectively address the challenge of enabling real-time autonomic decision-making on resource-constrained hardware while maintaining computational efficiency.
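As a concrete point of reference for objective (1), the sketch below shows a one-step temporal-difference (TD) update of the kind DQN builds on, written in PyTorch. It is a minimal illustration only: the network width, observation/action dimensions, and batch contents are hypothetical placeholders, not the project's online DRL formulation or its distributed-training and continuous-control (DDPG) extensions.

# Minimal DQN-style TD update (hypothetical shapes; illustration only).
import torch
import torch.nn as nn

class QNet(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_actions))
    def forward(self, obs):
        return self.net(obs)

def dqn_loss(q, q_target, batch, gamma: float = 0.99):
    """One-step TD loss on a batch of (s, a, r, s', done) transitions."""
    s, a, r, s2, done = batch
    q_sa = q(s).gather(1, a.unsqueeze(1)).squeeze(1)                  # Q(s, a)
    with torch.no_grad():                                             # frozen target network
        target = r + gamma * (1 - done) * q_target(s2).max(dim=1).values
    return nn.functional.mse_loss(q_sa, target)

# Toy usage: 8-dimensional observations, 4 discrete actions, batch of 32 transitions.
q, q_tgt = QNet(8, 4), QNet(8, 4)
q_tgt.load_state_dict(q.state_dict())
batch = (torch.randn(32, 8), torch.randint(0, 4, (32,)),
         torch.randn(32), torch.randn(32, 8), torch.zeros(32))
dqn_loss(q, q_tgt, batch).backward()

DDPG replaces the discrete max over actions with a learned actor network, which is what enables the continuous-domain extensions named in objective (1).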

Our research has yielded several breakthrough results with significant practical implications: (1) Successfully demonstrated battery-less operation for implantable EEG monitoring devices; (2) Achieved significant improvements in drone resource utilization and real-time performance, validated through hardware-in-the-loop emulations on NVIDIA Jetson TX2 GPU using the Microsoft AirSim simulator; (3) Developed an analog Folded Neural Network achieving optimal performance for cardiovascular monitoring with 6 layers and a hidden size of 30; (4) Created a crowd monitoring system processing patterns in just 2 milliseconds, representing a 45x speed improvement; (5) Achieved 55% reduction in computation for face authentication while maintaining security, with encryption time savings of 20 to 55 times; (6) Demonstrated superior performance in multi-agent coordination for complex Search And Rescue (SAR) operations. These advances have broad applications in healthcare, surveillance, disaster response, and IoT systems, making sophisticated AI capabilities more accessible and energy efficient. Additionally, the research has contributed to underwater joint source-channel coding, helping democratize acoustic underwater communications.
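For a sense of scale, the reported cardiovascular-monitoring result (6 layers, hidden size 30) corresponds to a network small enough to count parameters directly. The sketch below is a conventional digital PyTorch stand-in used only to illustrate that size; the actual system is an analog folded implementation, and the input and output dimensions shown here are assumptions.

# Digital stand-in for a 6-layer, hidden-size-30 network (dimensions assumed).
import torch.nn as nn

def make_mlp(in_dim: int = 16, hidden: int = 30, n_layers: int = 6, out_dim: int = 2) -> nn.Sequential:
    layers, width = [], in_dim
    for _ in range(n_layers - 1):                  # five hidden fully connected layers
        layers += [nn.Linear(width, hidden), nn.ReLU()]
        width = hidden
    layers.append(nn.Linear(width, out_dim))       # output layer (e.g., normal vs. abnormal)
    return nn.Sequential(*layers)

model = make_mlp()
print(sum(p.numel() for p in model.parameters()), "parameters")  # ~4.3k with these dimensions

Networks of this size, on the order of a few thousand parameters, are what make folded analog realizations and ultra-low-power or battery-less operation plausible.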

The project has made significant contributions to education and community development through: (1) Training graduate students in deep learning, hardware architecture design, and NN optimization; (2) Mentoring undergraduate students in robotics, underwater acoustics, AI, and ML through programs like the Aresty Program and departmental summer internships; (3) Organizing educational events such as the HackRU Capture The Flag Hackathon focusing on cyber-physical drone security; (4) Creating new courses, such as Co-PI Yuan's course on advanced digital systems for ML and signal processing, and incorporating research findings into existing curricula; (5) Supporting minority student outreach programs and hosting diverse student internships; (6) Developing STEM literacy and workforce skills related to smart and autonomous drones for disaster response. Additionally, our research has resulted in several provisional patents filed through Rutgers University. The technology developed also shows promise in precision agriculture applications for early pest detection and disease intervention.

Last Modified: 02/10/2025
Modified by: Dario Pompili

Please report errors in award information by writing to: awardsearch@nsf.gov.
