Award Abstract # 2048168
CAREER: Interplay between Control Theory and Machine Learning

NSF Org: ECCS
Division of Electrical, Communications and Cyber Systems
Recipient: UNIVERSITY OF ILLINOIS
Initial Amendment Date: February 24, 2021
Latest Amendment Date: April 26, 2022
Award Number: 2048168
Award Instrument: Continuing Grant
Program Manager: Eyad Abed
eabed@nsf.gov
 (703)292-2303
ECCS
 Division of Electrical, Communications and Cyber Systems
ENG
 Directorate for Engineering
Start Date: March 1, 2021
End Date: February 28, 2026 (Estimated)
Total Intended Award Amount: $500,000.00
Total Awarded Amount to Date: $500,000.00
Funds Obligated to Date: FY 2021 = $394,076.00
FY 2022 = $105,924.00
History of Investigator:
  • Bin Hu (Principal Investigator)
    binhu7@illinois.edu
Recipient Sponsored Research Office: University of Illinois at Urbana-Champaign
506 S WRIGHT ST
URBANA
IL  US  61801-3620
(217)333-2187
Sponsor Congressional District: 13
Primary Place of Performance: University of Illinois at Urbana-Champaign
506 S. Wright St.
Urbana
IL  US  61801-3620
Primary Place of Performance Congressional District: 13
Unique Entity Identifier (UEI): Y8CWNJRCNN91
Parent UEI: V2PHZ2CSCH63
NSF Program(s): EPCN-Energy-Power-Ctrl-Netwrks
Primary Program Source: 01002122DB NSF RESEARCH & RELATED ACTIVIT
01002223DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 1045
Program Element Code(s): 760700
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.041

ABSTRACT

Control and machine learning are two high-impact research areas. Both are important for managing complex systems such as self-driving vehicles, humanoid robots, smart buildings, and automated healthcare. This CAREER proposal aims to build fundamental connections between control theory and machine learning. On one hand, control theory provides mathematically rigorous tools for addressing the robustness requirements of modern safety-critical systems such as commercial aircraft and nuclear plants. On the other hand, machine learning techniques have achieved state-of-the-art performance on many artificial intelligence tasks in computer vision, natural language processing, and Go. A rapprochement of control theory and machine learning will significantly broaden the class of engineering problems that can be solved efficiently. This proposal aims to reconcile these two areas through a comprehensive interdisciplinary approach that spans and connects the forefronts of robust control theory, nonlinear system theory, jump system theory, supervised learning, reinforcement learning, imitation learning, semidefinite programming, and non-convex optimization. The proposed research will lay the theoretical foundation for the reliable integration of control and machine learning in modern safety-critical intelligent systems. Progress on this research will promote multidisciplinary collaboration and benefit researchers in learning, control, optimization, artificial intelligence, autonomy, and robotics. In addition, the research will be strongly coupled with educational developments that encourage students from different departments to build solid multidisciplinary proficiency. New course materials resulting from the research will introduce concepts and ideas that inspire the next generation of academic and industrial leaders.

This proposal takes an interdisciplinary perspective on machine learning and control. The proposed research is centered around two thrusts. The first thrust focuses on tailoring control theory to unify, streamline, and automate the analysis and design of machine learning algorithms. Specifically, algorithms in supervised/reinforcement/unsupervised learning will be modeled as Markovian jump systems and nonlinear systems, which have been extensively studied in the controls literature. Combining this idea with modern control-theoretic tools such as stochastic dissipation inequalities will pave the way for a unified, principled approach to the design of high-performance algorithmic pipelines in machine learning. The second thrust focuses on borrowing recent results in non-convex learning to push control theory beyond the convex optimization regime. The recently developed non-convex learning theory will be leveraged to derive theoretical guarantees for non-convex optimization problems in control. The proposed research is expected to advance the state of the art in large-scale control algorithms and broaden the class of nonlinear/robust control problems that can be solved with guarantees. The proposed research covers both "control for learning" and "learning for control," deepening the connections between control and learning by showing that the techniques used on each side can be exploited to impact the other.
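As a toy illustration of the first thrust's viewpoint (a sketch added here, not part of the award materials): gradient descent on a strongly convex quadratic can be treated as a discrete-time linear dynamical system, so its worst-case convergence rate reduces to a spectral-radius computation. The quadratic, step size, and rate formula below are standard textbook choices assumed for illustration only.

```python
import numpy as np

# Gradient descent on f(x) = 0.5 * x^T Q x is the linear system
#   x_{k+1} = (I - alpha * Q) x_k,
# so its worst-case linear convergence rate is the spectral radius of
# the "closed-loop" matrix A = I - alpha * Q.

m, L = 1.0, 10.0                  # strong convexity and smoothness constants
Q = np.diag([m, L])               # quadratic with eigenvalues m and L
alpha = 2.0 / (m + L)             # classical step size for gradient descent

A = np.eye(2) - alpha * Q         # system matrix of the algorithm
rate = max(abs(np.linalg.eigvals(A)))

# Control-theoretic prediction: rate = (L - m) / (L + m)
predicted = (L - m) / (L + m)
print(rate, predicted)            # both equal 9/11 ≈ 0.8182

# Simulate the "system": iterates contract geometrically at that rate.
x = np.array([1.0, 1.0])
for _ in range(50):
    x = A @ x                     # one gradient step: x - alpha * Q @ x
print(np.linalg.norm(x))          # shrinks like rate**50
```

This is the simplest instance of the idea; the award's research replaces the quadratic with stochastic, nonlinear, or jump-system models and the spectral radius with Lyapunov/dissipation-inequality certificates.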

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

(Showing: 1 - 10 of 18)
Guo, Xingang and Hu, Bin "Convex Programs and Lyapunov Functions for Reinforcement Learning: A Unified Perspective on the Analysis of Value-Based Methods" 2022 American Control Conference (ACC), 2022. https://doi.org/10.23919/ACC53348.2022.9867291
Guo, Xingang and Hu, Bin "Global Convergence of Direct Policy Search for State-Feedback $\mathcal{H}_\infty$ Robust Control: A Revisit of Nonsmooth Synthesis with Goldstein Subdifferential" Advances in Neural Information Processing Systems, 2022.
Guo, Xingang and Keivan, Darioush and Dullerud, Geir and Seiler, Peter and Hu, Bin "Complexity of Derivative-Free Policy Optimization for Structured H-infinity Control" 2023.
Guo, Xingang and Yu, Fangxu and Zhang, Huan and Qin, Lianhui and Hu, Bin "COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability" 2024.
Havens, A. and Araujo, A. and Garg, S. and Khorrami, F. and Hu, B. "Exploiting Connections between Lipschitz Structures for Certifiably Robust Deep Equilibrium Models" 2023.
Havens, Aaron and Araujo, Alexandre and Zhang, Huan and Hu, Bin "Fine-grained Local Sensitivity Analysis of Standard Dot-Product Self-Attention" 2024.
Havens, Aaron and Hu, Bin "On Imitation Learning of Linear Control Policies: Enforcing Stability and Robustness Constraints via LMI Conditions" 2021 American Control Conference (ACC), 2021. https://doi.org/10.23919/ACC50511.2021.9483019
Havens, Aaron and Keivan, Darioush and Seiler, Peter and Dullerud, Geir and Hu, Bin "Revisiting PGD Attacks for Stability Analysis of High-Dimensional Nonlinear Systems and Perception-Based Control" IEEE Control Systems Letters, v.7, 2023. https://doi.org/10.1109/LCSYS.2022.3188016
Hu, Bin and Zheng, Yang "Connectivity of the Feasible and Sublevel Sets of Dynamic Output Feedback Control With Robustness Constraints" IEEE Control Systems Letters, v.7, 2023. https://doi.org/10.1109/LCSYS.2022.3188008
Jansch-Porto, Joao Paulo and Hu, Bin and Dullerud, Geir E. "Policy Optimization for Markovian Jump Linear Quadratic Control: Gradient Method and Global Convergence" IEEE Transactions on Automatic Control, 2022. https://doi.org/10.1109/TAC.2022.3176439
Keivan, Darioush and Havens, Aaron and Seiler, Peter and Dullerud, Geir and Hu, Bin "Model-Free Synthesis via Adversarial Reinforcement Learning" 2022 American Control Conference (ACC), 2022. https://doi.org/10.23919/ACC53348.2022.9867674
