
NSF Org: | CCF Division of Computing and Communication Foundations |
Recipient: | |
Initial Amendment Date: | August 25, 2020 |
Latest Amendment Date: | March 29, 2023 |
Award Number: | 2007688 |
Award Instrument: | Standard Grant |
Program Manager: | Alfred Hero, ahero@nsf.gov, (703) 292-0000, CCF Division of Computing and Communication Foundations, CSE Directorate for Computer and Information Science and Engineering |
Start Date: | October 1, 2020 |
End Date: | September 30, 2025 (Estimated) |
Total Intended Award Amount: | $507,999.00 |
Total Awarded Amount to Date: | $539,999.00 |
Funds Obligated to Date: | FY 2021 = $16,000.00; FY 2023 = $16,000.00 |
History of Investigator: | |
Recipient Sponsored Research Office: | 660 S MILL AVENUE STE 204, TEMPE, AZ, US 85281-3670, (480) 965-5479 |
Sponsor Congressional District: | |
Primary Place of Performance: | PO Box 876011, Tempe, AZ, US 85287-6011 |
Primary Place of Performance Congressional District: | |
Unique Entity Identifier (UEI): | |
Parent UEI: | |
NSF Program(s): | Special Projects - CCF, Comm & Information Foundations |
Primary Program Source: | 01002021DB NSF RESEARCH & RELATED ACTIVIT; 01002122DB NSF RESEARCH & RELATED ACTIVIT |
Program Reference Code(s): | |
Program Element Code(s): | |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.070 |
ABSTRACT
At the heart of the machine learning (ML) and artificial intelligence (AI) revolution are models that are trained using vast amounts of data. Given the increasing use of such data-driven modeling, there is an urgent need to understand and leverage the tradeoffs between performance characteristics such as accuracy (statistical efficiency), computational speed (computational efficiency), and robustness (e.g., to noise, adversarial tampering, and imbalanced or biased data). This project develops a unified and powerful framework for understanding and trading off these facets by introducing the family of alpha-loss functions: commonly used loss functions such as the 0-1 loss, the log-loss, and the exponential loss appear as instantiations of the alpha-loss framework. Over the past few years, there has been steadily growing recognition among advocates, regulators, and scientists that data-driven inference and decision engines pose significant challenges for ensuring non-discrimination and fair, inclusive representation. The alpha-loss framework, combined with several technological advances, will allow practitioners to incorporate fairness as an explicit knob to be tuned during the development of machine learning models. Broader impacts of this work also include developing ML modules for a week-long summer camp for high school students as well as providing research opportunities for these students.
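The abstract does not spell out the functional form of the alpha-loss, so the snippet below is only a minimal sketch of one commonly cited parameterization, evaluated on the probability a model assigns to the true label. The function name alpha_loss and the handling of the limiting cases are illustrative assumptions, not taken from the award text.

```python
import numpy as np

def alpha_loss(p_true, alpha):
    """Alpha-loss of the probability assigned to the true label (a sketch).

    p_true : probabilities the model assigns to the correct class, in (0, 1].
    alpha  : tuning parameter > 0; alpha = 1 recovers log-loss,
             alpha = 1/2 recovers exponential loss in the margin form,
             and alpha -> infinity approaches the soft 0-1 loss, 1 - p_true.
    """
    p = np.asarray(p_true, dtype=float)
    if np.isinf(alpha):
        return 1.0 - p                      # limiting case: soft 0-1 loss
    if np.isclose(alpha, 1.0):
        return -np.log(p)                   # limiting case: log-loss
    return (alpha / (alpha - 1.0)) * (1.0 - p ** (1.0 - 1.0 / alpha))

# The same predictions penalized under different alpha values.
p = np.array([0.9, 0.6, 0.2])
for a in (0.5, 1.0, 2.0, np.inf):
    print(a, alpha_loss(p, a))
```

Under this parameterization, smaller alpha values penalize low-confidence correct predictions more aggressively, while larger alpha values flatten the loss toward the 0-1 loss, which is one way to read the accuracy-robustness knob described above.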
This project: (i) develops theoretical results on the behavior of the loss landscape as a function of the tuning parameter alpha, thereby illuminating the value and limitations of the industry-standard log-loss, (ii) establishes accuracy-speed tradeoffs and generalization bounds, and (iii) designs practical adaptive algorithms, with guarantees, for tuning the hyperparameter alpha to achieve various operating points along the tradeoff. This project establishes the robustness properties of alpha-loss via the theory of influence functions. By introducing much-needed models for noise and adversarial examples, this work develops a principled method for choosing alpha slightly larger than 1 to design models that are more robust to noise and adversaries. Using both influence functions and constrained learning settings such as fair classification, this project studies the efficacy of tuning alpha below 1 in order to enhance sensitivity to limited samples in highly imbalanced training datasets. Finally, this project also develops alpha-Boost as a tunable boosting algorithm with guaranteed convergence, robustness to noise, and, where needed, online adaptation. Research is enhanced at every stage of this project through rigorous testing of algorithms on both synthetic and publicly available real datasets.
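To illustrate how alpha can act as an explicit tuning knob, the sketch below trains a linear classifier under a margin-based alpha-loss by plain gradient descent and sweeps alpha on held-out data. This is a hypothetical stand-in for the project's adaptive tuning algorithms with guarantees; the margin-based form sigma(z)^(1 - 1/alpha) and the helper names fit_alpha_loss and accuracy are assumptions for illustration only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_alpha_loss(X, y, alpha, lr=0.1, epochs=500):
    """Gradient descent on a margin-based alpha-loss for a linear classifier.

    X : (n, d) features; y : labels in {-1, +1}; alpha : tuning parameter > 0.
    For alpha = 1 this reduces to ordinary logistic regression.
    """
    n, d = X.shape
    w = np.zeros(d)
    k = 1.0 - 1.0 / alpha                     # exponent in the alpha-loss
    for _ in range(epochs):
        z = y * (X @ w)                       # margins
        s = sigmoid(z)
        dloss_dz = -(s ** k) * (1.0 - s)      # derivative of the loss w.r.t. the margin
        grad = (X * (dloss_dz * y)[:, None]).mean(axis=0)
        w -= lr * grad
    return w

def accuracy(w, X, y):
    return np.mean(np.sign(X @ w) == y)

# Toy sweep: pick alpha on held-out data (a simple validation sweep,
# not the project's adaptive tuning schemes).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))
y = np.sign(X[:, 0] + 0.5 * rng.normal(size=400))
y[y == 0] = 1
Xtr, ytr, Xva, yva = X[:300], y[:300], X[300:], y[300:]
for a in (0.7, 1.0, 1.5, 3.0):
    w = fit_alpha_loss(Xtr, ytr, a)
    print(f"alpha={a}: validation accuracy = {accuracy(w, Xva, yva):.3f}")
```

The derivative simplifies because the coefficient alpha/(alpha - 1) cancels against the exponent (alpha - 1)/alpha, which keeps the update rule as cheap as the logistic-regression gradient regardless of the chosen alpha.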
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH