Award Abstract # 2147212
FAI: Breaking the Tradeoff Barrier in Algorithmic Fairness

NSF Org: IIS
Division of Information & Intelligent Systems
Recipient: TRUSTEES OF THE UNIVERSITY OF PENNSYLVANIA, THE
Initial Amendment Date: February 11, 2022
Latest Amendment Date: February 11, 2022
Award Number: 2147212
Award Instrument: Standard Grant
Program Manager: Todd Leen
tleen@nsf.gov
 (703)292-7215
IIS
 Division of Information & Intelligent Systems
CSE
 Directorate for Computer and Information Science and Engineering
Start Date: June 1, 2022
End Date: May 31, 2025 (Estimated)
Total Intended Award Amount: $392,992.00
Total Awarded Amount to Date: $392,992.00
Funds Obligated to Date: FY 2022 = $392,992.00
History of Investigator:
  • Aaron Roth (Principal Investigator)
    aaroth@cis.upenn.edu
  • Michael Kearns (Co-Principal Investigator)
Recipient Sponsored Research Office: University of Pennsylvania
3451 WALNUT ST STE 440A
PHILADELPHIA
PA  US  19104-6205
(215)898-7293
Sponsor Congressional District: 03
Primary Place of Performance: University of Pennsylvania
200 S. 33rd Street, Moore
Philadelphia
PA  US  19104-6314
Primary Place of Performance Congressional District: 03
Unique Entity Identifier (UEI): GM1XX56LEP58
Parent UEI: GM1XX56LEP58
NSF Program(s): Fairness in Artificial Intelligence
Primary Program Source: 01002223DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 075Z
Program Element Code(s): 114Y00
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

In order to be robust and trustworthy, algorithmic systems need to usefully serve diverse populations of users. Standard machine learning methods can easily fail in this regard, e.g. by optimizing for majority populations represented within their training data at the expense of worse performance on minority populations. A large literature on "algorithmic fairness" has arisen to address this widespread problem. However, at a technical level, this literature has viewed various technical notions of "fairness" as constraints, and has therefore viewed "fair learning" through the lens of constrained optimization. Although this has been a productive viewpoint from the perspective of algorithm design, it has made tradeoffs the central object of study in "fair machine learning". In the standard framing, adding new protected populations, or quantitatively strengthening fairness constraints, necessarily leads to decreased accuracy overall and within each group. This has the effect of pitting the interests of different stakeholders against one another, and makes it difficult to build consensus around "fair machine learning" techniques. The overarching goal of this project is to break through this "fairness/accuracy tradeoff" paradigm.

Specifically, we will draw on ideas from learning theory and uncertainty estimation to introduce notions of fairness that can be satisfied in ways that are monotonically error improving. For example, if it is discovered that a deployed model has unacceptably high error on some population, our aim will be to find ways to decrease the error on that population without increasing the error on any other population. We also aim to find methods that do not require identifying ahead of time which groups might be disadvantaged by a particular application of machine learning, since this can be very hard to predict. Instead, we will develop methods to dynamically update models as it is discovered that they are performing poorly on populations of interest. Finally, rather than talking about "fairness" of predictive models in the abstract, we will aim to formulate and implement notions of fairness that have meaning in the context of particular downstream applications, and find methods of training upstream predictive models that will guarantee these kinds of fairness when they are deployed in these downstream use cases. In addition to research papers and software, this project will develop human capital by training PhD students to be leading researchers in trustworthy machine learning. It will also develop educational materials aimed at researchers, students, and the general public.
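The monotone-improvement idea described above can be illustrated with a minimal sketch (not the project's actual algorithm; the function name `patch_model` and its interface are hypothetical). A candidate model is patched on a subgroup only if it strictly lowers error there; since predictions outside the subgroup are untouched, the error of any disjoint population cannot increase:

```python
import numpy as np

def patch_model(f, h, group, X, y):
    """Hypothetical sketch of a monotone model update: replace f's
    predictions with h's on the subgroup indicated by `group`, but
    only if h has strictly lower squared error on that subgroup.
    Predictions outside the subgroup are left unchanged, so the
    error of any disjoint population cannot increase."""
    mask = group(X)                        # boolean indicator of the subgroup
    if not mask.any():
        return f                           # empty group: nothing to patch
    err_f = np.mean((f(X[mask]) - y[mask]) ** 2)
    err_h = np.mean((h(X[mask]) - y[mask]) ** 2)
    if err_h >= err_f:
        return f                           # reject: no improvement on the group
    def patched(X_new):
        m = group(X_new)
        out = f(X_new).astype(float).copy()
        out[m] = h(X_new[m])               # use h only inside the subgroup
        return out
    return patched

# Toy usage: f predicts 0 everywhere; h is better on the group x > 0.
X = np.array([-1.0, -2.0, 1.0, 2.0])
y = np.array([0.0, 0.0, 1.0, 1.0])
f = lambda X: np.zeros_like(X)
h = lambda X: np.ones_like(X)
g = patch_model(f, h, lambda X: X > 0, X, y)
```

Repeating such accept-only-if-improving updates, one discovered group at a time, gives the kind of dynamic model repair the project envisions, without requiring the disadvantaged groups to be enumerated in advance.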

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


(Showing: 1 - 10 of 16)
Acharya, Krishna and Arunachaleswaran, Eshwar Ram and Kannan, Sampath and Roth, Aaron and Ziani, Juba "Oracle Efficient Algorithms for Groupwise Regret" , 2024 Citation Details
Acharya, Krishna and Arunachaleswaran, Eshwar Ram and Kannan, Sampath and Roth, Aaron and Ziani, Juba "Wealth Dynamics Over Generations: Analysis and Interventions" IEEE Conference on Secure and Trustworthy Machine Learning (SaTML) , 2023 https://doi.org/10.1109/SaTML54575.2023.00013 Citation Details
Arunachaleswaran, E and Collina, N and Kannan, S and Roth, A and Ziani, J "Algorithmic Collusion Without Threats" , 2025 Citation Details
Arunachaleswaran, E and Collina, N and Roth, A and Shi, M "An Elementary Predictor Obtaining 2T Distance to Calibration" , 2025 Citation Details
Bastani, Osbert and Gupta, Varun and Jung, Christopher and Noarov, Georgy and Ramalingam, Ramya and Roth, Aaron "Practical Adversarial Multivalid Conformal Prediction" Advances in neural information processing systems , 2022 Citation Details
Bechavod, Yahav and Roth, Aaron "Individually Fair Learning with One Sided Feedback" International Conference on Machine Learning , 2023 Citation Details
Collina, N and Goel, S and Gupta, V and Roth, A "Tractable Agreement Protocols" , 2025 Citation Details
Dick, Travis and Dwork, Cynthia and Kearns, Michael and Liu, Terrance and Roth, Aaron and Vietri, Giuseppe and Wu, Zhiwei Steven "Confidence-ranked reconstruction of census microdata from published statistics" Proceedings of the National Academy of Sciences , v.120 , 2023 https://doi.org/10.1073/pnas.2218605120 Citation Details
Eaton, E and Hussing, M and Kearns, M and Roth, A and Sengupta, S and Sorrell, J "Intersectional Fairness in Reinforcement Learning with Large State and Constraint Spaces" , 2025 Citation Details
Globus-Harris, Ira and Harrison, Declan and Kearns, Michael and Roth, Aaron and Sorrell, Jessica "Multicalibration as Boosting for Regression" International Conference on Machine Learning , 2023 Citation Details
Jung, Christopher and Noarov, Georgy and Ramalingam, Ramya and Roth, Aaron "Batch Multivalid Conformal Prediction" International Conference on Learning Representations (ICLR) , 2023 Citation Details
