
NSF Org: IIS Division of Information & Intelligent Systems
Recipient:
Initial Amendment Date: February 11, 2022
Latest Amendment Date: February 11, 2022
Award Number: 2147212
Award Instrument: Standard Grant
Program Manager: Todd Leen, tleen@nsf.gov, (703) 292-7215, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering
Start Date: June 1, 2022
End Date: May 31, 2025 (Estimated)
Total Intended Award Amount: $392,992.00
Total Awarded Amount to Date: $392,992.00
Funds Obligated to Date:
History of Investigator:
Recipient Sponsored Research Office: 3451 Walnut St Ste 440A, Philadelphia, PA 19104-6205, US; (215) 898-7293
Sponsor Congressional District:
Primary Place of Performance: 200 S. 33rd Street, Moore, Philadelphia, PA 19104-6314, US
Primary Place of Performance Congressional District:
Unique Entity Identifier (UEI):
Parent UEI:
NSF Program(s): Fairness in Artificial Intelligence
Primary Program Source:
Program Reference Code(s):
Program Element Code(s):
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
In order to be robust and trustworthy, algorithmic systems need to usefully serve diverse populations of users. Standard machine learning methods can easily fail in this regard, e.g., by optimizing for the majority populations represented in their training data at the expense of worse performance on minority populations. A large literature on "algorithmic fairness" has arisen to address this widespread problem. At a technical level, however, this literature has treated various notions of "fairness" as constraints, and has therefore viewed "fair learning" through the lens of constrained optimization. Although this has been a productive viewpoint for algorithm design, it has made tradeoffs the central object of study in "fair machine learning": in the standard framing, adding new protected populations, or quantitatively strengthening fairness constraints, necessarily decreases accuracy both overall and within each group. This pits the interests of different stakeholders against one another and makes it difficult to build consensus around "fair machine learning" techniques. The overarching goal of this project is to break through this "fairness/accuracy tradeoff" paradigm.
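For concreteness, a typical instance of the constrained-optimization framing described above looks like the following (an illustrative textbook formulation, not taken from the award itself; the groups A and B and the tolerance epsilon are placeholders):

    \min_{f \in \mathcal{H}} \; \mathrm{err}(f)
    \quad \text{subject to} \quad
    \bigl| \mathrm{err}_A(f) - \mathrm{err}_B(f) \bigr| \le \varepsilon

where \mathrm{err}(f) is the overall error of a model f drawn from a hypothesis class \mathcal{H}, and \mathrm{err}_A, \mathrm{err}_B are its error rates on protected groups A and B. Tightening \varepsilon, or adding constraints for further groups, can only shrink the feasible set, so the best attainable error can only increase; this is the formal sense in which the standard framing makes tradeoffs unavoidable.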
Specifically, we will draw on ideas from learning theory and uncertainty estimation to introduce notions of fairness that can be satisfied in ways that are monotonically error improving. For example, if a deployed model is discovered to have unacceptably high error on some population, our aim will be to decrease the error on that population without increasing the error on any other population. We also aim to find methods that do not require identifying ahead of time which groups might be disadvantaged by a particular application of machine learning, since this can be very hard to predict; instead, we will develop methods to dynamically update models as they are discovered to be performing poorly on populations of interest. Finally, rather than talking about the "fairness" of predictive models in the abstract, we will formulate and implement notions of fairness that have meaning in the context of particular downstream applications, and find methods of training upstream predictive models that guarantee these kinds of fairness when they are deployed in those downstream use cases. In addition to research papers and software, this project will develop human capital by training PhD students to be leading researchers in trustworthy machine learning. It will also develop educational materials aimed at researchers, students, and the general public.
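As a rough illustration of the monotone update just described, the Python sketch below accepts a candidate "patch" model only if it strictly lowers error on the flagged group without raising error on any other tracked group. This is a minimal sketch of the acceptance criterion the abstract describes, not the project's actual algorithm; the function names (error, propose_patch) and the representation of groups as boolean membership functions are our own illustrative assumptions.

    import numpy as np

    def error(model, X, y, mask=None):
        """Mean 0/1 error of `model` on (X, y), optionally restricted to `mask`."""
        if mask is None:
            mask = np.ones(len(y), dtype=bool)
        if not mask.any():
            return 0.0
        return float(np.mean(model(X[mask]) != y[mask]))

    def propose_patch(f, h, g, X, y, tracked_groups):
        """Try to patch model `f` with model `h` on the subpopulation flagged by `g`.

        f, h: callables mapping a feature array to predicted labels.
        g: boolean membership function identifying the flagged group.
        tracked_groups: boolean membership functions for all protected groups.
        Returns the patched model if it strictly improves the flagged group
        without hurting any tracked group; otherwise returns `f` unchanged.
        """
        def patched(X_):
            in_g = g(X_)
            out = np.asarray(f(X_)).copy()
            out[in_g] = h(X_[in_g])
            return out

        flagged = g(X)
        # The patch must strictly reduce error on the flagged population...
        if error(patched, X, y, flagged) >= error(f, X, y, flagged):
            return f
        # ...and must not increase error on any other tracked population.
        for grp in tracked_groups:
            mask = grp(X)
            if error(patched, X, y, mask) > error(f, X, y, mask):
                return f
        return patched

Because the patched model agrees with f outside the flagged group, populations disjoint from it are unaffected automatically; the explicit per-group checks only matter where groups overlap. Applied repeatedly as poorly served populations are discovered, every accepted patch weakly improves each tracked group (on the data used for the checks), which is the monotone, tradeoff-free behavior the abstract describes.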
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH