
NSF Org: | IIS Division of Information & Intelligent Systems |
Recipient: | |
Initial Amendment Date: | April 23, 2024 |
Latest Amendment Date: | April 23, 2024 |
Award Number: | 2336236 |
Award Instrument: | Continuing Grant |
Program Manager: | Todd Leen, tleen@nsf.gov, (703) 292-7215, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering |
Start Date: | May 1, 2024 |
End Date: | April 30, 2029 (Estimated) |
Total Intended Award Amount: | $528,346.00 |
Total Awarded Amount to Date: | $99,746.00 |
Funds Obligated to Date: | |
History of Investigator: | |
Recipient Sponsored Research Office: | 926 DALNEY ST NW, ATLANTA, GA, US 30318-6395, (404) 894-4819 |
Sponsor Congressional District: | |
Primary Place of Performance: | 755 Ferst Dr NW, Atlanta, GA, US 30332-0205 |
Primary Place of Performance Congressional District: | |
Unique Entity Identifier (UEI): | |
Parent UEI: | |
NSF Program(s): | HCC-Human-Centered Computing |
Primary Program Source: | 01002526DB NSF RESEARCH & RELATED ACTIVIT; 01002627DB NSF RESEARCH & RELATED ACTIVIT; 01002728DB NSF RESEARCH & RELATED ACTIVIT; 01002829DB NSF RESEARCH & RELATED ACTIVIT |
Program Reference Code(s): | |
Program Element Code(s): | |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.070 |
ABSTRACT
Algorithmic and machine learning tools are heavily involved in high-stakes, life-altering decision-making. Yet, previous research has shown that these algorithmic tools can induce unfairness and cause disparities across race, gender, and socio-economic status. The traditional approach to this problem is to independently fix each decision-making tool to make it fair. While valuable, this view fails to reason about the long-term impact on the complex socio-technical systems involved. This project will provide novel paradigms and foundations for adopting a long-term, in-context view of fairness within these systems. The research will take the academic study of fairness a step closer to practical issues and concerns related to educational opportunities, leading to significant longer-term societal benefits. The project will support and promote education about responsible AI and machine learning and will broaden participation through workshops and mentoring activities.
The project combines tools from game theory, optimization, and machine learning to contribute new advancements to our scientific understanding of algorithmic fairness. It does so along three axes: First, it models how humans can strategically respond to high-stakes machine learning algorithms and aims to understand the impact of such strategic behavior on fairness. Second, it proposes new modeling paradigms and approaches to fairness interventions for complex decision-making pipelines comprising many interconnected stages. Third, it reasons about feedback loops and their long-term, inter-generational effects on disparities.
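To make the first axis concrete, the sketch below illustrates, in Python, the standard strategic-classification setup the abstract alludes to: an individual facing a published linear threshold rule best-responds by moving to the cheapest feature vector that earns a positive decision, but only when the benefit outweighs the manipulation cost. The linear rule, quadratic cost, and benefit value here are illustrative assumptions for exposition, not details specified by the project.

```python
import numpy as np

def best_response(x, w, b, cost_weight=1.0, benefit=2.0):
    """Strategic best response to a published linear rule sign(w.x + b).

    The agent may move from x to x' at quadratic cost
    cost_weight * ||x' - x||^2 and gains `benefit` if w.x' + b >= 0.
    Under this cost, the cheapest accepting point is the Euclidean
    projection of x onto the halfspace {x' : w.x' + b >= 0}.
    """
    score = w @ x + b
    if score >= 0:
        return x.copy()  # already accepted: no move needed
    # Projection onto the accepting halfspace (lands on its boundary).
    x_proj = x - (score / (w @ w)) * w
    move_cost = cost_weight * np.sum((x_proj - x) ** 2)
    # Manipulate only if the benefit outweighs the cost of moving.
    return x_proj if benefit > move_cost else x.copy()

# Toy illustration: two applicants facing the same rule.
w, b = np.array([1.0, 0.5]), -1.0
for x in (np.array([0.8, 0.1]), np.array([-1.0, -1.0])):
    x_new = best_response(x, w, b)
    print(x, "->", x_new, "accepted:", bool(w @ x_new + b >= 0))
```

In this toy model, fairness questions arise because the cost of moving, and therefore who can afford to respond strategically at all, may differ systematically across groups, which is one way strategic behavior can amplify disparities.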
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
Please report errors in award information by writing to: awardsearch@nsf.gov.