Award Abstract # 2147187
FAI: A Normative Economic Approach to Fairness in AI

NSF Org: IIS (Division of Information & Intelligent Systems)
Recipient: PRESIDENT AND FELLOWS OF HARVARD COLLEGE
Initial Amendment Date: February 23, 2022
Latest Amendment Date: February 23, 2022
Award Number: 2147187
Award Instrument: Standard Grant
Program Manager: Todd Leen
tleen@nsf.gov
(703) 292-7215
IIS: Division of Information & Intelligent Systems
CSE: Directorate for Computer and Information Science and Engineering
Start Date: March 1, 2022
End Date: August 31, 2025 (Estimated)
Total Intended Award Amount: $560,345.00
Total Awarded Amount to Date: $560,345.00
Funds Obligated to Date: FY 2022 = $560,345.00
History of Investigator:
  • Yiling Chen (Principal Investigator)
    yiling@seas.harvard.edu
  • Ariel Procaccia (Co-Principal Investigator)
Recipient Sponsored Research Office: Harvard University
1033 Massachusetts Ave Ste 3
Cambridge, MA 02138-5366, US
(617) 495-5501
Sponsor Congressional District: 05
Primary Place of Performance: Harvard University, MA 02138-0001, US
Primary Place of Performance Congressional District: 07
Unique Entity Identifier (UEI): LN53LCFJFL45
Parent UEI:
NSF Program(s): Fairness in Artificial Intelligence
Primary Program Source: 01002223DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 075Z
Program Element Code(s): 114Y00
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070, 47.075

ABSTRACT

A vast body of work in algorithmic fairness is devoted to preventing artificial intelligence (AI) from exacerbating societal biases. The predominant viewpoints in this literature equate fairness with lack of bias or seek to achieve some form of statistical parity between demographic groups. By contrast, this project pursues alternative approaches rooted in normative economics, the field that evaluates policies and programs by asking "what should be." The work is driven by two observations. First, fairness to individuals and groups can be realized according to people's preferences, represented in the form of utility functions. Second, traditional notions of algorithmic fairness may be at odds with welfare (the overall utility of groups), including the welfare of the very groups the fairness criteria intend to protect. The goal of this project is to establish normative economic approaches as a central tool in the study of fairness in AI. Toward this end, the team pursues two research questions. First, can the perspective of normative economics be reconciled with existing approaches to fairness in AI? Second, how can normative economics be drawn upon to rethink what fairness in AI should be? The project will integrate theoretical and algorithmic advances into real systems used to inform refugee resettlement decisions. These systems will be examined from a fairness viewpoint, with the goal of ultimately ensuring both fairness guarantees and welfare.

The research plan includes two main directions, building on previous work showing that classifiers incorporating parity-based fairness criteria can be Pareto inefficient. That is, the welfare of all groups, including the protected group, would be higher under a classifier that is less fair. In the first direction, the project extends this observation to non-convex problems and then from in-processing to post-processing bias mitigation. The planned research will also study the interaction between multiple policy makers and its impact on social goals such as fairness and welfare. In the second direction, the project develops a new conceptualization in which classifiers are viewed as public resources or goods. This work then draws on ideas from fair division, a long-established branch of normative economics that defines and applies rigorous notions of fairness, and on the specific notion of the core. To put this idea into practice, several challenges must be addressed: conditions for the existence of classifiers in the core, algorithms for their computation, and generalization from a training set.
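
To make the Pareto-inefficiency observation concrete, here is a minimal toy illustration (our own hypothetical model, not code or data from the project; all names such as group_welfare, q_a, and q_b are ours): two groups with different qualified fractions, a classifier that can set a separate acceptance rate per group, and group welfare measured as classification accuracy. Even the demographic-parity classifier that maximizes the worse-off group's welfare leaves both groups strictly worse off than the unconstrained classifier.

    # Hypothetical toy model (illustrative sketch, not the project's method).
    # Two groups with qualified fractions q_a and q_b; the classifier chooses an
    # acceptance rate per group and, within a group, accepts the most qualified
    # members first (a perfect ranking is assumed for simplicity).
    # A group's welfare is the fraction of its members classified correctly.

    def group_welfare(accept_rate: float, qualified_rate: float) -> float:
        tp = min(accept_rate, qualified_rate)        # qualified and accepted
        fp = max(0.0, accept_rate - qualified_rate)  # unqualified but accepted
        tn = 1.0 - qualified_rate - fp               # unqualified and rejected
        return tp + tn                               # accuracy = TP + TN

    q_a, q_b = 0.8, 0.2  # group A is 80% qualified, group B only 20%

    # Unconstrained optimum: accept exactly the qualified members of each group.
    w_a_opt = group_welfare(q_a, q_a)  # = 1.0
    w_b_opt = group_welfare(q_b, q_b)  # = 1.0

    # Demographic parity forces a common acceptance rate r for both groups.
    # Among all parity classifiers, pick the one maximizing the worse-off
    # group's welfare (an egalitarian tie-breaking choice).
    r, w_a_par, w_b_par = max(
        ((k / 100, group_welfare(k / 100, q_a), group_welfare(k / 100, q_b))
         for k in range(101)),
        key=lambda t: min(t[1], t[2]),
    )
    print(f"best parity classifier (r={r:.2f}): welfare A={w_a_par:.2f}, B={w_b_par:.2f}")
    print(f"unconstrained classifier:          welfare A={w_a_opt:.2f}, B={w_b_opt:.2f}")
    # Output: the parity classifier gives both groups welfare 0.70, while the
    # unconstrained one gives both 1.00. The "fair" classifier is Pareto
    # dominated, lowering even the protected group's welfare.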

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

(Showing 1-10 of 26)
Ariel D. Procaccia, Isaac Robinson. "School Redistricting: Wiping Unfairness Off the Map." 2024.
Bailey Flanigan, Ariel D. Procaccia. "Distortion Under Public-Spirited Voting." EC, 2023.
Bailey Flanigan, Jennifer Liang. "Manipulation-Robust Selection of Citizens Assemblies." 2024.
Yiling Chen, Shi Feng, Fang-Yi Yu. "Carrot and Stick: Eliciting Comparison Information and Beyond." 2024.
Daniel Halpern, Gregory Kehne. "Representation with Incomplete Votes." AAAI, 2023.
Daniel Halpern, Joseph Y. Halpern. "In Defense of Liquid Democracy." EC, 2023.
Daniel Halpern, Rachel Li. "Strategyproof Voting under Correlated Beliefs." 2023.
Dominik Peters, Ariel D. Procaccia. "Robust Rent Division." NeurIPS, 2022.
Fabian Baumann, Daniel Halpern. "Optimal Engagement-Diversity Tradeoffs in Social Media." 2024.
Shi Feng, Fang-Yi Yu, Yiling Chen. "Peer Prediction for Learning Agents." 36th Conference on Neural Information Processing Systems (NeurIPS 2022), 2023.
Gerdus Benadè, Ariel D. Procaccia. "You Can Have Your Cake and Redistrict It Too." EC, 2023.
