Award Abstract # 2107577
III: Medium: Collaborative Research: Fair Recommendation Through Social Choice

NSF Org: IIS - Division of Information & Intelligent Systems
Recipient: THE REGENTS OF THE UNIVERSITY OF COLORADO
Initial Amendment Date: August 6, 2021
Latest Amendment Date: June 17, 2022
Award Number: 2107577
Award Instrument: Standard Grant
Program Manager: Sylvia Spengler
  sspengle@nsf.gov
  (703) 292-7347
  IIS - Division of Information & Intelligent Systems
  CSE - Directorate for Computer and Information Science and Engineering
Start Date: October 1, 2021
End Date: March 31, 2026 (Estimated)
Total Intended Award Amount: $938,381.00
Total Awarded Amount to Date: $938,381.00
Funds Obligated to Date: FY 2021 = $938,381.00
History of Investigator:
  • Robin Burke (Principal Investigator)
    robin.burke@colorado.edu
  • Amy Voida (Co-Principal Investigator)
  • Pradeep Ragothaman (Co-Principal Investigator)
  • Melissa Fabros (Former Co-Principal Investigator)
Recipient Sponsored Research Office: University of Colorado at Boulder
  3100 Marine St, Boulder, CO 80309-0001, US
  (303) 492-6221
Sponsor Congressional District: 02
Primary Place of Performance: University of Colorado at Boulder
  1045 18th Street, Boulder, CO 80309-0315, US
Primary Place of Performance Congressional District: 02
Unique Entity Identifier (UEI): SPVKK1RC2MZ3
Parent UEI:
NSF Program(s): Info Integration & Informatics
Primary Program Source: 01002122DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 7364, 7924
Program Element Code(s): 736400
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Recommender systems are machine learning systems that provide personalized access to information, media, and e-commerce catalogs. These systems are widely used and are central to Americans' experience of the Internet. However, concern has grown that these systems can have negative impacts on both individuals and society more generally: they can propagate biases, exclude minoritized sub-groups from recommendation results, and deliver lower-quality recommendations to individuals with non-mainstream viewpoints. These issues, along with other potential harms, have attracted considerable recent research attention. However, the practical success of this work has been limited, both because fairness has generally been conceived in simple, narrow ways (e.g., fairness relative to a single group) and because it has remained largely divorced from real-world organizational practices. In this research, the investigators will overcome both of these limitations. They will conduct a detailed contextual analysis of fairness within a non-profit organization, ensuring that their fairness concepts are grounded in real organizational needs. The ensuing implementation of fair recommendation will reflect the complexities of practice by representing and balancing the viewpoints of different stakeholders. The work will enhance our understanding of algorithmic fairness as a situated and complex concept, and of the development challenges that arise throughout the full life-cycle of fair machine learning.

The multidisciplinary team on this project includes experts in recommender systems, computational social choice, and philanthropic informatics. The team will create new fairness-aware recommendation algorithms that are fundamentally multi-agent in nature and grounded in algorithmic game theory. From this novel vantage point, the project will reformulate recommendation fairness as a combination of social choice allocation and aggregation problems that integrate fairness concerns with the delivery of personalized recommendations, and will derive new recommendation techniques from this formulation. Working with their non-profit partner, the researchers will conduct interviews and focus groups with diverse stakeholders, build models of the different ways that fairness is operationalized within this organizational context, and generalize these techniques so that they apply to other organizations. The project will create a model deployment of this multi-stakeholder fairness solution and use both quantitative and qualitative techniques to evaluate it from the perspectives of both users and internal stakeholders.
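
The allocation-and-aggregation framing lends itself to a small illustration. The Python sketch below shows one way such a pipeline could be wired together, under purely illustrative assumptions: the agent names, exposure targets, and scoring rules are hypothetical and are not the project's actual algorithms. An allocation step decides which fairness "agent" is active based on how far the system is from an exposure target, and an aggregation step combines that agent's preference with the recommender's personalized scores.

# Hypothetical sketch (not the project's method): allocate attention to fairness
# agents, then aggregate their preferences with personalized scores.
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    base_score: float   # personalized relevance from the underlying recommender
    protected: bool     # e.g., an item associated with an under-represented group

def allocate(agents, exposure_so_far, slots=1):
    """Choose which fairness agents are active for this recommendation slate,
    favoring agents farthest below their (illustrative) exposure targets."""
    deficits = {name: max(0.0, target - exposure_so_far.get(name, 0.0))
                for name, target in agents.items()}
    ranked = sorted(deficits, key=deficits.get, reverse=True)
    return [name for name in ranked[:slots] if deficits[name] > 0]

def aggregate(items, active_agents, boost=0.2):
    """Combine personalized scores with the active agents' preferences using a
    simple weighted additive rule that favors protected items."""
    def score(item):
        bonus = boost * len(active_agents) if item.protected else 0.0
        return item.base_score + bonus
    return sorted(items, key=score, reverse=True)

if __name__ == "__main__":
    items = [Item("loan-a", 0.91, False),
             Item("loan-b", 0.88, True),
             Item("loan-c", 0.75, True)]
    agents = {"regional-fairness": 0.30}           # target exposure share (illustrative)
    exposure_so_far = {"regional-fairness": 0.10}  # exposure delivered so far
    active = allocate(agents, exposure_so_far)
    ranked = aggregate(items, active)
    print([item.item_id for item in ranked])  # protected items move up while the agent is active

In the full formulation described above, the allocation step would itself be a social choice mechanism and the aggregation step a voting-style rule over agents' rankings; this sketch collapses both to the simplest possible versions.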

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

Aird, A., Farastu, P., Sun, J., Voida, A., and Mattei, N. "Dynamic Fairness-Aware Recommendation through Multi-Agent Social Choice." 9th International Workshop on Computational Social Choice (COMSOC 2023), 2023.
Burke, R., Ragothaman, P., Mattei, N., Kimmig, B., Voida, A., Sonboli, N., Kathait, A., and Fabros, M. "A Performance-preserving Fairness Intervention for Adaptive Microfinance Recommendation." KDD Workshop on Online and Adaptive Recommender Systems at the 28th SIGKDD Conference on Knowledge Discovery and Data Mining, 2022.
Burke, R., Voida, A., Mattei, N., Sonboli, N., and Eskandanian, F. "Algorithmic Fairness, Institutional Logics, and Social Choice." Social Responsibility of Algorithms 2022, 2022.
Burke, R., Mattei, N., Grozin, V., Voida, A., and Sonboli, N. "Multi-agent Social Choice for Dynamic Fairness-aware Recommendation." Adjunct Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization (UMAP '22 Adjunct), 2022. https://doi.org/10.1145/3511047.3538032
Smith, J. J., Buhayh, A., Kathait, A., Ragothaman, P., Mattei, N., Burke, R., and Voida, A. "The Many Faces of Fairness: Exploring the Institutional Logics of Multistakeholder Microlending Recommendation." FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023. https://doi.org/10.1145/3593013.3594106
Smith, J. J., Satwani, A., Burke, R., and Fiesler, C. "Recommend Me? Designing Fairness Metrics with Providers." 2024. https://doi.org/10.1145/3630106.3659044
