Award Abstract # 1751278
CAREER: User-Based Simulation Methods for Quantifying Sources of Error and Bias in Recommender Systems

NSF Org: IIS (Division of Information & Intelligent Systems)
Recipient: BOISE STATE UNIVERSITY
Initial Amendment Date: April 23, 2018
Latest Amendment Date: June 29, 2022
Award Number: 1751278
Award Instrument: Continuing Grant
Program Manager: Dan Cosley
dcosley@nsf.gov
(703) 292-8832
IIS (Division of Information & Intelligent Systems)
CSE (Directorate for Computer and Information Science and Engineering)
Start Date: August 1, 2018
End Date: February 29, 2024 (Estimated)
Total Intended Award Amount: $482,081.00
Total Awarded Amount to Date: $514,081.00
Funds Obligated to Date: FY 2018 = $97,702.00
FY 2019 = $92,651.00
FY 2020 = $112,784.00
FY 2021 = $111,350.00
FY 2022 = $55,997.00
History of Investigator:
  • Michael Ekstrand (Principal Investigator)
    mde48@drexel.edu
Recipient Sponsored Research Office: Boise State University
1910 UNIVERSITY DR
BOISE
ID  US  83725-0001
(208)426-1574
Sponsor Congressional District: 02
Primary Place of Performance: Boise State University
1910 University Drive
Boise
ID  US  83725-1135
Primary Place of Performance Congressional District: 02
Unique Entity Identifier (UEI): HYWTVM5HNFM3
Parent UEI: HYWTVM5HNFM3
NSF Program(s): HCC-Human-Centered Computing
Primary Program Source: 01001819DB NSF RESEARCH & RELATED ACTIVIT
01001920DB NSF RESEARCH & RELATED ACTIVIT
01002021DB NSF RESEARCH & RELATED ACTIVIT
01002122DB NSF RESEARCH & RELATED ACTIVIT
01002223DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 1045, 7364, 7367, 9150, 9251
Program Element Code(s): 736700
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Systems that recommend products, places, and services are an increasingly common part of everyday life and commerce, making it important to understand how recommendation algorithms affect outcomes for both individual users and larger social groups. To do this, the project team will develop novel methods of simulating users' behavior based on large-scale historical datasets. These methods will be used to better understand the vulnerabilities that underlying biases in training datasets pose to commonly used machine-learning methods for building and testing recommender systems, and to characterize the effectiveness of common evaluation metrics, such as recommendation accuracy and diversity, under different models of how people interact with recommender systems in practice. The team will publicly release its datasets, software, and novel metrics for the benefit of other researchers and developers of recommender systems. The work will also inform the development of computer science course materials on the social impact of data analytics, as well as outreach activities for librarians, who are often in the position of helping information seekers understand how search engines and other recommender systems affect their ability to get what they need.

The work is organized around two main themes. The first is to quantify and mitigate the popularity-bias and misclassified-decoy problems in offline recommender evaluation, which tend to reward recommendations of popular, already-known items. To do this, the team will develop simulation-based evaluation models that encode a variety of assumptions about how users select relevant items to buy and rate, and use them to quantify the statistical biases these assumptions induce in recommendation quality metrics. They will calibrate these simulations against existing datasets covering books, research papers, music, and movies. These models and datasets will then drive the second theme: measuring the impact of feature distributions in training data on recommender algorithm accuracy and diversity, and developing bias-resistant algorithms. The team will use data resampling techniques, along with the simulation models extended to capture system behavior over time, to evaluate how different algorithms mitigate, propagate, or exacerbate underlying distributional biases through their recommendations, and how those biased recommendations in turn affect future user behavior and experience.
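To make the popularity-bias problem concrete, the following minimal Python sketch (illustrative only, not the project's actual simulation models; the Zipf popularity distribution, the population sizes, and all function names are assumptions chosen for the example) simulates users whose held-out test items are sampled either uniformly or in proportion to item popularity, then measures hit rate at 10 for a most-popular recommender versus a random one.

import numpy as np

rng = np.random.default_rng(42)
N_USERS, N_ITEMS, K = 1000, 500, 10

# Assumed long-tailed (Zipf-like) item popularity; items are indexed
# from most popular (0) to least popular.
pop = 1.0 / np.arange(1, N_ITEMS + 1)
pop /= pop.sum()

def sample_test_items(popularity_biased):
    """Sample the 20 held-out 'relevant' items for one simulated user.

    popularity_biased=True encodes the assumption that the chance an item
    is bought/rated tracks its popularity; False means uniform selection.
    """
    p = pop if popularity_biased else None
    return set(rng.choice(N_ITEMS, size=20, replace=False, p=p))

def most_popular_recs(k):
    # Recommend the k globally most popular items (indices 0..k-1).
    return list(range(k))

def random_recs(k):
    # A non-personalized stand-in with no popularity bias at all.
    return list(rng.choice(N_ITEMS, size=k, replace=False))

def hit_rate(recommender, popularity_biased, k=K):
    """Fraction of simulated users with at least one test item in their top-k."""
    hits = sum(
        bool(set(recommender(k)) & sample_test_items(popularity_biased))
        for _ in range(N_USERS)
    )
    return hits / N_USERS

for biased in (True, False):
    label = "popularity-biased" if biased else "uniform"
    print(f"user selection model: {label}")
    print(f"  HR@{K}, most-popular recommender: {hit_rate(most_popular_recs, biased):.3f}")
    print(f"  HR@{K}, random recommender:       {hit_rate(random_recs, biased):.3f}")

Under popularity-biased selection, the most-popular recommender scores far higher even though neither recommender is personalized; under uniform selection, the gap largely disappears. Changing only the user selection model while holding the recommenders fixed thus changes which algorithm the offline metric appears to favor, which is the kind of statistical bias the project's calibrated simulations are designed to quantify on real rating datasets.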

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

(Showing: 1 - 10 of 16)
Diaz, Fernando and Mitra, Bhaskar and Ekstrand, Michael D. and Biega, Asia J. and Carterette, Ben "Evaluating Stochastic Rankings with Expected Exposure" Proceedings of the 29th ACM International Conference on Information and Knowledge Management, 2020. https://doi.org/10.1145/3340531.3411962
Ekstrand, Michael D. "LensKit for Python: Next-Generation Software for Recommender Systems Experiments" Proceedings of the 29th ACM International Conference on Information and Knowledge Management, 2020. https://doi.org/10.1145/3340531.3412778
Ekstrand, Michael D. and Carterette, Ben and Diaz, Fernando "Distributionally-Informed Recommender System Evaluation" ACM Transactions on Recommender Systems, 2023. https://doi.org/10.1145/3613455
Ekstrand, Michael D. and Kluver, Daniel "Exploring author gender in book rating and recommendation" User Modeling and User-Adapted Interaction, 2021. https://doi.org/10.1007/s11257-020-09284-2
Ihemelandu, Ngozi and Ekstrand, Michael D. "Candidate Set Sampling for Evaluating Top-N Recommendation" , 2023 https://doi.org/10.1109/WI-IAT59888.2023.00018 Citation Details
Ihemelandu, Ngozi and Ekstrand, Michael D. "Inference at Scale: Significance Testing for Large Search and Recommendation Experiments" Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '23) , 2023 Citation Details
Kırnap, Ömer and Diaz, Fernando and Biega, Asia and Ekstrand, Michael and Carterette, Ben and Yilmaz, Emine "Estimation of Fair Ranking Metrics with Incomplete Judgments" Proceedings of the Web Conference 2021, 2021. https://doi.org/10.1145/3442381.3450080
Milton, Ashlee and Green, Michael and Keener, Adam and Ames, Joshua and Ekstrand, Michael D. and Pera, Maria Soledad "StoryTime: eliciting preferences from children for book recommendations" Proceedings of the 13th ACM Conference on Recommender Systems, 2019. https://doi.org/10.1145/3298689.3347048
Pinney, Christine and Raj, Amifa and Hanna, Alex and Ekstrand, Michael D. "Much Ado About Gender: Current Practices and Future Recommendations for Appropriate Gender-Aware Information Access" CHIIR '23: Proceedings of the 2023 Conference on Human Information Interaction and Retrieval, 2023. https://doi.org/10.1145/3576840.3578316
Raj, Amifa and Ekstrand, Michael D. "Measuring Fairness in Ranked Results: An Analytical and Empirical Comparison" Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2022. https://doi.org/10.1145/3477495.3532018
Raj, Amifa and Milton, Ashlee and Ekstrand, Michael D. "Pink for Princesses, Blue for Superheroes: The Need to Examine Gender Stereotypes in Kid's Products in Search and Recommendations" KidRec '21: 5th International and Interdisciplinary Perspectives on Children & Recommender and Information Retrieval Systems (KidRec): Search and Recommendation Technology through the Lens of a Teacher, co-located with ACM IDC 2021, 2021. https://doi.org/10.48550/arXiv.2105.09296