
NSF Org: IIS, Division of Information & Intelligent Systems
Initial Amendment Date: September 7, 2021
Latest Amendment Date: July 27, 2024
Award Number: 2142675
Award Instrument: Standard Grant
Program Manager: Sorin Draghici, sdraghic@nsf.gov, (703) 292-2232, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering
Start Date: September 15, 2021
End Date: August 31, 2025 (Estimated)
Total Intended Award Amount: $79,999.00
Total Awarded Amount to Date: $79,999.00
Recipient Sponsored Research Office: 1500 Horning Rd, Kent, OH 44242-0001, US, (330) 672-2070
Primary Place of Performance: OH 44242-0001, US
NSF Program(s): Info Integration & Informatics
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
This project develops a new theoretical foundation for evaluating the performance of recommendation systems (RS), a crucial component guiding online users and shoppers as they navigate a sea of products and websites. Despite the Covid-19 pandemic, online retail sales in the US totaled nearly $1 trillion in 2020. Since online purchasing is forecast to keep growing, well-designed RS will improve shopping and browsing, help small online businesses survive, and contribute to the nation's economy. Recent studies have noted the sizeable improvements obtained from deep learning-based recommendations. However, several studies suggest that these improvements may be spurious, stemming from poorly designed experiments with ill-chosen baselines, cherry-picked datasets, inaccurate metrics of RS performance, and ineffective evaluation protocols that produce performance discrepancies between evaluation and production environments. Recognizing that the baseline and dataset problems can be addressed with standard benchmarks, this project focuses on designing reliable new computational tools, metrics, and evaluation protocols for analyzing recommendation systems. The tools will include new ways to score an RS based on accurate statistical models of user behavior and a suite of new algorithms that use fewer samples and less computation while producing more accurate estimates of performance.
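For reference, the sketch below shows how two of the standard top-K scores mentioned in this project, Recall@K and nDCG@K with binary relevance, are commonly computed for a single user's ranked list. The function names and example data are illustrative only and are not taken from the award.

```python
# Illustrative sketch: standard binary-relevance top-K metrics for one user.
import math

def recall_at_k(ranked_items, relevant_items, k):
    """Fraction of the user's relevant items that appear in the top-K list."""
    if not relevant_items:
        return 0.0
    hits = sum(1 for item in ranked_items[:k] if item in relevant_items)
    return hits / len(relevant_items)

def ndcg_at_k(ranked_items, relevant_items, k):
    """Normalized discounted cumulative gain with binary relevance."""
    dcg = sum(
        1.0 / math.log2(rank + 2)          # rank is 0-based, so the discount is log2(rank + 2)
        for rank, item in enumerate(ranked_items[:k])
        if item in relevant_items
    )
    ideal_hits = min(len(relevant_items), k)
    idcg = sum(1.0 / math.log2(r + 2) for r in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0

# Example: the recommender ranks items 0..9 for a user whose relevant items are {3, 7}.
ranking = [3, 5, 1, 7, 2, 0, 9, 4, 8, 6]
print(recall_at_k(ranking, {3, 7}, k=5))   # 1.0 (both relevant items appear in the top 5)
print(ndcg_at_k(ranking, {3, 7}, k=5))     # ~0.88
```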
From a technical standpoint, this project will develop theoretical tools to analyze evaluation metrics and protocols for RS based on statistical learning theory and stochastic processes. The project focuses on three tasks. First, designing efficient metric estimation procedures that resolve the mismatch between sampling-based and top-K evaluation metrics (e.g., normalized discounted cumulative gain (nDCG) and Recall) by unifying two recently proposed ad hoc approaches for recovering the top-K metrics from sampling and searching for an overall best estimator. Second, developing methods to quantify the sensitivity and robustness of the top-K metrics, and designing new item-sampling procedures that improve the robustness of existing metrics. Third, analyzing the performance gap between offline evaluation and production environments (the online setting), and proposing new offline evaluation metrics that better mimic online performance.
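To make the sampling-versus-top-K mismatch concrete, the sketch below follows the widely used item-sampling protocol: each held-out positive item is ranked against a small random set of negatives rather than the full catalog, and Hit@K is computed on that sampled candidate set. With a small sample this estimate is known to diverge from the metric computed over all items, which is the bias the project's estimators target. The scoring function, parameter values, and names here are illustrative assumptions, not the project's estimators.

```python
# Minimal sketch of item-sampling evaluation: rank one held-out positive
# against a random sample of negatives instead of the full catalog.
import random

def sampled_hit_at_k(score_fn, user, positive_item, all_items, n_samples=100, k=10):
    """Hit@K computed on the positive item plus n_samples random negatives.

    score_fn(user, item) can be any recommender scoring function; the small
    candidate set makes this cheap but biased relative to the full top-K metric.
    """
    negatives = random.sample([i for i in all_items if i != positive_item], n_samples)
    candidates = negatives + [positive_item]
    ranked = sorted(candidates, key=lambda item: score_fn(user, item), reverse=True)
    return 1.0 if positive_item in ranked[:k] else 0.0

# Toy usage with a placeholder scoring function over a 10,000-item catalog.
catalog = range(10_000)
score = lambda u, i: 1.0 / (1 + abs(hash((u, i))) % 1000)   # stand-in for a trained model
print(sampled_hit_at_k(score, user=42, positive_item=7, all_items=catalog))
```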
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.