
NSF Org: | IIS Division of Information & Intelligent Systems |
Recipient: | |
Initial Amendment Date: | August 24, 2021 |
Latest Amendment Date: | August 24, 2021 |
Award Number: | 2134857 |
Award Instrument: | Standard Grant |
Program Manager: | Andy Duan, yduan@nsf.gov, (703) 292-4286, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering |
Start Date: | January 1, 2022 |
End Date: | December 31, 2025 (Estimated) |
Total Intended Award Amount: | $308,919.00 |
Total Awarded Amount to Date: | $308,919.00 |
Funds Obligated to Date: | |
History of Investigator: | |
Recipient Sponsored Research Office: | 6823 SAINT CHARLES AVE, NEW ORLEANS, LA, US 70118-5665, (504) 865-4000 |
Sponsor Congressional District: | |
Primary Place of Performance: | 6823 St. Charles Avenue, New Orleans, LA, US 70118-5698 |
Primary Place of Performance Congressional District: | |
Unique Entity Identifier (UEI): | |
Parent UEI: | |
NSF Program(s): | Robust Intelligence |
Primary Program Source: | |
Program Reference Code(s): | |
Program Element Code(s): | |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.070 |
ABSTRACT
The process of peer review, evaluation, and selection is a fundamental aspect of modern science. Funding bodies and academic publications around the world employ experts to review and select the best science for funding and publication. Evaluating and selecting the best from among a group of peers is a much more general problem: a professional society may want to give awards to a subset of its members based on the opinions of all members, an instructor for a Massive Open Online Course (MOOC) may want to crowdsource grading, or a marketing company may select ideas from group brainstorming sessions based on peer evaluation. In all of these settings, we wish to select a small set of winners that are judged to be the best by the community itself -- a community that includes those who are competing and who may have conflicts of interest. This problem, known as the peer selection problem, is the focus of this research. Within a peer selection setting there may be competing priorities and inherent biases among the reviewers, and it is necessary to develop methods and algorithms that align the individual incentives of reviewers with the overall goal of selecting the best set. The intellectual merit of this project lies in expanding our understanding of, and developing novel algorithms for, the process of peer evaluation and peer selection. Within the fields that use peer review, conflicts of interest and peer selection bias have been cited as impediments to broader participation in science. This project will have broad impact by making the peer review process more robust and equitable, filtering out some reviewers' unconscious biases and conflicts of interest and thus resulting in better infrastructure for research and education.
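To make the setting concrete, the following is a minimal illustrative sketch (in Python, with hypothetical agent names and scores that are not part of the award): each agent scores some of its peers, and a naive rule that selects the k agents with the highest average score is not impartial, because an agent's own reviews can change whether it is selected.

```python
# Minimal sketch of the peer selection setting: agents score their peers,
# and we must pick k winners from among the reviewers themselves.
# Hypothetical example only -- not a mechanism from this award.
from collections import defaultdict

def naive_select(scores, k):
    """Pick the k agents with the highest average peer score.

    scores: dict mapping (reviewer, candidate) -> numeric score.
    This rule is not impartial: an agent can under-score a close
    competitor and thereby improve its own chance of winning.
    """
    totals, counts = defaultdict(float), defaultdict(int)
    for (reviewer, candidate), s in scores.items():
        if reviewer != candidate:  # ignore self-reviews
            totals[candidate] += s
            counts[candidate] += 1
    averages = {a: totals[a] / counts[a] for a in totals}
    return sorted(averages, key=averages.get, reverse=True)[:k]

# Agent "c" under-scores its rival "a" and wins the single slot;
# an honest score (e.g., 9) for ("c", "a") would make "a" the winner.
scores = {("a", "b"): 7, ("b", "a"): 9, ("c", "a"): 2,
          ("a", "c"): 8, ("b", "c"): 8}
print(naive_select(scores, k=1))  # -> ['c']
```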
The project will achieve its goal of expanding our knowledge and building mechanisms for peer evaluation and selection through four specific aims. The first aim is to develop novel metrics for the evaluation of peer selection mechanisms by defining both normative and quantitative properties that allow the features of the peer evaluation and selection process to be described precisely. The second aim is to develop distributed peer selection mechanisms that can be used without a centralized controller; the project will develop tools to understand how mechanisms behave in this distributed setting as well as create novel mechanisms for the unique challenges this setting poses. The third aim is to deepen our understanding of multi-stage peer evaluation for peer selection: motivated by the rolling review cycles of many academic conferences, journals, and even some NSF programs, there is a need to investigate the properties of peer evaluation and selection mechanisms when reviews (evaluations) may propagate between specific selection settings. The final aim is to incentivize effort in peer selection. There is a fundamental tension between the classic social choice property of impartiality, i.e., that an agent may not affect their own probability of being selected, and providing incentives for reviewers to invest effort in the peer evaluation process. This project will develop a toolkit of mechanisms that allow system designers to rationally choose tradeoffs between the amount of information an agent knows, incentives for effort, and the potential for malicious behavior.
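One standard way to obtain the impartiality property mentioned above is a partition-style rule: agents are split into groups, and each group's winners are chosen using only scores submitted by reviewers outside that group, so no agent's report can affect its own selection. The sketch below is illustrative only, with assumed names and a simple quota split; it is not one of the mechanisms to be developed by this project.

```python
# Illustrative partition-style impartial selection (not the project's mechanisms).
import random
from collections import defaultdict

def partition_select(agents, scores, k, num_groups=2, seed=0):
    """Select k winners impartially.

    agents: list of agent identifiers.
    scores: dict mapping (reviewer, candidate) -> numeric score.
    k:      total number of winners to choose.
    """
    rng = random.Random(seed)
    shuffled = list(agents)
    rng.shuffle(shuffled)
    groups = [shuffled[i::num_groups] for i in range(num_groups)]

    winners = []
    for g, group in enumerate(groups):
        # Rank this group's members using ONLY reviews from other groups,
        # so a member's own scores never influence its own selection.
        totals, counts = defaultdict(float), defaultdict(int)
        for (reviewer, candidate), s in scores.items():
            if candidate in group and reviewer not in group:
                totals[candidate] += s
                counts[candidate] += 1
        avg = {a: totals[a] / counts[a] for a in group if counts[a] > 0}
        quota = k // num_groups + (1 if g < k % num_groups else 0)
        winners.extend(sorted(avg, key=avg.get, reverse=True)[:quota])
    return winners
```

The price of impartiality in this simple form is that candidates are compared only within their own group, which illustrates the kind of tradeoff the project's proposed metrics and mechanisms are meant to evaluate and improve upon.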
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH