
Administratively Terminated Award

NSF Org: CNS Division of Computer and Network Systems
Recipient:
Initial Amendment Date: July 26, 2022
Latest Amendment Date: May 16, 2025
Award Number: 2217770
Award Instrument: Standard Grant
Program Manager: Jason D. Borenstein, jborenst@nsf.gov, (703) 292-4207, CNS Division of Computer and Network Systems, CSE Directorate for Computer and Information Science and Engineering
Start Date: October 1, 2022
End Date: April 18, 2025 (Estimated)
Total Intended Award Amount: $550,000.00
Total Awarded Amount to Date: $672,287.00
Funds Obligated to Date: FY 2023 = $104,087.00; FY 2024 = $18,200.00
History of Investigator:
Recipient Sponsored Research Office: 1 SILBER WAY, BOSTON, MA, US 02215-1703, (617) 353-4365
Sponsor Congressional District:
Primary Place of Performance: 595 Commonwealth Ave, Boston, MA, US 02215-1300
Primary Place of Performance Congressional District:
Unique Entity Identifier (UEI):
Parent UEI:
NSF Program(s): Information Technology Research; Special Projects - CNS; Secure & Trustworthy Cyberspace
Primary Program Source: 01002425DB NSF RESEARCH & RELATED ACTIVITIES; 01AB2324DB R&RA DRSA DEFC AAB; 01002324DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s):
Program Element Code(s):
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070, 47.075
ABSTRACT
Societies function poorly without free speech. They also function poorly when members cannot agree on basic facts. This research seeks to discover technology-aided social structures that minimize the adverse impact of confusion about facts while promoting free speech. To accomplish these goals, the project develops, prototypes, and tests market mechanisms to dissuade sources of information from dissembling, to decentralize detection of false claims, and to change the incentive structure under which producing false claims is cheaper than producing honest news. It also seeks to decentralize governance so that no single party, neither a government nor a private firm, has content moderation authority. Finally, it provides a principled basis for updating Internet and media law concerning platform liability exemptions for user-generated content.
The proposed mechanism extends established economic theories of signaling and screening, allowing authors to credibly signal the veracity of their claims while helping recipients determine which claims are honest. This mechanism puts the burden of proof on the author, in contrast to extant mechanisms that put the burden of proof on the recipients of information or on the platform. Testing is accomplished in a laboratory setting using randomized controlled trials, the gold standard for establishing causality. Experimentation tests, for example, whether more honest candidates are more likely to win a tournament and whether more honest firms can sell more products. The making of claims by authors will be decentralized, and, based on market design principles, the detection and adjudication of false claims will also be decentralized. Security, privacy, and anonymity are proposed to be enforced by technological advances in the developed system.
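The abstract does not spell out the mechanism's details. As a purely illustrative sketch, and not the project's actual design, the toy Python model below assumes a hypothetical bond-posting scheme: an author stakes a refundable bond on each claim, decentralized checkers audit claims with some probability, and the bond on a claim adjudicated false is forfeited. All names and parameters here (BOND, AUDIT_PROB, the payoff values) are invented for illustration; the point is only that a sufficiently large bond shifts the incentive structure so that false claims become unprofitable in expectation, which is the screening intuition behind a burden-of-proof-on-the-author mechanism.

```python
"""Toy model of a bond-based signaling mechanism for claims.

Illustrative sketch only, not the mechanism developed in the project:
authors attach a refundable bond to each claim, decentralized checkers
audit a fraction of claims, and bonds on claims adjudicated false are
forfeited.
"""
import random

# Hypothetical parameters of the toy market.
BOND = 5.0           # stake an author posts alongside each claim
PAYOFF_HONEST = 1.0  # value an author earns from an honest claim
PAYOFF_FALSE = 3.0   # value a false claim captures if it goes unchallenged
AUDIT_PROB = 0.6     # chance a decentralized checker adjudicates the claim


def expected_profit(is_honest: bool) -> float:
    """Expected author payoff under the toy bond/audit rules."""
    if is_honest:
        # Honest claims survive audits, so the bond is always returned.
        return PAYOFF_HONEST
    # A false claim keeps its payoff only if no audit catches it;
    # otherwise the bond is forfeited.
    return (1 - AUDIT_PROB) * PAYOFF_FALSE - AUDIT_PROB * BOND


def simulate(n_claims: int = 10_000, seed: int = 0) -> dict:
    """Monte Carlo check of the closed-form expectations above."""
    rng = random.Random(seed)
    totals = {"honest": 0.0, "false": 0.0}
    for _ in range(n_claims):
        totals["honest"] += PAYOFF_HONEST  # bond returned, base payoff earned
        caught = rng.random() < AUDIT_PROB
        totals["false"] += -BOND if caught else PAYOFF_FALSE
    return {k: v / n_claims for k, v in totals.items()}


if __name__ == "__main__":
    print("expected honest payoff:", expected_profit(True))
    print("expected false payoff: ", expected_profit(False))
    print("simulated averages:    ", simulate())
```

Under these toy numbers an honest author nets 1.0 per claim, while a dishonest one expects 0.4 × 3.0 − 0.6 × 5.0 = −1.8, so only authors willing to be audited find it rational to post the bond; the bond therefore acts as a credible signal of honesty without requiring recipients or the platform to verify every claim.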
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.