
NSF Org: | ITE Innovation and Technology Ecosystems |
Recipient: | |
Initial Amendment Date: | September 20, 2021 |
Latest Amendment Date: | September 20, 2021 |
Award Number: | 2137724 |
Award Instrument: | Standard Grant |
Program Manager: | Mike Pozmantier, ITE Innovation and Technology Ecosystems, TIP Directorate for Technology, Innovation, and Partnerships |
Start Date: | October 1, 2021 |
End Date: | August 31, 2023 (Estimated) |
Total Intended Award Amount: | $750,000.00 |
Total Awarded Amount to Date: | $750,000.00 |
Funds Obligated to Date: | |
History of Investigator: | |
Recipient Sponsored Research Office: | 21 N PARK ST STE 6301 MADISON WI US 53715-1218 (608)262-3822 |
Sponsor Congressional District: | |
Primary Place of Performance: | 5164 Vilas Hall, 821 University, Madison WI US 53706-1412 |
Primary Place of Performance Congressional District: | |
Unique Entity Identifier (UEI): | |
Parent UEI: | |
NSF Program(s): | Convergence Accelerator Resrch |
Primary Program Source: | |
Program Reference Code(s): | |
Program Element Code(s): | |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.084 |
ABSTRACT
Democracy and public health in the United States rely on trust in institutions. Skepticism regarding the integrity of U.S. elections and hesitancy related to COVID-19 vaccines are two consequences of a decline in confidence in basic political processes and core medical institutions. Social media serve as a major source of delegitimizing information about elections and vaccines, with networks of users actively sowing doubts about election integrity and vaccine efficacy, fueling the spread of misinformation. This project seeks to support and empower efforts by journalists, developers, and citizens to fact-check such misinformation. These fact-checkers urgently need tools that can 1) enable testing of fact-checking stories on topics like elections and vaccines as they move across social media platforms like Twitter, Reddit, and Facebook, and 2) deliver feedback on how well the corrections worked, in real time and with full performance transparency. Accordingly, this project will develop an interactive system that enables fact-checkers to perform rapid-cycle testing of fact-checking messages and monitor their real-time performance among online communities at risk of misinformation exposure. To be transparent, all of the underlying code, surveys, and data will be shared with the social science and computer science communities, and all evidence-based messages of immediate utility to public health professionals and electoral administrators will be made publicly accessible.
This project is motivated by a desire to understand and help address two democratic and public health crises facing the U.S.: skepticism regarding the integrity of U.S. elections and hesitancy related to COVID-19 vaccines. Both crises are fueled by online misinformation circulating widely on social media, with networks of users actively sowing doubts about election integrity and vaccine efficacy. The project will deliver an innovative, three-step method to identify, test, and correct real-world instances of these forms of online misinformation. First, computational techniques in natural language processing, machine learning, social network analysis and modeling, and computer vision will be used to identify posts and accounts that circulate, or are susceptible to, misinformation. Second, lab-tested corrections to the most prominent misinforming claims will be produced, using recommender systems to optimize message efficacy. Third, the project will disseminate and evaluate the effectiveness of evidence-based corrections using various scalable intervention techniques available through the platforms' sponsored-content systems. More specifically, for the first step of the method, the project will use multimodal signal detection and knowledge graphs to perform knowledge-driven information extraction about electoral skepticism and vaccine hesitancy on social media, integrating user attributes, message features, and online network structural properties to predict likely exposure to future misinformation and to identify susceptible online communities for intervention. The second step will consist of working with professional fact-checking organizations to lab-test two types of intervention messages (pre-exposure inoculation and post-exposure correction) aimed at mitigating electoral skepticism and vaccine hesitancy, optimizing them using recommender system techniques.
For the third step, field experiments will be conducted that deploy the lab-developed interventions, delivered through a combination of ad purchasing, automated bots, and online influencers, and that assess the success of the interventions with respect to optimal decision-making in both health- and democracy-related arenas. Ultimately, this three-step approach can be applied across a range of topics in politics and health.
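As an illustrative sketch of the first step, predicting likely exposure to future misinformation could combine user attributes and network structural properties into a single susceptibility score. All feature names, weights, and thresholds below are hypothetical stand-ins, not the project's actual model; in practice the weights would be learned from labeled exposure data.

```python
import math

# Hypothetical feature weights (illustrative only; a real model would learn
# these from labeled data on past misinformation exposure).
WEIGHTS = {
    "account_age_days": -0.002,     # user attribute: older accounts score lower
    "low_quality_share_rate": 2.5,  # message feature: fraction of past shares flagged
    "misinfo_neighbor_ratio": 1.8,  # network property: fraction of flagged neighbors
}
BIAS = -1.0


def susceptibility_score(features: dict) -> float:
    """Logistic score in [0, 1] predicting exposure to future misinformation."""
    z = BIAS + sum(w * features.get(name, 0.0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))


def flag_communities(users, threshold=0.5):
    """Return ids of users whose predicted susceptibility exceeds the threshold."""
    return [u["id"] for u in users if susceptibility_score(u["features"]) > threshold]


users = [
    {"id": "a", "features": {"account_age_days": 3000,
                             "low_quality_share_rate": 0.05,
                             "misinfo_neighbor_ratio": 0.1}},
    {"id": "b", "features": {"account_age_days": 30,
                             "low_quality_share_rate": 0.6,
                             "misinfo_neighbor_ratio": 0.7}},
]
print(flag_communities(users))  # flags the high-risk user "b"
```

Users above the threshold would then become candidates for the step-two intervention messages.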
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
This project seeks to 1) detect networks that see and share unverified/low-quality information online, 2) identify characteristics of those networks, 3) share transparently verified accurate information into those networks, and 4) evaluate whether the transparent, verified information is associated with any changes in the seeing and sharing of low-quality/unverified information in those networks.
In Phase I of the project, we conducted interviews with end users and related professionals about their needs and desires for such a system; worked to improve, and then demonstrate, the quality of our ability to detect networks seeing and sharing low-quality information over time; conducted pilot studies examining the effectiveness of sharing verified, accurate information with those exposed to low-quality/unverified information online; and began building the beta version of our tool, Chime In, which will deliver on the goals of the project. We also held weekly sessions teaching our team about team-based science, the construction of a minimum viable product, and issues related to bringing a new product to market and sustaining it.
We presented examples of our pilot studies on both the intervention and detection sides of our project at major conferences, and shared our insights with news organizations, technology companies, and government agencies to gauge their potential interest in Chime In as a tool that can share accurate, transparent, and verified information into social media networks. We ended Phase I ready to develop a working prototype of our tool, begin the publishing process for our scholarly proof-of-concept evidence, and expand our team.
Last Modified: 02/09/2024
Modified by: Michael W Wagner
Please report errors in award information by writing to: awardsearch@nsf.gov.