Award Abstract # 1928434
FW-HTF-RM: Collaborative Research: Augmenting Social Media Content Moderation

NSF Org: IIS
Division of Information & Intelligent Systems
Recipient: REGENTS OF THE UNIVERSITY OF MICHIGAN
Initial Amendment Date: September 4, 2019
Latest Amendment Date: July 20, 2023
Award Number: 1928434
Award Instrument: Standard Grant
Program Manager: Dan Cosley
dcosley@nsf.gov
 (703)292-8832
IIS
 Division of Information & Intelligent Systems
CSE
 Directorate for Computer and Information Science and Engineering
Start Date: October 1, 2019
End Date: September 30, 2024 (Estimated)
Total Intended Award Amount: $347,685.00
Total Awarded Amount to Date: $417,039.00
Funds Obligated to Date: FY 2019 = $347,685.00
FY 2023 = $69,354.00
History of Investigator:
  • Libby Hemphill (Principal Investigator)
    libbyh@umich.edu
Recipient Sponsored Research Office: Regents of the University of Michigan - Ann Arbor
1109 GEDDES AVE STE 3300
ANN ARBOR
MI  US  48109-1015
(734)763-6438
Sponsor Congressional District: 06
Primary Place of Performance: University of Michigan Ann Arbor
3003 South State Street
Ann Arbor
MI  US  48109-1274
Primary Place of Performance Congressional District: 06
Unique Entity Identifier (UEI): GNJ7BBP73WE9
Parent UEI:
NSF Program(s): FW-HTF: Future of Work at the Human-Technology Frontier
Primary Program Source: 01002324DB NSF RESEARCH & RELATED ACTIVITIES
01001920DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 063Z
Program Element Code(s): 103Y00
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.041, 47.070

ABSTRACT

Around the world, users of social media platforms generate millions of comments, videos, and photos per day. Within this content is dangerous material such as child pornography, sex trafficking, and terrorist propaganda. Although platforms use algorithmic systems to help detect and remove problematic content, decisions about whether to remove a piece of content, be it as benign as an off-topic comment or as dangerous as a self-harm or abuse video, are often made by humans. Companies are hiring moderators by the thousands, and tens of thousands more work as volunteer moderators. This work involves economic, emotional, and often physical safety risks. With social media content moderation as the focus of work and the content moderators as the workers, this project advances the human-technology partnership by designing new technologies to augment moderator performance. The project will improve moderators' quality of life, augment their capabilities, and help society understand how moderation decisions are made and how to support the workers who help keep the internet open and enjoyable. These advances will enable moderation efforts to keep pace with user-generated content and ensure that problematic content does not overwhelm internet users. The project includes outreach and engagement activities with academic, industry, and policy-maker audiences as well as the public to ensure that the project's findings and tools support the broad range of stakeholders affected by user-generated content and its moderation.

Specifically, the project involves five main research objectives that will be met through qualitative, historical, experimental, and computational research approaches. First, the project will improve understanding of human-in-the-loop decision-making practices and mental models of moderation by conducting interviews and observations with moderators across different content domains. Second, it will assess the socioeconomic impact of technology-augmented moderation through interviews with industry personnel. Third, the project will test interventions to decrease the emotional toll on human moderators and optimize their performance through a series of experiments grounded in theories of stress alleviation. Fourth, the project will design, develop, and test a suite of cognitive assistance tools for live-streaming moderators. These tools will focus on removing easy decisions from moderators' queues and helping them dynamically manage their emotional and cognitive capabilities. Finally, the project will employ a historical perspective to analyze companies' content moderation policies in order to inform legal and platform policies.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


Atreja, Shubham and Im, Jane and Resnick, Paul and Hemphill, Libby. "AppealMod: Inducing Friction to Reduce Moderator Workload of Handling User Appeals." Proceedings of the ACM on Human-Computer Interaction, v.8, 2024. https://doi.org/10.1145/3637296
Li, Lingyao and Fan, Lizhou and Atreja, Shubham and Hemphill, Libby. "HOT ChatGPT: The Promise of ChatGPT in Detecting and Discriminating Hateful, Offensive, and Toxic Comments on Social Media." ACM Transactions on the Web, v.18, 2024. https://doi.org/10.1145/3643829
Schöpke-Gonzalez, Angela M. and Atreja, Shubham and Shin, Han Na and Ahmed, Najmin and Hemphill, Libby. "Why do volunteer content moderators quit? Burnout, conflict, and harmful behaviors." New Media & Society, v.26, 2022. https://doi.org/10.1177/14614448221138529

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

Our goal was to create tools for online content moderators that would reduce their cognitive load and overall workload. We were especially interested in helping moderators who are volunteers. We focused on their workloads, not on the content or communities they were responsible for moderating. We interviewed Reddit moderators and worked with them to develop AppealMod, a bot that helps moderators address bans and ban appeals. Moderators can set up AppealMod on any subreddit they moderate. In our field experiments, we found that the AppealMod process reduces moderator workloads by 70% without changing the decisions made or negatively affecting the community. The code for AppealMod is open-source, and any moderator can sign up to use it at http://appealmod.com. We traveled to conferences that moderators attend to advertise the tool and get more feedback about how to improve it.
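For illustration only, the minimal Python sketch below shows the general "friction" idea behind this kind of tool: before a ban appeal reaches a human moderator, the appellant must complete a short form, and only completed, civil appeals are surfaced to the moderation queue. The questions, word list, and data structures are invented for this sketch; they are not taken from the actual open-source AppealMod code, and the platform-specific bot plumbing is omitted.

    # Illustrative sketch only, not the deployed AppealMod implementation.
    from dataclasses import dataclass, field

    # Hypothetical friction form: the appellant must answer before a human sees the appeal.
    FRICTION_QUESTIONS = [
        "Which rule were you banned for breaking?",
        "Why do you think the ban should be lifted?",
    ]

    # Placeholder incivility word list, purely for demonstration.
    PROFANITY = {"idiot", "stupid"}

    @dataclass
    class Appeal:
        user: str
        answers: dict = field(default_factory=dict)

        def is_complete(self) -> bool:
            # Every friction question must have a non-empty answer.
            return all(
                q in self.answers and self.answers[q].strip()
                for q in FRICTION_QUESTIONS
            )

        def is_civil(self) -> bool:
            text = " ".join(self.answers.values()).lower()
            return not any(word in text for word in PROFANITY)

    def triage(appeals):
        """Return only the appeals worth a human moderator's attention."""
        return [a for a in appeals if a.is_complete() and a.is_civil()]

    if __name__ == "__main__":
        queue = [
            Appeal("user_a", {
                FRICTION_QUESTIONS[0]: "Rule 3, excessive self-promotion",
                FRICTION_QUESTIONS[1]: "I reread the rules and will not repeat it.",
            }),
            Appeal("user_b"),  # ignored the form, so it never reaches a human
        ]
        for appeal in triage(queue):
            print(f"escalate to moderators: {appeal.user}")

In this toy run only user_a is escalated; the point is that the friction step, not the moderators, absorbs incomplete or abusive appeals.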

We also studied Twitch chats to understand how the behavior of moderators and of other users influences the rest of the chat. Our goal was to identify opportunities to build assistants for Twitch moderators. We found that people using Twitch chat emulate moderator behaviors. We also found that users “pay it forward” and give one another gifts when they receive them. This suggests assistants could help moderators model the kind of behavior they want to see in the chat.
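As a rough illustration of the kind of measurement involved, the following Python sketch estimates a “pay it forward” rate from a time-ordered gift log: the share of gift recipients who go on to gift something themselves within a fixed window. The event format, field names, and window length are assumptions made for this sketch, not the project's actual Twitch data pipeline.

    # Hypothetical sketch: estimate how often gift recipients later gift others.
    from collections import defaultdict

    # (timestamp_seconds, event_type, giver, recipient) -- invented log format
    events = [
        (10, "gift", "alice", "bob"),
        (95, "gift", "bob", "carol"),    # bob pays it forward
        (400, "gift", "dana", "erin"),
    ]

    WINDOW = 3600  # assumed: count a return gift within one hour

    def pay_it_forward_rate(log):
        received_at = defaultdict(list)  # recipient -> times they received gifts
        forwarded = set()                # recipients who later gave a gift
        recipients = set()
        for ts, kind, giver, recipient in log:
            if kind != "gift":
                continue
            recipients.add(recipient)
            received_at[recipient].append(ts)
            # Did this giver receive a gift within the window before giving one?
            if any(0 <= ts - t <= WINDOW for t in received_at[giver]):
                forwarded.add(giver)
        return len(forwarded) / len(recipients) if recipients else 0.0

    print(f"pay-it-forward rate: {pay_it_forward_rate(events):.2f}")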

Our project provided PhD students opportunities to learn new research methods, to participate in professional conferences, and to improve their independent research skills.


Last Modified: 01/27/2025
Modified by: Libby Hemphill
