Award Abstract # 2006854
CHS: Small: Investigating and Designing for Behavioral Improvement in Online Community Moderation

NSF Org: IIS
Division of Information & Intelligent Systems
Recipient: THE PENNSYLVANIA STATE UNIVERSITY
Initial Amendment Date: August 12, 2020
Latest Amendment Date: March 4, 2022
Award Number: 2006854
Award Instrument: Standard Grant
Program Manager: Cindy Bethel
cbethel@nsf.gov
 (703)292-4420
IIS
 Division of Information & Intelligent Systems
CSE
 Directorate for Computer and Information Science and Engineering
Start Date: October 1, 2020
End Date: September 30, 2024 (Estimated)
Total Intended Award Amount: $249,420.00
Total Awarded Amount to Date: $265,420.00
Funds Obligated to Date: FY 2020 = $249,420.00
FY 2022 = $16,000.00
History of Investigator:
  • Yubo Kou (Principal Investigator)
Recipient Sponsored Research Office: Pennsylvania State Univ University Park
201 OLD MAIN
UNIVERSITY PARK
PA  US  16802-1503
(814)865-1372
Sponsor Congressional District: 15
Primary Place of Performance: Pennsylvania State Univ University Park
PA  US  16802-1503
Primary Place of Performance Congressional District: 15
Unique Entity Identifier (UEI): NPM2J7MSCF61
Parent UEI:
NSF Program(s): HCC-Human-Centered Computing
Primary Program Source: 01002223DB NSF RESEARCH & RELATED ACTIVIT
01002021DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 7367, 7923, 9251
Program Element Code(s): 736700
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

This study examines the human implications of online moderation systems that deal with disruptive online behaviors, such as offensive language and hate speech, by issuing penalties such as content removal or account suspension. These moderation systems usually fail to give punished users adequate support: they rarely explain why users are punished or suggest how they can improve. Such limitations in fairness, accountability, and transparency pose serious challenges to online moderation and community wellbeing. Punished users may not understand the rationale behind penalties and risk becoming repeat offenders. The problem is even more acute for newcomers, who may be punished for violating community norms they were not yet aware of.

The study site is a high-population online community, where the research will document and describe human-punishment interaction (HPI): how users experience punishment, what actions they take after a penalty, and what support resources they draw on to better understand community behavioral standards and improve their behavior. The research has two goals. First, empirical methods such as interviews and surveys will be used to investigate and theorize the major sociotechnical dimensions of HPI. This will extend the existing moderation literature by articulating the interactions and experiences associated with punishment and punished users. Second, more focused empirical methods such as narrative interviews and focus groups will be used to identify user-initiated ways of understanding community norms and improving behavior, with attention to the existing support resources users have drawn from. Cognitive and social theories of behavior change will be used to understand these existing, user-initiated ways of behavioral improvement. This approach prioritizes a dynamic, evolving view of disruptive behavior, tracing the temporal development of users' online behavior and theorizing the critical moments at which users internalize community norms and improve their behavior.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

Kou, Yubo "Punishment and Its Discontents: An Analysis of Permanent Ban in an Online Game Community" Proceedings of the ACM on Human-Computer Interaction, v.5, 2021 https://doi.org/10.1145/3476075
Kou, Yubo and Gui, Xinning "Harmful Design in the Metaverse and How to Mitigate it: A Case Study of User-Generated Virtual Worlds on Roblox" DIS '23: Proceedings of the 2023 ACM Designing Interactive Systems Conference, 2023 https://doi.org/10.1145/3563657.3595960
Kou, Yubo and Moradzadeh, Sam and Gui, Xinning "Trading as Gambling: Social Investing and Financial Risks on the r/WallStreetBets Subreddit" 2024 https://doi.org/10.1145/3613904.3642768
Kou, Yubo and Gui, Xinning "Flag and Flaggability in Automated Moderation: The Case of Reporting Toxic Behavior in an Online Game Community" Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 2021 https://doi.org/10.1145/3410404.3414243
Ma, Renkai and Gui, Xinning and Kou, Yubo "Esports Governance: An Analysis of Rule Enforcement in League of Legends" Proceedings of the ACM on Human-Computer Interaction, v.6, 2022 https://doi.org/10.1145/3555541
Ma, Renkai and Gui, Xinning and Kou, Yubo "Multi-Platform Content Creation: The Configuration of Creator Ecology through Platform Prioritization, Content Synchronization, and Audience Management" Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 2023 https://doi.org/10.1145/3544548.3581106
Ma, Renkai and Kou, Yubo ""Defaulting to boilerplate answers, they didn't engage in a genuine conversation": Dimensions of Transparency Design in Creator Moderation" Proceedings of the ACM on Human-Computer Interaction, v.7, 2023 https://doi.org/10.1145/3579477
Ma, Renkai and Kou, Yubo ""How advertiser-friendly is my video?": YouTuber's Socioeconomic Interactions with Algorithmic Content Moderation" Proceedings of the ACM on Human-Computer Interaction, v.5, 2021 https://doi.org/10.1145/3479573
Ma, Renkai and Kou, Yubo ""I'm not sure what difference is between their content and mine, other than the person itself": A Study of Fairness Perception of Content Moderation on YouTube" Proceedings of the ACM on Human-Computer Interaction, v.6, 2022 https://doi.org/10.1145/3555150
Ma, Renkai and Li, Yao and Kou, Yubo "Transparency, Fairness, and Coping: How Players Experience Moderation in Multiplayer Online Games" Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 2023 https://doi.org/10.1145/3544548.3581097
Ma, Renkai and You, Yue and Gui, Xinning and Kou, Yubo "How Do Users Experience Moderation?: A Systematic Literature Review" Proceedings of the ACM on Human-Computer Interaction, v.7, 2023 https://doi.org/10.1145/3610069

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

The goal of this project was to examine end users’ experiences with online moderation systems. As online spaces become an integral part of our everyday lives, end users’ experiences with moderation are a critical dimension of how they interact with digital technologies and engage within those spaces. Moderation systems are thus critical to ensuring that social interactions in these online spaces are civil and constructive; yet despite their growing importance, little is understood about how such systems affect users’ experiences and improve their online lives. The project used a combination of online data analysis, interviews, and surveys to describe how users are affected by and perceive punishments issued by online moderation systems in venues such as social media sites and online communities, as well as how punished users adjust their behaviors after being punished.

 

Findings from the project advanced the scientific understanding of the human implications of online moderation systems. First, the project contributed a critical reflection on the limitations of many existing moderation systems that rely on punishment to make users comply with behavioral standards and initiate positive behavior change. The project demonstrated that punishment alone cannot fully reform punished users; such users have their own unique support needs.

 

Second, the project developed a comprehensive empirical account of what kinds of support resources punished users need. For example, punished users can benefit from informational support that helps them to know why their behaviors are considered disruptive. Punished users can also benefit from social and emotional support from peers, because receiving a penalty can be an emotionally challenging event for them to cope with.

 

Third, the project further explored the design limitations of online moderation systems from punished users’ perspectives. Across multiple types of online platforms, ranging from multiplayer online games to social media, the project showed that moderation systems fall short in several key respects. They lack transparency, providing punished users too little information to understand why they were punished. They fall short in fairness, with users perceiving differential treatment across demographic groups. And they are limited in accountability, offering punished users no effective means to question or challenge moderation decisions.

 

The outcomes of the project hold important practical implications. First, by developing a better understanding of online moderation systems, the project supports punished users’ behavior change and smooth social reintegration into online communities, thus improving their experiences and well-being.

 

Second, by informing the improvement of online moderation systems, the project ultimately contributes to the sustainability and well-being of online communities. More effective moderation systems can be developed to help people understand community norms and comply with platform policies, and clearer behavioral standards and platform policies can be written so that end users can easily understand what behaviors and content are acceptable in online venues.

 


Last Modified: 10/21/2024
Modified by: Yubo Kou
