
NSF Org: IIS (Division of Information & Intelligent Systems)
Initial Amendment Date: June 18, 2018
Latest Amendment Date: April 20, 2020
Award Number: 1832811
Award Instrument: Continuing Grant
Program Manager: William Bainbridge (IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering)
Start Date: August 16, 2017
End Date: January 31, 2023 (Estimated)
Total Intended Award Amount: $482,075.00
Total Awarded Amount to Date: $482,075.00
Funds Obligated to Date: FY 2017 = $111,118.00; FY 2018 = $139,482.00; FY 2019 = $95,764.00; FY 2020 = $122,494.00
Recipient Sponsored Research Office: 1109 Geddes Ave Ste 3300, Ann Arbor, MI 48109-1015, US; (734) 763-6438
Primary Place of Performance: MI, US 48109-1274
NSF Program(s): HCC-Human-Centered Computing
Primary Program Source: 01001718DB NSF RESEARCH & RELATED ACTIVIT; 01001819DB NSF RESEARCH & RELATED ACTIVIT; 01001920DB NSF RESEARCH & RELATED ACTIVIT; 01002021DB NSF RESEARCH & RELATED ACTIVIT
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
This research aims to computationally model abusive online behavior to build tools that help counter it, with the goal of making the Internet a more welcoming place. Since its earliest days, flaming, trolling, harassment and abuse have plagued the Internet. This project will lay bare the structure of online abuse over many types of online conversations, a major step forward for the study of computer-mediated communication. This will result from modeling abuse with statistical machine learning algorithms as a function of theoretically inspired, sociolinguistic variables, and will entail new technical and methodological advances. This work will enable a transformative new class of automated and semi-automated applications that depend on computationally generated abuse predictions. The education and outreach plan is deeply tied to the research activities, and focuses on scaling-up the research's broader impacts. A public application programming interface (API) will enable developers and online community managers around the world to integrate into their own sites the defenses against abuse developed by this research.
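To give a rough sense of how a third-party site might consume such an API, the sketch below posts a comment to a placeholder endpoint and reads back an abuse score. The URL, request fields, and "abuse_probability" response field are all hypothetical stand-ins; the project's actual interface is not documented here.

```python
# Hypothetical sketch of a client for an abuse-prediction API of the kind
# described above. Endpoint, payload, and response shape are assumptions
# made for illustration, not the project's published interface.
import json
import urllib.request

API_URL = "https://abuse-api.example.org/v1/score"  # placeholder endpoint

def score_comment(text: str) -> float:
    """POST a comment to the hypothetical API and return its abuse score (0-1)."""
    payload = json.dumps({"text": text}).encode("utf-8")
    request = urllib.request.Request(
        API_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["abuse_probability"]

# A consuming site might hide any inbound message scoring above a threshold.
if score_comment("example inbound message") > 0.8:
    print("hide message pending review")
```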
The work will consist of two major phases. In the first, the research will develop a deep understanding of abusive online behavior via statistical machine learning techniques. Specifically, the work will appropriate theories from social science and linguistics to inform the creation of features for robust statistical machine learning algorithms to predict abuse. These proposed abuse models will enable a brand new, transformative class of mixed-initiative artifacts capable of intervening in social media and online communities. In the second phase, this project will explore this newly enabled class of artifacts by building, deploying and evaluating sociotechnical tools for combatting abuse. Specifically, it will explore two classes of tools that use the abuse predictions: shields and moderator tools. The first, shields, will proactively block inbound abuse from reaching people. The second class of tools, moderator tools, will flag and triage abuse for community moderators.
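To make the first phase concrete, here is a minimal sketch of the kind of model the abstract describes: a handful of theory-inspired sociolinguistic features feeding a standard statistical learner. The particular features, lexicon, and toy data are invented for illustration and are not the project's actual model.

```python
# Illustrative sketch (not the project's actual model): a feature-based
# abuse classifier pairing theory-inspired sociolinguistic features with a
# standard statistical learner. Feature choices here are assumptions.
import re
import numpy as np
from sklearn.linear_model import LogisticRegression

SECOND_PERSON = {"you", "your", "you're", "u"}
INSULT_LEXICON = {"idiot", "stupid", "trash"}  # toy stand-in lexicon

def features(text: str) -> list:
    """Map a comment to theory-inspired sociolinguistic features."""
    tokens = re.findall(r"[a-z']+", text.lower())
    n_tokens = max(len(tokens), 1)
    n_chars = max(len(text), 1)
    return [
        sum(t in SECOND_PERSON for t in tokens) / n_tokens,  # second-person focus
        sum(t in INSULT_LEXICON for t in tokens) / n_tokens,  # insult rate
        sum(c.isupper() for c in text) / n_chars,             # shouting
        text.count("!") / n_chars,                            # exclamation density
    ]

# Toy training data: 1 = abusive, 0 = not abusive.
comments = [
    "you are an idiot",
    "great point, thanks!",
    "YOUR POST IS TRASH!!",
    "interesting read, well argued",
]
labels = [1, 0, 1, 0]

model = LogisticRegression().fit(np.array([features(c) for c in comments]), labels)
# Probability the new comment is abusive; a shield or moderator tool would
# act on this prediction.
print(model.predict_proba(np.array([features("you stupid troll")]))[0, 1])
```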
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
Abusive behavior presents a deep threat to today's Internet. This project aimed to make significant advances on this problem using a novel technical approach, with the long-term goal of making online communities more welcoming places. In the project's first phase, we developed a deep understanding of abusive online behavior via statistical machine learning techniques. In the project's second phase, we used the resulting models to build, deploy, and evaluate anti-abuse tools.
In the first phase, we published a number of key papers studying abusive behavior on the Internet via statistical and technical approaches. These papers had significant intellectual and broader impact: they have become central to the social computing field and have influenced Internet technologies for societal good. For example, this project substantially affected design and policy at Reddit, Twitter, Twitch, and Facebook, among others.
In the second phase, we achieved intellectual and broader impact by building systems: specifically, two new computational systems to prevent and deal with abusive behavior online. The first, Crossmod, is a new sociotechnical moderation system that makes decisions using cross-community learning--an approach that leverages a large corpus of previous moderator decisions via an ensemble of classifiers. Crossmod was deployed and evaluated on Reddit in a community of 10 million people. The second, Sig, is an extensible Chrome framework that computes and visualizes "synthesized social signals." After a formative study, we deployed and evaluated Sig on Twitter, targeting two well-known problems on social media: toxic accounts and misinformation.
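Based only on the description above, a minimal sketch of Crossmod-style cross-community learning might look like the following: one classifier per source community, each trained on that community's past moderator decisions, with an ensemble vote deciding whether to flag a new comment for moderators. The data, models, and agreement threshold are stand-ins, not Crossmod's actual implementation.

```python
# Sketch of cross-community learning in the spirit of Crossmod, as described
# above: an ensemble of per-community classifiers trained on past moderator
# decisions. All data, models, and thresholds are hypothetical stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy per-community corpora of past moderator decisions (1 = removed).
community_data = {
    "community_a": (["you absolute idiot", "nice work, thanks"], [1, 0]),
    "community_b": (["get lost, troll", "great discussion here"], [1, 0]),
}

# Cross-community learning: train one classifier per source community.
ensemble = [
    make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, removed)
    for texts, removed in community_data.values()
]

def flag_for_moderators(comment: str, agreement: float = 0.5) -> bool:
    """Flag a comment if enough community-specific models predict removal."""
    votes = [clf.predict([comment])[0] for clf in ensemble]
    return sum(votes) / len(votes) >= agreement

# A moderator tool would surface flagged comments in a review queue.
print(flag_for_moderators("you absolute troll"))
```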
Last Modified: 07/04/2023
Modified by: Eric Gilbert