
NSF Org: IIS Division of Information & Intelligent Systems
Recipient:
Initial Amendment Date: March 9, 2018
Latest Amendment Date: April 29, 2020
Award Number: 1756028
Award Instrument: Standard Grant
Program Manager: William Bainbridge, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering
Start Date: August 15, 2018
End Date: April 30, 2021 (Estimated)
Total Intended Award Amount: $174,954.00
Total Awarded Amount to Date: $183,654.00
Funds Obligated to Date: FY 2020 = $8,700.00
History of Investigator:
Recipient Sponsored Research Office: 3100 MARINE ST, Boulder, CO, US 80309-0001, (303) 492-6221
Sponsor Congressional District:
Primary Place of Performance: Boulder, CO, US 80303-1058
Primary Place of Performance Congressional District:
Unique Entity Identifier (UEI):
Parent UEI:
NSF Program(s): HCC-Human-Centered Computing
Primary Program Source: 01001819DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s):
Program Element Code(s):
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
This project will study algorithmic interactions and develop strategies for the human-centered design of systems that incorporate algorithms and their underlying data. Software developers of platforms of all kinds are creating features that make use of algorithmically curated content, leveraging data about people's relationships, behavior, and identities. However, algorithms usually make decisions based on system metrics that are readily calculable, such as the number of likes, plays, and clicks. Even more sophisticated algorithms are limited by the social information explicitly given or inferred from provided data. As a result, algorithms can fail to capture the social context and human meaning that are important to the acceptability and success of the interactions these algorithms are meant to support. The research will investigate both algorithmic and human understandings of social data, especially when they diverge. By attending to divergence, the research can examine human expectations of algorithms, how misunderstandings might be reframed, and how subsequent action is informed by those divergences.
Specifically, this project will identify (1) how people navigate sensitive algorithmic encounters; (2) how these encounters impact people; (3) what social concepts algorithms are failing to understand; and (4) what design strategies are needed to address sensitive content in algorithmic curation. To focus this work, the specific context of inquiry will be algorithmic encounters with content related to loss of life, given its prevalence and sensitivity at both communal and individual levels. The broader impacts of the work include: (1) developing guidelines around the curation of and interactions with social data related to loss of life, which can also be applied to other groups and experiences where algorithms should be sensitive; (2) demonstrating how designs that incorporate social data can adopt human-centered approaches to sensitize encounters with algorithmically curated content; (3) contributing to the development of design practices that encompass the design of interactions, systems, algorithms, and data; and (4) engaging students in multiple fields, including Information Science, Computer Science, Media Studies, and Communication through research and curricular activities focused on human-centered approaches to studying and designing social algorithms.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
This research focused on emotionally laden experiences with algorithmically curated content. Across six studies, we investigated the interplay of human and computational understandings of data, with a focus on when these understandings break down. The major goals were (1) to understand how people navigate sensitive algorithmic encounters; (2) to examine how these encounters impact people; and (3) to identify what social concepts algorithms fail to understand.
Our studies fall into two different focus areas:
Sensitive Algorithmic Encounters
Encounters with Ex-Romantic Partners: We conducted a study of upsetting algorithmic encounters on social computing systems, focused on encounters with ex-romantic partners. We found that upsetting encounters were exacerbated by the complex social networks and data that enable inferred connections around otherwise explicit (and often explicitly terminated) relationships, a concept we termed "data peripheries." We show how designing for peripheries can allow technologists to conceptualize differences between explicit and inferred connections, and how to design agency into systems based on each. Next, we conducted a study focused on encounters with digital objects and relationships following a break-up. We found that people took different actions toward their possessions and connections in service of creating a post-break-up identity. We found that existing tools are ill-suited to supporting the competing desires to present authentic past and future online identities, and we produced design suggestions to address this tension.
Encounters through Technology-Mediated Reflection (TMR) Tools: Recommender systems sometimes inadvertently curate content that users may find emotionally intense, such as a picture of an ex-partner or of a now-deceased family member. To explore this issue, we examined Facebook's Memories feature, a Technology-Mediated Reflection (TMR) system that shows users content about their pasts in order to prompt reflection. We interviewed 20 people who had recently seen sensitive curated content through this suite of features. We found that they wanted to see "bittersweet" content, but they preferred to see it when it was expected, when it was viewed in a context they felt was appropriate, and when they were able to make sense of why the recommender system curated the content. We recommend that designers engage in three practices to meet users' needs: (1) draw inspiration from no/low-technology artifacts, (2) use empirical research to identify which contextual features have negative impacts on users, and (3) conduct user studies to determine how users make sense of the perceived affect of recommender systems.
Everyday Evaluations of Social Media Metrics: The Like button is simultaneously a means of social interaction and a tool to evaluate social media content. Based on in-depth interviews with 25 artists who use Instagram, we identified three overlapping orientations to the Like button: affective, relational, and infrastructural. We found that the flexibility of the button creates ambiguity around the meaning of a Like that incentivizes an economic approach to evaluation that crowds out other value schemas, shaping how artists use the platform, make art, and even understand themselves.
Algorithmic Infrastructure and Bias
AI systems have been critiqued for designs that cannot capture the nuance of human identity. However, at the time there was scant empirical work on this question. To address this gap, we conducted a study of how gender is operationalized in commercial facial analysis services. We conducted a two-phase study: (1) a system analysis of ten commercial facial analysis and image labeling services and (2) an evaluation of five services using a custom dataset of social media images of people with diverse genders, with gender labels provided by their creators. We found that services performed consistently worse on transgender individuals and were universally unable to classify non-binary genders. We found that bias is the result of more than just engineering practices and training data: we identified bias in how gender is codified into the data standards that surround (and typically pre-date) classifiers. We demonstrate how the cloud-based services we studied provide black-boxed infrastructures to many third-party developers, infrastructures that can include subtle and invisible forms of bias.
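To illustrate the general shape of this kind of audit (a minimal sketch under assumptions, not the study's actual code), the Python below computes per-group agreement between a service's predicted gender label and the label provided by the image's creator. The CSV manifest layout and the classify_gender() wrapper are hypothetical placeholders for a vendor API call.

```python
# Minimal sketch, not the study's pipeline: per-group accuracy for a
# gender-classification service. The CSV columns ("image_path",
# "self_reported_gender") and classify_gender() are hypothetical.
import csv
from collections import defaultdict

def classify_gender(image_path: str) -> str:
    """Placeholder for a call to a commercial facial-analysis API (vendor SDK or REST)."""
    raise NotImplementedError("Wrap the vendor's API call here.")

def per_group_accuracy(manifest_csv: str) -> dict:
    """Compare each service label against the creator-provided label, grouped by gender."""
    correct = defaultdict(int)
    total = defaultdict(int)
    with open(manifest_csv, newline="") as f:
        for row in csv.DictReader(f):
            group = row["self_reported_gender"]
            predicted = classify_gender(row["image_path"])
            total[group] += 1
            if predicted.strip().lower() == group.strip().lower():
                correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

if __name__ == "__main__":
    for group, accuracy in per_group_accuracy("labeled_images.csv").items():
        print(f"{group}: {accuracy:.1%}")
```

Comparing the resulting per-group rates is what surfaces disparities such as consistently lower accuracy for transgender individuals or the inability to classify non-binary genders at all.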
Next, we investigated race and gender bias in computer vision by examining the databases with which models are built. Race and gender have long sociopolitical histories of classification in technical infrastructures, from the passport to social media. In this study, we focused specifically on how race and gender are defined and annotated in image databases used for facial analysis. We found that databases rarely contain underlying source material documenting how those identities are defined. Further, when databases are annotated with race and gender information, their authors rarely describe the process of annotation. In our publication, we discuss the limitations of these approaches and argue that this lack of critical engagement renders databases opaque and less trustworthy. Our work encourages and provides guidance for database authors to address the histories of classification inherently embedded in race and gender.
Last Modified: 08/29/2021
Modified by: Jed R Brubaker