
NSF Org: | IIS Division of Information & Intelligent Systems |
Recipient: | New York University |
Initial Amendment Date: | January 25, 2021 |
Latest Amendment Date: | September 1, 2021 |
Award Number: | 2040898 |
Award Instrument: | Standard Grant |
Program Manager: | Todd Leen, tleen@nsf.gov, (703) 292-7215, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering |
Start Date: | February 1, 2021 |
End Date: | January 31, 2025 (Estimated) |
Total Intended Award Amount: | $625,000.00 |
Total Awarded Amount to Date: | $625,000.00 |
Funds Obligated to Date: |
History of Investigator: | Daniel B Neill (Principal Investigator) |
Recipient Sponsored Research Office: | 70 WASHINGTON SQ S NEW YORK NY US 10012-1019 (212)998-2121 |
Sponsor Congressional District: |
Primary Place of Performance: | 370 Jay Street Brooklyn NY US 11201-3828 |
Primary Place of Performance Congressional District: |
Unique Entity Identifier (UEI): |
Parent UEI: |
NSF Program(s): | Fairness in Artificial Intelligence |
Primary Program Source: |
Program Reference Code(s): |
Program Element Code(s): |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.070 |
ABSTRACT
The goal of this project is to develop methods and tools that assist public sector organizations with fair and equitable policy interventions. In areas such as housing and criminal justice, critical decisions that impact lives, families, and communities are made by a variety of actors, including city officials, police, and court judges. In these high-stakes contexts, human decision makers' implicit biases can lead to disparities in outcomes across racial, gender, and socioeconomic lines. While artificial intelligence (AI) offers great promise for identifying and potentially correcting these sorts of biases, a rapidly growing literature has shown that automated decision tools can also worsen existing disparities or create new biases. To help bridge this gap between the promise and practice of AI, the interdisciplinary team of investigators will develop an integrated framework and new methodological approaches to support fair and equitable decision-making. This framework is motivated by three main ideas: (1) identifying and mitigating the effects of biases on downstream decisions and their impacts, rather than simply measuring biases in data and in predictive models; (2) enabling the combination of an algorithmic decision support tool and a human decision-maker to make fairer and more equitable decisions than either human or algorithm alone; and (3) developing operational definitions of fairness and quantitative assessments of bias, guided by stakeholder discussions, that are directly relevant and applicable to the housing and criminal justice domains. The ultimate impact of this work is to advance social justice for those who live in cities and who rely on city services or are involved with the justice system, by assessing and mitigating biases in decision-making processes and reducing disparities.
The project team will address both the risks and the benefits of algorithmic decision-making through transformative technical contributions. First, they will develop a new, pipelined conceptualization of fairness consisting of seven distinct stages: data, models, predictions, recommendations, decisions, impacts, and outcomes. This end-to-end fairness pipeline will account for multiple sources of bias, model how biases propagate through the pipeline to result in inequitable outcomes, and assess sensitivity to unmeasured biases. Second, they will build a general methodological framework for identifying and correcting biases at each stage of this pipeline, assessing intersectional and contextual biases across multiple data dimensions, and incorporating new ideas for model assessment and analysis of heterogeneous treatment effects. This generalized bias scan will provide essential information throughout the end-to-end fairness pipeline, informing not only what human and algorithmic biases exist, but also which interventions are likely to mitigate these biases. Third, the project addresses algorithm-in-the-loop decision processes, in which an algorithmic decision support tool provides recommendations to a human decision-maker. The investigators will develop approaches for modeling systematic biases in human decisions, identifying possible explanatory factors for those biases, and optimizing individualized algorithmic "nudges" to guide human decisions toward fairness. Finally, the project team will create new metrics for measuring the presence and extent of bias. The outputs of the project will be designed for integration into the operational decision-making of city agencies responsible for making fair and equitable decisions in the criminal justice and housing domains. The investigators will assess the fairness of existing practices and create open-source tools for assessing and correcting biases for users in each domain. They will develop tools that can be used to (a) reduce incarceration by equitably providing supportive interventions to justice-involved populations; (b) prioritize housing inspections and repairs; (c) assess and improve the fairness of civil and criminal court proceedings; and (d) analyze the disparate health impacts of adverse environmental exposures, including poor-quality housing and aggressive, unfair policing practices. Operational deployments of the developed tools will be regularly and comprehensively evaluated to assess impacts and avoid unintended consequences, maximizing the benefits and minimizing the potential harms of both algorithmic and human decisions.
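To make the idea of auditing intersectional biases across multiple data dimensions concrete, the minimal Python sketch below enumerates subgroups defined by a few categorical attributes and ranks them by the gap between their observed outcome rate and a model's mean predicted probability. It is an illustrative brute-force stand-in, not the project's generalized bias scan; the function name, column names, and minimum-subgroup-size threshold are all hypothetical.

    # Illustrative subgroup bias audit (hypothetical; not the project's bias scan).
    # Ranks intersectional subgroups by the gap between the observed outcome rate
    # and the model's mean predicted probability within each subgroup.
    from itertools import combinations

    import pandas as pd

    def audit_subgroups(df, attrs, y_col="outcome", p_col="predicted_prob", min_size=30):
        """Return subgroups ranked by |observed outcome rate - mean predicted probability|."""
        results = []
        for r in range(1, len(attrs) + 1):
            for subset in combinations(attrs, r):
                for values, grp in df.groupby(list(subset)):
                    if len(grp) < min_size:
                        continue  # skip tiny subgroups where rate estimates are noisy
                    values = values if isinstance(values, tuple) else (values,)
                    gap = grp[y_col].mean() - grp[p_col].mean()
                    results.append({"subgroup": dict(zip(subset, values)),
                                    "n": len(grp),
                                    "observed_minus_predicted": gap})
        return sorted(results, key=lambda d: abs(d["observed_minus_predicted"]), reverse=True)

    # Hypothetical usage: one row per decision, with protected attributes, a binary
    # outcome, and the model's predicted probability of that outcome.
    # top = audit_subgroups(df, attrs=["race", "gender", "age_group"])
    # print(top[:5])

Exhaustive enumeration like this grows combinatorially with the number of attributes, which is why a practical bias scan requires a more efficient search over the space of multidimensional subgroups.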
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
This project created new methods and tools to assist public sector organizations with fair and beneficial policy interventions, applied to areas including housing, criminal justice, and health. We developed a pipelined conception of fairness that allows us to assess and improve fairness in algorithmic, human, and combined "algorithm-in-the-loop" decisions, along with new approaches for identifying, quantifying, and mitigating systematic biases impacting multidimensional subgroups of the population at each stage of this pipeline. Moreover, by modeling the propagation of bias through the pipeline (e.g., from data to predictive models, from algorithmic recommendations to human decisions, and from interventions to outcomes), we obtained theoretical results about when biases will be detectable, enabling us to optimize for increased fairness and mitigate biases throughout the pipeline. Our novel methods for bias auditing and mitigation throughout the end-to-end fairness pipeline were rigorously evaluated and demonstrated improved performance (e.g., increased power to detect subtle, multidimensional biases, and reduced disparity after mitigation) as compared to the current state of the art. Finally, we addressed applications to fair preventive screening for diabetes in hospital Emergency Departments, to understanding reporting biases in resident complaints made to a city's "311" system, and to discovering systematic patterns of bias in prosecution, bail, and sentencing decisions.
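As a purely illustrative companion to the pipeline idea described above, the short Python sketch below simulates a simplified recommendation-then-decision pipeline and reports a basic disparity metric (difference in positive rates between two groups) at each stage, showing how the gap can shift when human overrides are applied unevenly across groups. All data, rates, and names here are invented; this is a toy demonstration, not the project's models or results.

    # Toy demonstration (invented data): track a positive-rate gap between two groups
    # at the "recommendation" and "decision" stages of a simplified pipeline.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)                  # hypothetical protected attribute (0/1)
    score = rng.beta(2, 5, n) + 0.05 * group       # model score, slightly shifted for group 1
    recommend = (score > 0.4).astype(int)          # algorithmic recommendation stage
    # Human decision stage: follows the recommendation, but overrides it more often for group 1.
    override = rng.random(n) < np.where(group == 1, 0.15, 0.05)
    decision = np.where(override, 1 - recommend, recommend)

    df = pd.DataFrame({"group": group, "recommend": recommend, "decision": decision})

    def positive_rate_gap(frame, stage):
        """Difference in positive rates (group 1 minus group 0) at a given pipeline stage."""
        rates = frame.groupby("group")[stage].mean()
        return rates.loc[1] - rates.loc[0]

    for stage in ["recommend", "decision"]:
        print(f"{stage}: positive-rate gap = {positive_rate_gap(df, stage):+.3f}")

Comparing the gap at successive stages in this way is one simple instance of the general point made above: disparities can be introduced or amplified at different points in the pipeline, so they need to be measured stage by stage rather than only in the final outcomes.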
Through this project, we created new graduate-level course materials in Fairness in AI, including a full-semester course on "Fair and Ethical Machine Learning for Social Good" taught at New York University's Courant Institute Department of Computer Science, and lectures on algorithmic fairness incorporated into courses in business, public policy, urban science, and applied statistics. This project has supported the work of 15 graduate students, 8 undergraduates, and 6 high-school students in New York University's Machine Learning for Good Laboratory. The work has been widely disseminated to a variety of methodological and applied audiences, ranging from computer scientists and statisticians to public health practitioners, law enforcement agencies, and city leaders, through direct collaborations, publication of journal and conference papers, and presentation of invited talks. Publications, working papers, and course materials are available on our Machine Learning for Good Laboratory website, https://wp.nyu.edu/ml4good/end-to-end-fairness/.
Last Modified: 05/15/2025
Modified by: Daniel B Neill