
NSF Org: | IIS Division of Information & Intelligent Systems |
Recipient: | CARNEGIE MELLON UNIVERSITY |
Initial Amendment Date: | January 25, 2021 |
Latest Amendment Date: | June 29, 2021 |
Award Number: | 2040929 |
Award Instrument: | Standard Grant |
Program Manager: | Wendy Nilsen, wnilsen@nsf.gov, (703) 292-2568, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering |
Start Date: | April 1, 2021 |
End Date: | March 31, 2024 (Estimated) |
Total Intended Award Amount: | $375,000.00 |
Total Awarded Amount to Date: | $391,000.00 |
Recipient Sponsored Research Office: | 5000 FORBES AVE, PITTSBURGH, PA 15213-3890, US, (412) 268-8746 |
Primary Place of Performance: | Pittsburgh, PA 15213-3815, US |
NSF Program(s): | Fairness in Artificial Intelligence, IIS Special Projects |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.070 |
ABSTRACT
This project advances the potential for Machine Learning (ML) to serve the social good by improving understanding of how to apply ML methods to high-stakes, real-world settings in fair and responsible ways. Government agencies and nonprofits use ML tools to inform consequential decisions. However, a growing number of academics, journalists, and policy-makers have expressed apprehension regarding the prominent (and growing) role that ML technology plays in the allocation of social benefits and burdens across diverse policy areas, including child welfare, health, and criminal justice. Many of these decisions have long-lasting effects on the lives of their subjects, and when applied inappropriately, these tools can harm already vulnerable and historically disadvantaged communities. Such concerns have given rise to a growing body of research aimed at understanding disparities and at developing tools to minimize or mitigate them. To date, these efforts have had limited impact on real-world applications because they focus too narrowly on abstract technical concepts and computational methods at the expense of the decisions and societal outcomes those methods affect. They also commonly fail to situate the work in real-world contexts or to draw input from the communities most affected by ML-assisted decision-making. This project seeks to fill these gaps in current research and practice in close partnership with government agencies and nonprofits.
This project draws upon disciplinary perspectives from computer science, statistics, and public policy. Its first aim explores the mapping between policy goals and ML formulations: it identifies the facts that must be consulted to make coherent determinations about fairness, and anchors those assessments to near- and long-term societal outcomes for the people subject to decisions. This work offers practical ways to engage with partners, policymakers, and affected communities to translate desired fairness goals into computationally tractable measures. Its second aim investigates fairness through the entire ML decision-support pipeline, from policy goals to data to models to interventions, exploring how different approaches to data collection, imputation, model selection, and evaluation affect the fairness of the resulting tools. The project's third aim concerns modeling the long-term societal outcomes of ML-assisted decision-making in policy domains, ultimately to guide a grounded approach to designing fairness-promoting methods. The project's overarching objective is to bridge the divide between active research in fair ML and applications in policy domains. It does so through innovative teaching and training activities, broadening the participation of under-represented groups in research and technology design, enhancing scientific and technological understanding among the public, practitioners, and legislators, and delivering a direct positive impact with partner agencies.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
Bridging AI and Society: Our Mission for Safe and Effective AI
In an era where Artificial Intelligence (AI) increasingly influences critical aspects of society, our project set out to ensure that AI can be applied effectively and responsibly to data-rich societal domains without causing undue harm or amplifying disparities. By integrating expertise from computer science, statistics, and public policy, we developed methods, tools, and guidelines that close the gap between AI's potential and its real-world applicability.
Key Research Areas
1. Translating Societal Goals into AI Solutions
How can we align AI decision-making with the diverse societal and policy-level goals of different communities? For example, what fairness means can vary significantly depending on cultural, legal, and ethical perspectives. Our research found that existing methods for capturing these perspectives were inadequate. In response, we designed better ways to engage with policymakers, affected communities, and other AI stakeholders. One of our key contributions was the AI Failure Cards, a tool designed to help communities understand AI’s risks and share their own experiences and preferred strategies for harm mitigation in AI-driven decision-making.
2. Reducing Harm Throughout the AI Lifecycle
From initial goal-setting to data collection, model design, and deployment, AI systems can introduce unintended harms at every stage. Our research focused on identifying critical intervention points to prevent such issues before they arise. A major outcome of this work was the Situate-AI Guidebook, developed in collaboration with local and state governments. This guidebook provides a structured framework for evaluating AI projects early on—helping decision-makers assess their goals, legal and societal constraints, data limitations, and governance requirements before committing to AI-based solutions.
3. Understanding AI’s Real-World Impact
Even the most well-designed AI tools can fail if human decision-makers don’t trust or properly integrate them into their workflows. Through our research, we identified key factors that influence AI adoption, including user perceptions, potential liability concerns, and the broader interests of stakeholders. We also studied why some algorithmic tools are ultimately abandoned or decommissioned, providing insights into how to improve accountability and long-term success in AI implementation.
Advancing AI with a Multidisciplinary Approach
Creating fair and effective AI systems requires more than just technical expertise—it demands an understanding of public policy, economics, moral philosophy, and human behavior. Our team worked closely with experts across these fields, as well as real-world stakeholders, to ensure AI solutions address genuine societal needs rather than just technical challenges.
Impact and Future Directions
Our findings offer practical insights for a wide range of AI stakeholders, including policymakers, developers, and the communities affected by AI-driven decisions. By fostering transparency, fairness, and accountability, we aim to empower society to harness AI’s benefits while minimizing its risks.
Last Modified: 01/30/2025
Modified by: Hoda Heidari