Award Abstract # 2040929
FAI: Fair AI in Public Policy - Achieving Fair Societal Outcomes in ML Applications to Education, Criminal Justice, and Health and Human Services

NSF Org: IIS (Division of Information & Intelligent Systems)
Recipient: CARNEGIE MELLON UNIVERSITY
Initial Amendment Date: January 25, 2021
Latest Amendment Date: June 29, 2021
Award Number: 2040929
Award Instrument: Standard Grant
Program Manager: Wendy Nilsen (wnilsen@nsf.gov, (703) 292-2568)
IIS (Division of Information & Intelligent Systems)
CSE (Directorate for Computer and Information Science and Engineering)
Start Date: April 1, 2021
End Date: March 31, 2024 (Estimated)
Total Intended Award Amount: $375,000.00
Total Awarded Amount to Date: $391,000.00
Funds Obligated to Date: FY 2021 = $391,000.00
History of Investigator:
  • Hoda Heidari (Principal Investigator)
    hoda.heidari@gmail.com
  • Christopher Rodolfa (Co-Principal Investigator)
  • Rayid Ghani (Co-Principal Investigator)
  • Olexandra Chouldechova (Co-Principal Investigator)
  • Zachary Lipton (Co-Principal Investigator)
Recipient Sponsored Research Office: Carnegie-Mellon University
5000 FORBES AVE
PITTSBURGH
PA  US  15213-3890
(412)268-8746
Sponsor Congressional District: 12
Primary Place of Performance: Carnegie-Mellon University
Pittsburgh
PA  US  15213-3815
Primary Place of Performance Congressional District: 12
Unique Entity Identifier (UEI): U3NKNFLNQ613
Parent UEI: U3NKNFLNQ613
NSF Program(s): Fairness in Artificial Intelligence, IIS Special Projects
Primary Program Source: 01002122DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 9251, 075Z
Program Element Code(s): 114Y00, 748400
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

This project advances the potential for Machine Learning (ML) to serve the social good by improving understanding of how to apply ML methods to high-stakes, real-world settings in fair and responsible ways. Government agencies and nonprofits use ML tools to inform consequential decisions. However, a growing number of academics, journalists, and policy-makers have expressed apprehension about the prominent and growing role that ML technology plays in allocating social benefits and burdens across diverse policy areas, including child welfare, health, and criminal justice. Many of these decisions have long-lasting effects on the lives of their subjects, and when the tools informing them are applied inappropriately, they can harm already vulnerable and historically disadvantaged communities. These concerns have given rise to a growing body of research aimed at understanding disparities and developing tools to minimize or mitigate them. To date, these efforts have had limited impact on real-world applications because they focus too narrowly on abstract technical concepts and computational methods at the expense of the decisions and societal outcomes those methods affect. Such efforts also commonly fail to situate the work in real-world contexts or to draw input from the communities most affected by ML-assisted decision-making. This project seeks to fill these gaps in current research and practice in close partnership with government agencies and nonprofits.

This project draws upon disciplinary perspectives from computer science, statistics, and public policy. Its first aim explores the mapping between policy goals and ML formulations: it examines what facts must be consulted to make coherent determinations about fairness, and anchors those assessments to near- and long-term societal outcomes for the people subject to decisions. This work offers practical ways to engage with partners, policymakers, and affected communities to translate desired fairness goals into computationally tractable measures. Its second aim investigates fairness through the entire ML decision-support pipeline, from policy goals to data to models to interventions, exploring how different approaches to data collection, imputation, model selection, and evaluation affect the fairness of the resulting tools. The project's third aim is to model the long-term societal outcomes of ML-assisted decision-making in policy domains, ultimately to guide a grounded approach to designing fairness-promoting methods. The project's over-arching objective is to bridge the divide between active research in fair ML and applications in policy domains. It does so through innovative teaching and training activities, broadening the participation of under-represented groups in research and technology design, enhancing scientific and technological understanding among the public, practitioners, and legislators, and delivering direct positive impact with partner agencies.
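To illustrate what a "computationally tractable measure" of fairness can look like in practice, the short Python sketch below computes two standard group fairness metrics, the demographic parity difference and the equalized odds gap, from a model's predictions. This is an illustrative example with synthetic data only; it is not code produced by this project.

    # Illustrative sketch only: two standard group fairness metrics.
    # All data and names here are hypothetical, not artifacts of this award.
    import numpy as np

    def demographic_parity_diff(y_pred, group):
        # Gap in positive-prediction rates across groups.
        rates = [y_pred[group == g].mean() for g in np.unique(group)]
        return max(rates) - min(rates)

    def equalized_odds_gap(y_true, y_pred, group):
        # Largest gap in true- or false-positive rates across groups.
        gaps = []
        for label in (1, 0):  # label 1 -> TPR gap, label 0 -> FPR gap
            mask = y_true == label
            rates = [y_pred[mask & (group == g)].mean() for g in np.unique(group)]
            gaps.append(max(rates) - min(rates))
        return max(gaps)

    # Hypothetical usage with synthetic predictions for two groups.
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, 1000)
    group = rng.integers(0, 2, 1000)
    y_pred = rng.integers(0, 2, 1000)
    print("demographic parity difference:", demographic_parity_diff(y_pred, group))
    print("equalized odds gap:", equalized_odds_gap(y_true, y_pred, group))

Which of these (or other) measures is appropriate depends on the policy context; the project's first aim concerns exactly that translation step, not the arithmetic itself.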

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

(Showing: 1 - 10 of 13)
Laufer, Benjamin and Kleinberg, Jon and Heidari, Hoda. "Fine-Tuning Games: Bargaining and Adaptation for General-Purpose Models." 2024. https://doi.org/10.1145/3589334.3645366
London, Alex John and Heidari, Hoda. "Beneficent Intelligence: A Capability Approach to Modeling Benefit, Assistance, and Associated Moral Failures Through AI Systems." Minds and Machines, v.34, 2024. https://doi.org/10.1007/s11023-024-09696-8
Tang, Ningjing and Zhi, Jiayin and Kuo, Tzu-Sheng and Kainaroi, Calla and Northup, Jeremy J. and Holstein, Kenneth and Zhu, Haiyi and Heidari, Hoda and Shen, Hong. "AI Failure Cards: Understanding and Supporting Grassroots Efforts to Mitigate AI Failures in Homeless Services." 2024. https://doi.org/10.1145/3630106.3658935
Black, Emily and Naidu, Rakshit and Ghani, Rayid and Rodolfa, Kit and Ho, Daniel and Heidari, Hoda. "Toward Operationalizing Pipeline-aware ML Fairness: A Research Agenda for Developing Practical Guidelines and Tools." 2023. https://doi.org/10.1145/3617694.3623259
Chen, Violet (Xinying) and Williams, Joshua and Leben, Derek and Heidari, Hoda. "Local Justice and Machine Learning: Modeling and Inferring Dynamic Ethical Preferences toward Allocations." Proceedings of the AAAI Conference on Artificial Intelligence, v.37, 2023. https://doi.org/10.1609/aaai.v37i5.25737
Coston, Amanda and Kawakami, Anna and Zhu, Haiyi and Holstein, Ken and Heidari, Hoda. "A Validity Perspective on Evaluating the Justified Use of Data-driven Decision-making Algorithms." 2023. https://doi.org/10.1109/SaTML54575.2023.00050
Feffer, Michael and Heidari, Hoda and Lipton, Zachary C. "Moral Machine or Tyranny of the Majority?" Proceedings of the AAAI Conference on Artificial Intelligence, v.37, 2023. https://doi.org/10.1609/aaai.v37i5.25739
Feffer, Michael and Martelaro, Nikolas and Heidari, Hoda. "The AI Incident Database as an Educational Tool to Raise Awareness of AI Harms: A Classroom Exploration of Efficacy, Limitations, & Future Improvements." 2023. https://doi.org/10.1145/3617694.3623223
Feffer, Michael and Skirpan, Michael and Lipton, Zachary and Heidari, Hoda. "From Preference Elicitation to Participatory ML: A Critical Survey & Guidelines for Future Research." 2023. https://doi.org/10.1145/3600211.3604661
Heidari, Hoda and Barocas, Solon and Kleinberg, Jon and Levy, Karen. "Informational Diversity and Affinity Bias in Team Growth Dynamics." 2023. https://doi.org/10.1145/3617694.3623238
Johnson, Nari and Moharana, Sanika and Harrington, Christina and Andalibi, Nazanin and Heidari, Hoda and Eslami, Motahhare. "The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment." 2024. https://doi.org/10.1145/3630106.3658910

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

Bridging AI and Society: Our Mission for Safe and Effective AI

In an era where Artificial Intelligence (AI) is increasingly influencing critical aspects of society, our project set out to ensure that AI can be effectively and responsibly applied to data-rich societal domains—without amplifying undue harm or disparities. By integrating expertise from computer science, statistics, and public policy, we developed methods, tools, and guidelines that close the gap between AI's potential and its real-world applicability.

Key Research Areas

1. Translating Societal Goals into AI Solutions

How can we align AI decision-making with the diverse societal and policy-level goals of different communities? For example, what fairness means can vary significantly depending on cultural, legal, and ethical perspectives. Our research found that existing methods for capturing these perspectives were inadequate. In response, we designed better ways to engage with policymakers, affected communities, and other AI stakeholders. One of our key contributions was the AI Failure Cards, a tool designed to help communities understand AI’s risks and share their own experiences and preferred strategies for harm mitigation in AI-driven decision-making.

2. Reducing Harm Throughout the AI Lifecycle

From initial goal-setting to data collection, model design, and deployment, AI systems can introduce unintended harms at every stage. Our research focused on identifying critical intervention points to prevent such issues before they arise. A major outcome of this work was the Situate-AI Guidebook, developed in collaboration with local and state governments. This guidebook provides a structured framework for evaluating AI projects early on—helping decision-makers assess their goals, legal and societal constraints, data limitations, and governance requirements before committing to AI-based solutions.
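To make the idea of pipeline intervention points concrete, the Python sketch below compares how two missing-data imputation choices, made early in an ML pipeline, can shift a model's error rates across demographic groups. The data, feature, and modeling choices are hypothetical illustrations assumed for this example; this is not the Situate-AI Guidebook, which is a process framework rather than software.

    # Hypothetical sketch of a pipeline-aware fairness check: compare how
    # two imputation strategies change group-level error rates downstream.
    # Everything here is illustrative; it is not code from this project.
    import numpy as np
    from sklearn.impute import SimpleImputer
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 2000
    group = rng.integers(0, 2, n)                        # two demographic groups
    x = rng.normal(loc=group, scale=1.0)                 # feature correlated with group
    y = (x + rng.normal(0.0, 1.0, n) > 0.5).astype(int)  # binary outcome
    x_obs = x.copy()
    x_obs[rng.random(n) < 0.3 * group] = np.nan          # group 1 has ~30% missingness

    X = x_obs.reshape(-1, 1)
    for strategy in ("mean", "median"):
        X_imp = SimpleImputer(strategy=strategy).fit_transform(X)
        model = LogisticRegression().fit(X_imp, y)
        errors = model.predict(X_imp) != y
        print(strategy, "error rate by group:",
              [round(errors[group == g].mean(), 3) for g in (0, 1)])

Checks of this kind, run before deployment, are one example of catching a harm at its source (here, a data-collection and imputation choice) rather than after a tool is in use.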

3. Understanding AI’s Real-World Impact

Even the most well-designed AI tools can fail if human decision-makers don’t trust or properly integrate them into their workflows. Through our research, we identified key factors that influence AI adoption, including user perceptions, potential liability concerns, and the broader interests of stakeholders. We also studied why some algorithmic tools are ultimately abandoned or decommissioned, providing insights into how to improve accountability and long-term success in AI implementation.

Advancing AI with a Multidisciplinary Approach

Creating fair and effective AI systems requires more than just technical expertise—it demands an understanding of public policy, economics, moral philosophy, and human behavior. Our team worked closely with experts across these fields, as well as real-world stakeholders, to ensure AI solutions address genuine societal needs rather than just technical challenges.

Impact and Future Directions

Our findings offer practical insights for a wide range of AI stakeholders, including policymakers, developers, and the communities affected by AI-driven decisions. By fostering transparency, fairness, and accountability, we aim to empower society to harness AI’s benefits while minimizing its risks.


Last Modified: 01/30/2025
Modified by: Hoda Heidari
