Award Abstract # 1939728
FAI: Identifying, Measuring, and Mitigating Fairness Issues in AI

NSF Org: IIS
Division of Information & Intelligent Systems
Recipient: PURDUE UNIVERSITY
Initial Amendment Date: December 23, 2019
Latest Amendment Date: June 15, 2020
Award Number: 1939728
Award Instrument: Standard Grant
Program Manager: Todd Leen
tleen@nsf.gov
 (703)292-7215
IIS
 Division of Information & Intelligent Systems
CSE
 Directorate for Computer and Information Science and Engineering
Start Date: January 1, 2020
End Date: December 31, 2021 (Estimated)
Total Intended Award Amount: $216,908.00
Total Awarded Amount to Date: $232,908.00
Funds Obligated to Date: FY 2020 = $232,908.00
History of Investigator:
  • Christopher Clifton (Principal Investigator)
  • Murat Kantarcioglu (Co-Principal Investigator)
  • Blase Ur (Co-Principal Investigator)
  • Lindsay Weinberg (Co-Principal Investigator)
  • Christopher Yeomans (Co-Principal Investigator)
Recipient Sponsored Research Office: Purdue University
2550 NORTHWESTERN AVE # 1100
WEST LAFAYETTE
IN  US  47906-1332
(765)494-1055
Sponsor Congressional District: 04
Primary Place of Performance: Purdue University
305 N University St
West Lafayette
IN  US  47907-2107
Primary Place of Performance Congressional District: 04
Unique Entity Identifier (UEI): YRXVL4JYCEF5
Parent UEI: YRXVL4JYCEF5
NSF Program(s): Fairness in Artificial Intelligence
Primary Program Source: 01002021DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 0757, 075Z, 9251
Program Element Code(s): 114Y00
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Bias and discrimination in Artificial Intelligence (AI) have been receiving increasing attention. Unfortunately, the positive concept of Fair AI is difficult to define. For example, it is hard to distinguish between (desired) personalization and (undesired) bias, and the distinction often depends on context, such as using gender or ethnicity to make a medical diagnosis versus using the same attributes to decide whether insurance should cover a medical procedure. The difficulty grows as AI systems are used in new contexts, enabling products and services that have not been seen before and for which societal concepts of fairness are not yet established. This multidisciplinary project will construct a framework and taxonomy for understanding fairness in societal contexts. Human-computer interaction methods will be developed to learn perceptions of fairness from human interaction with AI systems. Automated methods will be developed to relate these perceptions to the framework, enabling developers (and eventually automated AI systems) to respond to and correct issues perceived by users of the systems.

This exploratory project will develop a taxonomy incorporating concepts of Aristotelian fairness (distributive vs. corrective justice) and Rawlsian fairness (equality of rights and opportunities). A formal literature survey will be used to establish a framework for societal contexts of fairness and how they relate to the taxonomy. Experiments with perceptions of models, both in isolation and in comparison, will be used to identify situations where people perceive AI systems as fair or unfair. Tools will be developed to identify and explain fairness issues in terms of the taxonomy, based on the elicited perceptions and the societal context of the system. While beyond the scope of this project, the output of these tools could potentially be used to automatically adjust AI systems to reduce unfairness.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

Haider, Chowdhury Mohammad Rakin and Clifton, Chris and Zhou, Yan "Unfair AI: It Isn't Just Biased Data", 2022. https://doi.org/10.1109/ICDM54844.2022.00114
Haider, Chowdhury Mohammad Rakin and Clifton, Christopher and Yin, Ming "Do Crowdsourced Fairness Preferences Correlate with Risk Perceptions?", 2024. https://doi.org/10.1145/3640543.3645209
Hanson, Julia and Wei, Miranda and Veys, Sophie and Kugler, Matthew and Strahilevitz, Lior and Ur, Blase "Taking Data Out of Context to Hyper-Personalize Ads: Crowdworkers' Privacy Perceptions and Decisions to Disclose Private Information" CHI '20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 2020. https://doi.org/10.1145/3313831.3376415
Harrison, Galen and Bryson, Kevin and Bamba, Ahmad Emmanuel and Dovichi, Luca and Binion, Aleksander Herrmann and Borem, Arthur and Ur, Blase "JupyterLab in Retrograde: Contextual Notifications That Highlight Fairness and Bias Issues for Data Scientists", 2024.
Harrison, Galen and Hanson, Julia and Jacinto, Christine and Ramirez, Julio and Ur, Blase "An empirical study on the perceived fairness of realistic, imperfect machine learning models" Conference on Fairness, Accountability, and Transparency (FAT* 20), 2020. https://doi.org/10.1145/3351095.3372831
Ofori-Boateng, D. and Segovia Dominguez, I.J. and Akcora, C. and Kantarcioglu, M. and Gel, Y.R. "Topological anomaly detection in dynamic multilayer blockchain networks" ECML PKDD, 2021. https://doi.org/10.1007/978-3-030-86486-6_48
Sullivan Jr., Jamar and Brackenbury, Will and McNutt, Andrew and Bryson, Kevin and Byll, Kwam and Chen, Yuxin and Littman, Michael and Tan, Chenhao and Ur, Blase "Explaining Why: How Instructions and User Interfaces Impact Annotator Rationales When Labeling Text Data" Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022. https://doi.org/10.18653/v1/2022.naacl-main.38
van Nood, Ryan and Yeomans, Christopher "Fairness as Equal Concession: Critical Remarks on Fair AI" Science and Engineering Ethics, v.27, 2021. https://doi.org/10.1007/s11948-021-00348-z
Weinberg, L. "Rethinking Fairness: An Interdisciplinary Survey of Critiques of Hegemonic ML Fairness Approaches" Journal of Artificial Intelligence Research, v.74, 2022. https://doi.org/10.1613/jair.1.13196
Alufaisan, Yasmeen and Marusich, Laura R. and Bakdash, Jonathan Z. and Zhou, Yan and Kantarcioglu, Murat "Does Explainable Artificial Intelligence Improve Human Decision-Making?" Proceedings of the AAAI Conference on Artificial Intelligence, v.35, 2021. https://doi.org/10.31234/osf.io/d4r9t
Zhou, Yan and Kantarcioglu, Murat and Clifton, Chris "On Improving Fairness of AI Models with Synthetic Minority Oversampling Techniques", 2023.

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

Fairness in Artificial Intelligence (AI) is a growing concern, and research has begun to address this issue.  However, much of that work seeks to satisfy straightforward statistical measures of bias and discrimination.  The reality is that "fair" is much more nuanced.

Our multidisciplinary team, with PIs from philosophy, machine learning, privacy, human-computer interaction, and critical theory, science and technology studies, and feminist studies, has explored issues of fairness from several angles, including perceptions of AI fairness (through literature review, crowdsourced studies, and focus groups), sources of unfairness (including data and inherent issues with machine learning), and methods to mitigate bias (such as tools that alert data scientists to situations with potential bias, and synthetic data that alleviates data-induced bias in machine learning).

Key outcomes include:

  • Articles reframing AI fairness in the context of social justice, including a framing as equal concession, and a critical review categorizing issues with current approaches to formulating Fair AI.
  • Articles describing crowdsourced perceptions of fairness, the variation in fairness perceptions between individuals, and the impact of intersectionality on these perceptions.  This has been investigated further with a deeper dive into expert perceptions of fairness in a healthcare context.
  • A user study of how different interfaces and instructions affect human annotators when they explain their data-labeling decisions for machine learning, providing a means to avoid unfair and spurious correlations.
  • The Retrograde extension for JupyterLab, a tool that automatically identifies protected classes and proxy variables for those classes, and reports results separately for each demographic subgroup to highlight potential disparities to analysts and developers using scikit-learn (a simplified sketch of this kind of per-subgroup check appears after this list).
  • Machine learning tends to favor majority groups; we developed a method that generates synthetic data to reduce underrepresentation and improve outcomes for minority groups (a simplified resampling sketch also appears after this list).
  • We showed that even with equally represented groups and equalized outcomes, machine learning can in some situations be expected to produce biased outcomes; this represents an AI-induced systemic bias.
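
To make the Retrograde-style per-subgroup reporting concrete, the following is a minimal sketch rather than the tool's actual code or API: the file name, the "gender" and "label" columns, and the logistic-regression model are hypothetical placeholders, and scikit-learn is used only to show how metrics can be reported separately for each demographic subgroup.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Hypothetical dataset with numeric features, a "gender" column, and a binary "label".
    df = pd.read_csv("applicants.csv")
    X = df.drop(columns=["label"])
    y = df["label"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Train without the protected attribute; disparities can still arise through proxy variables.
    model = LogisticRegression(max_iter=1000).fit(X_train.drop(columns=["gender"]), y_train)

    # Report accuracy and positive-prediction rate separately for each demographic subgroup.
    for group, idx in X_test.groupby("gender").groups.items():
        preds = model.predict(X_test.loc[idx].drop(columns=["gender"]))
        print(f"{group}: accuracy={accuracy_score(y_test.loc[idx], preds):.3f}, "
              f"positive rate={preds.mean():.3f}")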
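In the same spirit, the sketch below balances an underrepresented group by plain upsampling with replacement before training; the "group" column and file name are assumptions, and it is only a stand-in for the published approach, which synthesizes new minority-group examples (SMOTE-style) rather than duplicating existing rows.

    import pandas as pd
    from sklearn.utils import resample

    # Hypothetical training data with a demographic "group" column.
    df = pd.read_csv("training_data.csv")
    target = df["group"].value_counts().max()

    balanced_parts = []
    for value, part in df.groupby("group"):
        if len(part) < target:
            # Upsample with replacement so every group matches the largest group's size.
            part = resample(part, replace=True, n_samples=target, random_state=0)
        balanced_parts.append(part)

    balanced = pd.concat(balanced_parts).reset_index(drop=True)
    print(balanced["group"].value_counts())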

The project has also had a strong influence in education.  Numerous undergraduate and graduate students have been directly involved in the research.  In addition, the project has had a long-term educational impact through inclusion of an ethics component in a new Bachelor's in AI at Purdue, and a new engineering and computing ethics course at the University of Chicago.

While this has been a short project (one year of work, performed over two due to the pandemic), it has resulted in significant new insights into why AI unfairness is a problem, how to understand that problem, and means to develop (or in some cases, not deploy) AI so as to enhance social justice.


Last Modified: 04/28/2022
Modified by: Christopher W Clifton

