
NSF Org: | IIS Division of Information & Intelligent Systems |
Recipient: | Purdue University |
Initial Amendment Date: | December 23, 2019 |
Latest Amendment Date: | June 15, 2020 |
Award Number: | 1939728 |
Award Instrument: | Standard Grant |
Program Manager: | Todd Leen, tleen@nsf.gov, (703) 292-7215, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering |
Start Date: | January 1, 2020 |
End Date: | December 31, 2021 (Estimated) |
Total Intended Award Amount: | $216,908.00 |
Total Awarded Amount to Date: | $232,908.00 |
Recipient Sponsored Research Office: | 2550 NORTHWESTERN AVE # 1100, WEST LAFAYETTE, IN, US 47906-1332, (765) 494-1055 |
Primary Place of Performance: | 305 N University St, West Lafayette, IN, US 47907-2107 |
NSF Program(s): | Fairness in Artificial Intelligence |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.070 |
ABSTRACT
Bias and discrimination in Artificial Intelligence (AI) have been receiving increasing attention. Unfortunately, the positive concept of Fair AI is difficult to define. For example, it is hard to distinguish between (desired) personalization and (undesired) bias. These differences often depend on context, such as the use of gender or ethnicity in making a medical diagnosis vs. using the same attributes in determining whether insurance should cover a medical procedure. This is particularly difficult as AI systems are used in new contexts, enabling products and services that have not been seen before and for which societal concepts of fairness are not yet established. This multidisciplinary project will construct a framework and taxonomy for understanding fairness in societal contexts. Human-computer interaction methods will be developed to learn perceptions of fairness based on human interaction with AI systems. Automated methods will be developed to relate these perceptions to the framework, enabling developers (and eventually automated AI systems) to respond to and correct issues perceived by users of the systems.
This exploratory project will develop a taxonomy incorporating concepts of Aristotelian fairness (distributive vs. corrective justice) and Rawlsian fairness (equality of rights and opportunities). A formal literature survey will be used to establish a framework for societal contexts of fairness and how they relate to the taxonomy. Experiments with perceptions of models, both in isolation and in comparison, will be used to evaluate situations where people perceive AI systems as fair or unfair. Tools will be developed to identify and explain fairness issues in terms of the taxonomy, based on the elicited perceptions and the societal context of the system. While beyond the scope of this project, the outcome of these tools could potentially be used to automatically adjust AI systems to reduce unfairness.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
Fairness in Artificial Intelligence (AI) is a growing concern, and research has begun to address this issue. However, much of the work seeks to satisfy straightforward statistical measures of bias and discrimination. The reality is that "fair" is much more nuanced.
Our multidisciplinary team, with PIs from philosophy, machine learning, privacy, human-computer interaction, critical theory, science and technology studies, and feminist studies, has explored issues of fairness from several angles, including perceptions of AI fairness (through literature review, crowdsourced studies, and focus groups), sources of unfairness (including data and inherent issues with machine learning), and methods to mitigate bias (such as tools to alert data scientists to situations with potential bias, and synthetic data to alleviate data-induced bias in machine learning).
Key outcomes include:
- Articles reframing AI fairness in the context of social justice, including a framing of fairness as equal concession, and a critical review categorizing issues with current approaches to formulating Fair AI.
- Articles describing crowdsourced perceptions of fairness, the variation in those perceptions between individuals, and the impact of intersectionality on them. This was further investigated through a deeper dive into expert perceptions of fairness in a healthcare context.
- A user study of how explaining the intended machine learning use of data labels affects human annotators under different annotation interfaces and instructions, providing a means to avoid unfair and spurious correlations.
- The Retrograde extension for Jupyter Notebooks, a tool that automatically identifies protected classes and proxy variables for those protected classes, and reports results independently for demographic subgroups to highlight potential disparities to analysts/developers using scikit-learn (a minimal sketch of this kind of per-subgroup reporting follows this list).
- Machine learning tends to favor majority groups; we developed a method to generate synthetic data that improves outcomes by reducing this underrepresentation (see the synthetic-data sketch after this list).
- We showed that even with equally represented groups and equalized outcomes, machine learning can in some situations be expected to produce biased outcomes; this represents an AI-induced systemic bias.
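The Retrograde item above is described only at a high level in this report. As a rough illustration, the sketch below shows the kind of per-subgroup reporting and crude proxy flagging such a tool can surface on top of pandas/scikit-learn. It is a minimal sketch under stated assumptions, not Retrograde's actual implementation; the column arguments, the accuracy metric, and the 0.4 correlation threshold are illustrative choices, not details from the project.

```python
# Minimal sketch (not Retrograde's actual code) of per-subgroup reporting
# and crude proxy flagging. Column names and the 0.4 threshold are assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def flag_proxies(df: pd.DataFrame, protected: str, threshold: float = 0.4) -> list:
    """Flag numeric columns strongly associated with the protected attribute,
    a crude stand-in for proxy-variable detection."""
    codes = df[protected].astype("category").cat.codes
    corr = df.select_dtypes("number").corrwith(codes).abs()
    corr = corr.drop(labels=[protected], errors="ignore")
    return corr[corr > threshold].index.tolist()


def subgroup_report(df: pd.DataFrame, protected: str, label: str) -> pd.DataFrame:
    """Train one model on all data, then report accuracy separately for each
    value of the protected attribute to highlight potential disparities."""
    X = pd.get_dummies(df.drop(columns=[label, protected]))
    y = df[label]
    groups = df[protected]
    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
        X, y, groups, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    rows = []
    for g in g_te.unique():
        mask = g_te == g
        rows.append({
            "group": g,
            "n": int(mask.sum()),
            "accuracy": accuracy_score(y_te[mask], model.predict(X_te[mask])),
        })
    return pd.DataFrame(rows)
```

On a hypothetical dataframe with a protected attribute column and a label column, an analyst would call subgroup_report(df, "sex", "outcome") and compare the per-group accuracy rows side by side; Retrograde automates this kind of check inside the notebook.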
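The synthetic-data item is likewise described only in outline; the report does not specify the generation method. The following is a minimal sketch of one common approach, SMOTE-style interpolation between nearest neighbors within the underrepresented group, offered as a generic stand-in rather than the project's technique. The function name and parameters are hypothetical.

```python
# Illustrative sketch only: create synthetic rows for an underrepresented group
# by interpolating between nearest neighbors within that group (a generic
# SMOTE-style heuristic, not the method developed in this project).
import numpy as np
from sklearn.neighbors import NearestNeighbors


def augment_group(X_group: np.ndarray, n_new: int, k: int = 5, seed: int = 0) -> np.ndarray:
    """Return n_new synthetic rows, each lying between a sampled row of
    X_group and one of its k nearest neighbors in the same group."""
    rng = np.random.default_rng(seed)
    k = min(k, len(X_group) - 1)                      # no more neighbors than available rows
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_group)
    _, idx = nn.kneighbors(X_group)                   # idx[:, 0] is each point itself
    base = rng.integers(0, len(X_group), n_new)       # starting rows
    neigh = idx[base, rng.integers(1, k + 1, n_new)]  # a random true neighbor of each
    lam = rng.random((n_new, 1))                      # interpolation weights in [0, 1)
    return X_group[base] + lam * (X_group[neigh] - X_group[base])
```

The synthetic rows would then be appended to the training data for the underrepresented group before model fitting, so that the learned model no longer favors the majority group simply because it sees more of its examples.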
The project has also had a strong influence in education. Numerous undergraduate and graduate students have been directly involved in the research. In addition, the project has had a long-term educational impact through inclusion of an ethics component in a new Bachelor's in AI at Purdue, and a new engineering and computing ethics course at the University of Chicago.
While this has been a short project (one year of work, performed over two due to the pandemic), it has resulted in significant new insights into why AI unfairness is a problem, how to understand that problem, and means to develop (or in some cases, not deploy) AI so as to enhance social justice.
Last Modified: 04/28/2022
Modified by: Christopher W Clifton