Award Abstract # 2040800
FAI: Fairness in Machine Learning with Human in the Loop

NSF Org: IIS
Division of Information & Intelligent Systems
Recipient: UNIVERSITY OF CALIFORNIA SANTA CRUZ
Initial Amendment Date: January 25, 2021
Latest Amendment Date: April 24, 2024
Award Number: 2040800
Award Instrument: Standard Grant
Program Manager: Todd Leen
tleen@nsf.gov
(703) 292-7215
IIS: Division of Information & Intelligent Systems
CSE: Directorate for Computer and Information Science and Engineering
Start Date: February 1, 2021
End Date: January 31, 2026 (Estimated)
Total Intended Award Amount: $625,000.00
Total Awarded Amount to Date: $625,000.00
Funds Obligated to Date: FY 2021 = $625,000.00
History of Investigator:
  • Yang Liu (Principal Investigator)
    yangliu@ucsc.edu
  • Mingyan Liu (Co-Principal Investigator)
  • Ming Yin (Co-Principal Investigator)
  • Parinaz Naghizadeh Ardabili (Co-Principal Investigator)
Recipient Sponsored Research Office: University of California-Santa Cruz
1156 HIGH ST
SANTA CRUZ
CA  US  95064-1077
(831)459-5278
Sponsor Congressional District: 19
Primary Place of Performance: University of California-Santa Cruz
SOE 3, UC Santa Cruz, 1156 High St
Santa Cruz
CA  US  95064-1100
Primary Place of Performance Congressional District: 19
Unique Entity Identifier (UEI): VXUFPE4MCZH5
Parent UEI:
NSF Program(s): Fairness in Artificial Intelligence
Primary Program Source: 01002122DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 075Z
Program Element Code(s): 114Y00
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Despite early successes and significant potential, algorithmic decision-making systems often inherit and encode biases present in the training data and/or the training process. It is therefore important to understand the consequences of deploying and using machine learning models, and to provide algorithmic treatments that ensure such techniques ultimately serve the social good. While recent work has examined fairness issues in AI with respect to "short-term" measures, the long-term consequences and impacts of automated decision making remain unclear. Understanding the long-term impact of a fair decision gives policy-makers guidance when deploying an algorithmic model in a dynamic environment and is critical to the model's trustworthiness and adoption. It will also drive the design of algorithms with an eye toward the welfare of both the makers and the users of these algorithms, with the ultimate goal of achieving more equitable outcomes.

This project aims to understand the long-term impact of fair decisions made by automated machine learning algorithms by establishing an analytical, algorithmic, and experimental framework that captures the sequential learning and decision process, the actions and dynamics of the underlying user population, and that population's welfare. This knowledge will help in designing the right fairness criteria and intervention mechanisms throughout the life cycle of the decision-action loop to ensure long-term equitable outcomes. Central to this project's intellectual inquiry is its focus on the human in the loop, i.e., an AI-human feedback loop in which automated decision-making involves human participation. The focus on the long-term impacts of fair algorithmic decision-making, while explicitly modeling and incorporating the human agents in the loop, provides a theoretically rigorous framework for understanding how an algorithmic decision-maker fares in the foreseeable future.
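
To make the decision-action loop concrete, the following minimal Python sketch simulates one illustrative instance of it: a fixed-threshold policy that is "fair" in the short-term sense of applying the same threshold to two groups, coupled with an assumed population response in which higher acceptance raises a group's future qualification. The dynamics, parameters, and group names here are illustrative assumptions, not the project's actual model; the point is only that a short-term-fair policy can still widen disparities over time.

import numpy as np

# Illustrative assumptions (not from the award): two groups whose mean
# qualification starts unequal, a shared decision threshold, and a simple
# feedback rule linking a group's acceptance rate to its future qualification.
rng = np.random.default_rng(0)
mu = {"A": 0.6, "B": 0.4}   # assumed initial mean qualification per group
threshold = 0.5             # same threshold for both groups ("short-term fair")
feedback = 0.05             # assumed strength of the population's response

for t in range(20):
    for g in mu:
        # Sample individual qualifications and apply the threshold policy.
        scores = rng.normal(mu[g], 0.1, size=1000)
        accept_rate = float(np.mean(scores >= threshold))
        # Assumed dynamic: groups accepted more often improve; groups
        # accepted less often fall further behind.
        mu[g] += feedback * (accept_rate - 0.5)
    print(f"t={t:2d}  mean(A)={mu['A']:.3f}  mean(B)={mu['B']:.3f}  "
          f"gap={mu['A'] - mu['B']:.3f}")

Under these assumed dynamics the qualification gap between the groups grows at every step even though the policy treats both groups identically at each decision, which is precisely the kind of long-term effect the proposed framework is designed to analyze and to counteract with better fairness criteria and interventions.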

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

(Showing: 1 - 10 of 35)
Tang, Zeyu and Wang, Jialu and Liu, Yang and Spirtes, Peter and Zhang, Kun. "Procedural Fairness Through Decoupling Objectionable Data Generating Components," 2024.
Zhu, Zhaowei and Luo, Tianyi and Liu, Yang. "The Rich Get Richer: Disparate Impact of Semi-Supervised Learning," International Conference on Learning Representations (ICLR), 2022.
Yin, Tongxin and Raab, Reilly and Liu, Mingyan and Liu, Yang. "Long-Term Fairness with Unknown Dynamics," 2023.
Yetukuri, Jayanth and Hardy, Ian and Vorobeychik, Yevgeniy and Ustun, Berk and Liu, Yang. "Providing Fair Recourse over Plausible Groups," 2024.
Yetukuri, Jayanth and Hardy, Ian and Liu, Yang. "Towards User Guided Actionable Recourse," 2023. https://doi.org/10.1145/3600211.3604708
Khalili, Mohammad Mahdi and Zhang, Xueru and Liu, Mingyan. "Designing Contracts for Trading Private and Heterogeneous Data Using a Biased Differentially Private Algorithm," IEEE Access, v.9, 2021. https://doi.org/10.1109/ACCESS.2021.3074478
Liao, Yiqiao and Naghizadeh, Parinaz. "Social Bias Meets Data Bias: The Impacts of Labeling and Measurement Errors on Fairness Criteria," Proceedings of the AAAI Conference on Artificial Intelligence, 2023.
Liu, Yang. "Understanding Instance-Level Label Noise: Disparate Impacts and Treatments," International Conference on Machine Learning, 2021.
Liu, Yang and Wang, Jialu. "Can Less be More? When Increasing-to-Balancing Label Noise Rates Considered Beneficial," 35th Conference on Neural Information Processing Systems (NeurIPS 2021), 2021.
Pang, Jinlong and Wang, Jialu and Zhu, Zhaowei and Yao, Yuanshun and Qian, Chen and Liu, Yang. "Fairness Without Harm: An Influence-Guided Active Sampling Approach," 2024.
Raab, Reilly and Boczar, Ross and Fazel, Maryam and Liu, Yang. "Fair Participation via Sequential Policies," 2024.