Award Abstract # 2007932
III: Small: Fair Decision Making by Consensus: Interactive Bias Mitigation Technology

NSF Org: IIS, Division of Information & Intelligent Systems
Recipient: WORCESTER POLYTECHNIC INSTITUTE
Initial Amendment Date: August 17, 2020
Latest Amendment Date: July 21, 2021
Award Number: 2007932
Award Instrument: Standard Grant
Program Manager: Sylvia Spengler (sspengle@nsf.gov, (703) 292-7347)
IIS, Division of Information & Intelligent Systems
CSE, Directorate for Computer and Information Science and Engineering
Start Date: September 1, 2020
End Date: August 31, 2025 (Estimated)
Total Intended Award Amount: $499,990.00
Total Awarded Amount to Date: $515,990.00
Funds Obligated to Date: FY 2020 = $499,990.00
FY 2021 = $16,000.00
History of Investigator:
  • Elke Rundensteiner (Principal Investigator)
    rundenst@wpi.edu
  • Lane Harrison (Co-Principal Investigator)
Recipient Sponsored Research Office: Worcester Polytechnic Institute
100 Institute Rd, Worcester, MA 01609-2280, US
(508) 831-5000
Sponsor Congressional District: 02
Primary Place of Performance: Worcester Polytechnic Institute
100 Institute Road, Worcester, MA 01609-2247, US
Primary Place of Performance Congressional District: 02
Unique Entity Identifier (UEI): HJNQME41NBU4
Parent UEI:
NSF Program(s): Info Integration & Informatics
Primary Program Source: 01002122DB NSF RESEARCH & RELATED ACTIVITIES
01002021DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 7923, 9251, 7364, 9102
Program Element Code(s): 736400
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

As the use of AI becomes ever more prevalent in socio-technical systems, people making decisions frequently collaborate not only with each other, but also with automated technologies, to make judgements that have real and lasting impact on other people's lives. This has serious implications for the equitable and fair treatment of historically disadvantaged groups, due to the potential interplay between the implicit bias analysts may hold and the algorithmic bias inadvertently embedded in AI systems. There is thus a strong imperative to equip interactive decision support systems with effective bias mitigation technologies that ensure fair outcomes. This project, named AEQUITAS to reflect the concept of justice and fairness, investigates the application of contemporary notions of group fairness to the classic task of aggregating multiple rankings of candidates into an overall fair consensus decision. The resulting methods and tools help decision makers both mitigate their own implicit bias and expose algorithmic bias inadvertently embedded in automated AI ranking algorithms. This technology will have impactful applications in domains from hiring and lending to education, where decisions, often made by committee with input from multiple decision makers, must have unbiased outcomes. Fair access for historically disadvantaged groups to potentially life-changing opportunities such as jobs, loans, and educational resources is a potentially game-changing societal outcome of the AEQUITAS project. Further, the integration of project activities with the training of a future STEM workforce, with a focus on female and underrepresented students via the WPI Data Science REU summer site and the interdisciplinary Data Science degree programs at WPI, represents a significant broader impact.
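
As a concrete illustration of the underlying task, consider three committee members each ranking four candidates. The short Python sketch below uses made-up data and the classical Borda count as a stand-in aggregation rule (a simple baseline for illustration only, not the project's method) to derive a single consensus ordering from the individual rankings.

    # Minimal illustration of rank aggregation on made-up data.
    # Borda count: a candidate earns (n - position) points in each ranking;
    # the consensus orders candidates by total points.
    base_rankings = [
        ["A", "B", "C", "D"],   # committee member 1
        ["B", "A", "D", "C"],   # committee member 2
        ["A", "C", "B", "D"],   # committee member 3
    ]
    n = len(base_rankings[0])
    scores = {}
    for ranking in base_rankings:
        for pos, cand in enumerate(ranking):
            scores[cand] = scores.get(cand, 0) + (n - pos)
    consensus = sorted(scores, key=scores.get, reverse=True)
    print(consensus)  # ['A', 'B', 'C', 'D']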

AEQUITAS promises to break fundamental new ground in ethical AI by providing the first interactive consensus-based bias mitigation solution. New insights are expected into the ways in which unfair bias against underprivileged groups may be introduced by a consensus-building process and manifest itself in the final ranking. As the foundation of AEQUITAS, the fair rank aggregation problem is modeled as a constraint optimization formulation that captures prevalent group fairness criteria. This fairness-preserving optimization model guarantees measures of fairness for the candidates being ranked while still producing a consensus ranking representative of the given set of base rankings. A family of exact and approximate bias mitigation solutions is designed that collectively guarantee fair consensus generation in a rich variety of decision scenarios. Tailored optimization strategies for these new fair rank aggregation services are potentially transformative, pushing the envelope on practical ethical applications of AI for fair decision making. Further, these fair rank aggregation methods are integrated into carefully designed mixed-initiative interactive systems that facilitate understanding of and trust in the consensus-building process and empower human decision makers to engage in an AI-driven consensus-building process to reach unbiased decisions. The AEQUITAS technology supports comparative analytics to visualize the impact of individual rankings on the final consensus outcome and to explore the trade-offs between the accuracy of the aggregation and the fairness criteria. User studies are undertaken to assess how well the fairness imposed by the AEQUITAS system aligns with human decision makers' perception of fairness, and the effectiveness of the AEQUITAS technology in supporting multiple analysts collaborating toward a fair shared decision is studied.
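
The brute-force sketch below illustrates this constraint-optimization framing on made-up data: among all possible consensus rankings, it selects the one minimizing total Kendall tau distance to the base rankings, subject to a toy fairness constraint requiring both demographic groups to appear in the top k. The candidates, groups, and parity constraint are illustrative assumptions, and exhaustive enumeration is only feasible at this tiny scale; the project's actual formulations (e.g., MANI-RANK) support richer fairness criteria and efficient algorithms.

    # Brute-force sketch of fairness-constrained consensus ranking
    # (hypothetical data; NOT the AEQUITAS implementation).
    from itertools import combinations, permutations

    base_rankings = [
        ["a1", "a2", "b1", "b2"],
        ["a1", "b1", "a2", "b2"],
        ["a2", "a1", "b2", "b1"],
    ]
    group = {"a1": "A", "a2": "A", "b1": "B", "b2": "B"}

    def kendall_tau(r1, r2):
        # Number of candidate pairs the two rankings order differently.
        p1 = {c: i for i, c in enumerate(r1)}
        p2 = {c: i for i, c in enumerate(r2)}
        return sum((p1[x] < p1[y]) != (p2[x] < p2[y])
                   for x, y in combinations(r1, 2))

    def total_distance(r):
        # Accuracy of a consensus: summed distance to all base rankings.
        return sum(kendall_tau(r, b) for b in base_rankings)

    def fair_top_k(r, k=2):
        # Toy group-fairness criterion: both groups appear in the top k.
        return {group[c] for c in r[:k]} == {"A", "B"}

    candidates = list(permutations(base_rankings[0]))
    best = min(candidates, key=total_distance)
    best_fair = min((r for r in candidates if fair_top_k(r)), key=total_distance)
    print("unconstrained:", best, total_distance(best))             # distance 3
    print("fair consensus:", best_fair, total_distance(best_fair))  # distance 4

Comparing the two optima makes the accuracy-fairness trade-off concrete: the fair consensus sits slightly farther from the base rankings (Kendall tau distance 4 instead of 3) in exchange for satisfying the group-fairness constraint.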

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

Kuhlman, Caitlin and Gerych, Walter and Rundensteiner, Elke. "Measuring Group Advantage: A Comparative Study of Fair Ranking Metrics." Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES '21), 2021. https://doi.org/10.1145/3461702.3462588
Kuhlman, Caitlin and Rundensteiner, Elke. "Rank aggregation algorithms for fair consensus." Proceedings of the VLDB Endowment, v.13, 2020. https://doi.org/10.14778/3407790.3407855
Shrestha, Hilson and Cachel, Kathleen and Alkhathlan, Mallak and Rundensteiner, Elke and Harrison, Lane. "FairFuse: Interactive Visual Support for Fair Consensus Ranking." 2022 IEEE Visualization and Visual Analytics (VIS), 2022. https://doi.org/10.1109/VIS54862.2022.00022
Cachel, Kathleen and Rundensteiner, Elke. "PreFAIR: Combining Partial Preferences for Fair Consensus Decision-making." 2024. https://doi.org/10.1145/3630106.3658961
Shrestha, Hilson and Cachel, Kathleen and Alkhathlan, Mallak and Rundensteiner, Elke and Harrison, Lane. "Help or Hinder? Evaluating the Impact of Fairness Metrics and Algorithms in Visualizations for Consensus Ranking." 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023. https://doi.org/10.1145/3593013.3594108
Alkhathlan, Mallak and Cachel, Kathleen and Shrestha, Hilson and Harrison, Lane and Rundensteiner, Elke. "Balancing Act: Evaluating People's Perceptions of Fair Ranking Metrics." 2024. https://doi.org/10.1145/3630106.3659018
Cachel, K. and Rundensteiner, E. "FINS Auditing Framework: Group Fairness for Subset Selection." AAAI/ACM Conference on AI, Ethics, and Society (AIES), 2022. https://doi.org/10.1145/3514094.3534160
Cachel, K. and Rundensteiner, E. and Harrison, L. "MANI-RANK: Multi-attribute and Intersectional Fairness for Consensus Ranking." IEEE International Conference on Data Engineering (ICDE), 2022.
Cachel, Kathleen and Rundensteiner, Elke. "Fairer Together: Mitigating Disparate Exposure in Kemeny Rank Aggregation." 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023. https://doi.org/10.1145/3593013.3594085
Cachel, Kathleen and Rundensteiner, Elke. "Fair&Share: Fast and Fair Multi-Criteria Selections." 2023. https://doi.org/10.1145/3583780.3614874
