
NSF Org: IIS Division of Information & Intelligent Systems
Recipient:
Initial Amendment Date: August 17, 2020
Latest Amendment Date: July 21, 2021
Award Number: 2007932
Award Instrument: Standard Grant
Program Manager: Sylvia Spengler, sspengle@nsf.gov, (703) 292-7347, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering
Start Date: September 1, 2020
End Date: August 31, 2025 (Estimated)
Total Intended Award Amount: $499,990.00
Total Awarded Amount to Date: $515,990.00
Funds Obligated to Date: FY 2021 = $16,000.00
History of Investigator:
Recipient Sponsored Research Office: 100 INSTITUTE RD, WORCESTER, MA, US 01609-2280, (508) 831-5000
Sponsor Congressional District:
Primary Place of Performance: 100 Institute Road, Worcester, MA, US 01609-2247
Primary Place of Performance Congressional District:
Unique Entity Identifier (UEI):
Parent UEI:
NSF Program(s): Info Integration & Informatics
Primary Program Source: 01002021DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s):
Program Element Code(s):
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
As the use of AI becomes ever more prevalent in socio-technical systems, people making decisions frequently collaborate not only with each other, but also with automated technologies, to make judgments that have real and lasting impact on other people's lives. This has serious implications for the equitable and fair treatment of historically disadvantaged groups, due to the potential interplay between the implicit bias analysts may hold and the algorithmic bias inadvertently embedded in AI systems. There is a strong imperative to address open problems surrounding interactive decision support systems with effective bias mitigation technologies that ensure fair outcomes. This project, named AEQUITAS to reflect the concept of justice and fairness, investigates the application of contemporary notions of group fairness to the classic task of aggregating multiple rankings of candidates into an overall fair consensus decision. The resulting methods and tools help decision makers both mitigate the implicit bias they may hold and expose the algorithmic bias inadvertently embedded in automated AI ranking algorithms. This technology will have impactful applications in domains ranging from hiring and lending to education, where decisions are often made by committee with input from multiple decision makers and must have unbiased outcomes. Fair access for historically disadvantaged groups to potentially life-changing opportunities such as jobs, loans, and educational resources is a potential game-changing societal outcome of the AEQUITAS project. Further, the integration of project activities with the training of a future STEM workforce, with a focus on female and underrepresented students via the WPI Data Science REU summer site and the interdisciplinary degree programs in Data Science at WPI, represents a significant broader impact.
AEQUITAS promises to break fundamental new ground in ethical AI by providing the first interactive consensus-based bias mitigation solution. New insights are expected to be gained into the ways in which unfair bias against underprivileged groups may be introduced by a consensus building process and manifest itself in a final ranking. As the foundation of AEQUITAS, the fair rank aggregation problem is modeled as a constraint optimization formulation that captures prevalent group fairness criteria. This new fairness-preserving optimization model enforces measures of fairness for the candidates being ranked while still producing a consensus ranking representative of the given set of base rankings. A family of exact and approximate bias mitigation solutions is designed that collectively guarantees fair consensus generation in a rich variety of decision scenarios. Tailored optimization strategies for these new fair rank aggregation services are potentially transformative -- pushing the envelope on practical ethical applications of AI for fair decision making. Further, these fair rank aggregation methods are integrated into carefully designed mixed-initiative interactive systems that facilitate understanding of and trust in the consensus building process and empower human decision makers to engage in an AI-driven consensus building process to reach unbiased decisions. The AEQUITAS technology supports comparative analytics to visualize the impact of individual rankings on the final consensus outcome, as well as to explore the trade-offs between the accuracy of the aggregation and the fairness criteria. User studies are undertaken to understand how well the fairness imposed by the AEQUITAS system aligns with human decision makers' perception of fairness. Further, the effectiveness of the AEQUITAS technology in supporting multiple analysts collaborating towards a fair shared decision is studied.
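To make the fair rank aggregation idea concrete, the toy sketch below illustrates the general technique, not the project's actual algorithm: the candidate set, group labels, and top-k representation constraint are illustrative assumptions. It exhaustively searches for a Kemeny-style consensus ranking that minimizes total Kendall tau distance to the base rankings, subject to a group-fairness constraint on the top-k positions.

```python
from itertools import permutations

def kendall_tau(r1, r2):
    """Number of candidate pairs ordered differently by the two rankings."""
    pos = {c: i for i, c in enumerate(r2)}
    return sum(
        1
        for i in range(len(r1))
        for j in range(i + 1, len(r1))
        if pos[r1[i]] > pos[r1[j]]
    )

def fair_consensus(base_rankings, group, k, min_protected):
    """Exhaustive fair rank aggregation (toy scale only).

    Finds the permutation minimizing total Kendall tau distance to the
    base rankings, subject to a top-k fairness constraint: at least
    `min_protected` members of the protected group (group[c] == 1)
    must appear in the first k positions.
    """
    candidates = base_rankings[0]
    best, best_cost = None, float("inf")
    for perm in permutations(candidates):
        if sum(group[c] for c in perm[:k]) < min_protected:
            continue  # skip rankings that violate the fairness constraint
        cost = sum(kendall_tau(perm, r) for r in base_rankings)
        if cost < best_cost:
            best, best_cost = perm, cost
    return best, best_cost

# Three hypothetical base rankings over four candidates; c and d are
# members of the protected group.
base = [("a", "b", "c", "d"), ("a", "c", "b", "d"), ("b", "a", "c", "d")]
group = {"a": 0, "b": 0, "c": 1, "d": 1}
ranking, cost = fair_consensus(base, group, k=2, min_protected=1)
```

In this example the unconstrained consensus would be (a, b, c, d), which places no protected candidate in the top 2; the constraint forces a slightly less representative ranking, mirroring the accuracy-fairness trade-off described above. Real instances require the exact and approximate optimization strategies the project develops, since exhaustive search is exponential in the number of candidates.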
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.