
NSF Org: | IIS Division of Information & Intelligent Systems |
Initial Amendment Date: | April 25, 2024 |
Latest Amendment Date: | April 25, 2024 |
Award Number: | 2345235 |
Award Instrument: | Standard Grant |
Program Manager: | Sylvia Spengler, sspengle@nsf.gov, (703) 292-7347, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering |
Start Date: | June 1, 2024 |
End Date: | May 31, 2027 (Estimated) |
Total Intended Award Amount: | $597,149.00 |
Total Awarded Amount to Date: | $597,149.00 |
Recipient Sponsored Research Office: | 2550 NORTHWESTERN AVE # 1100, WEST LAFAYETTE, IN, US 47906-1332, (765) 494-1055 |
Primary Place of Performance: | 2550 NORTHWESTERN AVE STE 1900, WEST LAFAYETTE, IN, US 47906-1332 |
NSF Program(s): | Info Integration & Informatics |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.070 |
ABSTRACT
The growing utilization of data mining and machine learning systems in critical domains has raised concerns about the potential amplification of societal biases and discrimination. Large-scale pre-trained models, such as the Generative Pre-trained Transformer (GPT), also confront fairness issues, further intensifying the need to address societal biases in automated decision-making algorithms. However, the resource-intensive nature of obtaining annotated data for fair algorithms and the significant computational challenges of debiasing large-scale models present substantial obstacles to achieving algorithmic fairness. In response, this project aims to pioneer fundamental research in fair algorithmic decision-making while alleviating the heavy demands on data annotation and computational resources. Ultimately, it will facilitate the development, adoption, and evaluation of fair artificial intelligence (AI) systems by humans. This project will result in novel algorithms and software, fostering broader study of real-world applications such as promoting health equity in Alzheimer's disease research. Moreover, this project prioritizes education and diversity by providing training opportunities for underrepresented minority students, engaging them in cutting-edge computational research.
The research objective of this project is to improve fair decision-making in a more efficient and flexible manner. It addresses three fundamental research challenges: 1) establishing a theoretically grounded framework for learning and evaluating fair representations using widely accessible unlabeled data; 2) learning unsupervised fair representations applicable to various downstream tasks for improved flexibility; and 3) exploring an efficient strategy tailored to transformers that achieves fairness in large-scale pre-trained models without retraining, thereby improving the trade-off between fairness and accuracy while concurrently improving computational and GPU memory efficiency. One important application of this project lies in health equity, particularly in addressing biased predictions in disease studies. By integrating rigorous theoretical analysis with emerging application studies, this research project contributes to the advancement of more equitable and effective AI for societal benefit.
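The abstract does not specify particular fairness criteria or algorithms, so the following is only a minimal illustrative sketch of what evaluating fair representations can involve: it computes two standard group-fairness measures, the demographic parity gap and the equalized odds gap, for predictions produced by a hypothetical downstream classifier trained on learned representations. The function names, random data, and choice of metrics are assumptions made for illustration, not methods described in the award.

import numpy as np

# Hypothetical group-fairness evaluation for a downstream classifier; the
# metrics and toy data below are illustrative, not taken from the award.

def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-prediction rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

def equalized_odds_gap(y_pred, y_true, group):
    """Largest gap in true- and false-positive rates across groups."""
    gaps = []
    for label in (0, 1):  # condition on the ground-truth label
        mask = y_true == label
        rates = [y_pred[mask & (group == g)].mean() for g in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return float(max(gaps))

# Toy usage with random binary predictions, labels, and a sensitive attribute.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)
print("Demographic parity gap:", demographic_parity_gap(y_pred, group))
print("Equalized odds gap:", equalized_odds_gap(y_pred, y_true, group))

Note that the demographic parity gap needs only predictions and group membership, while the equalized odds gap additionally requires ground-truth labels, which is one reason annotation cost matters for fairness evaluation as described above.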
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
Please report errors in award information by writing to: awardsearch@nsf.gov.