
NSF Org: | IIS Division of Information & Intelligent Systems |
Recipient: | University of North Carolina at Charlotte |
Initial Amendment Date: | June 4, 2024 |
Latest Amendment Date: | June 4, 2024 |
Award Number: | 2348391 |
Award Instrument: | Standard Grant |
Program Manager: | Cornelia Caragea, ccaragea@nsf.gov, (703) 292-2706, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering |
Start Date: | August 1, 2024 |
End Date: | July 31, 2026 (Estimated) |
Total Intended Award Amount: | $174,999.00 |
Total Awarded Amount to Date: | $174,999.00 |
Funds Obligated to Date: | |
History of Investigator: | |
Recipient Sponsored Research Office: | 9201 UNIVERSITY CITY BLVD, CHARLOTTE, NC 28223-0001, US, (704) 687-1888 |
Sponsor Congressional District: | |
Primary Place of Performance: | 9201 UNIVERSITY CITY BLVD, CHARLOTTE, NC 28223-0001, US |
Primary Place of Performance Congressional District: | |
Unique Entity Identifier (UEI): | |
Parent UEI: | |
NSF Program(s): | Info Integration & Informatics |
Primary Program Source: | |
Program Reference Code(s): | |
Program Element Code(s): | |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.070 |
ABSTRACT
The rapid development of and attention toward generative artificial intelligence (AI) make its trustworthiness a critical issue of societal importance. Diffusion models (DMs) are large generative AI models that can generate high-quality images from text instructions. There are concerns about the trustworthiness of DMs in terms of privacy, fairness, and explainability. A DM may over-memorize its training data, which creates vulnerabilities to privacy leakage. Without proper guidance, it may inherit social biases from the training data and generate images that are harmful to underprivileged groups. It also cannot explain why or how images are generated from the given instructions. This project evaluates the trustworthiness of DMs and provides advanced solutions to address these issues. The results of this project can help decision-makers and practitioners in areas such as health care, media, law, and education adopt generative models to assist content creation and daily productivity.
This project extends current DM techniques, which focus on a single aspect of trustworthiness, to achieve multiple desiderata simultaneously, including privacy, fairness, and explainability. The project first implements differentially private DMs to defend against privacy attacks. It then introduces fair training into private DMs to avoid spurious correlations and stereotyping in the generated content. In addition, it explores faithful explanations of DMs with the assistance of attention mechanisms. Most importantly, the project evaluates the trade-offs between these trustworthiness properties and generative quality in a joint trustworthy DM framework.
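To make the privacy step concrete: differentially private training typically follows the DP-SGD recipe (per-example gradient clipping plus calibrated Gaussian noise). The PyTorch sketch below is a minimal toy illustration of that mechanism on a stand-in MLP denoiser with a simplified diffusion loss; the model, data, and hyperparameters (CLIP_NORM, NOISE_MULT, and so on) are assumptions for illustration, not the project's actual design.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in denoiser: predicts the noise added to a flattened 16-pixel "image",
# conditioned on the diffusion time t (hence 16 + 1 input features).
model = nn.Sequential(nn.Linear(17, 64), nn.ReLU(), nn.Linear(64, 16))

CLIP_NORM = 1.0    # per-example gradient clipping bound C (illustrative)
NOISE_MULT = 1.1   # Gaussian noise scale sigma, relative to C (illustrative)
LR = 1e-3
BATCH = 8

data = torch.randn(64, 16)  # stand-in training set
loss_fn = nn.MSELoss()

for step in range(30):
    batch = data[torch.randint(0, len(data), (BATCH,))]
    # Accumulate clipped per-example gradients -- the core of DP-SGD.
    grads = [torch.zeros_like(p) for p in model.parameters()]
    for x0 in batch:
        t = torch.rand(1)                          # random diffusion time
        eps = torch.randn_like(x0)                 # target noise
        xt = (1 - t).sqrt() * x0 + t.sqrt() * eps  # simplified forward process
        loss = loss_fn(model(torch.cat([xt, t])), eps)
        model.zero_grad()
        loss.backward()
        # Clip this example's gradient so its total norm is at most CLIP_NORM.
        norm = torch.sqrt(sum(p.grad.pow(2).sum() for p in model.parameters()))
        scale = min(1.0, CLIP_NORM / (norm.item() + 1e-6))
        for g, p in zip(grads, model.parameters()):
            g.add_(p.grad, alpha=scale)
    # Add Gaussian noise calibrated to the clipping bound, average, and step.
    with torch.no_grad():
        for g, p in zip(grads, model.parameters()):
            noise = torch.randn_like(g) * (NOISE_MULT * CLIP_NORM)
            p.add_(-(LR / BATCH) * (g + noise))
```

In practice one would also track the cumulative (epsilon, delta) privacy budget with an accountant (as provided by libraries such as Opacus) rather than hand-rolling the loop; the sketch only shows the clipping-and-noise mechanism that the abstract refers to.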
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.