Award Abstract # 2348391
CRII: III: Trustworthy Diffusion Models

NSF Org: IIS Division of Information & Intelligent Systems
Recipient: UNIVERSITY OF NORTH CAROLINA AT CHARLOTTE
Initial Amendment Date: June 4, 2024
Latest Amendment Date: June 4, 2024
Award Number: 2348391
Award Instrument: Standard Grant
Program Manager: Cornelia Caragea (ccaragea@nsf.gov, (703)292-2706)
IIS Division of Information & Intelligent Systems
CSE Directorate for Computer and Information Science and Engineering
Start Date: August 1, 2024
End Date: July 31, 2026 (Estimated)
Total Intended Award Amount: $174,999.00
Total Awarded Amount to Date: $174,999.00
Funds Obligated to Date: FY 2024 = $174,999.00
History of Investigator:
  • Depeng Xu (Principal Investigator)
    dxu7@uncc.edu
Recipient Sponsored Research Office: University of North Carolina at Charlotte
9201 UNIVERSITY CITY BLVD
CHARLOTTE
NC  US  28223-0001
(704)687-1888
Sponsor Congressional District: 12
Primary Place of Performance: University of North Carolina at Charlotte
9201 UNIVERSITY CITY BLVD
CHARLOTTE
NC  US  28223-0001
Primary Place of Performance Congressional District: 12
Unique Entity Identifier (UEI): JB33DT84JNA5
Parent UEI: NEYCH3CVBTR6
NSF Program(s): Info Integration & Informatics
Primary Program Source: 01002425DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 7364, 8228
Program Element Code(s): 736400
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

The rapid development of and attention toward generative artificial intelligence (AI) make its trustworthiness a critical issue of societal importance. Diffusion models (DMs) are large generative AI models that can generate high-quality images from text instructions. There are concerns about the trustworthiness of DMs in terms of privacy, fairness, and explainability. A DM may over-memorize its training data, making it vulnerable to privacy leakage. Without proper guidance, it may inherit social bias from the training data and generate images harmful to underprivileged groups. It also cannot explain why or how images are generated from the instructions. This project evaluates the trustworthiness of DMs and provides advanced solutions to address these issues. The results of this project can help decision-makers and practitioners in areas such as health care, media, law, and education adopt generative models to assist content creation and daily productivity.

This project extends current techniques for diffusion models, which focus on a single aspect of trustworthiness, to achieve multiple desiderata simultaneously: privacy, fairness, and explainability. The project first implements differentially private DMs to defend against privacy attacks. It then introduces fair training into private DMs to avoid spurious correlations and stereotyping in generated content. In addition, it explores faithful explanations of DMs with the assistance of attention mechanisms. Most importantly, the project evaluates the trade-off between these trustworthiness properties and generative quality in a joint trustworthy-DM framework.
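To make the privacy component concrete, the sketch below shows one way differentially private training of a denoising diffusion model could look: a standard noise-prediction loss combined with DP-SGD, i.e., per-example gradient clipping plus calibrated Gaussian noise. The toy two-layer network, the 2-D data, and all hyperparameters are illustrative assumptions; the abstract does not specify the project's actual models or implementation.

```python
# Illustrative sketch (not the project's actual code): DP-SGD training of a
# toy denoising diffusion model. Each example's gradient is clipped to norm C
# and Gaussian noise is added before the optimizer step, as in DP-SGD.
import torch
import torch.nn as nn

T = 1000                                    # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)       # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

# Hypothetical tiny denoiser over 2-D data; real DMs use U-Nets over images.
model = nn.Sequential(nn.Linear(2 + 1, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
clip_norm, noise_mult = 1.0, 1.0            # DP-SGD clipping bound C and sigma

def noise_pred_loss(x0):
    """Standard diffusion loss: predict the noise added at a random step t."""
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)
    a = alphas_bar[t].unsqueeze(-1)
    xt = a.sqrt() * x0 + (1 - a).sqrt() * eps
    inp = torch.cat([xt, t.float().unsqueeze(-1) / T], dim=-1)
    return ((model(inp) - eps) ** 2).mean()

def dp_sgd_step(batch):
    """One DP-SGD step: clip each example's gradient, sum, add noise, average."""
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for x0 in batch:                        # microbatches of 1 give exact per-example grads
        opt.zero_grad()
        noise_pred_loss(x0.unsqueeze(0)).backward()
        norm = torch.sqrt(sum(p.grad.pow(2).sum() for p in model.parameters()))
        scale = torch.clamp(clip_norm / (norm + 1e-6), max=1.0)
        for s, p in zip(summed, model.parameters()):
            s += p.grad * scale
    for s, p in zip(summed, model.parameters()):
        noisy = s + noise_mult * clip_norm * torch.randn_like(s)
        p.grad = noisy / len(batch)         # noisy average of clipped gradients
    opt.step()

dp_sgd_step(torch.randn(8, 2))              # one step on a toy batch of 2-D points
```

Privacy accounting (tracking the cumulative privacy loss across steps) and the fairness and explainability components are omitted; the sketch only conveys the flavor of the per-example clipping and noising that differentially private training entails.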

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
