
NSF Org: IIS Division of Information & Intelligent Systems
Recipient: University of Chicago
Initial Amendment Date: January 25, 2021
Latest Amendment Date: July 20, 2021
Award Number: 2040989
Award Instrument: Standard Grant
Program Manager: Wendy Nilsen, wnilsen@nsf.gov, (703) 292-2568, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering
Start Date: February 1, 2021
End Date: January 31, 2025 (Estimated)
Total Intended Award Amount: $375,000.00
Total Awarded Amount to Date: $388,500.00
Recipient Sponsored Research Office: 5801 S Ellis Ave, Chicago, IL 60637-5418, US, (773) 702-8669
Primary Place of Performance: 5730 S Ellis Avenue, Chicago, IL 60637-2612, US
NSF Program(s): IIS Special Projects, Fairness in Artificial Intelligence
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
Explaining machine learning (ML) models has received increasing interest because of their adoption in societally critical tasks, ranging from health care to hiring to criminal justice. It is crucial for the relevant parties, such as decision makers and decision subjects, to understand why a model makes a particular prediction. This proposal argues that explanation is a communication process: to be effective, explanations should be adaptive and interactive with respect to both the subject being explained (subgroups of interest) and the target audience (user profiles), whose knowledge and preferences may evolve. Therefore, this proposal aims to develop adaptive and interactive explanations of machine learning models, which will allow people to better understand the decisions being made for and about them.
This proposal has three key areas of focus. First, it will develop a novel formal framework for generating adaptive explanations that can be customized to account for subgroups of interest and user profiles. Second, it will support explanation as an interactive communication process by dynamically incorporating user input. Finally, it will improve existing automatic evaluation metrics such as sufficiency and comprehensiveness and develop novel ones, especially for understudied global explanations. The team will embed these computational approaches in real-world systems.
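For readers unfamiliar with these faithfulness metrics, the sketch below shows how sufficiency and comprehensiveness are commonly computed for a rationale-style explanation, following the widely used ERASER-style definitions rather than this project's own code; predict_proba is a hypothetical stand-in for any black-box text classifier that returns the probability of the predicted class.

from typing import Callable, List, Set

def sufficiency(predict_proba: Callable[[List[str]], float],
                tokens: List[str],
                rationale: Set[int]) -> float:
    # p(y|x) - p(y|rationale only); lower means the rationale alone suffices.
    full = predict_proba(tokens)
    kept = [t for i, t in enumerate(tokens) if i in rationale]
    return full - predict_proba(kept)

def comprehensiveness(predict_proba: Callable[[List[str]], float],
                      tokens: List[str],
                      rationale: Set[int]) -> float:
    # p(y|x) - p(y|x with rationale removed); higher means the rationale mattered.
    full = predict_proba(tokens)
    removed = [t for i, t in enumerate(tokens) if i not in rationale]
    return full - predict_proba(removed)

Under these definitions, a good rationale yields low sufficiency (keeping only the rationale preserves the prediction) and high comprehensiveness (removing the rationale degrades it).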
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without a charge during the embargo (administrative interval). Some links on this page may take you to non-federal websites, whose policies may differ from those of this site.
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
Intellectual Merit
This project advanced the field of explainable AI by designing adaptive and interactive explanation methods tailored to human users. We developed selective explanations, which incorporate user input to highlight relevant information; TalkToModel, a system for conversational querying of black-box models; and Reasoning-in-Reasoning, a hierarchical planning framework for generating step-by-step explanations in complex tasks such as theorem proving. These innovations improve both the accessibility and the effectiveness of AI systems, particularly when users must make high-stakes decisions based on model outputs.
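As a concrete illustration of the selective-explanation idea (a minimal, hypothetical sketch, not the project's released implementation), the snippet below filters standard per-feature attribution scores down to the features a user has marked as relevant, so the explanation adapts to its audience; the feature names and scores are invented for illustration.

from typing import Dict, List, Tuple

def selective_explanation(attributions: Dict[str, float],
                          user_relevant: List[str],
                          k: int = 5) -> List[Tuple[str, float]]:
    # Keep only user-relevant features, then return the k largest by magnitude.
    filtered = {f: s for f, s in attributions.items() if f in user_relevant}
    ranked = sorted(filtered.items(), key=lambda fs: abs(fs[1]), reverse=True)
    return ranked[:k]

# Example with made-up attribution scores from a loan-approval model.
attrs = {"income": 0.42, "zip_code": -0.31, "credit_history": 0.27, "age": 0.05}
print(selective_explanation(attrs, user_relevant=["income", "credit_history", "age"], k=2))
# -> [('income', 0.42), ('credit_history', 0.27)]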
We also created a theoretical framework that clarifies when and how explanations can enhance human understanding, emphasizing the role of human intuitions. Through collaborations with radiologists, we conducted some of the first controlled experiments on AI-assisted medical diagnosis, showing that while AI-human teams outperform unaided experts, users often under-rely on AI. These contributions were published in top venues, including Nature Machine Intelligence, NeurIPS, ICML, ICLR, and FAccT, and presented in a NAACL 2024 tutorial.
Broader Impacts
The project supported training and mentoring for over 20 graduate, undergraduate, and postdoctoral researchers at the University of Chicago, Harvard, and UC Irvine. It informed the development of new courses on human-centered machine learning, adaptive experimentation, and explainable AI, helping prepare the next generation of researchers and practitioners. Our findings have practical implications in domains such as healthcare and finance, showing how thoughtful explanation design can improve decision quality and appropriate trust in AI.
We also promoted broader community engagement. Project outcomes were shared through over 40 invited talks, keynotes, and tutorials at conferences, universities, and interdisciplinary workshops. These included outreach to the medical, legal, and policy communities to foster responsible and informed use of AI. By bridging theory, application, and education, this project lays a strong foundation for more effective and trustworthy human-AI collaboration.
Last Modified: 07/17/2025
Modified by: Chenhao Tan