
NSF Org: | IIS Division of Information & Intelligent Systems |
Recipient: | |
Initial Amendment Date: | January 25, 2021 |
Latest Amendment Date: | May 20, 2021 |
Award Number: | 2040880 |
Award Instrument: | Standard Grant |
Program Manager: | Wendy Nilsen, wnilsen@nsf.gov, (703) 292-2568, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering |
Start Date: | July 1, 2021 |
End Date: | June 30, 2025 (Estimated) |
Total Intended Award Amount: | $625,000.00 |
Total Awarded Amount to Date: | $625,000.00 |
Funds Obligated to Date: | |
History of Investigator: | |
Recipient Sponsored Research Office: | 1033 MASSACHUSETTS AVE STE 3, CAMBRIDGE, MA, US 02138-5366, (617) 495-5501 |
Sponsor Congressional District: | |
Primary Place of Performance: | 33 Oxford Street, MD 347, Cambridge, MA, US 02138-2933 |
Primary Place of Performance Congressional District: | |
Unique Entity Identifier (UEI): | |
Parent UEI: | |
NSF Program(s): | Fairness in Artificial Intelligence, Smart and Connected Health |
Primary Program Source: | |
Program Reference Code(s): | |
Program Element Code(s): | |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.070 |
ABSTRACT
Machine learning models support decisions that affect millions of patients in the U.S. healthcare system: they help diagnose illnesses, facilitate triage in emergency rooms, and inform supervision in intensive care units. In such applications, models often include group attributes such as age, weight, and employment status to capture differences between patient subgroups. Standard techniques for building models with group attributes typically improve aggregate performance across the entire patient population. As a result, however, such models may perform worse for specific groups, assigning them inaccurate predictions that could have been prevented and that undermine medical care and health outcomes. This project aims to prevent this harm by developing tools to ensure the fair use of group attributes in predictive models. The goal is to ensure that a model uses group attributes in a way that yields a tailored performance benefit for every group.
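As a rough illustration of this fair use criterion, the sketch below compares per-group performance for models that omit a group attribute, include it additively, and include group-specific interactions. The synthetic cohort, logistic models, AUC metric, and benefit tolerance are all illustrative assumptions; this is not the project's data or method.
```python
# Illustrative fair use check (hypothetical setup): every group should
# gain from a model that includes the group attribute, relative to a
# model that omits it.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic cohort: a minority group (~25%) for whom the second
# clinical feature predicts risk with the opposite sign.
n = 20_000
g = (rng.random(n) < 0.25).astype(int)
x = rng.normal(size=(n, 2))
logit = x[:, 0] + np.where(g == 1, -2.0, 2.0) * x[:, 1]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

train = np.arange(n) < n // 2  # first half train, second half test

def per_group_auc(features):
    """Fit on the training half, return held-out AUC for each group."""
    clf = LogisticRegression().fit(features[train], y[train])
    scores = clf.predict_proba(features[~train])[:, 1]
    y_te, g_te = y[~train], g[~train]
    return {grp: roc_auc_score(y_te[g_te == grp], scores[g_te == grp])
            for grp in (0, 1)}

designs = {
    "no attribute": x,
    "additive g": np.column_stack([x, g]),
    "g interactions": np.column_stack([x, g, g[:, None] * x]),
}
aucs = {name: per_group_auc(f) for name, f in designs.items()}

TOL = 0.005  # illustrative minimal benefit
for name in ("additive g", "g interactions"):
    for grp in (0, 1):
        benefit = aucs[name][grp] - aucs["no attribute"][grp]
        verdict = "ok" if benefit > TOL else "no tailored benefit"
        print(f"{name:>14} | group {grp}: AUC {aucs[name][grp]:.3f} "
              f"(benefit {benefit:+.3f}) -> {verdict}")
```
On this data, the additive design typically benefits neither group, because within a group a constant attribute can only shift scores without changing how that group's patients are ranked, while group-specific interactions deliver a clear benefit to the minority group.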
Machine learning models currently deployed in medicine may exhibit fair use violations that undermine health outcomes. This project mitigates fair use violations at three key stages in the deployment of machine learning in medicine: verification, model development, and communication. First, it develops tools to check whether a model ensures fair use, including theoretical guarantees that characterize when common approaches to model development produce fair use violations and statistical tests that detect fair use violations before and during deployment. Second, it develops algorithms for learning models with fair use guarantees; these algorithms will be tailored to salient use cases in medicine, paired with open-source software, and applied to build decision support tools for real-world medical applications. Third, it creates tools to inform key stakeholders (regulators, physicians, and patients) about a model's fair use guarantees. The project draws on machine learning, information theory, optimization, and human-centered design, as well as expertise in deploying models in clinical settings. The resulting toolkit for ensuring the fair use of group attributes in medicine will be embedded in real-world systems through collaborations with medical researchers and industry.
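To make the verification stage concrete, here is a hedged sketch of one form such a statistical test could take: a paired bootstrap confidence interval for a single group's performance benefit from including the attribute. The data, statistic, and decision threshold are assumptions for illustration, not the project's actual test.
```python
# Hypothetical statistical audit: paired bootstrap CI for one group's
# AUC benefit from a model that includes the group attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Synthetic cohort as before: minority group with a flipped feature effect.
n = 8_000
g = (rng.random(n) < 0.25).astype(int)
x = rng.normal(size=(n, 2))
logit = x[:, 0] + np.where(g == 1, -2.0, 2.0) * x[:, 1]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

tr = np.arange(n) < n // 2
x_g = np.column_stack([x, g])  # additive use of the attribute
with_g = LogisticRegression().fit(x_g[tr], y[tr])
without_g = LogisticRegression().fit(x[tr], y[tr])

# Held-out scores for the group under audit (g == 1).
m = ~tr & (g == 1)
s_with = with_g.predict_proba(x_g[m])[:, 1]
s_without = without_g.predict_proba(x[m])[:, 1]
y_m = y[m]

# Paired bootstrap over patients in the audited group.
idx = np.arange(y_m.size)
diffs = []
while len(diffs) < 2000:
    b = rng.choice(idx, size=idx.size, replace=True)
    if y_m[b].min() == y_m[b].max():
        continue  # single-class resample: AUC undefined
    diffs.append(roc_auc_score(y_m[b], s_with[b])
                 - roc_auc_score(y_m[b], s_without[b]))

lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"95% bootstrap CI for group-1 AUC benefit: [{lo:+.3f}, {hi:+.3f}]")
# One illustrative decision rule: flag a concern when even the upper
# bound falls short of a minimal meaningful benefit (here 0.01 AUC).
print("fair use concern" if hi < 0.01 else "benefit not ruled out")
```
Pairing the resamples (using the same bootstrap indices for both models) cancels much of the shared sampling noise, which tightens the interval around the benefit itself.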
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.