
NSF Org: | IIS Division of Information & Intelligent Systems |
Recipient: | |
Initial Amendment Date: | March 7, 2022 |
Latest Amendment Date: | March 7, 2022 |
Award Number: | 2147305 |
Award Instrument: | Standard Grant |
Program Manager: | Sylvia Spengler, sspengle@nsf.gov, (703) 292-7347, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering |
Start Date: | March 15, 2022 |
End Date: | February 28, 2026 (Estimated) |
Total Intended Award Amount: | $392,994.00 |
Total Awarded Amount to Date: | $392,994.00 |
Funds Obligated to Date: | |
History of Investigator: | |
Recipient Sponsored Research Office: | 438 WHITNEY RD EXTENSION UNIT 1133, STORRS, CT, US 06269-9018, (860) 486-3622 |
Sponsor Congressional District: | |
Primary Place of Performance: | 371 Fairfield Way, Storrs, CT, US 06269-4157 |
Primary Place of Performance Congressional District: | |
Unique Entity Identifier (UEI): | |
Parent UEI: | |
NSF Program(s): | Fairness in Artificial Intelligence, Secure & Trustworthy Cyberspace |
Primary Program Source: | |
Program Reference Code(s): | |
Program Element Code(s): | |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.070, 47.075 |
ABSTRACT
This award, Bias Reduction In Medical Information (BRIMI), focuses on using artificial intelligence (AI) to detect and mitigate biased, harmful, and/or false health information that disproportionately hurts minority groups in society. BRIMI offers outsized promise for increased equity in health information, improving fairness in AI, in medicine, and in the online information ecosystem (e.g., health websites and social media content). BRIMI's novel study of biases stands to greatly advance the understanding of the challenges that minority groups and individuals face when seeking health information. By including specific interventions for both patients and doctors and advancing the state of the art in public health and fact-checking organizations, BRIMI aims to inform public policy, increase the public's critical literacy, and improve the well-being of historically underserved patients. The award includes significant outreach efforts, which will engage minority communities directly in the scientific process; broad stakeholder engagement will ensure that the research approach to the groups studied is respectful, ethical, and patient-centered. The BRIMI team is composed of academics, non-profits, and industry partners, improving collaboration and partnerships across different sectors and multiple disciplines. The BRIMI project will lead to fundamental research advances in computer science while integrating deep expertise in medical training, public health interventions, and fact checking. BRIMI is the first large-scale computational study of biased health information of any kind. This award specifically focuses on bias reduction in the health domain; its foundational computer science advances and contributions may generalize to other domains and will likely pave the way for studying bias in other areas such as politics and finance.
BRIMI has the following objectives: (a) identifying and analyzing bias and language misuse online; (b) advancing the understanding of how misinformation spreads amongst different populations; and (c) triaging the health topics with the biggest harms, and creating and disseminating triage guidelines to public health officials and practitioners. BRIMI will develop novel artificial intelligence approaches both to establish health information inequities empirically and to reduce them. The methods used include large-scale online and social network data collection and a content analysis approach to annotating complex health data; supervised, semi-supervised, and transfer learning to detect biased and false health information; controversy and misinformation analysis using community detection, stance detection, and claim detection; and intervention design methods based on best practices in public health. The award's research contributions will include: (a) novel metrics to computationally define biased health information and characterize its dissemination online and in social media, including specifically within divergent population groups; (b) transfer learning and semi-supervised approaches to generalize solutions developed on and for medical language to lay language; (c) analysis of disagreement within and across populations on health information, which in turn requires improvement in stance detection and claim matching approaches; and (d) novel computational approaches to triage and prioritize misinformation for the purposes of mitigation.
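To make the detection step concrete, the sketch below shows one simplified way a semi-supervised classifier could flag potentially misleading health statements, in the spirit of the semi-supervised learning mentioned above. It is an illustrative baseline only, not the BRIMI pipeline: the toy texts and labels, the TF-IDF features, the self-training threshold, and the logistic-regression base model are all assumptions made for this example.

```python
# Illustrative sketch only (not the BRIMI method): self-training on TF-IDF features
# to flag potentially misleading health statements. Texts, labels, and thresholds
# below are hypothetical toy values chosen for demonstration.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.semi_supervised import SelfTrainingClassifier

texts = [
    "Vaccines are rigorously tested for safety before approval.",    # reliable (0)
    "This common spice cures cancer in days, but doctors hide it.",  # misleading (1)
    "Regular screening improves early detection of many cancers.",   # reliable (0)
    "A miracle detox tea reverses diabetes overnight.",              # misleading (1)
    "A new supplement melts fat away with no diet changes at all.",  # unlabeled
    "Handwashing reduces the spread of many common infections.",     # unlabeled
]
labels = np.array([0, 1, 0, 1, -1, -1])  # -1 marks unlabeled examples

# Self-training: fit the base classifier on the labeled texts, then iteratively
# pseudo-label unlabeled texts whose predicted probability exceeds the threshold.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    SelfTrainingClassifier(LogisticRegression(max_iter=1000), threshold=0.6),
)
model.fit(texts, labels)

# Score a new, unseen statement (hypothetical example).
print(model.predict(["Essential oils eliminate the flu virus instantly."]))
```

In a realistic setting, the labeled examples would come from expert annotation (the content analysis approach described above), and a stronger base model, such as a fine-tuned transformer, would typically replace the logistic regression.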
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH