
Award Abstract # 2147305
FAI: BRIMI - Bias Reduction In Medical Information

NSF Org: IIS - Division of Information & Intelligent Systems
Recipient: UNIVERSITY OF CONNECTICUT
Initial Amendment Date: March 7, 2022
Latest Amendment Date: March 7, 2022
Award Number: 2147305
Award Instrument: Standard Grant
Program Manager: Sylvia Spengler
sspengle@nsf.gov
(703) 292-7347
IIS - Division of Information & Intelligent Systems
CSE - Directorate for Computer and Information Science and Engineering
Start Date: March 15, 2022
End Date: February 28, 2026 (Estimated)
Total Intended Award Amount: $392,994.00
Total Awarded Amount to Date: $392,994.00
Funds Obligated to Date: FY 2022 = $392,994.00
History of Investigator:
  • Shiri Dori-Hacohen (Principal Investigator)
    shiridh@uconn.edu
  • Sherry Pagoto (Co-Principal Investigator)
  • Scott Hale (Co-Principal Investigator)
Recipient Sponsored Research Office: University of Connecticut
438 WHITNEY RD EXTENSION UNIT 1133
STORRS
CT  US  06269-9018
(860)486-3622
Sponsor Congressional District: 02
Primary Place of Performance: University of Connecticut
371 Fairfield Way
Storrs
CT  US  06269-4157
Primary Place of Performance Congressional District: 02
Unique Entity Identifier (UEI): WNTPS995QBM7
Parent UEI:
NSF Program(s): Fairness in Artificial Intelligence, Secure & Trustworthy Cyberspace
Primary Program Source: 01002223DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 025Z, 065Z, 075Z, 7434
Program Element Code(s): 114Y00, 806000
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070, 47.075

ABSTRACT

This award, Bias Reduction In Medical Information (BRIMI), focuses on using artificial intelligence (AI) to detect and mitigate biased, harmful, and/or false health information that disproportionately hurts minority groups in society. BRIMI offers outsized promise for increased equity in health information, improving fairness in AI, in medicine, and in the online information ecosystem (e.g., health websites and social media content). BRIMI's novel study of biases stands to greatly advance the understanding of the challenges that minority groups and individuals face when seeking health information. By including specific interventions for both patients and doctors and advancing the state-of-the-art in public health and fact-checking organizations, BRIMI aims to inform public policy, increase the public's critical literacy, and improve the well-being of historically under-served patients. The award includes significant outreach efforts, which will engage minority communities directly in our scientific process; broad stakeholder engagement will ensure that the research approach to the groups studied is respectful, ethical, and patient-centered. The BRIMI team is composed of academics, non-profits, and industry partners, thus improving collaboration and partnerships across different sectors and multiple disciplines. The BRIMI project will lead to fundamental research advances in computer science, while integrating deep expertise in medical training, public health interventions, and fact checking. BRIMI is the first large-scale computational study of biased health information of any kind. This award specifically focuses on bias reduction in the health domain; its foundational computer science advances and contributions may generalize to other domains, and it will likely pave the way for studying bias in other areas such as politics and finance.

BRIMI has the following objectives: (a) identifying and analyzing bias and language misuse online; (b) advancing the understanding of how misinformation spreads among different populations; and (c) triaging health topics with the biggest harms, and creating and disseminating triage guidelines to public health officials and practitioners. BRIMI will develop novel artificial intelligence approaches both to establish health information inequities empirically and to reduce them. The methods used include large-scale online and social network data collection and a content analysis approach to annotating complex health data; supervised, semi-supervised and transfer learning to detect biased and false health information; controversy and misinformation analysis using community detection, stance detection and claim detection; and intervention design methods based on best practices in public health. The award's research contributions will include: (a) novel metrics to computationally define biased health information and characterize its dissemination online and in social media, including specifically within divergent population groups; (b) transfer learning and semi-supervised approaches to generalize solutions developed on and for medical language to lay language; (c) analyzing disagreement within and across populations on health information, which in turn requires improvement in stance detection and claim matching approaches; and (d) novel computational approaches to triage and prioritize misinformation for the purposes of mitigation.
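To make the semi-supervised detection idea above concrete, the following is a minimal sketch of self-training a text classifier from a small labeled seed set plus unlabeled posts. It assumes a scikit-learn-style setup with TF-IDF features and SelfTrainingClassifier; the texts, labels, and threshold are invented for illustration and do not represent BRIMI's actual pipeline, data, or models.

```python
# Hypothetical sketch: semi-supervised detection of misleading health claims.
# All texts and labels below are invented placeholders, not BRIMI data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.semi_supervised import SelfTrainingClassifier

# Small labeled seed set (1 = misleading, 0 = accurate) plus unlabeled posts
# marked with -1, the convention SelfTrainingClassifier expects.
texts = [
    "Vaccines cause autism in children",             # labeled: misleading
    "Regular exercise lowers blood pressure",        # labeled: accurate
    "Drinking bleach cures viral infections",        # labeled: misleading
    "Flu shots reduce the risk of hospitalization",  # labeled: accurate
    "This one herb melts tumors overnight",          # unlabeled
    "CDC updates its booster guidance for adults",   # unlabeled
]
labels = [1, 0, 1, 0, -1, -1]

# TF-IDF features feed a probabilistic base learner; the self-training
# wrapper pseudo-labels high-confidence unlabeled examples and retrains
# until no confident candidates remain.
base = LogisticRegression(max_iter=1000)
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    SelfTrainingClassifier(base, threshold=0.75),
)
model.fit(texts, labels)

print(model.predict(["Garlic necklaces block airborne viruses"]))
```

Self-training is only one of the strategies the abstract names; transfer learning from medical-domain language to lay language, stance detection, and claim matching would be separate components in a full system.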

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

Bucknall, Benjamin S. and Dori-Hacohen, Shiri. "Current and Near-Term AI as a Potential Existential Risk Factor." AIES '22: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, 2022. https://doi.org/10.1145/3514094.3534146
Dori-Hacohen, S. and Montenegro, R. and Murai, F. and Hale, S. A. and Sung, K. and Blain, M. and Edwards-Johnson, J. "Fairness via AI: Bias Reduction in Medical Information." The 4th FAccTRec Workshop on Responsible Recommendation at RecSys 2021, 2021.
Dori-Hacohen, Shiri and Hale, Scott A. "Information Ecosystem Threats in Minoritized Communities: Challenges, Open Problems and Research Directions." SIGIR '22: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2022. https://doi.org/10.1145/3477495.3536327
Kierans, A. and Hazan, H. and Dori-Hacohen, S. "Quantifying Misalignment Between Agents." ML Safety @ NeurIPS 2022, 2022.
Kierans, Aidan. "Benchmarked Ethics: A Roadmap to AI Alignment, Moral Knowledge, and Control." Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 2023. https://doi.org/10.1145/3600211.3604764
