Award Abstract # 2229885
Institute for Trustworthy AI in Law and Society (TRAILS)

NSF Org: IIS (Division of Information & Intelligent Systems)
Recipient: UNIVERSITY OF MARYLAND, COLLEGE PARK
Initial Amendment Date: May 3, 2023
Latest Amendment Date: November 15, 2024
Award Number: 2229885
Award Instrument: Cooperative Agreement
Program Manager: Cindy Bethel
cbethel@nsf.gov
(703)292-4420
IIS (Division of Information & Intelligent Systems)
CSE (Directorate for Computer and Information Science and Engineering)
Start Date: June 1, 2023
End Date: May 31, 2028 (Estimated)
Total Intended Award Amount: $20,000,000.00
Total Awarded Amount to Date: $14,262,089.00
Funds Obligated to Date: FY 2023 = $7,626,273.00
FY 2024 = $6,635,816.00
History of Investigator:
  • Hal Daume (Principal Investigator)
    hal@umiacs.umd.edu
  • Thomas Goldstein (Co-Principal Investigator)
  • Katherine Shilton (Co-Principal Investigator)
  • Susan Aaronson (Co-Principal Investigator)
  • David Broniatowski (Co-Principal Investigator)
Recipient Sponsored Research Office: University of Maryland, College Park
3112 LEE BUILDING
COLLEGE PARK
MD  US  20742-5100
(301)405-6269
Sponsor Congressional District: 04
Primary Place of Performance: University of Maryland, College Park
3112 LEE BLDG 7809 REGENTS DR
COLLEGE PARK
MD  US  20742-5100
Primary Place of Performance Congressional District: 04
Unique Entity Identifier (UEI): NPU8ULVAAS23
Parent UEI: NPU8ULVAAS23
NSF Program(s): Reimbursable/Reserved Out-year, AI Research Institutes
Primary Program Source:
  • 01002627RB NSF RESEARCH & RELATED ACTIVIT
  • 01002223RB NSF RESEARCH & RELATED ACTIVIT
  • 01002324DB NSF RESEARCH & RELATED ACTIVIT
  • 01002728DB NSF RESEARCH & RELATED ACTIVIT
  • 01002324RB NSF RESEARCH & RELATED ACTIVIT
  • 01002728RB NSF RESEARCH & RELATED ACTIVIT
  • 01002425RB NSF RESEARCH & RELATED ACTIVIT
  • 01002425DB NSF RESEARCH & RELATED ACTIVIT
  • 01002526RB NSF RESEARCH & RELATED ACTIVIT
  • 01002627DB NSF RESEARCH & RELATED ACTIVIT
  • 01002526DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 075Z, 8237
Program Element Code(s): 917900, 132Y00
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070, 47.084

ABSTRACT

Artificial Intelligence (AI) systems have the potential to enhance human capacity, increase productivity, catalyze innovation, and mitigate complex problems. However, current AI systems are not developed transparently: the opaque processes used to build them produce results that are not well understood, which makes it difficult for the public to trust them. Trust is further undermined by the harms AI systems can cause, which fall most heavily on the communities excluded from participating in AI system development. This lack of trustworthiness will slow the adoption of AI technologies, so including the groups affected by the benefits and harms of these systems is critical to AI innovation. The TRAILS (Trustworthy AI in Law and Society) Institute, a partnership of the University of Maryland, The George Washington University, Morgan State University, and Cornell University, encourages community participation in the development of AI techniques, tools, and scientific theories. The design and policy recommendations it produces will promote the trustworthiness of AI systems. The first goal of the TRAILS Institute is to discover ways to change the design and development of AI systems so that communities can make informed choices about adopting AI technology. The second goal is to develop best practices for industry and government that foster AI innovation while keeping communities safe, engaged, and informed. The TRAILS Institute has explicit plans for increasing the participation of affected communities, from K-12 students and educators through Congressional staff. These plans will elicit the concerns and expectations of affected communities and provide an improved understanding of the risks and benefits of AI-enabled systems.

The TRAILS Institute's research program comprises four thrusts, each targeting a key aspect of the AI system development lifecycle. The first is Social Values: increasing participation throughout all aspects of AI development so that the values embodied by AI systems reflect the values of communities and interested parties. This includes participatory design with diverse communities and yields community-based interventions and adaptations to the AI development lifecycle. The second thrust is Technical Design: developing algorithms that promote transparency and trust in AI, including tools that increase the robustness of AI systems and promote user and developer understanding of how AI systems operate. The third thrust is Socio-Technical Perceptions: developing novel measures, including psychometric techniques and experimental paradigms, to assess the interpretability and explainability of AI systems. These measures will enable a deeper understanding of existing metrics and algorithms and of the values perceived and held by participating community members. The fourth thrust is Governance: documenting and analyzing governance regimes for both data and technologies, providing the underpinning for the development of AI platform and technology regulation. Ethnographers will analyze the institute itself and its partner organizations, documenting the ways in which technical choices translate into governance impacts. The research focuses on two use-inspired areas: information dissemination systems (e.g., social media platforms) and energy-intensive systems (e.g., autonomous systems). The institute's education and workforce development efforts in AI include new educational offerings catering to many markets, ranging from secondary through executive education. The TRAILS Institute is especially focused on expanding access to foundational AI education for historically marginalized and minoritized groups of learners and users. The institute will work with these communities to learn from, educate, and recruit participants, and to retain, support, and empower those marginalized in mainstream AI. Integrating these communities into this AI research program broadens participation in AI development and governance.

The National Institute of Standards and Technology (NIST) is partnering with NSF to provide funding for this Institute.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


(Showing: 1 - 10 of 28)
Shu, Manli and Wang, Jiongxiao and Zhu, Chen and Geiping, Jonas and Xiao, Chaowei and Goldstein, Tom "On the Exploitability of Instruction Tuning", 2023
Si, Chenglei and Goyal, Navita and Wu, Sherry Tongshuang and Zhao, Chen and Feng, Shi and Daumé III, Hal and Boyd-Graber, Jordan "Large Language Models Help Humans Verify Truthfulness -- Except When They Are Convincingly Wrong", 2023
Singla, Vasu and Sandoval-Segura, Pedro and Goldblum, Micah and Geiping, Jonas and Goldstein, Tom "A Simple and Efficient Baseline for Data Attribution on Images", 2023
Somepalli, Gowthami and Singla, Vasu and Goldblum, Micah and Geiping, Jonas and Goldstein, Tom "Understanding and Mitigating Copying in Diffusion Models", 2023
Wen, Yuxin and Jain, Neel and Kirchenbauer, John and Goldblum, Micah and Geiping, Jonas and Goldstein, Tom "Hard Prompts Made Easy: Gradient-Based Discrete Optimization for Prompt Tuning and Discovery", 2023
Sandoval-Segura, P and Singla, V and Geiping, J and Goldblum, M and Goldstein, T "What Can We Learn from Unlearnable Datasets?", 2024
Bansal, A and Borgnia, E and Chu, H-M and Li, J and Kazemi, H and Huang, F and Goldblum, M and Geiping, J and Goldstein, T "Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise", 2024
Alipour-Fanid, Amir and Dabaghchian, Monireh and Jiao, Long and Zeng, Kai "Learning-Based Secure Spectrum Sharing for Intelligent IoT Networks", 2024 https://doi.org/10.1109/ISQED60706.2024.10528684
An, B and Ding, M and Rabbani, T and Agrawal, A and Xu, Y and Deng, C and Zhu, S and Mohamed, A and Wen, Y and Goldstein, T and Huang, F "Benchmarking the Robustness of Image Watermarks", 2024
Broniatowski, David A and Simons, Joseph R and Gu, Jiayan and Jamison, Amelia M and Abroms, Lorien C "The efficacy of Facebook's vaccine misinformation policies and architecture during the COVID-19 pandemic" Science Advances, v.9, 2023 https://doi.org/10.1126/sciadv.adh2132
Cherepanova, Valeriia and Levin, Roman and Somepalli, Gowthami and Geiping, Jonas and Bruss, C Bayan and Wilson, Andrew G and Goldstein, Tom and Goldblum, Micah "A Performance-Driven Benchmark for Feature Selection in Tabular Deep Learning", 2023

