
NSF Org: | IIS Division of Information & Intelligent Systems |
Recipient: | UNIVERSITY OF MARYLAND, COLLEGE PARK |
Initial Amendment Date: | May 3, 2023 |
Latest Amendment Date: | November 15, 2024 |
Award Number: | 2229885 |
Award Instrument: | Cooperative Agreement |
Program Manager: | Cindy Bethel, cbethel@nsf.gov, (703) 292-4420, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering |
Start Date: | June 1, 2023 |
End Date: | May 31, 2028 (Estimated) |
Total Intended Award Amount: | $20,000,000.00 |
Total Awarded Amount to Date: | $14,262,089.00 |
Funds Obligated to Date: | FY 2024 = $6,635,816.00 |
History of Investigator: | |
Recipient Sponsored Research Office: | 3112 LEE BUILDING, COLLEGE PARK, MD, US 20742-5100, (301) 405-6269 |
Sponsor Congressional District: | |
Primary Place of Performance: | 3112 LEE BLDG, 7809 REGENTS DR, COLLEGE PARK, MD, US 20742-5100 |
Primary Place of Performance Congressional District: | |
Unique Entity Identifier (UEI): | |
Parent UEI: | |
NSF Program(s): | Reimbursable/Reserved Out-year, AI Research Institutes |
Primary Program Source: |
01002223RB NSF RESEARCH & RELATED ACTIVITIES
01002324DB NSF RESEARCH & RELATED ACTIVITIES
01002324RB NSF RESEARCH & RELATED ACTIVITIES
01002425DB NSF RESEARCH & RELATED ACTIVITIES
01002425RB NSF RESEARCH & RELATED ACTIVITIES
01002526DB NSF RESEARCH & RELATED ACTIVITIES
01002526RB NSF RESEARCH & RELATED ACTIVITIES
01002627DB NSF RESEARCH & RELATED ACTIVITIES
01002728DB NSF RESEARCH & RELATED ACTIVITIES
01002728RB NSF RESEARCH & RELATED ACTIVITIES |
Program Reference Code(s): | |
Program Element Code(s): | |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.070, 47.084 |
ABSTRACT
Artificial Intelligence (AI) systems have the potential to enhance human capacity and increase productivity. They can also catalyze innovation and help mitigate complex problems. Current AI systems, however, are not created transparently: the opaque processes used to build them produce results that are not well understood, making public trust difficult to earn. Trust is further undermined by the harms that AI systems can cause, and those most affected are often the communities excluded from participating in AI system development. This lack of trustworthiness will slow the adoption of AI technologies, so including the groups affected by the benefits and harms of these systems is critical to AI innovation. The TRAILS (Trustworthy AI in Law and Society) Institute is a partnership of the University of Maryland, The George Washington University, Morgan State University, and Cornell University. It encourages community participation in the development of AI techniques, tools, and scientific theories, and the design and policy recommendations it produces will promote the trustworthiness of AI systems. The first goal of the TRAILS Institute is to discover ways to change the design and development of AI systems so that communities can make informed choices about adopting AI technology. The second goal is to develop best practices for industry and government that foster AI innovation while keeping communities safe, engaged, and informed. The TRAILS Institute has explicit plans for increasing the participation of affected communities, ranging from K-12 students through Congressional staff. These plans will elicit the concerns and expectations of affected communities and provide an improved understanding of the risks and benefits of AI-enabled systems.
The TRAILS Institute's research program comprises four thrusts, each targeting a key aspect of the AI system development lifecycle. The first thrust is Social Values: increasing participation throughout all aspects of AI development, including participatory design with diverse communities, so that the values embodied by AI systems reflect community and interested parties' values. The result is community-based interventions in, and adaptations of, the AI development lifecycle. The second thrust is Technical Design: developing algorithms that promote transparency and trust in AI, including tools that increase the robustness of AI systems and that promote user and developer understanding of how AI systems operate. The third thrust is Socio-Technical Perceptions: developing novel measures, including psychometric techniques and experimental paradigms, to assess the interpretability and explainability of AI systems. These measures will enable a deeper understanding of existing metrics and algorithms and of the values perceived and held by the community members involved. The fourth thrust is Governance: documenting and analyzing governance regimes for both data and technologies, providing the underpinnings for the regulation of AI platforms and technologies. Ethnographers will analyze the institute itself and its partner organizations, documenting the ways in which technical choices translate into governance impacts. The research focuses on two use-inspired areas: information dissemination systems (e.g., social media platforms) and energy-intensive systems (e.g., autonomous systems). The institute's education and workforce development efforts in AI include new educational offerings catering to many markets, ranging from secondary through executive education. The TRAILS Institute is especially focused on expanding access to foundational AI education for historically marginalized and minoritized groups of learners and users. The institute will work with these communities to learn from, educate, and recruit participants, and to retain, support, and empower those marginalized in mainstream AI. Integrating these communities into the research program broadens participation in AI development and governance.
The National Institute of Standards and Technology (NIST) is partnering with NSF to provide funding for this Institute.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.