
NSF Org: | CCF Division of Computing and Communication Foundations |
Recipient: | |
Initial Amendment Date: | July 18, 2022 |
Latest Amendment Date: | July 18, 2022 |
Award Number: | 2231543 |
Award Instrument: | Standard Grant |
Program Manager: | Pavithra Prabhakar, CCF Division of Computing and Communication Foundations, CSE Directorate for Computer and Information Science and Engineering |
Start Date: | August 1, 2022 |
End Date: | July 31, 2023 (Estimated) |
Total Intended Award Amount: | $49,058.00 |
Total Awarded Amount to Date: | $49,058.00 |
Funds Obligated to Date: | |
History of Investigator: | |
Recipient Sponsored Research Office: | 110 21ST AVE S, NASHVILLE, TN, US 37203-2416, (615) 322-2631 |
Sponsor Congressional District: | |
Primary Place of Performance: | 1025 16th Avenue South, Suite 102, Nashville, TN, US 37212-2328 |
Primary Place of Performance Congressional District: | |
Unique Entity Identifier (UEI): | |
Parent UEI: | |
NSF Program(s): | Software & Hardware Foundation |
Primary Program Source: | |
Program Reference Code(s): | |
Program Element Code(s): | |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.070 |
ABSTRACT
The aim of this workshop is to identify emerging issues, challenges, basic research questions, and potential approaches to study and address safety and trust in AI-enabled systems across application domains. The workshop will bring together researchers from academia, industry, and government research labs working in the areas of artificial intelligence and formal methods, and in application domains such as autonomous systems, business and finance, and education. The main deliverable of the workshop is a report summarizing its discussions and findings, which will be made publicly available.
Fostering collaborations and the exchange of ideas between the formal methods, AI/ML, and broader research communities and stakeholders is critical and may lead to advances that improve both AI/ML and formal methods, as well as address broader concerns about the safety and trust of these AI-enabled systems across science, engineering, and society. The workshop will particularly encourage participation and perspectives from persons coming from underrepresented groups.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
The aim of the 2022 NSF Workshop on Safety and Trust in Artificial Intelligence (SafeTAI) Enabled Systems was to identify emerging issues, challenges, basic research questions, and potential approaches to study and address safety and trust in AI-enabled systems across application domains. The workshop was held virtually on September 22-23, 2022. Its program included keynotes from DARPA I2O Director Kathleen Fisher and Mozilla Fellow Deborah Raji, along with a series of three breakout sessions in which attendees discussed safety and trust issues in AI-enabled systems and identified potential research challenges and directions. The workshop brought together researchers from academia, industry, and government research labs working in the areas of artificial intelligence, machine learning (ML), formal methods, and beyond, as well as in application domains such as autonomous systems, business and finance, and education. The identification of research directions and challenges around safety and trust is the primary intellectual merit contribution of the workshop; these were distilled in a report provided to NSF. With respect to broader impacts, fostering collaborations and the exchange of ideas between the formal methods, AI/ML, and broader research communities and stakeholders is critical and may lead to advances that improve both AI/ML and formal methods, as well as address broader concerns about the safety and trust of these AI-enabled systems across science, engineering, and society. The workshop involved a diverse group of researchers at varying career stages, all of whom were able to provide input on these important research directions.