
NSF Org: | IIS Division of Information & Intelligent Systems |
Recipient: |
Initial Amendment Date: | May 17, 2019 |
Latest Amendment Date: | May 17, 2019 |
Award Number: | 1850117 |
Award Instrument: | Standard Grant |
Program Manager: | Jie Yang jyang@nsf.gov (703)292-4768 IIS Division of Information & Intelligent Systems CSE Directorate for Computer and Information Science and Engineering |
Start Date: | June 1, 2019 |
End Date: | May 31, 2022 (Estimated) |
Total Intended Award Amount: | $175,000.00 |
Total Awarded Amount to Date: | $175,000.00 |
Funds Obligated to Date: |
History of Investigator: |
Recipient Sponsored Research Office: | 321-A INGRAM HALL AUBURN AL US 36849-0001 (334)844-4438 |
Sponsor Congressional District: |
Primary Place of Performance: | 345 W. Magnolia St. Auburn AL US 36849-0001 |
Primary Place of Performance Congressional District: |
Unique Entity Identifier (UEI): |
Parent UEI: |
NSF Program(s): | Robust Intelligence, EPSCoR Co-Funding |
Primary Program Source: |
Program Reference Code(s): |
Program Element Code(s): |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.070 |
ABSTRACT
From autonomous vehicles to cancer detection to speech recognition, artificial intelligence (AI) is transforming many economic sectors. While increasingly ubiquitous, AI algorithms have been shown to easily misbehave when encountering natural, unexpected, previously unseen inputs in the real world. For example, when a car on autopilot failed to recognize a white truck against a brightly lit sky, it crashed into the truck, killing the driver. To avoid such costly and unsafe failures, this project develops a framework for rigorously and automatically testing AI algorithms, specifically computer vision systems, in a 3D environment. In addition, via the framework, the project attempts to uncover why an algorithm makes a given decision. Providing human-understandable explanations for decisions made by machines is crucial for gaining users' trust, advancing AI algorithms, and complying with current and future legal regulations on the use of AI with sensitive human data.
Researchers previously attempted to achieve the two main goals of (1) testing and (2) interpreting computer vision systems by synthesizing a 2D input image that fails a target image recognition model. However, the existing methods operate at the pixel level, generating special patterns that (a) are hard to explain; (b) might not transfer well to the physical world; and (c) may rarely be encountered in reality. Instead of optimizing in the 2D image space, the research objective of this project is to harness 3D graphics engines to create a 3D scene in which the factors of variation (e.g., lighting, object geometry and appearance, background images) can be controlled and optimized to cause a target computer vision system to misbehave. This research effort will (1) reveal systematic defects by automatically testing the target model across many controlled, disentangled settings; and (2) improve existing interpretability methods by incorporating 3D information. The developed methods attempt to provide explanations for the decisions made by computer vision models and create new insights into their inner workings. The project will improve the safety, reliability, and transparency of AI algorithms. This project is jointly funded by the Robust Intelligence (RI) and the Established Program to Stimulate Competitive Research (EPSCoR) programs.
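To make the setup concrete, the following is a minimal sketch of the kind of search described above: sample controllable 3D scene parameters (object pose, lighting), render the scene, and keep the settings that a pretrained classifier mislabels. The `render_scene` function is a hypothetical stand-in for a 3D graphics engine, and a standard torchvision ResNet-50 with the "school bus" class is used only as an illustrative target; this is a sketch of the general idea, not the project's actual implementation.

```python
# Minimal sketch (not the project's code): random search over 3D scene
# parameters (object pose, lighting) for renderings that a pretrained
# image classifier mislabels. `render_scene` is a hypothetical stand-in
# for a 3D graphics engine that returns a rendered PIL image.

import torch
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.IMAGENET1K_V2
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()
true_class = weights.meta["categories"].index("school bus")  # example object

def render_scene(azimuth, elevation, roll, light_dir):
    """Hypothetical renderer: returns a PIL image of the 3D object
    under the given pose and lighting (placeholder only)."""
    raise NotImplementedError

failures = []
for _ in range(1000):
    params = {
        "azimuth": 360.0 * torch.rand(1).item(),
        "elevation": 180.0 * torch.rand(1).item() - 90.0,
        "roll": 360.0 * torch.rand(1).item(),
        "light_dir": torch.randn(3).tolist(),
    }
    image = render_scene(**params)
    with torch.no_grad():
        logits = model(preprocess(image).unsqueeze(0))
    if logits.argmax(dim=1).item() != true_class:
        failures.append(params)  # a pose/lighting setting that fools the model

print(f"{len(failures)} / 1000 sampled scenes were misclassified")
```

Because the failure cases are recorded as interpretable scene parameters rather than pixel perturbations, they can be inspected directly (e.g., which poses or lighting directions the model fails on), which is what distinguishes this setup from pixel-level attacks.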
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
Intellectual Merit. The support of NSF has enabled the PI and his team to make the following main scientific findings:
- Existing computer vision systems can easily misbehave when presented with images of objects in unusual poses. That is, state-of-the-art machines can easily mislabel a school bus lying on its side or a motorcycle with one wheel up in the air.
- With almost no physical constraints, if we randomly place an object inside the camera's view and randomly rotate it, there is only a 3% chance that state-of-the-art machines recognize it correctly (see the sketch after this list).
- Currently, a common approach to addressing this issue is to collect more data (e.g., photos of objects at unusual angles) and train machines on them. Yet, this approach is not guaranteed to work on new unusual poses or new objects.
- While current machines are not accurate in edge cases, it is also not known how to make them reliably and accurately explain their own decisions to users.
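As referenced above, here is a hypothetical sketch of how the random-pose accuracy estimate could be obtained: sample uniformly random 3D rotations and in-view translations, render the object, and report the fraction of renderings the classifier labels correctly. The `render_at_pose` function, the ResNet-50 target model, and the "school bus" label are illustrative assumptions, not the project's actual code.

```python
# Hypothetical sketch: estimate top-1 accuracy of a pretrained classifier
# on an object rendered at uniformly random poses inside the camera view.

import numpy as np
from scipy.spatial.transform import Rotation
import torch
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.IMAGENET1K_V2
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()
true_class = weights.meta["categories"].index("school bus")  # example object

def render_at_pose(rotation_matrix, offset):
    """Placeholder for a 3D graphics engine: returns a PIL image of the
    object posed with the given 3x3 rotation matrix and translation."""
    raise NotImplementedError

rng = np.random.default_rng(0)
n_samples, n_correct = 1000, 0
for _ in range(n_samples):
    rotation = Rotation.random()             # uniformly random 3D rotation
    offset = rng.uniform(-1.0, 1.0, size=3)  # translation keeping the object in view (assumed units)
    image = render_at_pose(rotation.as_matrix(), offset)
    with torch.no_grad():
        pred = model(preprocess(image).unsqueeze(0)).argmax(dim=1).item()
    n_correct += int(pred == true_class)

print(f"Top-1 accuracy over {n_samples} random poses: {n_correct / n_samples:.1%}")
```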
Outcomes covering the entire 2.5 years of the award:
- The award has resulted in 14 publications (both published papers and pre-prints). Formal publication venues included top-tier computer vision and machine learning conferences (CVPR 2019, 2020, 2022; NeurIPS 2021; ACL 2021) and the journal Vision Research.
- As part of the Broader Impacts, the PI has created a new Deep Learning course at Auburn University. The course started out as a Special Topics elective but is now part of the official curriculum of the Computer Science program and attracts students from many disciplines, including Biosystems Engineering, Agriculture, Aerospace Engineering, Mechanical Engineering, and Business Analytics.
- The award has enabled the PI to help 6 Ph.D. students at Auburn University publish their first first-authored papers at top-tier computer vision venues.
- The PI and his team have released 8 open-source repositories (tools and code) to the community.
- We have made 5 educational research videos for the general public covering our papers.
Last Modified: 10/26/2022
Modified by: Anh M Nguyen