
NSF Org: IIS Division of Information & Intelligent Systems
Recipient:
Initial Amendment Date: March 16, 2018
Latest Amendment Date: April 19, 2021
Award Number: 1751206
Award Instrument: Continuing Grant
Program Manager: Jie Yang, jyang@nsf.gov, (703) 292-4768, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering
Start Date: April 1, 2018
End Date: October 31, 2021 (Estimated)
Total Intended Award Amount: $500,499.00
Total Awarded Amount to Date: $470,120.00
Funds Obligated to Date: FY 2019 = $77,847.00; FY 2020 = $0.00; FY 2021 = $0.00
History of Investigator:
Recipient Sponsored Research Office: 1850 RESEARCH PARK DR STE 300, DAVIS, CA 95618-6153, US, (530) 754-7700
Sponsor Congressional District:
Primary Place of Performance: 2063 Kemper Hall, Davis, CA 95616-5270, US
Primary Place of Performance Congressional District:
Unique Entity Identifier (UEI):
Parent UEI:
NSF Program(s): Robust Intelligence
Primary Program Source: 01002021DB NSF RESEARCH & RELATED ACTIVIT; 01002223DB NSF RESEARCH & RELATED ACTIVIT; 01001819DB NSF RESEARCH & RELATED ACTIVIT; 01002122DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s):
Program Element Code(s):
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
The internet provides an endless supply of images and videos, replete with weakly-annotated metadata such as text tags, GPS coordinates, timestamps, or social media sentiments. This huge resource of visual data provides an opportunity to create scalable and powerful recognition algorithms that do not depend on expensive human annotations. The research component of this project develops novel visual scene understanding algorithms that can effectively learn from such weakly-annotated visual data. The main novelty is to learn jointly from images and videos. The developed algorithms could have broad impact in numerous fields, including AI, security, and the agricultural sciences. In addition to its scientific impact, the project performs complementary educational and outreach activities. Specifically, it provides mentorship to high school, undergraduate, and graduate students; introduces new undergraduate and graduate computer vision courses that UC Davis has previously lacked; and organizes an international workshop on weakly-supervised visual scene understanding.
This project develops novel algorithms to advance weakly-supervised visual scene understanding in two complementary ways: (1) learning jointly with both images and videos to take advantage of their complementarity, and (2) learning from weak supervisory signals that go beyond standard semantic tags such as timestamps, captions, and relative comparisons. Specifically, it investigates novel approaches to advance tasks like fully-automatic video object segmentation, weakly-supervised object detection, unsupervised learning of object categories, and mining of localized patterns in the image/video data that are correlated with the weak supervisory signal. Throughout, the project explores ways to understand and mitigate noise in the weak labels and to overcome the domain differences between images and videos.
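As a rough illustration of the "weakly-supervised object detection" task named above, the sketch below trains a per-region scorer from image-level tags alone, in the style of multiple-instance learning. It is a minimal sketch under stated assumptions, not the project's actual algorithm: the module names, feature dimension, proposal count, and class count are all illustrative, and in practice the region features would come from a pretrained backbone applied to object proposals.

```python
# Minimal multiple-instance-learning (MIL) sketch: only an image-level class tag
# supervises per-region scores; no bounding boxes are used. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MILDetector(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.cls_head = nn.Linear(feat_dim, num_classes)  # per-region class scores
        self.det_head = nn.Linear(feat_dim, num_classes)  # per-region "objectness" scores

    def forward(self, region_feats: torch.Tensor) -> torch.Tensor:
        # region_feats: (num_regions, feat_dim) for a single image
        cls = F.softmax(self.cls_head(region_feats), dim=1)  # regions compete over classes
        det = F.softmax(self.det_head(region_feats), dim=0)  # classes compete over regions
        region_scores = cls * det                            # (num_regions, num_classes)
        # Aggregate regions into one image-level prediction so the weak
        # image-level tag can indirectly supervise which regions matter.
        return region_scores.sum(dim=0).clamp(1e-6, 1.0 - 1e-6)

model = MILDetector(feat_dim=512, num_classes=20)
region_feats = torch.randn(30, 512)                 # e.g., features of 30 region proposals
image_tags = torch.zeros(20); image_tags[3] = 1.0   # weak label: only "class 3 is present"
image_pred = model(region_feats)
loss = F.binary_cross_entropy(image_pred, image_tags)
loss.backward()  # gradients reach the region heads without any box-level labels
```

The same image-level aggregation idea carries over to video frames, where the weak signal might instead be a timestamp or a caption rather than a class tag.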
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH