
NSF Org: IIS Division of Information & Intelligent Systems
Recipient: University of California, Riverside
Initial Amendment Date: July 17, 2019
Latest Amendment Date: April 14, 2022
Award Number: 1901379
Award Instrument: Continuing Grant
Program Manager: Sylvia Spengler (sspengle@nsf.gov, (703) 292-7347), IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering
Start Date: August 1, 2019
End Date: July 31, 2025 (Estimated)
Total Intended Award Amount: $1,200,000.00
Total Awarded Amount to Date: $1,232,000.00
Funds Obligated to Date: FY 2020 = $615,782.00; FY 2021 = $310,493.00; FY 2022 = $16,000.00
Recipient Sponsored Research Office: 200 UNIVERSITY OFC BUILDING, RIVERSIDE, CA, US 92521, (951) 827-5535
Primary Place of Performance: 900 University Avenue, Riverside, CA, US 92521-0001
NSF Program(s): Info Integration & Informatics
Primary Program Source: 01002021DB NSF RESEARCH & RELATED ACTIVITIES; 01002122DB NSF RESEARCH & RELATED ACTIVITIES; 01002223DB NSF RESEARCH & RELATED ACTIVITIES
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
Many of today's applications provide data in different formats (e.g., text, videos) from different kinds of sources (e.g., tweets, cameras, sensors mounted on robots). Existing work addresses how to search each data source separately, but this misses the hidden connections among the data across sources. As an example, in disaster management, collaborative perception would allow searching for locations where the semantic concepts "fire" or "crowds" may be present by jointly analyzing text and image content from social posts, video from stationary or mobile phone cameras, and the images or videos recorded by UAVs. This project will study how to search jointly across data sources by mapping the information from all of them to a common information space. The project has the potential to increase the utility of social and video surveillance data for tasks that require situational awareness. In the area of disaster response, responders will gain a more integrated and holistic view of the situation, so they can better allocate their resources. The project will capitalize on the student diversity at UC Riverside, a Hispanic-Serving Institution, and thus broaden the participation of under-represented groups in the research process. The project will also strengthen and extend the PIs' ongoing high school and college outreach activities.
The goal of this project is to create the knowledge needed to facilitate effective and efficient collaborative perception over a set of independent, multi-modal data-generating agents. The project will study how to jointly model social and sensor data and use this modeling to efficiently support spatio-temporal queries on the joint embedding space. In addition to mapping information from disparate multi-modal sources to a common information space, the project will study how to optimize the attention routing of controllable agents such as UAVs to maximize the reliability and coverage of the collected information.
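The award page gives no implementation details; purely as a rough illustration, the sketch below shows one way a joint embedding space could support such queries: multi-modal observations are mapped to unit vectors, and a textual concept query is answered by cosine similarity within a spatio-temporal window. The encoder here is a hypothetical random-projection stand-in for a learned multi-modal encoder, and all names (Observation, embed, query) are illustrative, not part of the project.

```python
# Illustrative sketch, not the project's system: observations from
# different modalities are mapped into one vector space, and a concept
# query ("fire") is answered within a spatio-temporal window by cosine
# similarity. The stub encoder only matches identical strings; a real
# system would use learned encoders so that related content aligns.
from dataclasses import dataclass, field
import zlib
import numpy as np

DIM = 64

def embed(content: str) -> np.ndarray:
    """Hypothetical stand-in for a learned joint encoder: a deterministic
    random unit vector keyed by the content string."""
    seed = zlib.crc32(content.encode("utf-8"))
    v = np.random.default_rng(seed).normal(size=DIM)
    return v / np.linalg.norm(v)

@dataclass
class Observation:
    source: str   # "tweet", "cctv", "uav", ...
    content: str  # text snippet or a semantic tag for an image/video
    lat: float
    lon: float
    t: float      # timestamp in seconds
    vec: np.ndarray = field(init=False)

    def __post_init__(self):
        self.vec = embed(self.content)

def query(observations, concept, bbox, t_range, k=3):
    """Rank observations inside a spatio-temporal window by similarity
    to the query concept in the shared embedding space."""
    q = embed(concept)
    (lat0, lat1), (lon0, lon1) = bbox
    hits = [o for o in observations
            if lat0 <= o.lat <= lat1 and lon0 <= o.lon <= lon1
            and t_range[0] <= o.t <= t_range[1]]
    hits.sort(key=lambda o: -float(o.vec @ q))  # unit vectors: dot = cosine
    return hits[:k]

obs = [Observation("tweet", "fire", 33.97, -117.33, 100.0),
       Observation("uav", "crowd", 33.98, -117.32, 120.0)]
top = query(obs, "fire", bbox=((33.9, 34.0), (-117.4, -117.3)),
            t_range=(0.0, 200.0))
print([(o.source, o.content) for o in top])
```

With the stub encoder only the exact string "fire" scores highly; the point is the query pattern (filter by spatio-temporal window, rank by similarity in the shared space), not the encoder itself.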
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
This project studied how to facilitate effective and efficient collaborative perception over a set of independent, multi-modal data-generating agents. The project also studied how and where to best deploy UAVs to augment existing information content and achieve the highest possible collaborative perception across the spatio-temporal window of interest. The team worked on both the accuracy of data representation and retrieval and on scalability to large collections of data, whether static or streaming.
A key innovation is the development of methods to jointly model text, image, video, spatio-temporal, and graph data, which enables effective search over multi-modal data sources such as social media. The project also created methods to query collections of documents using LLMs at minimal computational cost while taking into account the user's interaction history with the query system. Methods to scale the processing of big data were also developed, with a focus on publish-subscribe systems. The project studied how to perform robot planning to facilitate active intelligence collection, and further developed methods for sensor-based autonomous and human-in-the-loop mobile robot navigation to collect multi-modal information for scene understanding. For autonomous navigation, the team introduced both data-driven and classical perceptual planning methods; for human-in-the-loop navigation, it leveraged newly developed vision-language foundation models to enable fluent interaction between the robot and a human user. The resulting algorithms were validated both in simulation and in physical experiments using a range of mobile (wheeled and aerial) robots.
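The report does not state which planning formulation was used for UAV deployment; since the stated objective is maximizing the coverage and reliability of collected information, one standard formulation (an assumption for illustration, not the project's method) is greedy selection of UAV vantage points under a weighted-coverage objective, which is monotone submodular, so greedy selection carries the classic (1 - 1/e) approximation guarantee. The grid model and the names below are hypothetical.

```python
# Illustrative sketch only: choose k UAV vantage points on a grid to
# maximize the total expected information value of covered cells.
# Greedy selection of the site with the largest marginal gain is the
# textbook approach for monotone submodular coverage objectives.
import itertools

def covered(cell, site, radius=1):
    """A cell is covered if it lies within Chebyshev distance `radius`
    of the vantage point (a crude stand-in for a sensing footprint)."""
    return max(abs(cell[0] - site[0]), abs(cell[1] - site[1])) <= radius

def greedy_uav_placement(grid_w, grid_h, value, k, radius=1):
    cells = list(itertools.product(range(grid_w), range(grid_h)))
    chosen, covered_cells = [], set()
    for _ in range(k):
        best_site, best_gain = None, -1.0
        for site in cells:
            # Marginal gain: value of not-yet-covered cells this site adds.
            gain = sum(value.get(c, 0.0)
                       for c in cells
                       if c not in covered_cells and covered(c, site, radius))
            if gain > best_gain:
                best_site, best_gain = site, gain
        chosen.append(best_site)
        covered_cells |= {c for c in cells if covered(c, best_site, radius)}
    return chosen

# Example: high expected information value near a reported "fire" hotspot.
value = {(2, 2): 5.0, (2, 3): 4.0, (3, 2): 4.0, (7, 7): 3.0}
print(greedy_uav_placement(10, 10, value, k=2))
```

Here `value` is a hypothetical per-cell estimate of expected information gain, e.g., derived from the density of matching social posts; the project's actual attention-routing formulation is not described on this page.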
Seven undergraduate students were supported by this project; they were involved in the research process and trained in machine learning and data science principles. Five PhD students graduated. The findings were presented in scientific forums as publications and tutorials. Optimizing the management of big data may cut storage costs and energy consumption in data centers and allow users to manage more data with existing cloud-based hardware infrastructure. The research achievements of this project may increase the utility of social and video surveillance data, specifically for tasks that require situational awareness. In the area of disaster response, responders may gain a more integrated and holistic view of the situation, so they can better allocate their resources.
Last Modified: 09/10/2025
Modified by: Evangelos Christidis