
NSF Org: | OAC Office of Advanced Cyberinfrastructure (OAC) |
Initial Amendment Date: | August 18, 2018 |
Latest Amendment Date: | August 18, 2018 |
Award Number: | 1835473 |
Award Instrument: | Standard Grant |
Program Manager: | Alejandro Suarez, alsuarez@nsf.gov, (703) 292-7092, OAC Office of Advanced Cyberinfrastructure (OAC), CSE Directorate for Computer and Information Science and Engineering |
Start Date: | January 1, 2019 |
End Date: | August 31, 2023 (Estimated) |
Total Intended Award Amount: | $597,955.00 |
Total Awarded Amount to Date: | $597,955.00 |
Recipient Sponsored Research Office: | 2550 NORTHWESTERN AVE # 1100, WEST LAFAYETTE, IN, US 47906-1332, (765) 494-1055 |
Primary Place of Performance: | Young Hall, West Lafayette, IN, US 47907-2114 |
NSF Program(s): | Data Cyberinfrastructure |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.070 |
ABSTRACT
This project creates a science-oriented visual data service that facilitates querying datasets based on visual content. The approach allows a user to search for data by visual similarity, even in cases where the failure or observation of interest does not yet have a scientific name. The visual analysis data and application services will be deployed on a cloud-based platform. The result will be a framework enabling access to, and analysis of, a large amount of imagery from diverse sources.
The research team will create VISER (Visual Structural Expertise Replicator), a comprehensive cloud-based data analytics service that facilitates the use of, and integrates, the data and applications most needed by the user. The framework will implement two novel concepts, data-as-a-service and applications-as-a-service, which bring data and applications to the user without the need to configure software systems or packages. The approach also employs artificial intelligence to interpret the contents of the images: VISER will use convolutional neural networks (CNNs) to train custom classifiers for new categories. Three applications will be developed and deployed within VISER: App1 will extract relevant visual context, App2 will facilitate similarity-based visual searching (through the use of a Siamese CNN), and App3 will perform automatic extraction of pre-event/pre-disaster images from Google Street View. The application of these tools would advance both the science of automated pattern recognition and the development of more effective construction techniques.
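As a rough illustration of the similarity-search idea behind App2, the sketch below shows a minimal Siamese CNN in PyTorch. The ResNet-18 backbone, 128-dimensional embedding, and contrastive loss are assumptions made for the example, not the configuration actually used in VISER.

```python
# Minimal Siamese CNN sketch for similarity-based image search (illustrative only).
# The ResNet-18 backbone, 128-d embedding, and contrastive loss are assumptions,
# not VISER's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class SiameseNet(nn.Module):
    def __init__(self, embedding_dim=128):
        super().__init__()
        backbone = models.resnet18(weights=None)   # pretrained weights optional
        backbone.fc = nn.Identity()                # keep the 512-d pooled features
        self.backbone = backbone
        self.head = nn.Linear(512, embedding_dim)  # project to the embedding space

    def embed(self, x):
        # x: batch of images (N, 3, H, W) -> L2-normalized embeddings (N, D)
        return F.normalize(self.head(self.backbone(x)), dim=1)

    def forward(self, x1, x2):
        # Distance between the embeddings of a pair of images.
        return torch.norm(self.embed(x1) - self.embed(x2), dim=1)

def contrastive_loss(distance, same_label, margin=1.0):
    # same_label = 1 when the two images show the same structure/damage class, else 0.
    pos = same_label * distance.pow(2)
    neg = (1 - same_label) * F.relu(margin - distance).pow(2)
    return (pos + neg).mean()
```

At query time, each image would be embedded once and candidates ranked by distance to the query embedding, so that visually similar photographs surface together even when no shared keyword exists.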
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
Goal: In the aftermath of each natural disaster, researchers travel to the site to collect perishable photographic evidence needed to document the performance of our infrastructure and to learn from that event. In a past project, we established the fundamental knowledge needed to successfully implement image classification methods on unstructured reconnaissance data, organizing these data and accelerating their use for scientific purposes. A researcher-oriented tool, the Automated Reconnaissance Image Organizer (ARIO), was built by leveraging automated image classification in a domain-specific manner to help users automatically classify, sort, and organize their image data.
This project, Integrating Human and Machine for Post-Disaster Visual Data Analytics: A Modern Media-Oriented Approach, extended ARIO from a standalone tool into a comprehensive cloud-based data analytics service, the Visual Structural Expertise Replicator (VISER). In our work, machine learning (ML) is used not to replace the human expert but to augment and support the expert, improving productivity and consistency while reducing tedious and time-consuming tasks. The recent explosion in funding for ML, AI, and computer science has encouraged rapid development and supported a transition from their use solely on idealized "toy" problems to practical implementations. Our establishment of a curated dataset of 140,000 real-world field images from disaster scenes is what enabled the VISER platform to be realized here. VISER includes specialized modules for concrete buildings and bridges.
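As one illustration of how such a curated dataset can be used to train a custom classifier, the sketch below fine-tunes a pretrained CNN on a folder of labeled reconnaissance images. The directory layout, backbone, and hyperparameters are assumptions made for the example; they are not the actual VISER training pipeline.

```python
# Illustrative fine-tuning of a CNN classifier on labeled reconnaissance images.
# Assumed layout: data/train/<class_name>/*.jpg; backbone and hyperparameters
# are assumptions for this sketch, not the VISER configuration.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=tfm)
loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))  # new class head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```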
VISER consists of several components: a web-based post-disaster visual data organization and report generation application; an Infrastructure-as-Code approach for creating secure, extended Layer 2 services that provision bare systems (bare metal or virtual machines) over a wide-area network; and a prototype image classification service that can be deployed on Purdue's community clusters. The fundamental applications developed (for buildings and bridges, under earthquakes and hurricanes) enable: (a) image localization within buildings, to provide important context to the data; (b) similarity search, to merge disparate data from a single structure; and (c) an expanded data schema with added capabilities for organizing, searching, and filtering image data.
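To make the expanded-schema idea concrete, a single reconnaissance image might carry a record like the hypothetical sketch below; every field name here is an illustrative assumption, not the schema actually used in VISER.

```python
# Hypothetical metadata record for one reconnaissance image; field names are
# illustrative assumptions, not the actual VISER schema.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ReconImageRecord:
    image_id: str                            # unique identifier within the collection
    file_path: str                           # storage location of the image
    event: str                               # e.g., "2017 Puebla Earthquake"
    structure_type: Optional[str] = None     # e.g., "concrete building", "bridge"
    damage_labels: List[str] = field(default_factory=list)  # classifier outputs
    latitude: Optional[float] = None
    longitude: Optional[float] = None
    capture_date: Optional[str] = None

# Records like this can be filtered by construction type, damage feature, or
# geographic region when assembling a reconnaissance report.
example = ReconImageRecord(
    image_id="img-000123",
    file_path="collections/puebla2017/img-000123.jpg",
    event="2017 Puebla Earthquake",
    structure_type="concrete building",
    damage_labels=["spalling", "column damage"],
)
```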
VISER provides researchers with the power to exploit vast amounts of data and to rapidly search across construction types, damage features, geographical regions, and more. Engineers will be empowered to design and construct more resilient physical infrastructure using more comprehensive knowledge about structural performance, knowledge that can be incorporated directly into building codes and can influence practice and policy.
Partnerships: Our team's April 2023 workshop, "Envisioning AI in a Decade," engaged more than 30 industry and academic participants in discussing the future opportunities and challenges of using AI in this domain. These discussions revealed important practical lessons. Practicing structural engineers are quite eager to use ML to make their work more efficient, freeing their time and energy for resolving problems. Engineers feel confident about such results when they are given information that helps them understand the rationale behind a classification decision, rather than simply accepting the outcome. Insufficient background and training in AI methods are major barriers to achieving this goal. Thus, now is the time to establish strong partnerships between industry and academia to address these barriers and realize the potential of ML in engineering practice.
Partnering with a team of researchers from the Institute of Engineering at UNAM (Mexico), we integrated their post-earthquake survey experience with our expertise in deep learning technologies. They shared the database they developed after the 2017 Puebla Earthquake with the Institute for Building Safety of Mexico City.
Furthermore, after the devastating Mw 7.8 Kahramanmaraş Earthquake in Turkey, a member of our team traveled to Turkey, joining the ACI 133 Reconnaissance team. During this trip, she visited dozens of buildings and collected over 100 GB of building imagery to support this research.
Impact on the Team: Research conducted by the graduate students at Purdue University included work related to data collection, algorithm development, and algorithm implementation. Through this process, they learned about the latest state of the art in computer vision, building and infrastructure reconnaissance, visual inspection, data organization, technical writing, and the research process.
Impact on the Built Environment and Society:
The methods developed in this project converted unstructured data, in the form of images that would otherwise have been difficult to analyze, into reliable, organized data with well-defined, structural-engineering-oriented categories. These categories, meant to support engineers in their routine decision making, will assist engineers in creating design guidelines and pre-standards grounded in the new knowledge generated from the organized data. Ultimately, these methods will directly influence design codes, leading to safer and more resilient infrastructure that supports our communities and societies.
Last Modified: 09/28/2023
Modified by: Shirley J Dyke