
NSF Org: | CMMI Division of Civil, Mechanical, and Manufacturing Innovation |
Recipient: | |
Initial Amendment Date: | August 7, 2016 |
Latest Amendment Date: | August 26, 2020 |
Award Number: | 1646162 |
Award Instrument: | Standard Grant |
Program Manager: | Bruce Kramer, CMMI Division of Civil, Mechanical, and Manufacturing Innovation, ENG Directorate for Engineering |
Start Date: | February 1, 2017 |
End Date: | July 31, 2021 (Estimated) |
Total Intended Award Amount: | $505,287.00 |
Total Awarded Amount to Date: | $607,287.00 |
Funds Obligated to Date: | FY 2018 = $16,000.00; FY 2019 = $16,000.00; FY 2020 = $70,000.00 |
History of Investigator: | |
Recipient Sponsored Research Office: | 300 W. 12th Street, Rolla, MO, US 65409-1330, (573) 341-4134 |
Sponsor Congressional District: | |
Primary Place of Performance: | 300 W. 12th Street, Rolla, MO, US 65409-6506 |
Primary Place of Performance Congressional District: | |
Unique Entity Identifier (UEI): | |
Parent UEI: | |
NSF Program(s): | CM - Cybermanufacturing System; AM - Advanced Manufacturing; Special Initiatives; CPS - Cyber-Physical Systems |
Primary Program Source: | 01001819DB NSF RESEARCH & RELATED ACTIVIT; 01001920DB NSF RESEARCH & RELATED ACTIVIT; 01002021DB NSF RESEARCH & RELATED ACTIVIT |
Program Reference Code(s): | |
Program Element Code(s): | |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.041 |
ABSTRACT
Smart manufacturing integrates information, technology, and human ingenuity to inspire the next revolution in the manufacturing industry. Manufacturing has been identified as a key strategic investment area by the U.S. government, private sector, and university leaders to spur innovation and keep America competitive. However, the lack of new methodologies and tools challenges continuous innovation in the smart manufacturing industry. This award supports fundamental research to develop a cyber-physical sensing, modeling, and control infrastructure, coupled with augmented reality, to significantly improve the efficiency of future workforce training, the performance of operations management, and the safety and comfort of workers in smart manufacturing. Results from this research are expected to transform the practice of worker-machine-task coordination and provide a powerful tool for operations management. This research involves several disciplines, including sensing, data analytics, modeling, control, augmented reality, and workforce training, and will provide unique interdisciplinary training opportunities for students and future manufacturing engineers.
An effective way for manufacturers to tackle and outpace the increasing complexity of product designs and ever-shortening product lifecycles is to effectively develop and assist the workforce. Yet the current management of manufacturing workforce systems relies mostly on traditional methods of data collection and modeling, such as subjective observations and after-the-fact statistics of workforce performance, which have reached a bottleneck in effectiveness. The goal of this project is to investigate an integrated set of cyber-physical system methods and tools to sense, understand, characterize, model, and optimize the learning and operation of manufacturing workers, so as to achieve significantly improved efficiency in worker training, effectiveness of behavioral operations management, and safety of front-line workers. The research team will instrument a suite of sensors to gather real-time data about individual workers, worker-machine interactions, and the working environment; develop advanced methods and tools to track and understand workers' actions and physiological status; and detect their knowledge and skill deficiencies or assistance needs in real time. The project will also establish mathematical models that encode the manufacturing process in the sensing and analysis framework; characterize the efficiency of worker-machine-task coordination; model the learning curves of individual workers; investigate multi-modal augmented reality-based visualization, guidance, control, and intervention schemes to improve task efficiency and worker safety; and deploy, test, and conduct comprehensive performance assessments of the researched technologies.
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
This project aimed to develop an integrated set of cyber-physical methods and tools to sense, understand, characterize, model, and optimize the learning and operations of assembly workers, so as to achieve smart manufacturing with significantly improved efficiency of worker training, effectiveness of operations management, and safety of front-line workers.
The project's outcomes in terms of intellectual merit include the following:
- We created a foundation for building multimodal sensor-based action recognition systems by fusing and refining convolutional neural network models. Based on this foundation, we developed a prototype multimodal sensor-based action recognition system and demonstrated it for worker activity recognition in human-centered mechanical assembly, using data from an inertial measurement unit and a video camera (a late-fusion sketch follows this list).
- We developed a smart instructional system incorporating augmented reality, supported by a deep learning network for detection of tools, parts, and worker activities in manual assembly. We demonstrated and evaluated this smart instructional system on the assembly of a CNC carving machine by a human worker.
- We developed a fog computing approach that brings computing power closer to the data source than cloud computing does, in order to achieve real-time recognition of worker assembly actions; based on this approach, we demonstrated a transfer learning model's ability to achieve high recognition accuracy.
- We created a novel video-based human action recognition network which integrates discriminative feature pooling with a video segment attention model. This action recognition network has been shown to outperform the state-of-the-art action recognition networks when evaluated on four widely benchmarked datasets.
- We created a method to develop an individualized system of convolutional neural networks (CNNs) for assembly action recognition using human skeletal data. The system comprises CNN classifiers adapted to any new worker through transfer learning and iterative boosting, followed by an individualized fusion method that integrates the adapted classifiers into a real-time action recognition system (an adaptation sketch follows this list). This individualized system improves the accuracy of action recognition compared to CNN classifiers built directly on the skeletal data.
- We introduced a context and structure mining network for video object detection. This network includes an encoding module to encode the spatial-temporal context information in video frames into object features, and an aggregation module to better aggregate structure-based features with temporal information in support frames.
- We introduced a class-aware feature aggregation network for video object detection that moves video object detection to the edge, and showed that this network achieves state-of-the-art performance on the commonly used ImageNet VID dataset without any post-processing methods.
- We introduced a convolutional neural network that embeds a novel discriminative feature pooling mechanism and a novel video segment attention model for video-based human action recognition from both trimmed and untrimmed videos. Based on this method, we developed an action recognition network and demonstrated that it can be trained with trimmed videos in a fully supervised way and with untrimmed videos in a weakly supervised way.
- We introduced a novel method of weakly supervised Action Completeness Modeling with Background Aware Networks (ACM-BANets) to address two main challenges in smart manufacturing: (1) how to design and train a weakly supervised network that suppresses both highly discriminative and ambiguous frames in order to remove false positives, and (2) how to design a temporal action localization framework that discovers action instances in both highly discriminative and ambiguous action frames for complete localization.
- We explored the unique characteristics of human trajectories and introduced a new approach, called reciprocal network learning, for human trajectory prediction. Extensive experiments on public benchmark datasets showed that this approach outperforms state-of-the-art human trajectory prediction methods.
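The late-fusion idea referenced in the first bullet above can be illustrated with a minimal sketch: two pretrained classifiers, one per modality, each produce class probabilities, and the system combines them with a convex weight. The project's actual fusion and refinement scheme is not specified here; all names in this sketch (LateFusionRecognizer, imu_weight, the stand-in nn.Linear "networks") are hypothetical.

```python
# Minimal sketch of score-level (late) fusion for multimodal
# action recognition, assuming an IMU classifier and a video
# classifier that each map their input to class logits.
import torch
import torch.nn as nn

class LateFusionRecognizer(nn.Module):
    """Combine class probabilities from an IMU model and a video model."""

    def __init__(self, imu_net: nn.Module, video_net: nn.Module,
                 imu_weight: float = 0.5):
        super().__init__()
        self.imu_net = imu_net        # IMU window -> class logits
        self.video_net = video_net    # video clip -> class logits
        self.imu_weight = imu_weight  # relative trust in the IMU stream

    def forward(self, imu_window: torch.Tensor,
                video_clip: torch.Tensor) -> torch.Tensor:
        # Convert each modality's logits to probabilities, then take
        # a convex combination of the two probability vectors.
        p_imu = torch.softmax(self.imu_net(imu_window), dim=-1)
        p_video = torch.softmax(self.video_net(video_clip), dim=-1)
        return self.imu_weight * p_imu + (1.0 - self.imu_weight) * p_video

# Toy usage with linear stand-ins for the two pretrained models
# (10 assembly-action classes; feature sizes are arbitrary):
fusion = LateFusionRecognizer(nn.Linear(64, 10), nn.Linear(512, 10))
probs = fusion(torch.randn(1, 64), torch.randn(1, 512))
action = probs.argmax(dim=-1)  # predicted worker action index
```

The predicted action is simply the argmax of the fused probability vector; the weight controls how much each sensing modality is trusted.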
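The per-worker adaptation referenced in the individualized-CNN bullet can likewise be sketched as standard transfer learning: freeze the pretrained backbone and fine-tune only the classification head on the new worker's skeletal data. The project's iterative-boosting step and individualized fusion method are omitted; all names here (adapt_to_new_worker, head_name, loader) are illustrative, not from the project code.

```python
# Minimal sketch of adapting a pretrained action classifier to a
# new worker by transfer learning.
import torch
import torch.nn as nn

def adapt_to_new_worker(pretrained: nn.Module, head_name: str,
                        loader, epochs: int = 5, lr: float = 1e-3) -> nn.Module:
    # Freeze all pretrained weights ...
    for p in pretrained.parameters():
        p.requires_grad = False
    # ... then unfreeze only the classification head so it can
    # specialize to the new worker's movement patterns.
    head = getattr(pretrained, head_name)
    for p in head.parameters():
        p.requires_grad = True

    optimizer = torch.optim.Adam(head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    pretrained.train()
    for _ in range(epochs):
        for skeleton_batch, action_labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(pretrained(skeleton_batch), action_labels)
            loss.backward()
            optimizer.step()
    return pretrained
```

Fine-tuning only the head needs far less per-worker data than retraining the full network, which is what makes rapid individualization practical.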
The project's outcomes in terms of broader impacts include the following:
- This project has contributed to the smart manufacturing literature through the publication of 11 journal papers, 11 peer-reviewed conference papers, and 1 book chapter. The research results, including the developed frameworks, methods, algorithms, and tools for multimodal sensing, data analytics, deep learning, predictive modeling, and augmented reality, have contributed significantly to the field of smart manufacturing.
- Three senior investigators, including one female, were involved in this collaborative research project, which provided research training opportunities for 10 Ph.D. students, 1 M.S. student, and 14 undergraduate students over the project duration. The involved faculty and students were from multiple disciplines, and all the project personnel gained valuable experience in teamwork and convergent research.
- The project has improved the research infrastructure of the participating universities, with laboratories built that enable further research on manufacturing cyber-physical systems involving multimodal sensor fusion, deep learning algorithms for data analytics, and development of augmented reality assistive systems for human-centered intelligent manufacturing.
Last Modified: 01/03/2022
Modified by: Ming C Leu