Award Abstract # 1646162
CPS: Synergy: Collaborative Research: Cyber-Physical Sensing, Modeling, and Control with Augmented Reality for Smart Manufacturing Workforce Training and Operations Management

NSF Org: CMMI (Division of Civil, Mechanical, and Manufacturing Innovation)
Recipient: UNIVERSITY OF MISSOURI SYSTEM
Initial Amendment Date: August 7, 2016
Latest Amendment Date: August 26, 2020
Award Number: 1646162
Award Instrument: Standard Grant
Program Manager: Bruce Kramer, CMMI (Division of Civil, Mechanical, and Manufacturing Innovation), ENG (Directorate for Engineering)
Start Date: February 1, 2017
End Date: July 31, 2021 (Estimated)
Total Intended Award Amount: $505,287.00
Total Awarded Amount to Date: $607,287.00
Funds Obligated to Date: FY 2016 = $505,287.00
FY 2018 = $16,000.00
FY 2019 = $16,000.00
FY 2020 = $70,000.00
History of Investigator:
  • Ming Leu (Principal Investigator)
    mleu@mst.edu
  • Ruwen Qin (Former Principal Investigator)
  • Zhaozheng Yin (Former Principal Investigator)
  • Ming Leu (Former Co-Principal Investigator)
  • Ruwen Qin (Former Co-Principal Investigator)
Recipient Sponsored Research Office: Missouri University of Science and Technology
300 W. 12TH STREET
ROLLA
MO  US  65409-1330
(573)341-4134
Sponsor Congressional District: 08
Primary Place of Performance: Missouri University of Science and Technology
300 W 12th Street
Rolla
MO  US  65409-6506
Primary Place of Performance Congressional District: 08
Unique Entity Identifier (UEI): Y6MGH342N169
Parent UEI:
NSF Program(s): CM - Cybermanufacturing System,
AM-Advanced Manufacturing,
Special Initiatives,
CPS-Cyber-Physical Systems
Primary Program Source: 01001617DB NSF RESEARCH & RELATED ACTIVIT
01001819DB NSF RESEARCH & RELATED ACTIVIT
01001920DB NSF RESEARCH & RELATED ACTIVIT
01002021DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 016Z, 091Z, 116E, 152E, 1654, 9150, 9178, 9231, 9251, MANU
Program Element Code(s): 018Y00, 088Y00, 164200, 791800
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.041

ABSTRACT

Smart manufacturing integrates information, technology, and human ingenuity to inspire the next revolution in the manufacturing industry. Manufacturing has been identified as a key strategic investment area by the U.S. government, private sector, and university leaders to spur innovation and keep America competitive. However, the lack of new methodologies and tools is holding back continuous innovation in the smart manufacturing industry. This award supports fundamental research to develop a cyber-physical sensing, modeling, and control infrastructure, coupled with augmented reality, to significantly improve the efficiency of future workforce training, the performance of operations management, and the safety and comfort of workers in smart manufacturing. Results from this research are expected to transform the practice of worker-machine-task coordination and to provide a powerful tool for operations management. This research spans several disciplines, including sensing, data analytics, modeling, control, augmented reality, and workforce training, and will provide unique interdisciplinary training opportunities for students and future manufacturing engineers.

An effective way for manufacturers to tackle and outpace the increasing complexity of product designs and ever-shortening product lifecycles is to effectively develop and assist the workforce. Yet the current management of manufacturing workforce systems relies mostly on traditional methods of data collection and modeling, such as subjective observations and after-the-fact statistics of workforce performance, which have reached a bottleneck in effectiveness. The goal of this project is to investigate an integrated set of cyber-physical system methods and tools to sense, understand, characterize, model, and optimize the learning and operation of manufacturing workers, so as to achieve significantly improved efficiency in worker training, effectiveness of behavioral operations management, and safety of front-line workers. The research team will instrument a suite of sensors to gather real-time data about individual workers, worker-machine interactions, and the working environment, and will develop advanced methods and tools to track and understand workers' actions and physiological status and to detect their knowledge and skill deficiencies or assistance needs in real time. The project will also establish mathematical models that encode the manufacturing process in the project's sensing and analysis framework, characterize the efficiency of worker-machine-task coordination, and model the learning curves of individual workers; investigate multi-modal augmented reality-based visualization, guidance, control, and intervention schemes to improve task efficiency and worker safety; and deploy, test, and conduct comprehensive performance assessments of the researched technologies.
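The abstract above mentions modeling the learning curves of individual workers. As a purely illustrative sketch of what such a model can look like (not a method prescribed by this project), the snippet below fits a classic power-law (Wright) learning curve to a worker's observed assembly cycle times; the function name, starting parameters, and example data are all assumptions.

import numpy as np
from scipy.optimize import curve_fit

def learning_curve(n, t1, b):
    # Wright's power-law learning curve: time to complete the n-th repetition.
    return t1 * n ** (-b)

# Hypothetical observations: repetition index and measured assembly times (minutes).
reps = np.arange(1, 11, dtype=float)
times = np.array([12.0, 10.1, 9.2, 8.6, 8.1, 7.8, 7.5, 7.3, 7.1, 7.0])

# Fit the curve and report the implied learning rate (fraction of cycle time
# retained each time the cumulative output doubles).
(t1, b), _ = curve_fit(learning_curve, reps, times, p0=(12.0, 0.3))
learning_rate = 2 ** (-b)
print(f"T1 = {t1:.2f} min, b = {b:.3f}, learning rate = {learning_rate:.1%}")

A fitted curve of this kind could be compared across workers or re-estimated over time to quantify training progress.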

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

(Showing: 1 - 10 of 21)
Al-Amin, M. and Tao, W. and Doell, D. and Lingard, R. and Yin, Z. and Leu, M.C. and Qin, R. "Action recognition in manufacturing assembly using multimodal sensor fusion" The 25th International Conference on Production Research (ICPR19), 2019
Al-Amin, Md. and Qin, Ruwen and Moniruzzaman, Md and Yin, Zhaozheng and Tao, Wenjin and Leu, Ming C. "An individualized system of skeletal data-based CNN classifiers for action recognition in manufacturing assembly" Journal of Intelligent Manufacturing, 2021. https://doi.org/10.1007/s10845-021-01815-x
Al-Amin, Md. and Qin, Ruwen and Tao, Wenjin and Doell, David and Lingard, Ravon and Yin, Zhaozheng and Leu, Ming C. "Fusing and refining convolutional neural network models for assembly action recognition in smart manufacturing" Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, 2020. https://doi.org/10.1177/0954406220931547
Al-Amin, Md and Qin, Ruwen and Tao, Wenjin and Leu, Ming C. "Sensor Data Based Models for Workforce Management in Smart Manufacturing" Proceedings of the 2018 Institute of Industrial and Systems Engineers Annual Conference (IISE 2018), 2018
Chen, Haodong and Leu, Ming C. and Tao, Wenjin and Yin, Zhaozheng "Design of a Real-Time Human-Robot Collaboration System Using Dynamic Gestures" Proceedings of the ASME 2020 International Mechanical Engineering Congress and Exposition (IMECE 2020), 2020. https://doi.org/10.1115/IMECE2020-23650
Chen, Haodong and Tao, Wenjin and Leu, Ming C. and Yin, Zhaozheng "Dynamic Gesture Design and Recognition for Human-Robot Collaboration With Convolutional Neural Networks" Proceedings of the 2020 International Symposium on Flexible Automation (ISFA 2020), 2020. https://doi.org/10.1115/ISFA2020-9609
Jiang, Wenchao and Yin, Zhaozheng "Combining passive visual cameras and active IMU sensors for persistent pedestrian tracking" Journal of Visual Communication and Image Representation, v.48, 2017. https://doi.org/10.1016/j.jvcir.2017.03.015
Jiang, Wenchao and Yin, Zhaozheng "Indoor localization with a signal tree" Multimedia Tools and Applications, v.76, 2017. https://doi.org/10.1007/s11042-017-4779-6
Karim, M.M. and Doell, D. and Lingard, R. and Yin, Z. and Leu, M.C. and Qin, R. "A region-based deep learning algorithm for detecting and tracking objects in manufacturing plants" The 25th International Conference on Production Research (ICPR19), 2019
Lai, Ze-Hao and Tao, Wenjin and Leu, Ming C. and Yin, Zhaozheng "Smart augmented reality instructional system for mechanical assembly towards worker-centered intelligent manufacturing" Journal of Manufacturing Systems, v.55, 2020. https://doi.org/10.1016/j.jmsy.2020.02.010
Moniruzzaman, M and Yin, Z and He, Z and Qin, R and Leu, M "Action Completeness Modeling with Background Aware Networks for Weakly-Supervised Temporal Action Localization" Proceedings of the ACM Multimedia Conference 2020, 2020

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

This project was aimed at creating and developing an integrated set of cyber-physical methods and tools to sense, understand, characterize, model, and optimize the learning and operations of assembly workers, so as to achieve smart manufacturing with significantly improved efficiency of worker training, effectiveness of operations management, and safety of front-line workers.

The project's outcomes in terms of intellectual merit include the following:

  • We created a foundation for building multimodal sensor-based action recognition systems by fusing and refining convolutional neural network (CNN) models. Based on this foundation, we developed a prototype multimodal action recognition system and demonstrated its use for worker activity recognition in human-centered mechanical assembly, using data from an inertial measurement unit (IMU) and a video camera (a minimal fusion sketch follows this list).
  • We developed a smart instructional system incorporating augmented reality, supported by a deep learning network for detecting tools, parts, and worker activities in manual assembly. We demonstrated and evaluated this smart instructional system on the assembly of a CNC carving machine by a human worker.
  • We developed a fog computing approach that brings computing power closer to the data source than cloud computing does, in order to achieve real-time recognition of workers' assembly actions, and based on this approach we demonstrated that a transfer learning model can achieve high recognition accuracy.
  • We created a novel video-based human action recognition network that integrates discriminative feature pooling with a video segment attention model. This action recognition network has been shown to outperform state-of-the-art action recognition networks when evaluated on four widely used benchmark datasets.
  • We created a method for developing an individualized system of convolutional neural networks for assembly action recognition using human skeletal data. The system comprises CNN classifiers adapted to each new worker through transfer learning and iterative boosting, followed by an individualized fusion method that integrates the adapted classifiers into a real-time action recognition system. This individualized system improves recognition accuracy compared to CNN classifiers built directly from the skeletal data.
  • We introduced a context and structure mining network for video object detection. This network includes an encoding module to encode the spatial-temporal context information in video frames into object features, and an aggregation module to better aggregate structure-based features with temporal information in support frames.
  • We introduced a class-aware feature aggregation network for video object detection by pushing video object detection to the edge, and showed that this network achieves state-of-the-art performance on the commonly used ImageNet VID dataset without any post-processing methods.
  • We introduced a convolutional neural network that embeds a novel discriminative feature pooling mechanism and a novel video segment attention model, for video-based human action recognition from both trimmed and untrimmed videos. Based on this method, we developed an action recognition network and demonstrated that this network can be trained using both trimmed videos in a fully supervised way and untrimmed videos in a weakly supervised way.
  • We introduced a novel method of weakly-supervised Action Completeness Modeling with Background Aware Networks (ACM-BANets) to address two main challenges in smart manufacturing: (1) how to design and train a weakly-supervised network that suppresses both highly discriminative and ambiguous frames in order to remove false positives, and (2) how to design a temporal action localization framework that discovers action instances in both highly discriminative and ambiguous action frames for complete localization.
  • We explored the unique characteristics of human trajectories and introduced a new approach, called reciprocal network learning, for human trajectory prediction. Extensive experimental results obtained using this approach on public benchmark datasets showed that this method outperforms the state-of-the-art human trajectory prediction methods.
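
To make the multimodal fusion idea in the first bullet above more concrete, here is a minimal sketch of late fusion of per-modality classifier scores, for example from an IMU-based and a video-based action classifier. The action labels, fusion weights, and logits are hypothetical illustrations and do not reproduce the published models.

import numpy as np

ACTIONS = ["pick_part", "place_part", "fasten_screw", "inspect"]  # hypothetical labels

def softmax(logits):
    # Convert raw classifier scores into a probability distribution.
    e = np.exp(logits - logits.max())
    return e / e.sum()

def fuse_scores(imu_logits, video_logits, w_imu=0.4, w_video=0.6):
    # Weighted late fusion of per-modality class probabilities.
    return w_imu * softmax(imu_logits) + w_video * softmax(video_logits)

# Example: logits that two trained classifiers might emit for one time window.
imu_logits = np.array([2.1, 0.3, 1.7, -0.5])
video_logits = np.array([1.2, 0.1, 2.9, 0.0])
fused = fuse_scores(imu_logits, video_logits)
print("Predicted action:", ACTIONS[int(fused.argmax())])

In practice the per-modality weights would be chosen or learned from validation data, and the fused scores would feed the real-time recognition pipeline.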

The project's outcomes in terms of broader impacts include the following:

  • This project has contributed to the smart manufacturing literature through the publication of 11 journal papers, 11 peer-reviewed conference papers, and 1 book chapter. The research results, including the developed frameworks, methods, algorithms, and tools for multimodal sensing, data analytics, deep learning, predictive modeling, and augmented reality, have contributed significantly to the field of smart manufacturing.
  • Three senior investigators, including one female, were involved in this collaborative research project, which provided research training opportunities for 10 Ph.D. students, 1 M.S. student, and 14 undergraduate students over the project duration. The faculty and students involved were from multiple disciplines, and all project personnel gained valuable experience in teamwork and convergent research.
  • The project has improved the research infrastructure of the participating universities, with laboratories built that enable further research on manufacturing cyber-physical systems involving multimodal sensor fusion, deep learning algorithms for data analytics, and development of augmented reality assistive systems for human-centered intelligent manufacturing.  

 


Last Modified: 01/03/2022
Modified by: Ming C Leu

