Award Abstract # 1531065
US Ignite: Collaborative Research: Track 1: Industrial Cloud Robotics across Software Defined Networks

NSF Org: CNS (Division Of Computer and Network Systems)
Recipient: RECTOR & VISITORS OF THE UNIVERSITY OF VIRGINIA
Initial Amendment Date: August 24, 2015
Latest Amendment Date: August 24, 2015
Award Number: 1531065
Award Instrument: Standard Grant
Program Manager: Bruce Kramer
  CNS Division Of Computer and Network Systems
  CSE Directorate for Computer and Information Science and Engineering
Start Date: September 1, 2015
End Date: August 31, 2019 (Estimated)
Total Intended Award Amount: $424,261.00
Total Awarded Amount to Date: $424,261.00
Funds Obligated to Date: FY 2015 = $424,261.00
History of Investigator:
  • Malathi Veeraraghavan (Principal Investigator)
    mv5g@virginia.edu
  • Shaun Edwards (Co-Principal Investigator)
Recipient Sponsored Research Office: University of Virginia Main Campus
1001 EMMET ST N
CHARLOTTESVILLE
VA  US  22903-4833
(434)924-4270
Sponsor Congressional District: 05
Primary Place of Performance: University of Virginia
POB 400743, Thornton Hall C222
CHARLOTTESVILLE
VA  US  22904-4743
Primary Place of Performance Congressional District: 05
Unique Entity Identifier (UEI): JJG6HU8PA4S5
Parent UEI:
NSF Program(s): CISE Research Resources
Primary Program Source: 01001516DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 015Z, 082E, 6840
Program Element Code(s): 289000
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Currently, industrial robots are cost-effective for repetitive, high-volume tasks such as welding and painting, but not for lower-volume, mixed-part production. The need for robotic part handling in unstructured industrial applications is diverse. In manufactured-goods distribution centers, where multiple bins are presented to an operator, a human is required to handle a range of parts that must be boxed and shipped. In the reclamation and recycling industry, humans sort waste streams of mixed products on conveyor belts. Assembly and kitting operations in manufacturing are considered opportunities for robotics, but they require a solution for handling many part types in the same work cell. This project will research and integrate technologies to enable the use of industrial robots for low-volume, mixed-part production tasks. The proposed solution will include 3D image sensors, high-speed flexible networking, cloud computing, and industrial robots. The inclusion of cutting-edge software such as the Robot Operating System Industrial (ROS-I) and cloud computing platforms offers excellent educational opportunities for both undergraduate and graduate students. The software developed in this project will be widely distributed to enable further innovations by other teams.

The project objective is to develop cloud robotics applications that leverage high-performance computing and high-speed software-defined networks (SDN). Specifically, the target applications combine big-data analytics of sensor data (of the type collected from factory floors) with the control of industrial robots for low-volume, mixed-part production tasks. Cloud computers located at a facility remote from the factory floor on which the industrial robots operate can be used for compute-intensive applications such as object identification from 3D sensor data and grasp planning for the robots to perform object manipulation. The project methods will consist of (i) integrating ROS-I components and developing new software as required to transmit the 3D sensor data to remote computers, run the object-identification and grasp-planning applications, and return robot instructions to the original site, (ii) running this software on geographically distributed compute clouds, and (iii) collecting measurements and enhancing the software to meet real-time delay requirements. The technical challenge lies in meeting these stringent real-time requirements. For example, high-speed networks with the flexibility to connect arbitrary factory floors and datacenters are needed to transfer the 3D sensor data quickly to the remote cloud computers and to deliver the computed robot instructions (hence the use of SDN).
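The real-time constraint described above can be framed as a simple end-to-end delay budget for the cloud round trip. The sketch below is purely illustrative; all function names and any numbers used with them are assumptions, not measurements or interfaces from the project.

```python
# Illustrative end-to-end delay budget for the cloud round trip:
# sensor data up, computation in the cloud, robot instructions back.
# Every value passed to these functions is an assumed placeholder,
# not a measurement from this project.

def transfer_time_s(data_bytes, link_bps):
    """Serialization time for data_bytes over a link of link_bps."""
    return data_bytes * 8 / link_bps

def round_trip_s(sensor_bytes, instr_bytes, link_bps, rtt_s, compute_s):
    """Sensor upload + propagation + cloud computation + instruction download."""
    return (transfer_time_s(sensor_bytes, link_bps)
            + transfer_time_s(instr_bytes, link_bps)
            + rtt_s
            + compute_s)
```

On such a budget, the transfer terms dominate unless the network is fast, which is why a flexible high-speed path between factory floor and datacenter matters.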

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


F. Alali and M. Veeraraghavan, "A cross-layer design for large transfers in SDNs," 2016 Eighth International Conference on Ubiquitous and Future Networks (ICUFN), 2016.
Lianjun Li, Yizhe Zhang, Michael Ripperger, Jorge Nicho, Malathi Veeraraghavan and Andrea Fumagalli, "Autonomous Object Pick-and-Sort Procedure for Industrial Robotics Application," International Journal of Semantic Computing (IJSC), v.13, 2019.
Reza Rahimi, M. Veeraraghavan, Y. Nakajima, H. Takahashi, S. Okamoto and N. Yamanaka, "A High-Performance OpenFlow Software Switch," IEEE HPSR, 2016.
R. Rahimi, C. Shao, M. Veeraraghavan, A. Fumagalli, J. Nicho, J. Meyer, S. Edwards, C. Flannigan and P. Evans, "An Industrial Robotics Application with Cloud Computing and High-Speed Networking," 2017 First IEEE International Conference on Robotic Computing (IRC), 2017.
Xiaoyu Wang, Xiao Lin, Weiqiang Sun and Malathi Veeraraghavan, "Comparison of Two Sharing Modes for a Proposed Optical Enterprise-Access SDN Architecture," 2018 28th International Telecommunication Networks and Applications Conference (ITNAC), Sydney, Australia, 2018.
Yizhe Zhang, Lianjun Li, Jorge Nicho, Michael Ripperger, Andrea Fumagalli and Malathi Veeraraghavan, "Gilbreth 2.0: An Industrial Cloud Robotics Pick-and-Sort Application," Third IEEE International Conference on Robotic Computing (IRC), Naples, Italy, pp. 38-45, 2019.
Yizhe Zhang, Lianjun Li, Michael Ripperger, Jorge Nicho, Malathi Veeraraghavan and Andrea Fumagalli, "Gilbreth: A Conveyor-Belt Based Pick-and-Sort Industrial Robotics Application," IEEE International Conference on Robotic Computing (IRC), 2018.

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

A visitor to a typical manufacturing plant is likely to see robots used for high-volume welding or painting of a low mix (few types) of products. This is because it is feasible for a manufacturer to pre-program robots for a highly repetitive operation. But visitors to a typical assembly plant (automotive or aerospace), where a very large variety of parts are handled, or to a small- or medium-sized manufacturer that produces high-mix, low-volume goods, are not very likely to see robots. This is because robots do not currently have the capability to recognize a wide variety of parts and to plan how to grasp and manipulate them.

This project developed a high-mix low-volume industrial cloud robotics application called Gilbreth that leveraged cloud computing resources. The goal of this application was to enable a robot arm to pick and sort random objects arriving on a conveyor belt.

Different types of objects (industrial parts such as gears and piston rods) arrive at random, in arbitrary position and orientation (referred to as “pose”), on a conveyor belt. A UR10 robot arm is mounted on a rail set up parallel to the conveyor belt. A Kinect sensor continuously captures 3D images of objects arriving on the belt, while a break-beam sensor is triggered every time an object crosses the beam (i.e., enters the workspace of the robot arm). While the belt moves the object from the location of the break-beam sensor to the position where the UR10 robot arm waits to pick it up, the sensor data is used by the object-recognition software to identify the object type and by the motion-planning algorithm to compute trajectories for all 7 joints of the robot arm. These trajectories are used to move the arm and its vacuum gripper between various poses, such as the pick pose (to pick up the object), the place pose (to drop the object in the appropriate bin), and the home pose (where the robot arm waits for the next object).
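The timing-critical cycle above can be sketched as a simple event handler: the belt travel time from the break beam to the pick position is the budget within which recognition and planning must finish. This is a minimal sketch in plain Python; the belt speed, distance, and the recognize/plan callables are hypothetical stand-ins, not the actual Gilbreth ROS interfaces.

```python
# Minimal sketch of one Gilbreth pick-and-sort cycle. All names and numbers
# here are hypothetical stand-ins; the real application uses ROS-I nodes,
# a Kinect point cloud, and MoveIt! trajectory planning.

BELT_SPEED_M_S = 0.2   # assumed conveyor speed
BEAM_TO_PICK_M = 1.0   # assumed distance from break beam to pick position

def time_budget():
    """Time available for recognition + planning while the object travels
    from the break-beam sensor to the robot's pick position."""
    return BEAM_TO_PICK_M / BELT_SPEED_M_S

def handle_object(cloud, recognize, plan):
    """One cycle: identify the object from its 3D data, then plan the
    pick -> place -> home sequence of poses."""
    object_type = recognize(cloud)                    # 3D object recognition
    poses = ["pick", f"place_bin_{object_type}", "home"]
    return [plan(p) for p in poses]                   # one trajectory per pose
```

With the assumed numbers, the budget works out to 5 seconds per object, which is why both recognition and 7-joint trajectory computation must stay fast.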

The two challenging operations in this application are object recognition and motion planning. A machine-learning-based 3D object recognition algorithm provided the best performance, offering a ten-fold improvement over a simpler algorithm in which one-to-one correspondences are checked between stored model information and the information captured from a newly arriving object. However, the machine-learning method requires significant training time (e.g., 3 hours for just 13 object types). The availability of cloud computing resources makes this machine-learning approach to 3D object recognition practical.
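The trade-off described above (expensive one-time training versus cheap per-object recognition) can be illustrated with a toy descriptor classifier. Everything below is invented for illustration; the project's actual recognition pipeline operated on 3D point-cloud data, not on these toy feature vectors.

```python
# Toy illustration of learned recognition: training (expensive, done once,
# well suited to cloud resources) reduces each class to a centroid
# descriptor; recognition (cheap, per arriving object) is then a single
# nearest-centroid lookup rather than one-to-one correspondence checks
# against every stored model.

def train(samples):
    """samples: {object_type: [descriptor, ...]} -> per-class centroid."""
    return {t: [sum(col) / len(col) for col in zip(*descs)]
            for t, descs in samples.items()}

def recognize(model, descriptor):
    """Return the object type whose centroid is closest to descriptor."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda t: sq_dist(model[t], descriptor))
```

The point of the illustration: the per-object cost no longer grows with the number of stored model views, only with the number of classes, at the price of an up-front training phase.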

For motion planning, we used the MoveIt! ROS package. ROS stands for Robot Operating System. ROS and ROS-I (ROS-Industrial) are flexible frameworks developed by a large community of robotics programmers who create a varied set of open-source software packages. The availability of these packages enabled us to design and implement such a complex application with just 4 developers within 2 years. We then used an open-source robotics simulation environment called Gazebo to evaluate Gilbreth. This evaluation study led to further improvements. For example, while most trajectory computations for the robot-arm joints completed within 0.5 seconds, some computations took much longer. When a computation was long, the resulting trajectory was invariably flawed, e.g., the robot arm did not move to the target pose directly, but instead wandered, and even rotated for a while, before reaching the desired pose. By simply limiting the trajectory computation time, we obtained motion plans that resulted in fewer robot execution failures and a smaller coefficient of variation in robot execution times (e.g., 8.5% instead of 20.8% for the piston rod).
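The "limit the computation time" fix amounts to capping the planner's time budget and discarding plans that exceed it (in MoveIt! this corresponds to bounding the allowed planning time). The sketch below uses a hypothetical planner callable rather than the MoveIt! API, and also shows the coefficient-of-variation metric quoted above; the 0.5-second cap is taken from the observation in the text.

```python
import statistics

# Cap suggested by the observation that most good trajectories were
# computed within 0.5 s; longer computations tended to yield flawed,
# wandering trajectories.
PLANNING_TIME_LIMIT_S = 0.5

def plan_with_limit(plan_once, limit=PLANNING_TIME_LIMIT_S):
    """plan_once is a hypothetical planner returning (trajectory, elapsed_s).
    Accept the trajectory only if it was produced within the time cap."""
    trajectory, elapsed = plan_once()
    return trajectory if elapsed <= limit else None

def coefficient_of_variation(exec_times):
    """CV = stdev / mean: the dispersion metric used to compare robot
    execution times (e.g., 8.5% vs. 20.8% for the piston rod)."""
    return statistics.stdev(exec_times) / statistics.mean(exec_times)
```

A rejected plan would simply be recomputed, which in practice was faster and more reliable than accepting a long-running planner's first answer.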

Our first conclusion is that the variety of available ROS and ROS-I software packages allowed us to assemble and evaluate a complex application with four developers in just 1.5 years. Second, machine-learning algorithms for 3D object recognition offer impressive speedups compared to traditional methods; however, a computational cost is incurred for model training. Here, cloud robotics offers an answer for both the computing and the storage required in the training phase. Third, our evaluation showed that motion planning and grasping remain complex tasks, and current ROS/ROS-I packages could be improved to reduce failure rates.

The Gilbreth software package is available at: https://github.com/swri-robotics/gilbreth/

In Year 1, another application, called Godel, was implemented and evaluated; it automates metal-surface blending using an ABB IRB 2400 industrial robot. Blending is the operation of smoothing metal surfaces down to an even finish. This application used a similar combination of machine vision for object identification and motion planning.

 


Last Modified: 11/30/2019
Modified by: Malathi Veeraraghavan

