Award Abstract # 1317926
NRI: Small: Collaborative Research: Learning from Demonstration for Cloud Robotics

NSF Org: IIS
Division of Information & Intelligent Systems
Recipient: GEORGIA TECH RESEARCH CORP
Initial Amendment Date: September 6, 2013
Latest Amendment Date: February 9, 2016
Award Number: 1317926
Award Instrument: Standard Grant
Program Manager: Reid Simmons
IIS
 Division of Information & Intelligent Systems
CSE
 Directorate for Computer and Information Science and Engineering
Start Date: October 1, 2013
End Date: September 30, 2018 (Estimated)
Total Intended Award Amount: $426,060.00
Total Awarded Amount to Date: $426,060.00
Funds Obligated to Date: FY 2013 = $426,060.00
History of Investigator:
  • Sonia Chernova (Principal Investigator)
    chernova@cc.gatech.edu
  • Andrea Thomaz (Former Principal Investigator)
Recipient Sponsored Research Office: Georgia Tech Research Corporation
926 DALNEY ST NW
ATLANTA
GA  US  30318-6395
(404)894-4819
Sponsor Congressional District: 05
Primary Place of Performance: Georgia Institute of Technology
225 North Ave NW
Atlanta
GA  US  30332-0280
Primary Place of Performance Congressional District: 05
Unique Entity Identifier (UEI): EMW9FC8J3HN4
Parent UEI: EMW9FC8J3HN4
NSF Program(s): NRI-National Robotics Initiative
Primary Program Source: 01001314DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 7923, 8086
Program Element Code(s): 801300
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

The proposed work seeks to leverage cloud computing to enable robots to efficiently learn from remote human domain experts - "Cloud Learning from Demonstration." Building on RobotsFor.Me, a remote robotics research lab, this research will unite Learning from Demonstration (LfD) and Cloud Robotics to enable anyone with Internet access to teach a robot household tasks. The value of this work stems from three aspects. The first is a remote system that can learn task models from a series of remote demonstrations by a single user, focusing on learning high-level tasks rather than low-level motor skills. The second is the extension of learning from demonstration to multiple teachers, an important relaxation of a limiting assumption that shifts the focus to evaluating teacher strengths and effectively handling distinct task solutions. The third is transparency mechanisms that allow a remote user to develop a correct mental model of the robot's learning process.

The long-term goal of this research is to one day make personal robots accessible to everyday people. The interactive learning framework based on RobotsFor.Me provides unique opportunities for education and outreach. Thomaz and Chernova will conduct outreach to K-12 teachers and students by creating an education portal surrounding RobotsFor.Me containing hands-on workshop curricula. This material will be integrated with the WPI Frontiers program for middle school students and the GT ePDN professional education network for teachers. A key impact on students at GT and WPI will be direct involvement in this research agenda and integration with AI, robotics, and HRI courses. Chernova is the Diversity Coordinator in the Robotics Engineering Program and faculty advisor for the Women in Robotics Engineering and Women in Technology student groups, which will enable broad exposure. Thomaz mentors the RoboWomen graduate women's group. Software components will also be made available as open source, and the PIs have a collaboration plan in place with researchers at Willow Garage; student internships will transfer technology to their labs.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

Note:  When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

Adrian Boteanu, Aaron St. Clair, Anahita Mohseni-Kabir, Carl Saldanha, and Sonia Chernova. "Leveraging Large-Scale Semantic Networks for Adaptive Robot Task Learning and Execution." Big Data Journal, 2016.
Kalesha Bullard, Sonia Chernova, and Andrea Lockerd Thomaz. "Human-Driven Feature Selection for a Robotic Agent Learning Classification Tasks from Demonstration." IEEE International Conference on Robotics and Automation (ICRA), 2018.
Kalesha Bullard, Andrea Lockerd Thomaz, and Sonia Chernova. "Towards Intelligent Arbitration of Diverse Active Learning Queries." IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018.
David Kent, Carl Saldanha, and Sonia Chernova. "A Comparison of Remote Robot Teleoperation Interfaces for General Object Manipulation." IEEE/ACM International Conference on Human-Robot Interaction (HRI), 2017.
David Kent, Siddhartha Banerjee, and Sonia Chernova. "Learning Sequential Decision Tasks for Robot Manipulation with Abstract Markov Decision Processes and Demonstration-Guided Exploration." IEEE-RAS International Conference on Humanoid Robots, 2018.
Kalesha Bullard, Baris Akgun, Sonia Chernova, and Andrea L. Thomaz. "Grounding Action Parameters from Demonstration." In preparation, 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2016), 2016.

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

Service robots hold the promise of helping solve issues facing our society, ranging from eldercare to education. A critical issue is that we cannot preprogram these robots with every skill needed to play a useful role in society; they will need to acquire new relevant skills after they are deployed. The field of robot learning from demonstration (LfD) aims to enable everyday people to expand robot capabilities through demonstrations of desired behavior instead of explicit programming.

Prior LfD techniques have focused on teachers who are co-located with the robot. However, co-located users may not always be able to provide the data required for robust learning and operation. Users may have a limited ability to perform demonstrations due to a physical impairment. They may not have sufficient expertise to demonstrate the task, or the time to generate the diversity and number of examples that state-of-the-art machine learning algorithms require to build general task models. Or there may be no co-located user at all, and the robot must be taught a new task remotely, e.g., while located in a fully automated factory or at a hazardous site.

This project has advanced the state of the art through:

(i) The development of innovative remote teleoperation interfaces for complex robotic systems. The user's ability to quickly and effectively control a robot manipulator is key to learning from a remote user. Our work has introduced remote manipulation interfaces that are highly robust to the high latency and limited bandwidth commonly encountered in remote applications. Our interface is now being validated for use by NASA to control the free-flying Astrobee robot on the International Space Station.

(ii) The development of student-driven learning techniques for autonomous systems. Within the paradigm of learning from demonstration, the burden is typically placed on the user to determine what training data the robot needs.  Our work has contributed techniques that enable robots to identify what to pay attention to when learning, as well as means for actively asking a wide range of questions that can help guide the learning process and make learning more efficient.  Such techniques may be particularly useful in remote scenarios, where the user's situational awareness is more limited.  
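One common way a robot learner can decide what to ask about is uncertainty sampling. The following is a simplified, illustrative sketch of that idea, not the project's published arbitration algorithm; all function and instance names are invented for the example:

```python
import math

def entropy(probs):
    """Shannon entropy (bits) of a predicted label distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def select_query(candidates):
    """Pick the unlabeled instance the learner is least certain about.

    candidates: list of (instance_id, predicted_label_probs) pairs.
    Returns the id whose prediction has maximum entropy, i.e. the
    instance it would be most informative to ask the teacher about.
    """
    return max(candidates, key=lambda c: entropy(c[1]))[0]

# Toy example: three candidate task states with the robot's current
# label confidence for each (hypothetical values).
candidates = [
    ("grasp-mug", [0.95, 0.05]),    # nearly certain
    ("stack-block", [0.50, 0.50]),  # maximally uncertain
    ("open-door", [0.70, 0.30]),
]
print(select_query(candidates))  # -> stack-block
```

The cited IROS 2018 paper goes further by arbitrating among *types* of queries (e.g., label queries versus demonstration queries), but the same principle of directing questions where uncertainty is highest applies.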

(iii) The development of a novel learning representation that leverages object information and hierarchical task structures to improve the efficiency and scalability of exploration-based learning techniques.  
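As a hedged illustration of what demonstration-guided exploration can mean in a simple Q-learning setting (an illustrative simplification, not the published abstract-MDP representation), exploration steps can be biased toward actions a teacher has demonstrated:

```python
import random

def choose_action(q_values, state, demo_actions, actions, epsilon=0.2):
    """Epsilon-greedy action selection with demonstration-guided exploration.

    q_values:     dict mapping (state, action) -> estimated value
    demo_actions: dict mapping state -> actions the teacher demonstrated there

    With probability epsilon the agent explores, but instead of acting
    uniformly at random it prefers a demonstrated action when one is
    known for this state, focusing exploration on promising behavior.
    """
    if random.random() < epsilon:
        if state in demo_actions:
            return random.choice(demo_actions[state])
        return random.choice(actions)
    # Exploit: best action under the current value estimates.
    return max(actions, key=lambda a: q_values.get((state, a), 0.0))

# With epsilon=0 the choice is purely greedy and deterministic.
q = {("s0", "left"): 1.0, ("s0", "right"): 0.2}
print(choose_action(q, "s0", {}, ["left", "right"], epsilon=0.0))  # -> left
```

Seeding exploration with demonstrations in this way reduces the number of random trials needed before the learner encounters rewarding behavior, which is the efficiency gain the paragraph above describes.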

All of the above learning methods, although aimed at remote learning scenarios, can also be applied in co-located interactions.

Last Modified: 02/21/2019
Modified by: Sonia Chernova
