
NSF Org: IIS Division of Information & Intelligent Systems
Initial Amendment Date: September 6, 2013
Latest Amendment Date: February 9, 2016
Award Number: 1317926
Award Instrument: Standard Grant
Program Manager: Reid Simmons, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering
Start Date: October 1, 2013
End Date: September 30, 2018 (Estimated)
Total Intended Award Amount: $426,060.00
Total Awarded Amount to Date: $426,060.00
Recipient Sponsored Research Office: 926 DALNEY ST NW, ATLANTA, GA, US 30318-6395, (404) 894-4819
Primary Place of Performance: 225 North Ave NW, Atlanta, GA, US 30332-0280
NSF Program(s): NRI-National Robotics Initiati
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
The proposed work seeks to leverage cloud computing to enable robots to learn efficiently from remote human domain experts - "Cloud Learning from Demonstration." Building on RobotsFor.Me, a remote robotics research lab, this research will unite Learning from Demonstration (LfD) and Cloud Robotics to enable anyone with Internet access to teach a robot household tasks. The value of this work stems from three aspects. The first is a remote system that can learn task models from a series of remote demonstrations by a single user, focusing on learning high-level tasks as opposed to low-level motor skills. The second is the extension of learning from demonstration to multiple teachers, an important relaxation of a limiting assumption that makes it possible to evaluate teacher strengths and effectively handle distinct task solutions. The third is a set of transparency mechanisms that allow a remote user to develop a correct mental model of the robot's learning process.
The long-term goal of this research is to one day make personal robots accessible to everyday people. The interactive learning framework based on RobotsFor.Me provides unique opportunities for education and outreach. Thomaz and Chernova will reach out to K-12 teachers and students by creating an education portal around RobotsFor.Me containing hands-on workshop curricula. This material will be integrated with the WPI Frontiers program for middle school students and the GT ePDN professional education network for teachers. A key impact on students at GT and WPI will be direct involvement in this research agenda and integration with AI, robotics, and HRI courses. Chernova is the Diversity Coordinator in the Robotics Engineering Program and faculty advisor for the Women In Robotics Engineering and Women in Technology student groups, which will enable broad exposure. Thomaz mentors the RoboWomen graduate women's group. Software components will also be made available as open source, and the PIs have a collaboration plan in place with researchers at Willow Garage; through student internships they will transfer technology to their labs.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
Service robots hold the promise of helping solve issues facing our society, ranging from eldercare to education. A critical issue is that we cannot preprogram these robots with every skill needed to play a useful role in society; they will need to acquire new relevant skills after they are deployed. The field of robot learning from demonstration (LfD) aims to enable everyday people to expand robot capabilities through demonstrations of desired behavior instead of explicit programming.
Prior LfD techniques have focused on teachers who are co-located with the robot. However, co-located users may not always be able to provide the data required for robust learning and operation. Users may have a limited ability to perform demonstrations due to a physical impairment. They may lack the expertise to demonstrate the task, or the time to generate the diversity and number of examples that state-of-the-art machine learning algorithms require to build general models of tasks. Or there may be no co-located user at all, and the robot must be taught a new task remotely, e.g., while located in a fully automated factory or at a hazardous site.
This project has advanced the state of the art through:
(i) The development of innovative remote teleoperation interfaces for complex robotic systems. The user's ability to quickly and effectively control a robot manipulator is key to learning from a remote user. Our work has introduced remote manipulation interfaces that are highly robust to the high latency and limited bandwidth commonly encountered in remote applications. Our interface is now being validated for use by NASA to control the free-flying Astrobee robot on the International Space Station.
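The published interfaces themselves are not reproduced here, but the latency-tolerance idea can be illustrated with a minimal supervisory-control sketch (all class and field names are hypothetical): instead of streaming velocity commands that must arrive every control cycle, the operator queues discrete waypoint goals and the robot executes them locally between updates, so a slow or lossy link delays only new goals, not ongoing motion.

```python
from dataclasses import dataclass
from collections import deque

@dataclass
class Waypoint:
    # Target end-effector pose (position plus yaw); hypothetical fields.
    x: float
    y: float
    z: float
    yaw: float

class SupervisoryController:
    """Queue discrete goals so the robot acts autonomously between
    operator updates; no command needs to arrive within a single
    control cycle, which makes the scheme tolerant of high latency."""

    def __init__(self):
        self.queue = deque()

    def enqueue(self, wp: Waypoint):
        # Called whenever a goal arrives over the (possibly slow) link.
        self.queue.append(wp)

    def step(self, current: Waypoint, gain: float = 0.5) -> Waypoint:
        """One local control cycle: move a fraction of the way toward
        the active goal, popping it once it is approximately reached."""
        if not self.queue:
            return current
        goal = self.queue[0]
        nxt = Waypoint(
            current.x + gain * (goal.x - current.x),
            current.y + gain * (goal.y - current.y),
            current.z + gain * (goal.z - current.z),
            current.yaw + gain * (goal.yaw - current.yaw),
        )
        if max(abs(goal.x - nxt.x), abs(goal.y - nxt.y),
               abs(goal.z - nxt.z), abs(goal.yaw - nxt.yaw)) < 1e-3:
            self.queue.popleft()
        return nxt
```

This is only one point in the design space; the project's actual interfaces address manipulation-specific concerns (grasp specification, visualization) that a pose queue alone does not capture.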
(ii) The development of student-driven learning techniques for autonomous systems. Within the paradigm of learning from demonstration, the burden is typically placed on the user to determine what training data the robot needs. Our work has contributed techniques that enable robots to identify what to pay attention to when learning, as well as means for actively asking a wide range of questions that can help guide the learning process and make learning more efficient. Such techniques may be particularly useful in remote scenarios, where the user's situational awareness is more limited.
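The project's specific question-asking mechanisms are not reproduced here; as a generic illustration of the underlying idea, a query-by-committee sketch (function names hypothetical) has the robot ask the teacher about the candidate state on which an ensemble of learned policies disagrees the most, so each demonstration is requested where it is most informative:

```python
import math
from collections import Counter

def label_entropy(votes):
    """Shannon entropy of an empirical action-vote distribution."""
    total = sum(votes.values())
    return -sum((c / total) * math.log2(c / total) for c in votes.values())

def select_query(candidate_states, committee):
    """Return the state whose predicted action the committee of
    policies is most uncertain about; that state is what the robot
    asks the human teacher to demonstrate next."""
    def disagreement(state):
        votes = Counter(policy(state) for policy in committee)
        return label_entropy(votes)
    return max(candidate_states, key=disagreement)
```

For example, with three threshold policies `[lambda s: int(s > 1), lambda s: int(s > 2), lambda s: int(s > 3)]`, the query selected from states `[0, 2, 5]` is `2`, the only state on which the policies split.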
(iii) The development of a novel learning representation that leverages object information and hierarchical task structures to improve the efficiency and scalability of exploration-based learning techniques.
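The exact representation developed in the project is not shown here; a minimal sketch of the general pattern (names hypothetical) models a task as a tree whose leaves are primitive actions parameterized by objects, so exploration can be scoped to one subtask at a time instead of searching the full action space:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """Hierarchical task node: either a primitive action applied to
    specific objects, or an ordered list of subtasks."""
    name: str
    objects: tuple = ()
    subtasks: list = field(default_factory=list)

    def is_primitive(self):
        # Leaves of the tree are directly executable actions.
        return not self.subtasks

    def flatten(self):
        """Expand the hierarchy into its primitive action sequence,
        e.g. for execution or for scoping exploration to a subtree."""
        if self.is_primitive():
            return [(self.name, self.objects)]
        return [step for t in self.subtasks for step in t.flatten()]
```

Grounding actions in the objects they manipulate is what lets a learned subtask transfer when the same object configuration recurs inside a larger task.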
All of the above learning methods, although aimed at remote learning scenarios, can also be applied in co-located interactions.
Last Modified: 02/21/2019
Modified by: Sonia Chernova