Award Abstract # 1652561
CAREER: Robots that Help People

NSF Org: IIS (Division of Information & Intelligent Systems)
Recipient: BROWN UNIVERSITY
Initial Amendment Date: March 2, 2017
Latest Amendment Date: June 19, 2020
Award Number: 1652561
Award Instrument: Continuing Grant
Program Manager: Jie Yang
jyang@nsf.gov
(703) 292-4768
IIS (Division of Information & Intelligent Systems)
CSE (Directorate for Computer and Information Science and Engineering)
Start Date: March 15, 2017
End Date: February 28, 2022 (Estimated)
Total Intended Award Amount: $549,437.00
Total Awarded Amount to Date: $549,437.00
Funds Obligated to Date: FY 2017 = $102,696.00
FY 2018 = $106,018.00
FY 2019 = $109,444.00
FY 2020 = $231,279.00
History of Investigator:
  • Stefanie Tellex (Principal Investigator)
    stefie10@cs.brown.edu
Recipient Sponsored Research Office: Brown University
1 PROSPECT ST
PROVIDENCE
RI  US  02912-9100
(401)863-2777
Sponsor Congressional District: 01
Primary Place of Performance: Brown University
Office of Sponsored Projects
Providence
RI  US  02912-9093
Primary Place of Performance Congressional District: 01
Unique Entity Identifier (UEI): E3FDXZ6TBHW3
Parent UEI: E3FDXZ6TBHW3
NSF Program(s): Robust Intelligence
Primary Program Source: 01001718DB NSF RESEARCH & RELATED ACTIVITIES
01001819DB NSF RESEARCH & RELATED ACTIVITIES
01001920DB NSF RESEARCH & RELATED ACTIVITIES
01002021DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 1045, 7495
Program Element Code(s): 749500
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

As robots become more prevalent, it is crucial to develop ways for people to collaborate with them. This project aims to create collaborative robots by combining communication, perception, and action. Existing approaches are typically tailored to specific applications, yet people want to talk to robots about everything they can see and do. To address this limitation, this project will create a unified framework that enables robots to communicate with people to learn their needs, plan how to achieve them, and then perceive and act in the world to meet those needs. The research will be demonstrated with robots that assist with household tasks, such as cooking and cleaning, and in manufacturing settings, such as collaborative assembly. The project will expose many people to collaborative robotics through an internship program with local high schools, a regional robotics conference, and the Million Object Challenge.

This project will create a model, the Human-Robot Collaborative Partially Observable Markov Decision Process (POMDP), that enables robots to 1) automatically acquire object-oriented models of objects in the physical world; 2) communicate with people to understand their needs and how to meet them; and 3) act to change the world in ways that meet people's needs. Creating a unified framework requires bridging gaps between different aspects of the robotic system. This project focuses on creating a single probabilistic graphical model that represents the robot's states and actions in a hierarchical framework, allowing the robot to make plans that account for its own uncertainty and to communicate with a person about everything it can see and everything it can do. Focusing on collaboration leads to reformulations of traditional problems in computer vision, planning, and natural language understanding, enabling the robot to collaborate in new and more natural ways.
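
To make the factored structure concrete, here is a minimal, hypothetical Python sketch of the kind of per-object belief an object-oriented POMDP can maintain for object search. The class, method names, and sensor parameters below are illustrative assumptions, not the project's actual implementation.

from itertools import product

class ObjectSearchBelief:
    """Keeps a separate belief over grid cells for each object, so updates
    scale with the number of objects instead of the exponential joint space.
    (Illustrative sketch; not the project's code.)"""

    def __init__(self, object_ids, cells):
        uniform = 1.0 / len(cells)
        self.beliefs = {oid: {c: uniform for c in cells} for oid in object_ids}

    def update(self, oid, detected, sensed_cell, p_hit=0.9, p_false=0.05):
        """Bayes-update one object's belief after sensing a single cell."""
        belief = self.beliefs[oid]
        for cell, prior in belief.items():
            if cell == sensed_cell:
                likelihood = p_hit if detected else 1.0 - p_hit
            else:
                likelihood = p_false if detected else 1.0 - p_false
            belief[cell] = likelihood * prior
        total = sum(belief.values())
        self.beliefs[oid] = {c: p / total for c, p in belief.items()}

# Two objects on a 3x3 grid; a positive detection at (1, 1) concentrates
# the "mug" belief there and leaves the "plate" belief untouched.
cells = list(product(range(3), range(3)))
b = ObjectSearchBelief(["mug", "plate"], cells)
b.update("mug", detected=True, sensed_cell=(1, 1))
print(max(b.beliefs["mug"], key=b.beliefs["mug"].get))  # -> (1, 1)

Because the belief factors per object, each observation updates only the belief for the sensed object, which is what lets search over many objects remain tractable.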

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

Wandzel, Arthur L.S. and Oh, Yoonseon and Fishman, Michael and Kumar, Nishanth and Wong, Lawson and Tellex, Stefanie. "Multi-Object Search using Object-Oriented POMDPs." IEEE International Conference on Robotics and Automation (ICRA), 2019. https://doi.org/10.1109/ICRA.2019.8793888

Oh, Yoonseon and Patel, Roma. "Planning with State Abstractions for Non-Markovian Task Specifications." Robotics: Science and Systems (RSS), 2019.

Zheng, Kaiyu and Bayazit, Deniz and Mathew, Rebecca and Pavlick, Ellie and Tellex, Stefanie. "Spatial Language Understanding for Object Search in Partially Observed City-scale Environments." IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2021. https://doi.org/10.1109/RO-MAN50785.2021.9515426

Zheng, Kaiyu and Sung, Yoonchang and Konidaris, George and Tellex, Stefanie. "Multi-Resolution POMDP Planning for Multi-Object Search in 3D." IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021. https://doi.org/10.1109/IROS51168.2021.9636737

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

As robots become more powerful and more autonomous, it is crucial to develop ways for people to collaborate with them. The aim of this research program was to create robots that collaborate with people to meet their needs by creating a multimodal decision-theoretic model, the Human-Robot Collaborative POMDP (Partially Observable Markov Decision Process), that unifies perception, action, and communication. Existing approaches to human-robot collaboration rely on models that are tailored to specific domains and that do not span from perception to action to communication; yet people want to talk to a robot about everything it can see and everything it can do. To address these limitations, this project enables a robot to acquire models for detecting, localizing, and manipulating objects in the world, plan in very large spaces to find appropriate actions, and communicate with people to learn about their needs. Integrating communication, perception, and planning in a unified framework has enabled a robot to interpret a person's requests at different levels of abstraction, plan a sequence of actions to fulfill that request, and recover from failure when unexpected events occur.
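
As one way to picture how a request at different levels of abstraction could drive planning, the following hypothetical Python sketch maps a parsed request onto a POMDP-style reward function. The Request fields, reward values, and state representation are illustrative assumptions, not the project's interface.

from dataclasses import dataclass

@dataclass
class Request:
    level: str   # "goal" (e.g. "bring me the mug") or "action" (e.g. "move left")
    target: str  # symbol produced by the language-understanding component

def make_reward(request):
    """Build a reward function R(state, action) encoding the person's request.
    (Illustrative sketch; not the project's code.)"""
    def reward(state, action):
        if request.level == "goal":
            # Reward states satisfying the requested goal predicate;
            # a small step cost pushes the planner toward short plans.
            return 100.0 if request.target in state["satisfied"] else -1.0
        # A primitive-level request rewards executing that action directly.
        return 10.0 if action == request.target else -1.0
    return reward

# "Bring me the mug," parsed as a goal-level request:
R = make_reward(Request(level="goal", target="holding(mug)"))
print(R({"satisfied": {"holding(mug)"}}, "noop"))  # -> 100.0

Because both goal-level and primitive-level requests compile to the same reward interface, a single planner can serve commands at either level of abstraction.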

This project has resulted in advances in perception for object search; in understanding natural language commands at different levels of abstraction, including commands that reference the robot's history and past behavior as well as its goal; and in mapping English to rich multimodal semantic spaces. The work has been published at top conferences, including ICRA and RSS, and has supported the training of multiple undergraduate and Ph.D. students.
Last Modified: 07/08/2022
Modified by: Stefanie Tellex
