
NSF Org: IIS Division of Information & Intelligent Systems
Recipient: University of Maryland, Baltimore County (UMBC)
Initial Amendment Date: August 14, 2020
Latest Amendment Date: October 13, 2020
Award Number: 2024878
Award Instrument: Standard Grant
Program Manager: Wendy Nilsen, wnilsen@nsf.gov, (703) 292-2568, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering
Start Date: October 1, 2020
End Date: September 30, 2024 (Estimated)
Total Intended Award Amount: $748,723.00
Total Awarded Amount to Date: $748,723.00
Funds Obligated to Date:
History of Investigator:
Recipient Sponsored Research Office: 1000 Hilltop Cir, Baltimore, MD 21250-0001, US, (410) 455-3140
Sponsor Congressional District:
Primary Place of Performance: Baltimore, MD 21250-0001, US
Primary Place of Performance Congressional District:
Unique Entity Identifier (UEI):
Parent UEI:
NSF Program(s): NRI-National Robotics Initiative
Primary Program Source:
Program Reference Code(s):
Program Element Code(s):
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
This project will enable robots to learn to perform tasks with human teammates from language and other human modalities, and then transfer the learned knowledge across heterogeneous platforms and tasks. This will ultimately allow human-robot teaming in domains where people use varied language and instructions to complete complex tasks. As robots become more capable and ubiquitous, they are increasingly moving into complex, human-centric environments such as workplaces and homes. Being able to deploy useful robots in settings where human specialists are stretched thin, such as assistive technology, elder care, and education, could have far-reaching impacts on human quality of life. Achieving this will require the development of robots that learn, from natural interaction, about an end user's goals and environment. This work is intended to make robots more accessible and usable for non-specialists. In order to verify success and involve the broader community, tasks will be drawn from and tested in conjunction with community Makerspaces, which are strongly linked with both education and community involvement. The award includes an education and outreach plan designed to increase participation by and retention of women and underrepresented minorities (URM) in robotics and computing, engaging with UMBC's large URM population and world-class programs in this space.
This award addresses how collaborative learning and successful performance during human-robot interactions can be accomplished by learning from and acting on grounded language. To accomplish this, the project centers on learning structured representations of abstract knowledge, grounded in a physical context, in the service of goal-directed task completion. There are three high-level research thrusts. The first develops new perceptual models that learn an alignment among a robot's multiple, heterogeneous sensor and data streams. The second develops synchronous grounded language models that better capture both general linguistic expectations and the implicit contextual expectations needed for completing tasks. The third develops a deep reinforcement learning framework that leverages the advances of the first two thrusts, enabling techniques for learning conceptual knowledge. Taken together, these advances will allow an agent to achieve domain adaptation, improve its behaviors in new environments, and transfer conceptual knowledge among robotic agents.
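As a concrete illustration of the kind of cross-modal alignment named in the first thrust, the sketch below pairs two encoders over heterogeneous sensor streams with a symmetric contrastive (InfoNCE-style) loss, so that matched observations from the two streams land near each other in a shared embedding space. This is a minimal sketch, not the project's code: the encoder architectures, feature dimensions, and temperature are all illustrative assumptions.

```python
# Minimal sketch of cross-modal alignment via a symmetric contrastive loss.
# All dimensions and module names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """Projects one sensor stream's features into a shared embedding space."""
    def __init__(self, in_dim: int, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(x), dim=-1)  # unit-norm embeddings

def contrastive_alignment_loss(za, zb, temperature: float = 0.07):
    """Symmetric InfoNCE: matched pairs (za_i, zb_i) should score highest."""
    logits = za @ zb.t() / temperature      # (B, B) similarity matrix
    targets = torch.arange(za.size(0))      # diagonal entries are positives
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage: align 64 paired observations from two streams
# (e.g., 512-dim visual features and 64-dim depth features).
rgb_enc, depth_enc = ModalityEncoder(512), ModalityEncoder(64)
rgb_feats, depth_feats = torch.randn(64, 512), torch.randn(64, 64)
loss = contrastive_alignment_loss(rgb_enc(rgb_feats), depth_enc(depth_feats))
loss.backward()  # gradients flow into both encoders
```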
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
As robots become more available, the big questions facing researchers have changed. Today's robots are smaller, more affordable, and more able to perform a variety of tasks than ever before. It is now possible to imagine near-term deployment of useful robots in traditionally human-centric environments such as homes, schools, and workplaces. However, if these robots are limited to tasks they are pre-programmed to understand, they cannot work with people or each other in dynamic human environments. To be useful, ubiquitous robots must be able to learn about new environments and tasks from people, and they must be able to act on that new knowledge in order to adapt to tasks as required by the people around them.
When people come together in ad hoc teams for a shared undertaking, they use language to assign tasks and communicate information. This kind of natural language is intuitive, informative, and richly contextual. This has led to work on grounded language learning in robotics, where robots use the semantics of words and sentences to understand the noisy, perceptual world in which they operate. The ability to understand language lets non-specialist users teach robots about their expectations. In this project, we have addressed questions of how learning from and acting according to grounded language can support successful, collaborative learning and performance during human-robot interactions.
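To make "grounded" concrete: one common framing (not taken from this project) treats the meaning of a descriptive word as a classifier over perceptual features, so the word applies to an object exactly when the classifier fires on that object's features. The features, word, and data below are invented for the example.

```python
# Toy sketch of grounded word learning: the meaning of one word
# ("red") is a classifier over invented perceptual feature vectors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Invented stand-ins for perceptual features (e.g., color histograms):
red_objects = rng.normal(loc=1.0, size=(50, 8))     # objects called "red"
other_objects = rng.normal(loc=-1.0, size=(50, 8))  # objects never called "red"

X = np.vstack([red_objects, other_objects])
y = np.array([1] * 50 + [0] * 50)  # 1 = the word applies, 0 = it does not

# One grounded meaning for the word "red": a classifier over percepts.
grounding = LogisticRegression().fit(X, y)
new_object = rng.normal(loc=1.0, size=(1, 8))
print("P('red' applies):", grounding.predict_proba(new_object)[0, 1])
```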
In the course of this work, we have explored how people can use language to guide and be guided by robots in a variety of interactive tasks in real and virtual makerspaces. These "robot teammates" work successfully with people to perform tasks like building a simple circuit or assembling a small model toy, with language as the medium for conveying desired behaviors, such as "Bring the battery case closer to me." Robots that can understand and follow these spoken instructions have the potential to be more useful helpers in a variety of human settings. We have also worked on how robots can go from spoken intentions (such as "I would like the room to be clean") to a series of tasks that can be carried out ("I will put away the book and clean the dish towel"), which will underpin future robots that can receive and act on higher-level commands; a toy illustration of this decomposition step follows below. Finally, our research has involved developing new machine learning approaches to integrating language with vision and other sensor inputs. This work will ultimately support robots that can learn from language, work together with people on tasks, and act on high-level instructions, with the potential to serve as real helpers and collaborators in human spaces.
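The sketch below makes the intention-to-task step concrete under loudly invented assumptions: the keyword trigger, world state, and task library are hypothetical stand-ins for the learned models described above, not the project's implementation.

```python
# Toy sketch: decompose a high-level spoken intention into grounded subtasks.
# The goal vocabulary, world state, and task library are all invented.
from dataclasses import dataclass, field

@dataclass
class WorldState:
    """Hypothetical perceived scene: items observed out of place."""
    misplaced_items: list = field(default_factory=lambda: ["book", "dish towel"])

# Hypothetical task library mapping an item to a concrete action.
TASK_LIBRARY = {
    "book": "put the book on the shelf",
    "dish towel": "hang the dish towel by the sink",
}

def decompose_intention(utterance: str, state: WorldState) -> list:
    """Map a high-level intention to subtasks grounded in the current scene."""
    if "clean" in utterance.lower():
        # Ground the abstract goal ("clean") in the perceived world state:
        # every misplaced item yields one concrete subtask.
        return [TASK_LIBRARY[item] for item in state.misplaced_items
                if item in TASK_LIBRARY]
    return []

plan = decompose_intention("I would like the room to be clean", WorldState())
print(plan)  # ['put the book on the shelf', 'hang the dish towel by the sink']
```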
Last Modified: 03/25/2025
Modified by: Cynthia Matuszek