
NSF Org: IIS Division of Information & Intelligent Systems
Initial Amendment Date: August 25, 2020
Latest Amendment Date: October 15, 2020
Award Number: 2007011
Award Instrument: Standard Grant
Program Manager: Todd Leen, tleen@nsf.gov, (703) 292-7215, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering
Start Date: October 1, 2020
End Date: September 30, 2023 (Estimated)
Total Intended Award Amount: $495,116.00
Total Awarded Amount to Date: $495,116.00
Recipient Sponsored Research Office: 4333 BROOKLYN AVE NE, SEATTLE, WA, US 98195-1016, (206) 543-4043
Primary Place of Performance: 185 Stevens Way, CSE101, Seattle, WA, US 98195-0001
NSF Program(s): HCC-Human-Centered Computing
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
Nearly 56.7 million people (18.7% of the non-institutionalized US population) had a disability in 2010. Among them, about 12.3 million needed assistance with one or more activities of daily living (ADLs), such as feeding, bathing, or dressing. Robots have the potential to help with these activities, but every user is different, with diverse needs and preferences. For long-term care, it is essential that such an assistive system can adapt to diverse situations and user preferences. This project focuses on the feeding activity and is based on the central tenet that, by leveraging user feedback and contexts from previous feeding attempts, a robot should be able to learn online how to adapt to new food items, user preferences, and environments. Through improved access to independent living, the results of this project can positively impact millions of people worldwide. The long-term promise of this research is to have robots in society that can seamlessly and fluently perform complex manipulation tasks in cluttered, complex, and dynamic human environments in real homes.
This project formalizes robot-assisted feeding using a general framework based on contextual bandits, which allows directly optimizing for user preferences online. Applied to acquiring and transferring food items, the online contextual bandit framework provides the foundation to leverage user feedback to benchmark, learn, and develop methods for a natural dining experience, and to explore different contexts for generalizing bite acquisition. The models directly optimize for the user experience through user feedback and adapt to a range of social and environmental factors through the intelligent use of embedded sensing. The project explores solutions that balance the trade-off between high-quality but costly expert assistance and cheaper learned solutions in the form of a shared-autonomy system. Critical issues include the diversity of user preferences both temporally and ethnographically, designing for the experience across the entire learning procedure, and processing high-dimensional contextual information. The tangible result will be an intelligent assistive feeding robot whose performance can generalize to different activities and adapt to user preferences. An intelligent assistive feeding robot that relies on user feedback and rich sensor information will advance the integration of complex user experiences and social environments into a coherent learning robotic system. Contextual bandits, a highly optimized generalization of multiple hypothesis testing, have broad potential in human-robot and human-AI systems in general to efficiently adapt to specific user needs in real time.
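To make the contextual bandit formulation concrete, the following is a minimal sketch of a disjoint LinUCB learner that selects one of several discrete feeding actions from a context vector and updates on scalar user feedback. The action count, feature dimension, and reward signal here are illustrative assumptions, not the project's actual implementation.

import numpy as np

class LinUCB:
    """Disjoint LinUCB: one ridge-regression reward model per discrete action."""

    def __init__(self, n_actions: int, dim: int, alpha: float = 1.0):
        self.alpha = alpha                                    # exploration strength
        self.A = [np.eye(dim) for _ in range(n_actions)]      # per-action Gram matrices
        self.b = [np.zeros(dim) for _ in range(n_actions)]    # per-action reward sums

    def select(self, context: np.ndarray) -> int:
        """Return the action index with the highest upper confidence bound."""
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                                 # estimated reward weights
            mean = theta @ context
            width = self.alpha * np.sqrt(context @ A_inv @ context)
            scores.append(mean + width)
        return int(np.argmax(scores))

    def update(self, action: int, context: np.ndarray, reward: float) -> None:
        """Fold in observed user feedback (e.g., 1.0 for a successful, preferred bite)."""
        self.A[action] += np.outer(context, context)
        self.b[action] += reward * context

# Hypothetical usage: the context could encode food-image and fork-force features.
bandit = LinUCB(n_actions=11, dim=16)
ctx = np.random.rand(16)                                      # placeholder context features
a = bandit.select(ctx)
bandit.update(a, ctx, reward=1.0)                             # reward from user feedback

The online update is what allows the system to keep adapting as it observes new foods and new user reactions, rather than relying on a fixed, pre-trained policy.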
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
This work proposed a general machine-learning framework to enable online adaptation to user preferences in a robot-assisted feeding system (a Kinova JACO robotic arm mounted on a wheelchair) for users with upper-extremity mobility impairments. We focused on the specific tasks of bite acquisition (i.e., how the robot picks up a bite of food from a plate) and bite transfer (i.e., how the robot moves a bite of food to a user's mouth). Different kinds of foods and varying user preferences require different robot strategies, and we focused on creating a robust system to acquire and transfer the myriad types of food that each user may want to eat. Our research resulted in several key findings, which have improved the robot-assisted feeding system's performance and revealed insights into the human factors inherent to such a project.
Our investigations into improving the performance of bite acquisition resulted in eleven different actions that can be used to pick up a wide variety of foods. This was achieved by developing a parameterized action space for bite acquisition and collecting data from people using a trackable fork to acquire and transfer food. This data collection and specialized action space enabled the creation of the eleven actions our system currently uses to acquire food. Additionally, the robot can use both food images and the forces exerted on the fork during acquisition to learn which actions should be used for each kind of food. From the look and "feel" of a new food item, the robot can, over time, learn which action will most likely result in a successful bite acquisition.
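As a rough illustration of combining the two sensing modalities mentioned above, one could build a single context vector from an image embedding of the food and summary statistics of the force-torque signal recorded while skewering, then hand that vector to an online learner such as the bandit sketched earlier. The helper names and dimensions below are assumptions for exposition only.

import numpy as np

def haptic_features(forces: np.ndarray) -> np.ndarray:
    """forces: (T, 6) force-torque samples -> small summary vector (mean and max per axis)."""
    return np.concatenate([forces.mean(axis=0), forces.max(axis=0)])

def build_context(image_embedding: np.ndarray, forces: np.ndarray) -> np.ndarray:
    """Concatenate visual and haptic features into one normalized context vector."""
    ctx = np.concatenate([image_embedding, haptic_features(forces)])
    norm = np.linalg.norm(ctx)
    return ctx / norm if norm > 0 else ctx

# e.g., an 8-dim visual embedding plus 12 haptic summary features -> 20-dim context
ctx = build_context(np.random.rand(8), np.random.rand(50, 6))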
The physical system has also been improved. Previously, the assistive feeding robot was tethered to a stationary external computer, which made the system non-portable and very difficult to transfer to different arms and wheelchairs. The system is now fully self-contained and portable, paving the way for a wider variety of user studies. Furthermore, we sped up the robotic feeding system by 33%, from around 45 seconds per bite to 30 seconds. This will improve acceptance and adoption of the technology and enables us to run user studies in which the robot feeds users an entire meal, not just individual bites.
In addition to technical system improvements, we have also investigated how users may want to interact with the system. For example, when should the robot act autonomously, and when should it request assistance or input from a user? We demonstrated that robots can extend their abilities by asking for human help, although they have to do so intelligently to ensure that the human provides the best help possible and is willing to continue helping the robot in the future. We also found that, during social dining, users wanted to control some dimensions of robot feeding but not others. Users were also willing to provide interventions when the robot failed. These outcomes are important for determining which parts of the robot feeding process should be automated and which should be controlled by the user.
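One simple way to frame the autonomy-versus-asking decision is as a comparison of expected costs: act autonomously when the learner's predicted success probability makes the expected cost of failure lower than the cost of interrupting the user. The sketch below is an illustrative rule under that assumption, not the project's actual policy or cost values.

def choose_mode(predicted_success: float, cost_of_failure: float = 1.0,
                cost_of_asking: float = 0.3) -> str:
    """Pick autonomy when expected failure cost is below the cost of asking for help."""
    expected_loss_autonomous = (1.0 - predicted_success) * cost_of_failure
    return "autonomous" if expected_loss_autonomous < cost_of_asking else "ask_user"

print(choose_mode(0.9))   # -> "autonomous"
print(choose_mode(0.5))   # -> "ask_user"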
We also learned that to create a useful feeding system, we cannot just optimize objective measures such as "efficiency"; we must also incorporate subjective measures such as "comfort." We learned that it is possible to develop quantitative heuristics for such subjective measures, and that incorporating those heuristics into the robot's planning algorithm can significantly improve user experience. We developed quantitative heuristics for bite-transfer trajectories that reward a balance of ease of transfer and user comfort. When applied to state-of-the-art heuristic planning algorithms, the resulting trajectories were preferred by users over the fixed actions used in previous work. This work was done in collaboration with Cornell University and Stanford University.
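In the spirit of the heuristics described above, a planner could score candidate bite-transfer trajectories with a weighted sum of an ease-of-transfer term and a comfort term and keep the best-scoring one. The specific terms and weights below are illustrative assumptions, not the heuristics developed in the project.

import numpy as np

def transfer_score(traj: np.ndarray, mouth: np.ndarray,
                   w_ease: float = 1.0, w_comfort: float = 1.0) -> float:
    """traj: (T, 3) fork-tip positions; mouth: (3,) target position near the user's mouth."""
    ease = -np.linalg.norm(traj[-1] - mouth)               # end the motion close to the mouth
    jerk = np.diff(traj, n=3, axis=0)                       # third difference approximates jerk
    comfort = -np.sum(np.linalg.norm(jerk, axis=1))         # smoother motion reads as more comfortable
    return w_ease * ease + w_comfort * comfort

# A planner could generate candidate trajectories and keep the best-scoring one.
candidates = [np.cumsum(np.random.rand(20, 3) * 0.01, axis=0) for _ in range(5)]
best = max(candidates, key=lambda t: transfer_score(t, mouth=np.array([0.2, 0.0, 0.3])))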
Our work has resulted in multiple publications and permanent contributions to open-source libraries used throughout the robotics community (including AprilTags and ROS controllers), and has provided training opportunities for graduate, undergraduate, and high school students, as well as postdoctoral scholars. In addition, we continue to actively engage the K-12 community by running demos and outreach events that familiarize students with robotics, let them interact with our robot feeding system, and pique their interest in STEM careers.
Through improved access to independent living, the results of this project can positively impact millions of people worldwide. Given the vast variability in our target population, customizing to the unique needs and preferences of users is transformational for the scalability of assistive robotics for self-care. Although the project focuses on feeding systems for individuals with upper-limb limitations, the developed tools and design framework could impact individuals with other disabilities as well as able-bodied individuals.
Last Modified: 01/13/2024
Modified by: Siddhartha Srinivasa