
NSF Org: IIS Division of Information & Intelligent Systems
Recipient:
Initial Amendment Date: July 8, 2016
Latest Amendment Date: June 14, 2019
Award Number: 1619273
Award Instrument: Standard Grant
Program Manager: Dan Cosley, dcosley@nsf.gov, (703) 292-8832, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering
Start Date: August 1, 2016
End Date: July 31, 2020 (Estimated)
Total Intended Award Amount: $495,628.00
Total Awarded Amount to Date: $495,628.00
Funds Obligated to Date:
History of Investigator:
Recipient Sponsored Research Office: 1050 STEWART ST., LAS CRUCES, NM 88003, US, (575) 646-1590
Sponsor Congressional District:
Primary Place of Performance: NM, US 88003-8002
Primary Place of Performance Congressional District:
Unique Entity Identifier (UEI):
Parent UEI:
NSF Program(s): HCC-Human-Centered Computing
Primary Program Source:
Program Reference Code(s):
Program Element Code(s):
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
Unmanned robotic systems are set to revolutionize a number of vital human activities, including disaster response, public safety, citizen science, and agriculture, yet such systems are complex and require multiple pilots. As algorithms take over and controls are simplified, workers benefit from directing, rather than controlling, these systems. Such simplifications could enable workers to keep their hands free and their perception focused on the physical world, relying on wearable interfaces (e.g., chording keyboards, gesture inputs) to manage teams of unmanned vehicles. Adaptive autonomy, in which unmanned systems alter their need for human attention in response to complexities in the environment, offers a solution in which workers can use minimal input to enact change. The present research combines wearable interfaces with adaptive autonomy to direct teams of software agents, which simulate unmanned robotic systems. The outcomes will support next-generation unmanned robotic system interfaces.
The objective of this project is to develop wearable interfaces for directing a team of software agents that use adaptive autonomy, and to ascertain how effectively those interface designs direct the agents. The research develops a testbed for wearable cyber-human system designs that uses software agents as simulations of unmanned robotic systems and drives the agents with adaptive-autonomy algorithms. It also develops a framework connecting wearable interface modalities to the activities they best support. Developed systems will be validated in mixed reality environments in which participants direct software agents while acting in the physical world. The principal hypothesis is that a set of interconnected interfaces can be developed that, through appropriate control algorithms, maximizes an operator's span of control over a team of agents and optimizes the operator's physical workload, mental workload, and situation awareness.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
This is the outcomes report for project 1619273: CHS: Small: Wearable Interfaces to Direct Agent Teams with Adaptive Autonomy. This work was aimed at supporting future disaster response scenarios where operatives would benefit from multiple drones providing information. We explored how to direct those drones through wearable technology that did not inhibit movement or awareness. Specifically, we were interested in developing more intelligent multi-agent systems (i.e., software simulations of drones) and better user interfaces for disaster contexts (e.g., wearable computer configurations). The work made use of a simulation environment to enable human participants to don wearable computers, move and act outdoors, and interact with virtual drones.
The grant has produced a set of reusable software and hardware configurations to investigate the value of wearable computers for supporting human-robot teams in the field (https://pixllab.github.io/URSDocumentation/). The system connects multiple pieces of software to create a mixed reality experience of working with virtual drones. In our scenarios to date, we use a game that is an analog of urban search and rescue. The player moves around in the real world, seeking out virtual goals, while the system tracks their location. To assist the player, multiple drones can be deployed to find goals the player cannot reach (e.g., on top of buildings).
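As a concrete illustration of the deployment logic described above, the following Python sketch shows one way idle drones might be assigned to goals the player cannot reach. It is not the project's actual code: the data structures, the search radius, and the nearest-idle-drone rule are illustrative assumptions.

```python
# Hypothetical sketch of the goal-assignment loop described above; names and
# thresholds are illustrative, not the project's implementation.
from dataclasses import dataclass
from math import hypot


@dataclass
class Goal:
    x: float                   # gameworld coordinates (meters)
    y: float
    reachable_on_foot: bool    # e.g., False for goals on rooftops


@dataclass
class Drone:
    drone_id: str
    x: float
    y: float
    busy: bool = False


def assign_drones(player_xy, goals, drones, search_radius=150.0):
    """Send idle drones to goals the player cannot reach on foot.

    Returns a list of (drone_id, goal) pairs; the nearest idle drone wins.
    """
    assignments = []
    px, py = player_xy
    for goal in goals:
        if goal.reachable_on_foot:
            continue  # the player handles these goals directly
        if hypot(goal.x - px, goal.y - py) > search_radius:
            continue  # outside the area currently being searched
        idle = [d for d in drones if not d.busy]
        if not idle:
            break
        drone = min(idle, key=lambda d: hypot(d.x - goal.x, d.y - goal.y))
        drone.busy = True
        assignments.append((drone.drone_id, goal))
    return assignments


if __name__ == "__main__":
    goals = [Goal(20.0, 35.0, reachable_on_foot=False),
             Goal(5.0, -10.0, reachable_on_foot=True)]
    drones = [Drone("uav1", 0.0, 0.0), Drone("uav2", 50.0, 50.0)]
    print(assign_drones((0.0, 0.0), goals, drones))
```

In this sketch the unreachable rooftop goal is handed to the closest idle drone, while on-foot goals are left to the player.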
To provide game logic and a first-person gameworld user interface through an avatar, we use an engine built on Unity. A drone simulation platform, Gazebo, tracks and simulates the virtual drones, including obstacle avoidance. A customized planner, taking inputs described in the Planning Domain Definition Language (PDDL), provides the drones' intelligence and communicates with Gazebo using the Robot Operating System (ROS). Finally, a set of hardware and software interfaces connects with these components to create a user interface that provides information about the gameworld and the ability to interact with the virtual drones. In our first studies, we use NASA WorldWind for a map visualization, displayed on a wrist-worn touchscreen, and provide drone data on a head-mounted display (HMD).
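To make the planner-to-ROS pipeline more concrete, the sketch below pairs a small PDDL problem (written as a Python string) with a minimal rospy publisher that hands it to a planner node. The domain name, predicates, and the /planner/problem topic are assumptions for illustration; the project's actual PDDL domains and ROS topics are not shown in the source.

```python
# Illustrative only: a PDDL problem embedded as a string plus a minimal rospy
# publisher. Topic names, message types, and the PDDL domain are assumptions.
import rospy
from std_msgs.msg import String

PROBLEM_PDDL = """
(define (problem survey-rooftops)
  (:domain drone-search)                       ; hypothetical domain name
  (:objects uav1 uav2 - drone
            base roof1 roof2 - location)
  (:init (at uav1 base) (at uav2 base)
         (idle uav1) (idle uav2))
  (:goal (and (surveyed roof1) (surveyed roof2))))
"""


def publish_problem():
    """Hand the PDDL problem to a (hypothetical) planner node over ROS."""
    rospy.init_node("mission_director")
    pub = rospy.Publisher("/planner/problem", String, queue_size=1, latch=True)
    rospy.sleep(1.0)  # give subscribers time to connect
    pub.publish(String(data=PROBLEM_PDDL))
    rospy.loginfo("Published PDDL problem to /planner/problem")


if __name__ == "__main__":
    try:
        publish_problem()
    except rospy.ROSInterruptException:
        pass
```

A planner subscribed to that topic would return a plan whose actions are then dispatched to the simulated drones in Gazebo; latching the publisher lets a late-starting planner still receive the problem.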
The project produced a design framework to guide building composite wearable computers. Constructing the framework involved qualitatively analyzing over 100 sources for information about components that could be used to build wearable computers. For each device identified for study, the framework captures four dimensions essential to choosing devices: the type of interactivity provided, the associated output modalities, mobility, and body location. The framework helps designers ensure that a resulting composite computer supports the types of interaction the designed tasks require and that the combination of devices will be compatible on the human body. We expect the framework to support researchers and designers in building new wearable computers across a number of domains by enabling the selection of the right devices for various contexts.
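A minimal sketch of how those four dimensions might be encoded to vet a candidate composite wearable follows; the field names, example values, and the body-location conflict rule are illustrative assumptions, not the published framework itself.

```python
# Hypothetical encoding of the framework's four dimensions, used to check that
# a composite wearable covers the required interactions and does not double-book
# a body location. All names and values are illustrative.
from dataclasses import dataclass
from typing import List, Set


@dataclass
class WearableDevice:
    name: str
    interactivity: Set[str]      # e.g., {"touch", "gesture", "chording"}
    output_modalities: Set[str]  # e.g., {"visual", "haptic", "audio"}
    mobility: str                # e.g., "hands-free", "one-handed"
    body_locations: Set[str]     # e.g., {"wrist", "head", "torso"}


def body_conflicts(devices: List[WearableDevice]) -> List[str]:
    """Report body locations claimed by more than one device."""
    seen, conflicts = {}, []
    for dev in devices:
        for loc in dev.body_locations:
            if loc in seen:
                conflicts.append(f"{loc}: {seen[loc]} vs {dev.name}")
            else:
                seen[loc] = dev.name
    return conflicts


def covers_interactions(devices: List[WearableDevice], required: Set[str]) -> bool:
    """Check that the composite supports every interaction the task needs."""
    provided = set().union(*(d.interactivity for d in devices))
    return required <= provided


if __name__ == "__main__":
    hmd = WearableDevice("HMD", {"gaze"}, {"visual"}, "hands-free", {"head"})
    watch = WearableDevice("wrist touchscreen", {"touch"},
                           {"visual", "haptic"}, "one-handed", {"wrist"})
    team = [hmd, watch]
    print(body_conflicts(team))                          # -> []
    print(covers_interactions(team, {"touch", "gaze"}))  # -> True
```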
The simulated environment for controlling teams of drones provides solutions for issues that arise in hybrid human-drone team coordination and planning. To the best of our knowledge, this is the first simulation environment that allows a single human to simultaneously control a team of robots through wearable devices. We undertook a number of user studies, starting with early prototypes and concluding with a test of multiple wearable configurations identified through the framework. Overall, participants were able to use our mixed reality system as planned. We identified needs for further training, and our studies show ways in which future wearable user interfaces might be improved.
Last Modified: 11/23/2020
Modified by: Zachary O Toups