Award Abstract # 1337722
MRI: Development of Large-Scale Dense Scene Capture and Tracking Instrument

NSF Org: CNS
Division Of Computer and Network Systems
Recipient: GEORGE WASHINGTON UNIVERSITY (THE)
Initial Amendment Date: August 15, 2013
Latest Amendment Date: March 9, 2016
Award Number: 1337722
Award Instrument: Standard Grant
Program Manager: Rita Rodriguez
CNS
 Division Of Computer and Network Systems
CSE
 Directorate for Computer and Information Science and Engineering
Start Date: September 1, 2013
End Date: August 31, 2018 (Estimated)
Total Intended Award Amount: $500,000.00
Total Awarded Amount to Date: $500,000.00
Funds Obligated to Date: FY 2013 = $500,000.00
History of Investigator:
  • James Hahn (Principal Investigator)
    hahn@gwu.edu
  • John Philbeck (Co-Principal Investigator)
  • Taeyoung Lee (Co-Principal Investigator)
  • Sergio Almecija (Co-Principal Investigator)
  • Gabriel Sibley (Co-Principal Investigator)
  • Brian Richmond (Former Co-Principal Investigator)
Recipient Sponsored Research Office: George Washington University
1918 F ST NW
WASHINGTON
DC  US  20052-0042
(202)994-0728
Sponsor Congressional District: 00
Primary Place of Performance: George Washington University
Washington
DC  US  20052-0058
Primary Place of Performance Congressional District: 00
Unique Entity Identifier (UEI): ECR5E2LU5BL6
Parent UEI:
NSF Program(s): Major Research Instrumentation
Primary Program Source: 01001314DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 1189
Program Element Code(s): 118900
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Proposal #: 13-37899
PI(s): Hahn, James K.
Lee, Taeyoung; Philbeck, John W.; Richmond, Brian G.; Sibley, Gabriel
Institution: George Washington University
Title: MRI/Dev.: Large-Scale Dense Scene Capture and Tracking Instrument
Project Proposed:
This project develops a large-scale, dense 3D measurement instrument for capturing dynamic environments. It integrates range-and-color sensing devices such as depth cameras (RGB-D sensors), designing and developing the key technical methodologies needed to fuse the data received from remote networked sensors. The instrument will collectively cover a large space at a sampling resolution of at least 1 cm, with submillimeter resolution in localized regions, and these data are fused into a single underlying representation (a sketch of such a fusion pipeline appears at the end of this abstract). The work involves developing a system that possesses both large-scale and real-time dense capture capabilities. Specifically, the instrument serves the following research needs:
- Experimentally validating perception, planning, and control algorithms for agile mobile robots (particularly those that operate with deformable objects) requires ground-truth representations of their environments.
- Validating computational tools for tether dynamics and control of flexible multibody systems requires capturing their behavior in a large environment.
- The study of human motion for biomechanics, physical therapy, and exercise science applications requires accurate capture of dynamically changing, deformable human shapes in a large environment.
- Image-guided surgical procedures require capturing localized, dense patient anatomical surfaces registered within a larger surgical environment.
- Human visual perception and navigation require a dense model of the surrounding environment that includes objects in motion, thus advancing the state of eye-movement analysis by enabling fast, automated, and objective coding of the objects people see as they move through the environment.
- Studying foot deformation, enabled by dense shape capture during running and walking on real sediments, will shed light on the evolution of gait and human anatomy and on the biomechanics of barefoot walking and running.
The developed system thus facilitates new research by enabling rapid capture and construction of large, dynamic, high-resolution virtual environments that duplicate specific real-world environments, including deformable objects, with unprecedented density of detail.
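To make the fusion step concrete, the following is a minimal sketch (an illustration, not the project's actual pipeline) of integrating depth frames from several calibrated, networked RGB-D sensors into one shared volumetric model, using the open-source Open3D library. The sensor poses and the get_rgbd_frame helper are hypothetical stand-ins for the instrument's calibration and capture interfaces.

    import numpy as np
    import open3d as o3d

    # Shared volumetric model: a scalable TSDF covering the capture space.
    # voxel_length=0.01 matches the abstract's ~1 cm global sampling resolution.
    volume = o3d.pipelines.integration.ScalableTSDFVolume(
        voxel_length=0.01,
        sdf_trunc=0.04,
        color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

    # Hypothetical calibration: one 4x4 camera-to-world pose per sensor.
    sensor_poses = {0: np.eye(4), 1: np.eye(4)}  # stand-in values

    intrinsic = o3d.camera.PinholeCameraIntrinsic(
        o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)

    def get_rgbd_frame(sensor_id):
        """Hypothetical capture call returning an o3d.geometry.RGBDImage."""
        raise NotImplementedError

    for sensor_id, pose in sensor_poses.items():
        rgbd = get_rgbd_frame(sensor_id)
        # integrate() expects the world-to-camera extrinsic, i.e. the pose inverse.
        volume.integrate(rgbd, intrinsic, np.linalg.inv(pose))

    # Extract the fused, single underlying representation of the scene.
    mesh = volume.extract_triangle_mesh()

Run once per time step over frames arriving from the networked sensors, the same loop yields the temporal evolution of the fused model.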

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

(Showing: 1 - 10 of 58)
Aviles-Rivero, A., Alsaleh, S. M., Philbeck, J. W., Raventos, S. P., Younes, N., Hahn, J. K., & Casals, A. "Sensory substitution for force feedback recovery: A perception experimental study" Transactions on Applied Perception , v.15 , 2018
Aviles-Rivero, Angelica; Alsaleh, Samar; Sobrevilla, Pilar; Hahn, James; Casals, Alicia "Towards Retrieving Force Feedback in Robotic-Assisted Surgery: A Supervised Neuro-Recurrent-Vision Approach" Transactions on Haptics , v.13 , 2017
Dai Lei, Jiang Dai-Hong, Ding Bin, James K. Hahn "Improved Digital Image Restoration Algorithm Based on Criminisi" Journal of Digital Information Management , v.14 , 2016
E. Kaufman and K. Takami and T. Lee and Z. Ai "Autonomous Exploration with Exact Inverse Sensor Models" Journal of Intelligent and Robotic Systems , 2018 10.1007/s10846-017-0710-7
E. Kaufman and T. Lovell and T. Lee "Minimum Uncertainty JPDA Filters and Coalescence Avoidance for Multiple Object Tracking" Journal of the Astronautical Sciences , v.63 , 2016 , p.308--334 10.1007/s40295-016-0092-2
E. Kaufman, T. Lee, Z. Ai, and I. Moskowitz "Bayesian occupancy grid mapping via an exact inverse sensor model" Proceedings of the American Control Conference , 2016
F. Goodarzi and D. Lee and T. Lee "Geometric Control of a Quadrotor UAV Transporting a Payload Connected to a Quadrotor UAV via Flexible Cable" International Journal of Control, Automation, and Systems , v.13 , 2015 , p.1--13 10.1007/s12555-014-0304-0
F. Goodarzi and T. Lee "Extended kalman lter on se(3) for geometriccontrol of a quadrotor uav" Proceedings of the IEEE International Con-ference on Unmanned Aircraft Systems , 2016
F. Goodarzi and T. Lee "Global Formulation of an Extended Kalman Filter on SE(3) for Geometric Control of a Quadrotor UAV" Journal of Intelligent and Robotic Systems , v.88 , 2017 , p.395--416 10.1007/s10846-017-0525-6
F. Goodarzi and T. Lee "Global Formulation of an Extended Kalman Filter on SE(3) for Geometric Control of a Quadrotor UAV" Journal of Intelligent and Robotic Systems , 2017 10.1007/s10846-017-0525-6
F. Goodarzi and T. Lee "Stabilization of a rigid body payload with multiple cooperative quadrotors" ASME Journal of Dynamic Systems, Measurement, and Control , v.138 , 2016 , p.121001--1 10.1115/1.4033945

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

The goal of this project was to develop a novel instrument for large-scale, real-time, dense 3D surface and motion capture of dynamic scenes. A key component of the instrument is its array of low-cost commodity RGB-D devices, which makes it possible to scale up the coverage area simply by adding more of these relatively cheap devices. The instrument can measure, in real time, dense 3D scene structure and color-appearance information over a large volume at sub-centimeter resolution, with focused volumes at sub-millimeter resolution (one way such a focused volume can be registered into the global model is sketched below). This gives users the ability to track scene appearance, 3D structure, and their temporal evolution at unprecedented scale and resolution. In a broad range of fields, knowledge of the evolution of ground-truth 3D structure and scene appearance is invaluable for validating scientific theories.
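As one concrete illustration of this capability (a sketch under assumed inputs, not the instrument's actual code), a focused sub-millimeter scan can be registered into the coarser global model with point-to-plane ICP, for example via the open-source Open3D library; local_scan.ply and global_model.ply are hypothetical file names.

    import numpy as np
    import open3d as o3d

    # Hypothetical inputs: a dense local scan and the coarser global model.
    source = o3d.io.read_point_cloud("local_scan.ply")    # sub-millimeter scan
    target = o3d.io.read_point_cloud("global_model.ply")  # sub-centimeter model

    # Point-to-plane ICP needs surface normals on the target cloud.
    target.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

    # Refine a rough initial alignment (identity here) within a 2 cm search radius.
    result = o3d.pipelines.registration.registration_icp(
        source, target, 0.02, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPlane())

    # Apply the estimated rigid transform to place the scan in the global frame.
    source.transform(result.transformation)

The point-to-plane error metric is a common choice when the target surface is smooth and densely sampled, since it typically converges faster than point-to-point ICP on such data.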

3D surface and motion capture is still in its infancy, but the technology has the potential to impact every segment of society. Smartphones such as the iPhone X are beginning to incorporate depth (RGB-D) cameras, and commodity RGB-D devices such as the Microsoft Kinect are able to capture scenes at high resolution. Beyond the instrument itself, the basic technology developed in this project is bound to impact future uses of these commodity devices.

The instrument is housed in the Motion Capture and Analysis Laboratory (MOCA) at the George Washington University. MOCA has been used in robotics to track the ground-truth motions of flocks of unmanned flying robots. Anthropologists have used MOCA to track the human motions involved in locomotion, tool making, and throwing in order to study the biomechanics of early hominids. The MOCA instrument and technology have also been used to capture the motions involved in medical procedures, such as endotracheal intubation and intravenous injection. Psychologists have used MOCA to study the cognitive factors involved in human navigation.

The instrument has also been used in non-technical domains. In collaboration with the Jane Goodall Institute, we are using it to create avatars of Jane Goodall. Dancers from the Maida Withers Dance Company were scanned with the instrument, and the data were then visualized during a live dance performance.

 


Last Modified: 10/10/2018
Modified by: James K Hahn
