Award Abstract # 1318392
RI: Small: Robust and Long-Term Visual Mapping and Localization

NSF Org: IIS
Division of Information & Intelligent Systems
Recipient: MASSACHUSETTS INSTITUTE OF TECHNOLOGY
Initial Amendment Date: August 26, 2013
Latest Amendment Date: June 21, 2014
Award Number: 1318392
Award Instrument: Continuing Grant
Program Manager: Reid Simmons
IIS
 Division of Information & Intelligent Systems
CSE
 Directorate for Computer and Information Science and Engineering
Start Date: September 1, 2013
End Date: August 31, 2017 (Estimated)
Total Intended Award Amount: $373,293.00
Total Awarded Amount to Date: $373,293.00
Funds Obligated to Date: FY 2013 = $120,691.00
FY 2014 = $252,602.00
History of Investigator:
  • John Leonard (Principal Investigator)
    jleonard@mit.edu
Recipient Sponsored Research Office: Massachusetts Institute of Technology
77 MASSACHUSETTS AVE
CAMBRIDGE
MA  US  02139-4301
(617)253-1000
Sponsor Congressional District: 07
Primary Place of Performance: Massachusetts Institute of Technology
Cambridge
MA  US  02139-4301
Primary Place of Performance Congressional District: 07
Unique Entity Identifier (UEI): E2NYLCDML6V1
Parent UEI: E2NYLCDML6V1
NSF Program(s): Robust Intelligence
Primary Program Source: 01001314DB NSF RESEARCH & RELATED ACTIVIT
01001415DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 7495, 7923
Program Element Code(s): 749500
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

This project develops robust and persistent algorithms for mapping and localization using low-cost visual/depth cameras and inertial sensors. New map representations and algorithms are developed to provide computationally efficient long-term 3D mapping and navigation. Topics of investigation include incremental non-Gaussian inference techniques, dense mapping, change detection in dynamic environments, and semantic understanding. A lack of robustness has been a key shortcoming of previous techniques for localization, and has thwarted the development of persistently autonomous mobile robot systems. Extension to multimodal distributions poses significant intellectual difficulties. Dense methods are transforming robotic perception, enabling sophisticated physical interaction with objects, traversal of stairs, and safe maneuvering in cluttered and confined spaces. Whereas most past research in robotic mapping has assumed a static world, the approach being developed in this grant exploits the dynamics of the world to discover information about objects and places. These advances are being tested on robotic and man-portable sensing systems operating in indoor, outdoor, and underwater environments. The expected impacts span a broad range of applications in which perception is a key requirement, including robotic manufacturing, medical robotics, agriculture, and space and underwater exploration. Other potential spin-offs include human-portable mapping applications in real estate, construction, facility maintenance, and health and safety. MIT Online Robotics Education provides a set of online course materials for core topics in robotics, targeted to a broad audience for high school and college education. Open source software modules provide positioning capabilities for low-cost robots for education and service robotics applications.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


D. Fourie, J. Leonard, and M. Kaess "A Nonparametric Belief Solution to the Bayes Tree" International Conference on Intelligent Robots and Systems (IROS) , 2016
G. Huang, M. Kaess, and J. J. Leonard "Consistent Unscented Incremental Smoothing for Multi-robot Cooperative Target Tracking" Robotics and Autonomous Systems , 2014 DOI:10.1016/j.robot.2014.08.007
Liam Paull, Jacopo Tani, Heejin Ahn, Javier Alonso-Mora, Luca Carlone, Michal Cap, Yu Fan Chen, Changhyun Choi, Jeff Dusek, Yajun Fang, Daniel Hoehener, Shih-Yuan Liu, Michael Novitzky, Igor Franzoni Okuyama, Jason Pazis, Guy Rosman, Valerio Varricchio, H "Duckietown: an Open, Reproducible, Inexpensive, and Friendly Platform for Autonomy Education and Research" IEEE International Conference on Robotics and Automation , 2017
S. Pillai and J. Leonard "Towards Visual Ego-motion Learning in Robots" International Conference on Intelligent Robots and Systems (IROS) , 2017
T. Whelan, M. Kaess, H. Johannsson, M. Fallon, J. J. Leonard and J. McDonald "Real-time Large Scale Dense RGB-D SLAM with Volumetric Fusion" International Journal of Robotics Research , v.34 , 2015 , p.598-626 DOI:10.1177/0278364914551008

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

This research project developed new robust algorithms for improving the abilities of mobile robots to build maps of unknown environments, while concurrently using those maps to navigate.  This capability, known as Simultaneous Localization and Mapping (SLAM), is a cornerstone of robust perception and autonomy for a wide range of mobile robots, such as self-driving cars, unmanned air vehicles, and undersea robots. Our work advanced the fundamental underlying techniques for SLAM in a number of ways, producing several contributions of high intellectual merit relative to the mobile robotics literature.  A key focus was increasing robustness, thereby improving long-term autonomy. For example, we developed a novel solution for "loop closing", in which a robot corrects its map upon revisiting a previously seen place by applying a geometric correction, using rich dense 3D representations extracted from RGB-D camera data [Whelan et al., "Real-time Large Scale Dense RGB-D SLAM with Volumetric Fusion", IJRR, Vol 34, Issue 4-5, 2015]. Another advance was the development of a new approach to enable a robot to maintain multimodal beliefs about the locations of entities in the world; this is helpful for highly ambiguous situations in which sensor measurements and location estimates are not well represented by a Gaussian distribution [Fourie et al., "A nonparametric belief solution to the Bayes tree", IROS 2016].  This problem is challenging due to computational complexity, but important because it can improve the ability of mobile robots to avoid making mistakes and to more easily recover from failures.  In addition, we developed some of the first "certifiably correct" algorithms for SLAM [Rosen et al., WAFR 2016, Best Paper Award].
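The loop-closing idea can be illustrated with a toy example: a robot drives a loop, odometry drift accumulates, and revisiting the start reveals the accumulated error, which is then spread back over the trajectory. This is a minimal sketch of the general concept only, not the dense volumetric method of Whelan et al.; all function names and numbers below are illustrative assumptions.

```python
import math

def integrate_odometry(steps):
    """Integrate (dx, dy) odometry increments into absolute 2D positions."""
    poses = [(0.0, 0.0)]
    for dx, dy in steps:
        x, y = poses[-1]
        poses.append((x + dx, y + dy))
    return poses

def close_loop(poses):
    """On revisiting the start, distribute the residual error linearly
    along the trajectory (a crude stand-in for pose-graph optimization)."""
    ex, ey = poses[-1]          # residual: final pose should coincide with start
    n = len(poses) - 1
    return [(x - ex * i / n, y - ey * i / n)
            for i, (x, y) in enumerate(poses)]

# A square loop whose odometry carries a small rightward drift per step.
drift = 0.02
square = ([(1.0 + drift, 0.0)] * 5 + [(drift, 1.0)] * 5 +
          [(-1.0 + drift, 0.0)] * 5 + [(drift, -1.0)] * 5)
raw = integrate_odometry(square)
corrected = close_loop(raw)

print(f"error before closure: {math.hypot(*raw[-1]):.2f}")
print(f"error after closure:  {math.hypot(*corrected[-1]):.2f}")
```

Real systems detect the loop closure from sensor data (here the revisit is assumed) and solve a nonlinear optimization over all poses rather than distributing the error linearly, but the effect is the same: a global correction that removes accumulated drift.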
Finally, our work also led to some of the first SLAM algorithms that use a deep learning architecture to perform visual odometry and loop closing, with self-supervised techniques for motion estimation and map correction [Pillai and Leonard, "Towards visual ego-motion learning in robots", IROS 2017].
The broader impacts of our project include a number of contributions to the mentoring of young roboticists and the development of educational materials for widespread use.  Perhaps the most notable broader-impacts outcome of our project was the creation of the highly successful "Duckietown" http://duckietown.mit.edu/ which was developed and taught at MIT in Spring 2016 by the PI's junior colleagues Andrea Censi and Liam Paull, under partial support from this NSF grant.  Duckietown is an open, reproducible, inexpensive, and friendly platform for autonomy education and research.  Duckietown is explicitly designed to make robotics more approachable, enhancing diversity and inclusion. The system comprises small autonomous vehicles (“Duckiebots”) built from off-the-shelf components, and a city (“Duckietown”) complete with roads, signage, traffic lights, obstacles, and citizens (duckies) in need of transportation. The Duckietown platform is designed to be “minimal” in terms of cost, while providing a large range of autonomy behaviors. The Duckiebots are able to: follow lanes while avoiding obstacles, pedestrians (duckies), and other Duckiebots; localize within a global map; navigate a city; and coordinate at intersections to avoid collisions. A Duckiebot senses the world with only one monocular camera and performs all processing onboard a Raspberry Pi 2. To perform all the functionality required (compensating for variable illumination, detecting lane markings, signage, traffic lights, obstacles, etc.), we implemented explicit resource management for computation, memory, and bandwidth. Since all of the code and instructions are fully available as open source and the platform is low-cost, the hope is that others in the community will adopt the platform for education and research.  Duckietown has now grown into a global effort, involving approximately ten universities on four continents.  See http://duckietown.org/ for more information.
Overall, this project supported and involved a wide range of MIT graduate and undergraduate students, as well as MIT postdocs, who contributed both to fundamental research and to educational curriculum development, generating a considerable number of papers presented at leading international conferences and workshops in robotics.


Last Modified: 12/16/2017
Modified by: John J Leonard
