Abstract
Simultaneous Localization and Mapping (SLAM) is not a problem with a one-size-fits-all solution. The literature includes a variety of SLAM approaches targeted at different environments, platforms, sensors, CPU budgets, and applications. We propose OmniMapper, a modular multimodal framework and toolbox for solving SLAM problems. The system can be used to generate pose graphs and to perform feature-based SLAM, and it also includes tools for semantic mapping. Multiple measurement types from different sensors can be combined for multimodal mapping. The framework is open, with standard interfaces that allow easy integration of new sensors and feature types. We present a detailed description of the mapping approach and the software framework that implements it, along with applications to several domains, including mapping with a service robot in an indoor environment, large-scale mapping on a PackBot, and mapping with a handheld RGB-D camera.
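The modular design described above can be illustrated with a short sketch. The following is a minimal, self-contained C++ example assuming a plugin-style architecture in which each sensor or feature plugin contributes constraints (factors) between poses to a shared pose graph; the class and method names are hypothetical, chosen for illustration, and are not the actual OmniMapper API.

// Hypothetical sketch of a plugin-style mapping interface (not the actual
// OmniMapper API): each sensor plugin converts raw measurements into
// constraints (factors) between poses, which a shared optimizer consumes.
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// A constraint between two pose variables, e.g. from odometry or ICP.
struct Factor {
  int from_pose;
  int to_pose;
  std::string description;  // placeholder for the actual measurement model
};

// Shared pose-graph container; a real system would hand these factors
// to a nonlinear least-squares backend for optimization.
class PoseGraph {
 public:
  void addFactor(const Factor& f) { factors_.push_back(f); }
  void optimize() const {
    std::cout << "Optimizing over " << factors_.size() << " factors\n";
  }
 private:
  std::vector<Factor> factors_;
};

// Standard interface every sensor/feature plugin implements.
class MeasurementPlugin {
 public:
  virtual ~MeasurementPlugin() = default;
  // Called whenever the mapper creates a new pose; the plugin emits
  // whatever factors it has accumulated since the previous pose.
  virtual std::vector<Factor> spin(int prev_pose, int new_pose) = 0;
};

// Example plugin: relative-pose constraints from wheel odometry.
class OdometryPlugin : public MeasurementPlugin {
 public:
  std::vector<Factor> spin(int prev_pose, int new_pose) override {
    return {{prev_pose, new_pose, "odometry relative pose"}};
  }
};

// Example plugin: constraints from planar-surface features in RGB-D data.
class PlanePlugin : public MeasurementPlugin {
 public:
  std::vector<Factor> spin(int prev_pose, int new_pose) override {
    return {{prev_pose, new_pose, "plane landmark observation"}};
  }
};

int main() {
  PoseGraph graph;
  std::vector<std::unique_ptr<MeasurementPlugin>> plugins;
  plugins.emplace_back(new OdometryPlugin());
  plugins.emplace_back(new PlanePlugin());

  // Multimodal mapping: every registered plugin contributes factors to the
  // same graph, so measurements from different sensors are combined.
  for (int pose = 1; pose <= 3; ++pose) {
    for (auto& p : plugins) {
      for (const Factor& f : p->spin(pose - 1, pose)) graph.addFactor(f);
    }
  }
  graph.optimize();
}

Because plugins only interact with the graph through this shared factor interface, adding a new sensor or feature type amounts to writing a new plugin rather than modifying the mapper itself.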
Authors
Alexander J. B. Trevor, College of Computing, Georgia Tech, atrevor [at] cc.gatech.edu
John G. Rogers III, United States Army Research Lab, Adelphi, MD, john.g.rogers59.civ [at] mail.mil
Henrik Christensen, College of Computing, Georgia Tech, hic [at] cc.gatech.edu
Paper
Video
Code/Data
The software framework is available as open source at http://www.omnimapper.org
Acknowledgement
This work was financially supported by the Boeing Corporation and the US Army Research Lab.
Citation
A. J. B. Trevor, J. G. Rogers III, and H. I. Christensen, "OmniMapper: A Modular Multimodal Mapping Framework," in Proc. IEEE International Conference on Robotics and Automation (ICRA), 2014.