OmniMapper: A Modular Multimodal Mapping Framework


Simultaneous Localization and Mapping (SLAM) is not a problem with a one-size-fits-all solution. The literature includes a variety of SLAM approaches targeted at different environments, platforms, sensors, CPU budgets, and applications. We propose OmniMapper, a modular multimodal framework and toolbox for solving SLAM problems. The system can be used to generate pose graphs and perform feature-based SLAM, and it also includes tools for semantic mapping. Measurements of multiple types from different sensors can be combined for multimodal mapping. The framework is open, with standard interfaces that allow easy integration of new sensors and feature types. We present a detailed description of the mapping approach and of the software framework that implements it, along with applications in several domains, including mapping with a service robot in an indoor environment, large-scale mapping on a PackBot, and mapping with a handheld RGBD camera.
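The modular idea described above — independent measurement sources contributing constraints to a single pose graph through a standard interface — can be sketched in a few lines. Note this is a toy illustration, not OmniMapper's actual API: the class names (`MeasurementPlugin`, `PoseGraph`, `OdometryPlugin`) and the dead-reckoned 2D trajectory are invented for this example, and the real system optimizes the graph rather than simply chaining constraints.

```python
import math
from dataclasses import dataclass
from typing import List, Protocol

@dataclass
class PoseConstraint:
    """Relative motion (dx, dy, dtheta) between poses i and j."""
    i: int
    j: int
    dx: float
    dy: float
    dtheta: float

class MeasurementPlugin(Protocol):
    """Hypothetical standard interface: any sensor processor
    yields pose constraints, so new sensors plug in uniformly."""
    def constraints(self) -> List[PoseConstraint]: ...

class OdometryPlugin:
    """Example wheel-odometry source (values are made up)."""
    def constraints(self) -> List[PoseConstraint]:
        return [PoseConstraint(0, 1, 1.0, 0.0, 0.0),
                PoseConstraint(1, 2, 1.0, 0.0, math.pi / 2)]

class PoseGraph:
    """Collects constraints from all registered plugins and
    composes them into a trajectory by dead reckoning."""
    def __init__(self) -> None:
        self.plugins: List[MeasurementPlugin] = []

    def register(self, plugin: MeasurementPlugin) -> None:
        self.plugins.append(plugin)

    def trajectory(self):
        poses = [(0.0, 0.0, 0.0)]  # start at the origin
        cs = sorted((c for p in self.plugins for c in p.constraints()),
                    key=lambda c: c.i)
        for c in cs:
            x, y, th = poses[c.i]
            # Compose the relative motion in the earlier pose's frame.
            poses.append((x + c.dx * math.cos(th) - c.dy * math.sin(th),
                          y + c.dx * math.sin(th) + c.dy * math.cos(th),
                          th + c.dtheta))
        return poses

graph = PoseGraph()
graph.register(OdometryPlugin())
print(graph.trajectory())
```

Additional sensors (e.g. an RGBD feature tracker) would simply be further `MeasurementPlugin` implementations registered on the same graph, which is the multimodal combination the abstract refers to.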


Alexander J. B. Trevor
College of Computing,
Georgia Tech
atrevor [at]
John G. Rogers III
United States Army Research Lab,
Adelphi, MD
john.g.rogers59.civ [at]
Henrik Christensen
College of Computing,
Georgia Tech
hic [at]


Download (PDF)



The software framework is available as open source at


This work was financially supported by the Boeing Corporation and the US Army Research Lab.


[bibtex key=trevor14:_omnim]

Posted in Conference, ICRA, Multi-robot Semantic Mapping, Publications.