3D Pose Estimation of Daily Objects Using an RGB-D Camera


In this paper, we present an object pose estimation algorithm that exploits both depth and color information. While many approaches assume that a target region is cleanly segmented from the background, our approach does not rely on that assumption, and thus it can estimate the pose of a target object in heavy clutter. Recently, an oriented point pair feature was introduced as a low-dimensional description of object surfaces. The feature has been employed in a voting scheme to find a set of possible 3D rigid transformations between object model features and test scene features. While several approaches using pair features require an accurate 3D CAD model as training data, our approach relies only on several scanned views of a target object, and hence it is straightforward to learn new objects. In addition, we argue that exploiting color information significantly enhances the performance of the voting process in terms of both time and accuracy. To exploit the color information, we define a color point pair feature, which is employed in a voting scheme for more effective pose estimation. We show extensive quantitative results of comparative experiments between our approach and a state-of-the-art approach.
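As a rough illustration of the idea, the standard oriented point pair feature describes two oriented surface points by their distance and three angles, and a color variant can augment this with the colors of both points. The sketch below assumes the common four-dimensional geometric feature (distance plus the angles among the two normals and the difference vector) with the two points' color vectors simply concatenated; the paper's exact definition and quantization may differ.

```python
import numpy as np

def pair_feature(p1, n1, p2, n2):
    """Oriented point pair feature: distance between the two points plus
    the angles of each normal against the difference vector and against
    each other (assumed standard formulation, not the paper's exact one)."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    du = d / dist  # unit difference vector
    ang = lambda a, b: np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    return np.array([dist, ang(n1, du), ang(n2, du), ang(n1, n2)])

def color_pair_feature(p1, n1, c1, p2, n2, c2):
    """Hypothetical color point pair feature: the geometric feature
    concatenated with both points' color vectors (e.g. RGB or HSV)."""
    return np.concatenate([pair_feature(p1, n1, p2, n2), c1, c2])
```

In a voting scheme along these lines, such features computed from scene point pairs would be quantized and used as keys into a hash table of model pairs; the added color dimensions prune geometrically similar but differently colored matches before voting.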


Changhyun Choi
College of Computing,
Georgia Tech
heanylab [at] gmail.com

Henrik Christensen
College of Computing,
Georgia Tech
hic [at] cc.gatech.edu



Download (PDF)



This work has in part been sponsored by the Boeing Corporation. The support is gratefully acknowledged.


[bibtex key=choi12:_pose_rgb_d]

Posted in Conference, IROS, Multi-robot Semantic Mapping, Publications.