3D Pose Estimation of Daily Objects Using an RGB-D Camera

Abstract

In this paper, we present an object pose estimation algorithm exploiting both depth and color information. While many approaches assume that a target region is cleanly segmented from the background, ours does not rely on that assumption and can therefore estimate the pose of a target object in heavy clutter. Recently, an oriented point pair feature was introduced as a low-dimensional description of object surfaces. The feature has been employed in a voting scheme to find a set of possible 3D rigid transformations between object model and test scene features. While several approaches using such pair features require an accurate 3D CAD model as training data, our approach relies only on several scanned views of a target object, and hence it is straightforward to learn new objects. In addition, we argue that exploiting color information significantly enhances the performance of the voting process in terms of both time and accuracy. To exploit the color information, we define a color point pair feature, which is employed in a voting scheme for more effective pose estimation. We present extensive quantitative results from comparative experiments between our approach and a state-of-the-art approach.
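To illustrate the idea, here is a minimal sketch of a color point pair feature. The geometric part follows the standard oriented point pair feature (distance between the points plus three angles); the color part simply appends both points' colors. The function names, the color channels, and the quantization steps are illustrative assumptions, not the paper's exact definition.

```python
import numpy as np

def color_point_pair_feature(p1, n1, c1, p2, n2, c2):
    """Feature for two oriented, colored surface points (p, n, c).

    Geometric part: (||d||, angle(n1, d), angle(n2, d), angle(n1, n2))
    with d = p2 - p1; color part: both 3-channel colors appended.
    """
    d = p2 - p1
    dist = np.linalg.norm(d)
    d_unit = d / dist

    def angle(u, v):
        # Numerically safe angle between two unit vectors.
        return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

    f_geom = np.array([dist,
                       angle(n1, d_unit),
                       angle(n2, d_unit),
                       angle(n1, n2)])
    return np.concatenate([f_geom, c1, c2])

def quantize(feature,
             dist_step=0.01,
             angle_step=np.deg2rad(12.0),
             color_step=0.2):
    """Discretize a feature so it can key a hash table for voting."""
    steps = np.array([dist_step] + [angle_step] * 3 + [color_step] * 6)
    return tuple((feature // steps).astype(int))
```

In a voting scheme, model features would be quantized and stored in a hash table offline; at test time, scene pair features look up matching model pairs, and each match casts a vote for a rigid transformation. Appending color shrinks the set of matching pairs per lookup, which is one way the abstract's claimed speed and accuracy gains could arise.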


Planar Surface SLAM with 3D and 2D Sensors

Abstract

We present an extension to our feature-based mapping technique that allows planar surfaces such as walls, tables, and counters to be used as landmarks in our mapper. These planar surfaces are measured both in 3D point clouds and in 2D laser scans. These sensing modalities complement each other well, as they differ significantly in their measurable fields of view and maximum ranges. We present experiments to evaluate the contributions of each type of sensor.
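A planar landmark is typically extracted from a point cloud by a least-squares plane fit. A minimal sketch, assuming numpy (the function name is hypothetical; the paper's own extraction pipeline is not specified in this abstract):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to an (N, 3) point cloud.

    Returns (n, d) with ||n|| = 1 such that n . x + d = 0 for points
    x on the plane. The normal is the right singular vector of the
    centered points associated with the smallest singular value.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]           # direction of least variance = plane normal
    d = -n @ centroid    # offset so the centroid lies on the plane
    return n, d
```

A 2D laser scan constrains the same landmark with a line segment lying in the plane, which is how the two modalities can share one parameterization in the map.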

Robust 3D visual tracking using particle filtering on the special Euclidean group: A combined approach of keypoint and edge features

Abstract

We present a 3D model-based visual tracking approach using edge and keypoint features in a particle filtering framework. Recently, particle-filtering-based approaches have been proposed to integrate multiple pose hypotheses and have shown good performance, but most of this work assumes that an initial pose is given. To overcome this limitation, we employ keypoint features to initialize the filter. Given 2D–3D keypoint correspondences, we randomly choose minimal sets of correspondences to calculate a set of possible pose hypotheses, and poses are drawn from this set in proportion to the inlier ratio of their correspondences to initialize the particles. After initialization, edge points are employed to estimate inter-frame motion. While we follow a standard edge-based tracking approach, we perform a refinement process to improve the edge correspondences between sampled model edge points and image edge points. For better tracking performance, we employ first-order autoregressive state dynamics, which propagates particles more effectively than a Gaussian random walk model. The proposed system re-initializes particles on its own when the tracked object leaves the field of view or is occluded. The robustness and accuracy of our approach are demonstrated through comparative experiments on synthetic and real image sequences.
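The initialization step can be sketched as weighted sampling of pose hypotheses. This assumes the hypotheses have already been computed from minimal 2D–3D correspondence sets (e.g. by a PnP solver); the helper name and signature are hypothetical, not the paper's API.

```python
import numpy as np

def initialize_particles(hypotheses, inlier_ratios, num_particles, seed=None):
    """Draw initial particles from pose hypotheses in proportion to
    their inlier ratios.

    hypotheses    -- list of candidate poses (opaque objects here)
    inlier_ratios -- fraction of correspondences consistent with each pose
    """
    rng = np.random.default_rng(seed)
    w = np.asarray(inlier_ratios, dtype=float)
    w = w / w.sum()  # normalize ratios into a sampling distribution
    idx = rng.choice(len(hypotheses), size=num_particles, p=w)
    return [hypotheses[i] for i in idx]
```

Hypotheses with higher inlier ratios seed more particles, so the filter starts concentrated near the most plausible poses while still keeping some diversity for the edge-based tracking stage to refine.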
