Online learning and fusion of orientation appearance models for robust rigid object tracking

Marras, Ioannis, Medina, Joan Alabort, Tzimiropoulos, Georgios, Zafeiriou, Stefanos and Pantic, Maja (2013) Online learning and fusion of orientation appearance models for robust rigid object tracking. In: 2013 10th IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2013), 22-26 April 2013, Shanghai, China.

Full text not available from this repository.

Abstract

We present a robust framework for learning and fusing different modalities for rigid object tracking. Our method fuses data obtained from a standard visual camera with dense depth maps obtained by low-cost consumer depth cameras such as the Kinect. To combine these two completely different modalities, we propose to use features that do not depend on the data representation: angles. More specifically, our method combines image gradient orientations, as extracted from intensity images, with the directions of surface normals computed from the dense depth fields provided by the Kinect. To incorporate these features in a learning framework, we use a robust kernel based on the Euler representation of angles. This kernel enables us to cope with gross measurement errors and missing data, as well as typical problems in visual tracking such as illumination changes and occlusions. Additionally, the employed kernel can be efficiently implemented online. Finally, we propose to capture the correlations between the obtained orientation appearance models using a fusion approach motivated by the original Active Appearance Model (AAM). The proposed learning and fusion framework is thus robust, exact, computationally efficient, and does not require off-line training. By combining the proposed models with a particle filter, the resulting tracking framework achieves robust performance in very difficult tracking scenarios, including extreme pose variations.
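To make the angle-based features concrete, the following Python sketch shows the general idea behind the two orientation modalities and the Euler-representation kernel described in the abstract: gradient orientations are extracted from an intensity image, normal directions from a depth map, each angle is mapped to the unit circle as e^{i*theta}, and similarity reduces to an average cosine of angle differences. This is a minimal illustration under stated assumptions (NumPy, plain finite-difference gradients, and hypothetical function names such as orientation_kernel), not the authors' implementation.

```python
import numpy as np

def gradient_orientations(image):
    """Per-pixel gradient orientations of an intensity image, in radians."""
    gy, gx = np.gradient(image.astype(float))
    return np.arctan2(gy, gx)

def normal_azimuths(depth):
    """In-plane directions of surface normals from a dense depth map.

    The normal of the surface z = d(x, y) is proportional to
    (-dz/dx, -dz/dy, 1); here we keep only its image-plane direction
    as an angle, so depth is reduced to the same representation as
    the intensity gradients above.
    """
    dzy, dzx = np.gradient(depth.astype(float))
    return np.arctan2(-dzy, -dzx)

def euler_features(angles):
    """Euler representation: map each angle theta to e^{i*theta}.

    Normalizing by sqrt(N) makes the feature vector unit-length.
    """
    return np.exp(1j * angles.ravel()) / np.sqrt(angles.size)

def orientation_kernel(angles_a, angles_b):
    """Similarity between two orientation fields.

    The real part of the complex inner product of the Euler features
    equals (1/N) * sum(cos(theta_a - theta_b)).  Pixels whose
    orientation differences are uniformly distributed (e.g. under
    occlusion or gross measurement error) contribute roughly zero,
    which is the source of the kernel's robustness.
    """
    return float(np.real(np.vdot(euler_features(angles_a),
                                 euler_features(angles_b))))

# Hypothetical usage: compare two frames of either modality.
frame_a = np.random.rand(64, 64)
frame_b = frame_a + 0.01 * np.random.rand(64, 64)  # nearly identical
print(orientation_kernel(gradient_orientations(frame_a),
                         gradient_orientations(frame_b)))  # close to 1.0
```

Because both modalities end up as unit-norm Euler feature vectors, intensity and depth can be treated uniformly by the downstream fusion and online subspace learning, which is the point of using angles as the shared representation.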