Learning to transfer: transferring latent task structures and its application to person-specific facial action unit detection

Almaev, Timur, Martinez, Brais and Valstar, Michel F. (2015) Learning to transfer: transferring latent task structures and its application to person-specific facial action unit detection. In: ICCV 2015, International Conference on Computer Vision, 11-18 Dec 2015, Santiago, Chile.

Full text not available from this repository.
Official URL: http://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Almaev_Learning_to_Transfer_ICCV_2015_paper.pdf
Abstract

In this article we explore the problem of constructing person-specific models for the detection of facial Action Units (AUs), addressing the problem from the point of view of Transfer Learning and Multi-Task Learning. Our starting point is the fact that some expressions, such as smiles, are very easily elicited, annotated, and automatically detected, while others are much harder to elicit and to annotate. We thus consider a novel problem: all AU models for the target subject are to be learnt using person-specific annotated data for a reference AU (AU12 in our case), and no data or little data regarding the target AU. In order to design such a model, we propose a novel Multi-Task Learning and the associated Transfer Learning framework, in which we consider both relations across subjects and AUs. That is to say, we consider a tensor structure among the tasks. Our approach hinges on learning the latent relations among tasks using a single reference AU, and then transferring these latent relations to other AUs. We show that we are able to effectively make use of the annotated data for AU12 when learning other person-specific AU models, even in the absence of data for the target task. Finally, we show the excellent performance of our method when small amounts of annotated data for the target tasks are made available.
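Since the full text is not available from this repository, the following is only a rough illustrative sketch of the kind of latent multi-task transfer the abstract describes: a factorised model in the spirit of GO-MTL, where each task's weight vector is w_t = L s_t, a shared latent basis L is learnt from the reference-AU tasks (one per subject), and only a low-dimensional code s is fitted for a target AU from little data. The factorisation style, the function names (fit_latent_basis, transfer_to_new_task), and all parameters (K, lam, iters) are assumptions for illustration, not the authors' actual algorithm.

```python
# Illustrative sketch only: a GO-MTL-style latent factorisation, assumed here
# as a stand-in for the paper's (unavailable) formulation.
import numpy as np

rng = np.random.default_rng(0)

def fit_latent_basis(Xs, ys, K=3, lam=0.1, iters=50):
    """Alternating ridge regression for a factorised multi-task model w_t = L @ s_t.

    Xs, ys: per-task design matrices and labels (e.g. one task per subject,
    all for the reference AU). Returns the shared basis L (d x K) and the
    per-task codes S (K x T) that encode the latent relations among tasks.
    """
    d, T = Xs[0].shape[1], len(Xs)
    L = rng.normal(size=(d, K))
    S = rng.normal(size=(K, T))
    for _ in range(iters):
        # With L fixed, each task's code is a small ridge problem on X_t @ L.
        for t in range(T):
            Z = Xs[t] @ L
            S[:, t] = np.linalg.solve(Z.T @ Z + lam * np.eye(K), Z.T @ ys[t])
        # With all codes fixed, solve one stacked ridge system for vec(L),
        # using X_t @ L @ s_t = kron(s_t^T, X_t) @ vec(L) (column-major vec).
        A, b = lam * np.eye(d * K), np.zeros(d * K)
        for t in range(T):
            M = np.kron(S[:, t:t + 1].T, Xs[t])
            A += M.T @ M
            b += M.T @ ys[t]
        L = np.linalg.solve(A, b).reshape(d, K, order="F")
    return L, S

def transfer_to_new_task(L, X_few, y_few, lam=0.1):
    """Fit only a K-dimensional code for a new task (e.g. a target AU),
    reusing the latent structure learned from the reference AU."""
    Z = X_few @ L
    K = L.shape[1]
    return np.linalg.solve(Z.T @ Z + lam * np.eye(K), Z.T @ y_few)

# Toy demo: 5 subjects with plentiful reference-AU data; the target task
# gets only 8 labelled samples (the "little data" regime from the abstract).
d, K_true, T = 10, 3, 5
L_true = rng.normal(size=(d, K_true))
S_true = rng.normal(size=(K_true, T))
Xs = [rng.normal(size=(80, d)) for _ in range(T)]
ys = [Xs[t] @ L_true @ S_true[:, t] + 0.05 * rng.normal(size=80) for t in range(T)]
L, S = fit_latent_basis(Xs, ys)

s_new = rng.normal(size=K_true)          # unseen task sharing the latent structure
X_few = rng.normal(size=(8, d))
y_few = X_few @ L_true @ s_new + 0.05 * rng.normal(size=8)
s_hat = transfer_to_new_task(L, X_few, y_few)

X_test = rng.normal(size=(500, d))
mse = np.mean((X_test @ L @ s_hat - X_test @ L_true @ s_new) ** 2)
print(f"held-out MSE with 8 target labels: {mse:.3f}")
```

The point of the factorisation is data efficiency: once the d x K basis is learnt from the well-annotated reference AU, a new task only needs enough labels to estimate K coefficients rather than a full d-dimensional model, which is why a handful of target-AU samples can suffice.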