Fusing deep learned and hand-crafted features of appearance, shape, and dynamics for automatic pain estimation

Egede, Joy Onyekachukwu, Valstar, Michel F. and Martinez, Brais (2017) Fusing deep learned and hand-crafted features of appearance, shape, and dynamics for automatic pain estimation. In: 12th IEEE Conference on Face and Gesture Recognition (FG 2017), 30 May-3 June 2017, Washington, D.C., U.S.A.

Full text not available from this repository.
Official URL: http://ieeexplore.ieee.org/abstract/document/7961808/
Abstract

Automatic continuous-time, continuous-value assessment of a patient's pain from face video is highly sought after by the medical profession. Despite recent advances in deep learning that attain impressive results in many domains, pain estimation risks being unable to benefit from them because of the difficulty of obtaining data sets of considerable size. In this work we propose a combination of hand-crafted and deep-learned features that makes the most of deep learning techniques in small-sample settings. Encoding shape, appearance, and dynamics, our method significantly outperforms the current state of the art, attaining an RMSE of less than 1 point on a 16-level pain scale, while achieving a Pearson correlation coefficient of 67.3% between the predicted pain level time series and the ground truth.
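The abstract does not describe the fusion pipeline in detail, but the pattern it names (concatenating hand-crafted shape, appearance, and dynamics features with deep-learned embeddings and regressing a continuous pain score, then evaluating with RMSE and Pearson correlation) can be sketched as follows. This is a purely illustrative sketch, not the paper's method: the feature dimensions, the random placeholder data, and the Ridge regressor are all assumptions introduced here for demonstration.

```python
# Illustrative sketch (assumed, not the paper's pipeline): early fusion of
# hand-crafted and deep-learned per-frame features, followed by a simple
# regressor predicting pain intensity on a 16-level scale.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Placeholder per-frame features; in practice these would come from a face
# tracker (shape), appearance descriptors, a pretrained CNN (deep features),
# and temporal differences (dynamics).
n_frames = 1000
shape_feats = rng.normal(size=(n_frames, 98))        # e.g. landmark coordinates
appearance_feats = rng.normal(size=(n_frames, 128))  # e.g. texture descriptors
deep_feats = rng.normal(size=(n_frames, 256))        # e.g. CNN embedding
dynamics_feats = rng.normal(size=(n_frames, 98))     # e.g. landmark velocities

# Early fusion: concatenate all feature streams for each frame.
X = np.hstack([shape_feats, appearance_feats, deep_feats, dynamics_feats])
y = rng.uniform(0, 16, size=n_frames)  # placeholder pain labels, 16-level scale

# Split by time to mimic continuous-time prediction on unseen frames.
split = int(0.8 * n_frames)
model = Ridge(alpha=1.0).fit(X[:split], y[:split])
pred = model.predict(X[split:])

# Report the two metrics quoted in the abstract: RMSE and Pearson correlation.
rmse = np.sqrt(mean_squared_error(y[split:], pred))
corr, _ = pearsonr(y[split:], pred)
print(f"RMSE: {rmse:.2f}  Pearson r: {corr:.3f}")
```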