Face2Multi-modal: in-vehicle multi-modal predictors via facial expressions
Huang, Zhentao, Li, Rongze, Jin, Wangkai, Song, Zilin, Zhang, Yu, Peng, Xiangjun and Sun, Xu (2020) Face2Multi-modal: in-vehicle multi-modal predictors via facial expressions. In: 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. ACM, pp. 30-33. ISBN 9781450380669
Official URL: http://dx.doi.org/10.1145/3409251.3411716
Abstract
Towards intelligent Human-Vehicle Interaction systems and innovative Human-Vehicle Interaction designs, in-vehicle drivers' physiological data has been explored as an essential data source. However, equipping drivers with multiple biosensors is considered unfriendly to users and impractical during driving. The lack of a practical approach to accessing physiological data has hindered the wider adoption of advanced biosignal-driven designs (e.g., monitoring systems) in practice. Hence, the demand for a user-friendly approach to measuring drivers' body statuses has become more intense. In this Work-In-Progress, we present Face2Multi-modal, an in-vehicle multi-modal data stream predictor driven by facial expressions only. More specifically, we explore the estimation of drivers' Heart Rate and Skin Conductance, and of Vehicle Speed. We believe Face2Multi-modal provides a user-friendly alternative for acquiring drivers' physiological status and vehicle status, which could serve as a building block for many current and future personalized Human-Vehicle Interaction designs. More details and updates about the Face2Multi-modal project are available online at https://github.com/unnc-ucc/Face2Multimodal/.
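The abstract describes mapping driver face images to three data streams (Heart Rate, Skin Conductance, Vehicle Speed) but does not specify the model here. The following is a minimal sketch under the assumption of a shared CNN encoder with one regression head per stream; the class and variable names (Face2MultiModalSketch, encoder, the head names) are hypothetical and are not taken from the authors' repository.

```python
# Hedged sketch, not the authors' published architecture: a shared CNN encoder
# over driver face crops with three scalar regression heads, one per predicted
# stream (heart rate, skin conductance, vehicle speed).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Face2MultiModalSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared convolutional encoder over 3x128x128 face crops.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One scalar regression head per predicted data stream.
        self.heart_rate = nn.Linear(128, 1)
        self.skin_conductance = nn.Linear(128, 1)
        self.vehicle_speed = nn.Linear(128, 1)

    def forward(self, face):
        z = self.encoder(face)
        return {
            "heart_rate": self.heart_rate(z),
            "skin_conductance": self.skin_conductance(z),
            "vehicle_speed": self.vehicle_speed(z),
        }

# Usage example with dummy data: joint MSE loss over the three streams.
model = Face2MultiModalSketch()
faces = torch.randn(8, 3, 128, 128)  # batch of face crops
targets = {k: torch.randn(8, 1)
           for k in ("heart_rate", "skin_conductance", "vehicle_speed")}
preds = model(faces)
loss = sum(F.mse_loss(preds[k], targets[k]) for k in targets)
loss.backward()
```

The multi-head design reflects the paper's framing of a single facial-expression input predicting several modalities at once; in practice the backbone, input resolution, and loss weighting would follow the authors' released code rather than this sketch.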