TY - CONF
T1 - Tactile facial action units toward enriching social interactions for individuals who are blind
AU - McDaniel, Troy
AU - Devkota, Samjhana
AU - Tadayon, Ramin
AU - Duarte, Bryan
AU - Fakhri, Bijan
AU - Panchanathan, Sethuraman
PY - 2018/1/1
Y1 - 2018/1/1
AB - Social interactions mediate our communication with others, enable the development and maintenance of personal and professional relationships, and contribute greatly to our health. While both verbal cues (i.e., speech) and non-verbal cues (e.g., facial expressions, hand gestures, and body language) are exchanged during social interactions, the latter carry the greater share of information (~65%). Given their inherently visual nature, non-verbal cues are largely inaccessible to individuals who are blind, putting this population at a social disadvantage compared to their sighted peers. For individuals who are blind, embarrassing social situations are not uncommon due to miscommunication, which can lead to social avoidance and isolation. In this paper, we propose a mapping from visual facial expressions, represented as facial action units that may be extracted using computer vision algorithms, to haptic (vibrotactile) representations, toward discreet, real-time perception of facial expressions during social interactions by individuals who are blind.
KW - Assistive technology
KW - Facial action units
KW - Sensory substitution
KW - Social assistive aids
KW - Visual-to-tactile mapping
UR - http://www.scopus.com/inward/record.url?scp=85058511419&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85058511419&partnerID=8YFLogxK
DO - 10.1007/978-3-030-04375-9_1
M3 - Conference contribution
AN - SCOPUS:85058511419
SN - 9783030043742
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 3
EP - 14
BT - Smart Multimedia - 1st International Conference, ICSM 2018, Revised Selected Papers
A2 - Berretti, Stefano
A2 - Basu, Anup
PB - Springer Verlag
T2 - 1st International Conference on Smart Multimedia, ICSM 2018
Y2 - 24 August 2018 through 26 August 2018
ER -