A Bayesian approach to automated creation of tactile facial images

Zheshen Wang, Baoxin Li

Research output: Contribution to journal › Article › peer-review

7 Scopus citations


Portrait photos (facial images) play important social and emotional roles in our lives. This type of visual media is unfortunately inaccessible to users with visual impairment. This paper proposes a systematic approach for automatically converting human facial images into a tactile form that can be printed on a tactile printer and explored by a user who is blind. We propose a deformable Bayesian Active Shape Model (BASM), which integrates anthropometric priors with shape and appearance information learned from a face dataset. We design an inference algorithm under this model for processing new face images to create an input-adaptive face sketch. Further, the model is enhanced with input-specific details through semantic-aware processing. We report experiments evaluating the accuracy of face alignment using the proposed method, with comparisons to other state-of-the-art results. Furthermore, subjective evaluations of the produced tactile face images were performed by 17 persons, including six visually impaired users, confirming the effectiveness of the proposed approach in conveying vital visual information in a face image via haptics.
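The paper's BASM and its inference algorithm are not detailed in this abstract. As a rough, hypothetical illustration of the general idea behind Bayesian shape fitting, the sketch below shows a MAP estimate for a single-mode point-distribution shape model, where a Gaussian prior on the mode coefficient (standing in for the anthropometric/shape prior) shrinks the fit toward the mean shape. All names and the single-mode simplification are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch only (not the paper's implementation): MAP fitting
# of a one-mode point-distribution shape model. The observed landmarks
# are modeled as  observed = mean + b * mode + noise,  with a Gaussian
# prior b ~ N(0, eigval) playing the role of a shape prior.

def map_shape_fit(observed, mean, mode, eigval, noise_var):
    """Return the MAP reconstruction of the shape.

    observed, mean, mode: flat lists of landmark coordinates
                          (mode is assumed unit-norm).
    eigval:    prior variance of the mode coefficient (shape prior).
    noise_var: assumed observation-noise variance.
    """
    # Projection of the residual onto the (unit-norm) shape mode.
    proj = sum((o - m) * p for o, m, p in zip(observed, mean, mode))
    # MAP coefficient: least-squares projection shrunk toward 0
    # by the Gaussian prior;  b = proj / (1 + noise_var / eigval).
    b = proj / (1.0 + noise_var / eigval)
    # Reconstruct the regularized shape from the mean and the mode.
    return [m + b * p for m, p in zip(mean, mode)]
```

As `noise_var` approaches zero the fit reduces to a plain least-squares projection onto the mode, while a large `noise_var` pulls the reconstruction toward the mean face, which is the qualitative behavior a Bayesian shape prior provides.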

Original language: English (US)
Article number: 5437233
Pages (from-to): 233-246
Number of pages: 14
Journal: IEEE Transactions on Multimedia
Issue number: 4
State: Published - Jun 2010


Keywords

  • Image matching
  • Image shape analysis
  • Pattern recognition
  • Tactile graphics

ASJC Scopus subject areas

  • Signal Processing
  • Media Technology
  • Computer Science Applications
  • Electrical and Electronic Engineering


